• 0 Posts
  • 26 Comments
Joined 2 months ago
Cake day: July 26th, 2024

  • llama3.1 (8B) by Meta:

    >>> what does the I in LLM stand for?
    
    In the context of natural language processing (NLP) and artificial 
    intelligence, "LLM" typically stands for **L**arge **L**anguage **M**odel.
    
    A Large Language Model is a type of machine learning model that has been 
    trained on large datasets to understand and generate human-like language. 
    These models are often used in applications such as text classification, 
    sentiment analysis, question answering, and even chatbots like myself!
    
    I'm glad you asked!
    
    >>> but the I !!!!
    
    In the specific case of an LLM being a Large **L**anguage **M**odel, the 
    "LL" part doesn't explicitly mention the letter "I".
    
    However, in the context of AI and ML models, there are other types of 
    models that do involve the letter "I", such as:
    
    * ILM: Image Language Model (combining computer vision with natural 
    language processing)
    * IMLM: Interpretable Machine Learning Model
    * MLM: Masked Language Model (a specific type of LLM)
    
    And then there's also...
    
    * DALL-E, but that's a different story!
    
    But in the case of the original LLM, it typically stands for Large 
    **L**anguage **M**odel.
    

  • I am the admin of a website with a section where our users can post custom content and rate the content of others.

    We have discussed many times how it works and how it should work, and we came to the conclusion that we’d never want it to be public. Any report of abuse is checked by the website owner directly in the database, and even admins don’t have full access. Everybody tries to stay as far away from the personal ratings as possible.

    We also noticed that the system becomes a lot more fragile when there are not many voters. A whole group that is negative about something wouldn’t attract as much harassment as a single person with a unique opinion.

    On our website we have a comment section that isn’t anonymous, and we noticed that people often don’t post something negative when it would be obvious that they are the only one who has voted/rated something. (“Negative” is almost always constructive in our case.)

    These are just a few things that I think add to this discussion.

  • Odysee just removed all ads from their platform and seems like a great YouTube alternative. I’ve been using it and uploading videos to it, and I’m liking it a lot.

    The comments are also much higher in quality; I’d say the difference is comparable to Lemmy versus Reddit. Much more positivity, and people who actually add something to a video. You do of course have a few Nazis hanging around, but freedom goes both ways and you don’t really come across them.

    Besides that, just use NewPipe on mobile or an ad blocker.

  • > Usually something like this would be enforced once in a centralized location (in the data layer / domain model), rather than at every call site.

    True. Although not every endpoint is the same, nor is every website or service.

    > This gets tricky because in many jurisdictions, you need to ensure that you don’t just delete the user, but also any data associated with the user.

    GDPR specifically mentions user-identifiable data. I don’t know about others.
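    One way to read this kind of erasure requirement is to delete the user-identifiable fields while keeping non-identifying content. A minimal sketch, assuming a toy in-memory store; all table and field names here are hypothetical:

    ```python
    # Hypothetical erasure routine: drop the account record, but keep
    # posts after stripping anything that identifies the author.

    def erase_user(db, user_id):
        # Remove the account record itself (name, email, etc.).
        db["users"].pop(user_id, None)
        # Anonymize associated content instead of deleting it outright.
        for post in db["posts"]:
            if post["author_id"] == user_id:
                post["author_id"] = None          # unlink from the user
                post["author_name"] = "[deleted]"  # strip the identifier

    db = {
        "users": {42: {"name": "alice", "email": "a@example.com"}},
        "posts": [{"author_id": 42, "author_name": "alice", "body": "hello"}],
    }
    erase_user(db, 42)
    print(db["posts"][0]["author_name"])  # [deleted]
    ```

    Whether anonymized posts may be retained at all is a legal question per jurisdiction; the sketch only shows the mechanical split between identity and content.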


  • > None of these require your account to “exist”.

    It’s actually much more technical than theoretical. When you delete an account on a website that keeps it around for a little while longer, it merely gets a field in the database updated (often with a removal date as well, for automatic removal after x amount of days). This field needs to be checked everywhere the account is used, and account recovery is often a place where this is forgotten, or possibly not even wanted.
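    The soft-delete pattern described above can be sketched roughly like this (names hypothetical, in-memory store standing in for a real database); the point is that the flag check lives in one central lookup, and any code path that bypasses it, like account recovery, leaks “deleted” accounts back in:

    ```python
    from dataclasses import dataclass
    from datetime import datetime
    from typing import Optional

    @dataclass
    class Account:
        username: str
        deleted_at: Optional[datetime] = None  # set instead of deleting the row

    class AccountRepository:
        def __init__(self):
            self._accounts = {}

        def add(self, account):
            self._accounts[account.username] = account

        def soft_delete(self, username):
            # Mark for later removal instead of deleting immediately.
            self._accounts[username].deleted_at = datetime.now()

        def find_active(self, username):
            # The one central place where the flag is checked.
            account = self._accounts.get(username)
            if account is None or account.deleted_at is not None:
                return None
            return account

    repo = AccountRepository()
    repo.add(Account("alice"))
    repo.soft_delete("alice")
    print(repo.find_active("alice"))  # None: treated as gone everywhere
    ```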

    And to confirm this as fact: I just realized that the website I work on allows recovery of banned accounts. (Removed accounts are completely removed, though, because we don’t need to retain any data.)

    > This is the only valid reason. But even then this could be stated so that the user is fully aware.

    Keeping the records for a little while longer is actually implied to be known: it’s in their privacy policy, and it’s legal.

    I definitely agree that services should make it easier to know exactly what is happening. Personally, I think post history without user-identifiable data should also be removed, but this is even less common practice (and is why tools exist to delete all your Reddit posts, for example).

  • Many services have a grace period, mostly 30–90 days, where they keep your data just in case somebody else decided to delete your account, or you were drunk, or something. But it could also be for legal reasons: on websites where you can post things for everybody to see, the authorities may need to find you if you post something highly illegal. Another example is a webshop that is required to keep a copy of your data for its bookkeeping.
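    A grace period like this usually comes down to a scheduled purge job. A minimal sketch, assuming a 30-day policy and hypothetical record names:

    ```python
    from datetime import datetime, timedelta

    GRACE_PERIOD = timedelta(days=30)  # assumed policy; real services use 30-90 days

    def purge_expired(accounts, now):
        """Keep accounts that are active or still inside the grace period."""
        return [
            a for a in accounts
            if a["deleted_at"] is None or now - a["deleted_at"] < GRACE_PERIOD
        ]

    accounts = [
        {"name": "bob", "deleted_at": datetime(2024, 1, 1)},   # requested deletion
        {"name": "carol", "deleted_at": None},                 # active account
    ]
    remaining = purge_expired(accounts, now=datetime(2024, 3, 1))
    print([a["name"] for a in remaining])  # ['carol']
    ```

    During the grace period the account is only soft-deleted, which is why cancelling the deletion (or a drunk-regret recovery) is still possible.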


  • Injecting performance-enhancing substances like testosterone is not allowed, so the rule is already there. The reason for injecting them is irrelevant and not part of the rule. If we can simply use a counterargument to nullify a rule, it isn’t really a rule anymore. We should add possible edge cases in the future, but that doesn’t mean we should be using opinions in current rulings before finding something that suits everybody.

    Keep in mind that I’m not claiming anything about Imane Khelif.