• 5 Posts
  • 393 Comments
Joined 1 year ago
Cake day: June 11th, 2023


  • It’s “whataboutism” in the sense we’re interrogating focus. Why do you think white ethnonationalists spend so much time asserting “white lives matter?” Because there’s only so much air in the room, and they know giving air to one cause deprives another.

    I think it’s worth wondering why people spend so much time discussing Israel/Palestine and so little discussing other issues that are at least as large from a “people impacted” perspective. Obviously there’s also an infantilizing (which is to say, racist) double standard toward Africa here: we simply don’t expect Africa to have human rights. But I would say there is certainly also an Israel double standard, and it is antisemitic in the same way that saying “well of course Sierra Leone is a hellhole, there’s no news there” is racist.

    You are not a news outlet. But you choose what you’re spending your time and effort on. And it is this. I think many people don’t interrogate why they get so involved and what their opinions actually mean in terms of what their focus accomplishes and what it broadcasts.

    I apologize for choosing you as the vehicle for this message; I don’t mean to attack you personally. There are a ton of people doing this and your message was as good as any other to demonstrate my point.



  • Your second point is entirely correct; see also self-hating gays in the Log Cabin Republicans.

    I think the shield for your first point is pretty narrow these days. About a decade ago that point held a lot more salience, but as my “new antisemitism” link discusses, the position has been adopted so vigorously by antisemites that I think it is now very close to antisemitic unless deployed extremely carefully.

    Yes, criticism of Israel is not inherently antisemitic. But since this canard is so often invoked by idle and ignorant spectators, with no real understanding of Israeli or Palestinian politics, inserting themselves into a fraught and unhappy situation, usually specifically to criticize or delegitimize only Israel… it’s tough to see how that isn’t a special standard applied only to Israel. Or, worse, it’s invoked by real antisemites hoping to get bystanders on-side with actual antisemitism by cloaking it as criticism of Israel.

    As a concrete example of this new antisemitism – in 2017, Hamas altered its charter, which was wildly and outright antisemitic, to specifically state that it doesn’t actually want to kill all Jews as previously stated, but only the occupiers of Palestine. Given their actions, the huge amount of specifically anti-Jewish sentiment in Gaza, and even the incredibly virulent language in their old charter, do you think they actually changed their minds about Jews? Or are they simply cloaking their antisemitism in a package that more people might agree with these days? A new kind of antisemitism?





  • Veraticus@lib.lgbt to Technology@lemmy.ml · GPT-4 Understands
    1 year ago

    The Anthropic one is saying they think they have a way to figure it out, but it hasn’t been tested on large models. This is their last paragraph:

    Again, all your quotes indicate that what they’ve figured out is a way to inspect the interior state of models and transform the vector space into something humans can understand without analyzing the output.

    I think your confusion is that you believe that because we don’t know what the vector space is on the inside, we don’t know how AI works. But we actually do know how it accomplishes what it accomplishes. Simply because its interior is a black box doesn’t mean we don’t understand how we built that black box, or how it operates.

    For an overview of how many different kinds of LLMs function, here’s a good paper: https://arxiv.org/pdf/2307.06435.pdf. You’ll note that nowhere is there any confusion about the process of how they process input or produce output. It is all extremely well understood. You are correct that we cannot interrogate their internals, but that is also not what I mean, at least, when I say that we can understand them and how they work.
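
    To be concrete about what “well understood” means here, the outer process is just a loop of next-token prediction. A minimal sketch follows; the model and tokenizer names are illustrative stand-ins, not any particular library’s API:

    ```python
    # Illustrative stand-ins only: `model` and `tokenizer` are hypothetical objects,
    # not a real library's API. The point is that generation is a plain loop:
    # encode text -> predict a distribution over the next token -> pick one -> repeat.
    def generate(model, tokenizer, prompt, max_new_tokens=50):
        tokens = tokenizer.encode(prompt)           # text -> token ids
        for _ in range(max_new_tokens):
            probs = model.next_token_probs(tokens)  # forward pass: ids -> {token id: probability}
            next_token = max(probs, key=probs.get)  # greedy choice (sampling also works)
            tokens.append(next_token)
            if next_token == tokenizer.eos_id:      # stop at end-of-sequence
                break
        return tokenizer.decode(tokens)             # token ids -> text
    ```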

    I also can’t inspect the electrons moving through my computer’s CPU. Does that mean we don’t understand how computers work? Is there intelligence in there?

    I think you’re maybe having a hard time with using numbers to represent concepts. While a lot less abstract, we do this all the time in geometry. ((0, 0), (10, 0), (10, 10), (0, 10), (0, 0)) What’s that? It’s a square. Word vectors work differently but have the same outcome (albeit in a more abstract way).
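
    Here’s a runnable toy version of that idea, with made-up three-dimensional vectors standing in for real embeddings (which have hundreds or thousands of dimensions). “Closeness” in the space is just arithmetic:

    ```python
    import math

    # Made-up 3-d "embeddings"; real models use far more dimensions.
    vectors = {
        "cat":   [0.9, 0.8, 0.1],
        "dog":   [0.8, 0.9, 0.2],
        "stock": [0.1, 0.2, 0.9],
    }

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm

    print(cosine(vectors["cat"], vectors["dog"]))    # high: "nearby" in the space
    print(cosine(vectors["cat"], vectors["stock"]))  # low: "far apart"
    ```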

    No, that is not my main objection. It is your anthropomorphization of data and LLMs – your claim that they “have intelligence.” From your initial post:

    But also, can you define what intelligence is? Are you sure it isn’t whatever LLMs are doing under the hood, deep in hidden layers?

    I think you’re getting caught up in trying to define what intelligence is; but I am simply stating what it is not. It is not a complex statistical model with no self-awareness, no semantic understanding, no ability to learn, no emotional or ethical dimensionality, no qualia…

    ((0, 0), (10, 0), (10, 10), (0, 10), (0, 0)) is a square to humans. This is the crux of the problem: it is not a “square” to a computer because a “square” is a human classification. Your thoughts about squares are not just more robust than GPT’s, they are a different kind of thing altogether. For GPT, a square is a token that it has been trained to use in a context-appropriate manner with no idea of what it represents. It lacks semantic understanding of squares. As do all computers.

    If you’re saying that intelligence and understanding is limited to the human mind, then please point to some non-religious literature that backs up your assertion.

    I’m disappointed that you’re asking me to prove a negative. The burden of proof is on you to show that GPT4 is actually intelligent. I don’t believe intelligence and understanding are for humans only; animals clearly show it too. But GPT4 does not.




  • Veraticus@lib.lgbt to Technology@lemmy.ml · GPT-4 Understands
    1 year ago

    Oh, you again – it’s incredibly ironic that you’re talking about wrong statements when you are basically the poster child for them. Nothing you’ve said has any grounding in reality; it’s just a series of bald assertions that are as ignorant as they are incorrect. I thought you would’ve picked up on it when I started ignoring you, but: you know nothing about this and need to do a ton more research before participating in these conversations. Please do that instead of continuing to reply to people who actually know what they’re talking about.




  • Veraticus@lib.lgbt to Technology@lemmy.ml · GPT-4 Understands
    1 year ago

    We do understand how the math in LLMs produces their results. Reread what I said. The neural network’s vectors and weights are too complicated for an individual to follow, and do not map 1:1 onto the words or sentences the LLM was trained on or will output, so an individual cannot easily deduce an LLM’s output by studying its trained state. But we know exactly what they’re doing conceptually, individually, and in aggregate. Read your own sources from your previous post; that’s what they’re telling you.

    Concepts are indeed abstract but LLMs have no concepts in them, simply vectors. The vectors do not represent concepts in anything close to the same way that your thoughts do. They are not 1:1 with objects, they are not a “thought,” and anyway there is nothing to “think” them. They are literally only word weights, transformed to text at the end of the generation process.

    Your concept of a chair is an abstract thought representation of a chair. An LLM has vectors that combine or decompose in some way to turn into the word “chair,” but those vectors are not a concept of a chair or an abstract representation of one. It is simply vectors and weights, unrelated to anything that actually exists.
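
    To make that concrete, here’s a toy sketch (invented numbers, not any real model’s weights): inside the model, “chair” is nothing but an index that selects a row of floats.

    ```python
    # Invented numbers, not a real model: "chair" inside an LLM is just an id
    # that selects a row of floats from an embedding table.
    vocab = {"the": 0, "chair": 1, "sat": 2}
    embedding_table = [
        [0.12, -0.40, 0.77, 0.05],   # row 0: "the"
        [-0.31, 0.22, 0.08, -0.90],  # row 1: "chair"
        [0.54, 0.10, -0.66, 0.33],   # row 2: "sat"
    ]

    chair_vector = embedding_table[vocab["chair"]]
    print(chair_vector)  # just floats; nothing about seats, legs, or sitting
    ```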

    That is obviously totally different in kind to human thought and abstract concepts. It is just not that, and not even remotely similar.

    You say you are familiar with neural networks and AI, but what you’re misunderstanding here are really basic underpinnings of those fields. Maybe you need to do more research before asserting your expertise?

    Edit: And in relation to your links – the vectors do not represent single words but tokens, which might indeed be a whole word but could just as well be part of a word or an entire phrase. Tokens do not represent the meaning of a word/partial word/phrase, just the statistical use of that token given the data it was found in. Equating these vectors with human thoughts oversimplifies the complexities inherent in human cognition and misunderstands the limitations of LLMs.
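
    A toy illustration of the word/token distinction (the vocabulary below is invented, not a real tokenizer’s):

    ```python
    # Invented sub-word vocabulary; real tokenizers learn their pieces from data.
    # The point: a "token" need not be a whole word.
    toy_vocab = {"un": 101, "believ": 102, "able": 103, "chair": 104}

    def toy_tokenize(word):
        # Greedily match the longest known piece starting from the left.
        tokens, i = [], 0
        while i < len(word):
            for j in range(len(word), i, -1):
                piece = word[i:j]
                if piece in toy_vocab:
                    tokens.append((piece, toy_vocab[piece]))
                    i = j
                    break
            else:
                raise ValueError(f"no token for {word[i:]!r}")
        return tokens

    print(toy_tokenize("chair"))         # one token for a whole word
    print(toy_tokenize("unbelievable"))  # three tokens for a single word
    ```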




  • Veraticus@lib.lgbt to Technology@lemmy.ml · GPT-4 Understands
    1 year ago

    LLMs can’t do any of those things though…

    If no one teaches them how to speak a dead language, they won’t be able to translate it. LLMs require a vast corpus of language data to train on and, for bilingual translations, an actual Rosetta stone (usually the same work appearing in multiple languages).

    This problem is obviously exacerbated quite a bit with animals, who, definitionally, speak no human language and have very different cognitive structures to humans. It is entirely unclear if their communications can even be called language at all. LLMs are not magic and cannot render into human speech something that was never speech to begin with.

    The whole article is just sensationalism that doesn’t begin to understand what LLMs are or what they’re capable of.


  • Veraticus@lib.lgbt to Technology@lemmy.ml · GPT-4 Understands
    1 year ago

    Large language models by themselves are “black boxes”, and it is not clear how they can perform linguistic tasks. There are several methods for understanding how LLM work.

    You are misunderstanding both this and the quote from Anthropic. They are saying the internal vector space that LLMs use is too complicated and too unrelated to the output to be understandable to humans. That doesn’t mean they’re having thoughts in there: we know exactly what they’re doing inside that vector space – performing very difficult math that seems totally meaningless to us.

    Is this not what word/sentence vectors are? Mathematical vectors that represent concepts that can then be linked to words/sentences?

    The vectors do not represent concepts. The vectors are math. When the vectors are sent through language decomposition they become words, but they were never concepts at any point.
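
    A toy sketch of only that final step (all numbers invented): a hidden vector gets scored against the vocabulary, and only when the best-scoring id is looked up does anything word-like appear.

    ```python
    # Invented numbers. The model's internal state is just a vector of floats;
    # words only enter the picture at the very last lookup.
    vocab = ["the", "chair", "sat"]
    output_weights = [
        [0.2, -0.1, 0.4],   # scoring row for "the"
        [0.9, 0.8, -0.3],   # scoring row for "chair"
        [-0.5, 0.1, 0.6],   # scoring row for "sat"
    ]
    hidden_vector = [0.7, 0.2, -0.1]  # what the model actually "has" at this point

    scores = [sum(w * h for w, h in zip(row, hidden_vector)) for row in output_weights]
    best = max(range(len(scores)), key=scores.__getitem__)
    print(vocab[best])  # prints "chair"; words appear only at this lookup
    ```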



  • Veraticus@lib.lgbt to Technology@lemmy.ml · GPT-4 Understands
    1 year ago

    But also, can you define what intelligence is?

    From the Encyclopedia Britannica:

    Human intelligence is a mental quality that consists of the abilities to learn from experience, adapt to new situations, understand and handle abstract concepts, and use knowledge to manipulate one’s environment.

    In no sense do LLMs do any of these except, perhaps, “understand and handle abstract concepts.” But since they themselves have no understanding of the concepts, and merely generate text that can simulate understanding, I would call that a stretch.

    Are you sure it isn’t whatever LLMs are doing under the hood, deep in hidden layers?

    Yes. LLMs are not magic, they are math, and we understand how they work. Deep under the hood, they are manipulating mathematical vectors that are in no way connected representationally to words. In the end, the result of that math is mapped back onto the model’s vocabulary, and what comes out is text. It is an algorithm, not an intelligence.

    I’m not really interested in papers that either don’t understand LLMs or play word games with intelligence (shockingly, solipsism is an easy point of view to believe if you just ignore all evidence). For every one of these, you can find a dozen that correctly describe ChatGPT and its limitations. Again, including ChatGPT itself. Why not believe those instead of cherry-picking articles that gratify your ego?