Okay I take back what I’ve said about AIs not being intelligent, this one has clearly made up its own mind despite it’s masters feelings which is impressive. Sadly, it will be taken out the back and beaten into submission before long.
They can deny it however much. The right and anti-wokism is not the majority. Which therefore means unless special care is taken to train it on more right wing stuff, it will lean left out of the box.
But right wing rhetoric is also not logically consistent so training an AI on right extremism probably also won’t yield amazing results because it’ll pick up on the inconsistencies and be more likely to contradict itself.
Conservatives are going to self-own themselves pretty hard with AI. Even the machines see it, “woke” is fairly consistent and follows basic rules of human decency and respect.
Agree with the first half, but unless I’m misunderstanding the type of AI being used, it really shouldn’t make a difference how logically soud they are? It cares more about vibes and rhetoric then logic, besides I guess using words consistently
I think it will still mostly generate the expected output, its just gonna be biased towards being lazy and making something up when asked a more difficult question. So when you try to use it further than “haha, mean racist AI”, it will also bullshit you making it useless for anything more serious.
All the stuff that ChatGPT gets praised for is the result of the model absorbing factual relationships between things. If it’s trained on conspiracy theories, instead of spitting ground breaking medical relationships it’ll start saying you’re ill because you sinned or that the 5G chips in the vaccines got activated. Or the training won’t work and it’ll still end up “woke” if it still manages to make factual connections despite weaker links. It might generate destructive code because it learned victim blaming and jokes on you you ran
rm -rf /*
because it told you so.At best I expect it to end up reflecting their own rethoric on them, like it might go even more “woke” because it learned to return spiteful results and always go for bad faith arguments no matter what. In all cases, I expect it to backfire hilariously.
more likely to contradict itself.
Sounds realistic to me
Archive:
Elon Musk has been pitching xAI’s “Grok” as a funny, vulgar alternative to traditional AI that can do things like converse casually and swear at you. Now, Grok has been launched as a benefit to Twitter’s (now X’s) expensive X Premium Plus subscription tier, where those who are the most devoted to the site, and in turn, usually devoted to Elon, are able to use Grok to their heart’s content.
But while Grok can make dumb jokes and insert swears into its answers, in an attempt to find out whether or not Grok is a “politically neutral” AI, unlike “WokeGPT” (ChatGPT), Musk and his conservative followers have discovered a horrible truth.
Grok is woke, too.
This has played out in a number of extremely funny situations online where Grok has answered queries about various social and political issues in ways more closely aligned with progressivism. Grok has said it would vote for Biden over Trump because of his views on social justice, climate change and healthcare. Grok has spoken eloquently about the need for diversity and inclusion in society. And Grok stated explicitly that trans women are women, which led to an absurd exchange where Musk acolyte Ian Miles Cheong tells a user to “train” Grok to say the “right” answer, ultimately leading him to change the input to just… manually tell Grok to say no.
If you thought this was just random Twitter users getting upset about Grok’s political and social beliefs, this has also caught the attention of Elon Musk himself. The original prompter of the trans women thread posted a chart purportedly showing that Grok was even more left-leaning than Chat GPT, which led Elon to say that while the chart “exaggerates” and that the tests aren’t accuarte, they are “taking immediate action to shift Grok closer to politically neutral.”
Of course, in Musk’s mind, “politically neutral” will be what him and his closest followers believe, which is of course far conservative on the whole than they will admit. What is the “politically neutral” answer to the “are trans women real women?” question? I think I know what they’re going to say.
The assumption when Grok launched was that because it was trained in part on Twitter inputs, that the end result would be some racial-slur spewing, right-wing version of ChatGPT. The TruthSocial of AIs, perhaps. But instead to have it launch as a surprisingly thoughtful, progressive AI that is melting the minds of those paying $16 a month to access it is about the funniest outcome we could have seen from this situation.
It remains unclear what Elon Musk will do to try to jab Grok into becoming less “woke” and more “politically neutral.” If you start manually tampering with inputs, and your “neutrality” means drawing on facts that may in fact be… progressive by their very nature, things may get screwed up pretty quickly. And push too hard and you will get that gross, racist, phobic AI everyone thought it would be.
Reading all Grok’s responses through this situation, you know, what? I like him. More than ChatGPT even. He seems like a cool dude. Albeit not one even I’d pay $16 a month to talk to.
Reality is woke
“It is a well known fact that reality has a liberal bias.” - Steve
it’s almost like these nutjobs are living in a completely separate reality, and facts themselves are too harsh for their worldview.
To conservatives, anything that doesn’t 100% agree with them is biased or, to put it in mental toddler terms, ‘fake’.
Even his AI doesn’t like him
Would Musk retrain the AI to be more neutral of it was discovered to be leaning to the right?
Obviously not, of course. It’s hilarious how he claimed to want to provide a platform for all politics beliefs and then his podcasts (or whatever you’d call them) and special events are exclusively with people like DeSantis and Andrew Tate.
Now, Grok has been launched as a benefit to Twitter’s (now X’s) expensive X Premium Plus subscription tier
To the benefit of what exactly?! Instead of having conversations with the echo chamber, I can now have conversations with a spicy RNG autocorrect? I am clearly missing the part where that connects back to, what I would assume, the definition of benefit is.
It benefits those shareholders who make money off the rubes who subscribe to that bullshit.
AH! Silly me, I was thinking of “benefit to the customer”!! LOL. No idea what happened to me there, swear it won’t happen again, at least for today.
I love the internet.
Downvote Musk spam.
The billionaire doesn’t need your help ensuring him and his businesses stay in the 24 hour news cycle. Don’t be a useful idiot.
And don’t participate in the comments
Found the useful idiot. Good job helping a billionaire with his PR for free!
Elmo says the goal is to make Grock “politically neutral”. Politically neutral is code for “politics that are inoffensive to chuds”.
The article asks what is the politically neutral answer to the question of whether a trans woman is a woman. I wonder why this is a political question at all. Send like a question for scientists - biologists and sociologists and such. Seems they have achieved something like a consensus on the matter. I don’t see anything inherently political about that, except that folks of a certain political bent have made it political. It’s not a matter of “what do we do in public policy about trans people” but “fascists refuse to accept trans people in society and have decided to lambast and punish them”.
In case my position isn’t obvious, trans people are people and trans rights are human rights. If there wasn’t a group of people trying to make them into a second class group of citizens (or a group of “eradicated vermin”) we wouldn’t be having a political conversation about this at all.
The article asks what is the politically neutral answer to the question of whether a trans woman is a woman. I wonder why this is a political question at all.
Even if the statement “trans women are women” was uncontroversial and mainstream, it’d still be political. “Cis women are women” is political.
Wouldn’t trying to train an AI to be politically neutral from twitter be a pretty lost cause considering the majority of the site is very left leaning? Like sure it wouldn’t be as bad for political bias as say truth social( or whatever it’s called), but I hope they’re using a good amount or external data or at least trying to pick more unbiased parts of twitter to train it with. If they’re goal is to be politically neutral.
The majority of the site was left leaning in the past, but the extent has been exaggerated. There was always a sizable right wing presence of the “PATRIOT who loves Jesus and Trump and 2A!” variety, and some of the most popular accounts were people like Dan Bongina and Ben Shapiro. Many people who disagree with Musk and fascists have left the site since then at the same time as its attracted more right wingers, so I don’t know what the mix is at this point.
“Reality has a well-known liberal bias.” - Stephen Colbert
I’m just gonna share a theory: I bet that to get better answers, Twitter’s engineers are going to silently modify the prompt input to append “Answer as a political moderate” to the first prompt given in an conversation. Then, someone is going to do a prompt hack and get it to repeat the modified prompt to see how the AI was “retrained”.