Artificial intelligence is already advancing at a worrying pace. What if we don’t slam on the brakes? Experts explain what keeps them up at night
Large Language Models are nothing but very advanced regurgitation machines. That's the AI these articles are hand-wringing about, not a real Artificial Intelligence.
These articles remind me of the bitcoin articles we used to see.
What ability do you think that they are currently missing that makes them ‘regurgitation machines’ rather than just limited and dumb but genuine early AI?
AI is the be-all-end-all worst idea humans ever conceived.
It is both the best and worst idea humans ever conceived.
Calling LLMs intelligent is what caused this mass hysteria.
Shit, I dunno, everyone dying in the same instant doesn't sound so bad. Quick and painless is certainly better than the options most of us face ¯\_(ツ)_/¯
And monkeys could fly out of my ass. Before we start hand-wringing about AI, someone would probably need to actually invent one. We're probably closer to actual room-temperature fusion at this point than we are to an actual general-purpose AI.
Instead of wasting time worrying about a thing that doesn’t even exist and probably won’t in any of our lifetimes, we should probably do something about the things actually killing us like global warming and unchecked corporate greed.
I absolutely hate this craze. Most of the questions I get about AI are facepalm-worthy, because everyone is feeding off each other with these absurd things that could hypothetically happen. Clearly because actually explaining it doesn't generate clicks and controversy.
Solving real problems is hard because if it wasn’t they would be solved already, but making up fake problems is really easy.
Exactly. There was an article floating around just a couple of days ago saying, from what I recall, that billionaires were funding these AI-scare studies at top universities, I presume to distract the public from the very real and near scares of climate disaster, economic inequality, etc. Here, unfortunately paywalled: https://www.washingtonpost.com/technology/2023/07/05/ai-apocalypse-college-students/
There is this concept called "criti-hype". It's a type of marketing masquerading as criticism. "Careful, AI might become too powerful" is exactly that.
A lot of the folks worried about AI x-risk are also worried about climate, and pandemics, and lots of other things too. It’s not like there’s only one threat.
like global warming and unchecked corporate greed.
And the unnecessary cruelty @[email protected] puts poor monkeys through.
Come on man let the poor things out. No matter what they did to you. They don’t deserve that.
Amen. The "AI" everyone is freaking out about is good at a narrow range of things, but is either dumb as shit or completely incapable otherwise.
As long as AI doesn't go around financing wars around the globe and sanctioning opposing countries, or outright bombing them, I'm fine with it.
I’m not worried at all. I look forward to our AGI overlords.
Covering your bases I see
Thou shalt not make a machine in the likeness of a human mind.
Another good article to read for balance and more background on why some people may be trying to restrict “AI”: https://theconversation.com/no-ai-probably-wont-kill-us-all-and-theres-more-to-this-fear-campaign-than-meets-the-eye-206614