Whentember juneteenth
When you paste that code you do it in your private IDE, in a dev environment and you test it thoroughly before handing it off to the next person to test before it goes to production.
Hitting up ChatGPT for the answer to a question that you then vomit out in a meeting as if it’s knowledge is totally different.
Agreed. Reddit now has ai generated echo chambers.
Echo chambers were literally the biggest issue with that site. Now it chatGPTs itself?
QA brought to you by ChatGPT.
Okay but are they delicious?
Found the round-earther.
Incorrect. 15 years in the industry here. Good day.
I said good day.
I’m not going to bother arguing with you but for anyone reading this: the poster above is making a bad faith semantic argument.
In the strictest technical terms AI, ML and Deep Learning are distinct, and they have specific applications.
This insufferable asshat is arguing that since they all use fuel, fire and air, they are all engines. Which isn’t wrong, but it’s also not the argument we are having.
@OP good day.
You have inadvertently made an excellent argument for freedom of / unregulated speech online and in other spaces.
I know, however, that in practice people think the bad thing, say it, and then find a million voices to echo it; instead of learning, they become radicalised.
But your post outlines the idealistic view.
A tale as old as time. The old analyst-developer with cobwebs behind his ears gets sacked because of the CIO’s shiny new materia, only to be rehired within the quarter at a consulting fee several times his previous salary.
That’s not what I said.
What I typed there is not my opinion.
This is the technical, industry distinction between AI and things like ML and neural networks.
“Mimicking living things” is obviously not exclusive to AI in general. It is what distinguishes AI from ML, for instance.
Technically speaking AI is any effort on the part of machines to mimic living things. So computer vision for instance. This is distinct from ML and Deep Learning which use historical statistical data to train on and then forecast or simulate.
LLMs (the models “hallucinate” is most often used in conjunction with) are not Deep Learning, normie.
Tangentially related: the more people seem to support “AI all the things”, the less it turns out they understand it.
I work in the field. I had to explain to a CIO that his beloved “ChatPPT” was just autocomplete. He became enraged. We implemented a 2015 chatbot instead, and he got his bonus.
We have reached the winter of my discontent. Modern life is rubbish.
Only if we can listen to a Judas Priest record backwards.
Data modeling and warehousing would like a little chat with you.
Imagine being an early adopter and every time you close your eyes it’s the same fucking ad where Kanye chants: “Elon macht frei” for hours.
Is this real though? Does ChatGPT just literally take whole snippets of texts like that? I thought it used some aggregate or probability based on the whole corpus of text it was trained on.
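It is probability based, yes. A toy sketch of what that means in practice (a bigram counter, nothing like the actual transformer ChatGPT uses; the corpus and names here are made up for illustration): the model only ever picks the next token from a learned distribution, but when the training data makes one continuation overwhelmingly likely, generation can still reproduce a memorized phrase verbatim.

```python
import random
from collections import defaultdict, Counter

# Tiny made-up "corpus"; real models train on billions of tokens.
corpus = "the cat sat on the mat . the cat sat on the hat .".split()

# Count how often each token follows each other token.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token(prev, rng):
    # Sample the next token in proportion to how often it followed `prev`.
    tokens, freqs = zip(*counts[prev].items())
    return rng.choices(tokens, weights=freqs)[0]

rng = random.Random(0)
out = ["the"]
for _ in range(5):
    out.append(next_token(out[-1], rng))
print(" ".join(out))  # every adjacent pair occurred somewhere in the corpus
```

Note that "cat sat on" always comes out intact because "cat" is only ever followed by "sat" in this corpus; snippets that were common or unique in the training data can surface whole, even though nothing is looked up or copied at generation time.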