• 0 Posts
  • 152 Comments
Joined 1 year ago
Cake day: July 3rd, 2023

  • krashmo@lemmy.world to memes@lemmy.world · I swear it was real!
    24 days ago

    You’re certainly entitled to your own opinion on the artistic merit of that kind of comedy, but I don’t think there’s any denying that’s what it is. Besides, why would you look to any artist, especially a comedian, for balanced political discussion? That’s not what they do. Even the ones who talk about politics are doing it more for the laughs than for nuanced discussion. If any nuance comes out of their jokes, that’s a side benefit, not the primary purpose.


  • What sort of respect are you referring to when you say stuff like this? It feels like people these days judge everyone by the same standards they’d hold a politician to, and that seems really odd to me. Comedians are supposed to say edgy, stupid shit. That’s quite literally their job.







  • My kid regularly gets a little bit of shit in his underwear, and I’ll catch him sneakily changing his pants. When I ask him what happened, he always says he was having too much fun to stop and go to the bathroom. If a kid was sick and excited about something, like the Hornets’ mascot showing up to school, I could easily see something like this happening.



  • Current gen AI is pretty mediocre. It’s not much more than the bastard child of a search engine and every voice assistant that has been around for the last ten years. It has the potential to be a stepping stone to fantastic future tech, but that’s been true of tons of different technologies for basically as long as we’ve been inventing things.

    AI is not good enough to replace the majority of workers yet. It summarizes information pretty well and can be helpful with drafting any sort of document, but so could Clippy. When it doesn’t know something, it can lie confidently. “Lie” isn’t really the right word, but I’ll come back to that concept in a second. Incorrect information is frustrating in most cases, but it can be deadly when presented by a source that is viewed as trustworthy, and what could be more trustworthy than an AI with access to the collective knowledge of mankind? Well, unfortunately for us, AI as we know it isn’t really intelligent, and the datasets it’s trained on also contain the collective stupidity of mankind.

    That brings us back to the concept of lying and what I view as the fundamental flaw of current AI: any sort of data interpretation can only be as good as the data it describes. ChatGPT isn’t lying to you when it says you can put glue on your cheese pizza; it’s just pointing out that someone who said that got a lot of attention. Unfortunately, it leaves out all the context that would have told you the pizza wouldn’t be fit to eat, and it presents popularity as if that were the only thing that defines the best answer. There’s so much more that needs to be taken into account, so much unconscious human experience being drawn on when an actual human looks at something and tries to categorize or describe it. All of that necessary context is really difficult to impart to a computer, and right now we’re not very good at that essential piece of the puzzle.
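    The popularity-versus-correctness failure can be sketched in a few lines of Python. This is a toy illustration, not how any real model works; the answers, vote counts, and function name are all made up:

```python
# Toy sketch: ranking answers purely by popularity, with no notion of truth.
# All data here is hypothetical and made up for illustration.
answers = [
    {"text": "Add glue to the sauce so the cheese sticks", "upvotes": 5000, "correct": False},
    {"text": "Let the pizza rest a few minutes before slicing", "upvotes": 120, "correct": True},
]

def most_popular(answers):
    # Picks the most-upvoted answer; correctness never enters the ranking.
    return max(answers, key=lambda a: a["upvotes"])

best = most_popular(answers)
print(best["text"])  # the viral glue answer wins, even though it's wrong
```

    Nothing in the ranking step ever looks at the `correct` field, which is roughly the complaint: attention is being used as a proxy for truth.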

    If we could assume that all datasets analyzed by AI were free from human error, AI would be taking over the world right now. However, that’s not the world we live in. All data has errors. Some are easy to spot, but many are not. AI firms have companies salivating at the idea of easy data manipulation in one form or another. The firms aren’t worried about errors in the data because they view that as someone else’s problem, and the companies all think their data is good enough that it won’t be an issue. Both are wrong.

    That’s exactly why you hear a lot of talk about AI right now and not much practical application beyond replacing customer service reps, especially in the business world. Companies are finding out that years of bad practices have left them with datasets full of errors. Can they get AI to correct those errors? In some cases yes, in others no. Either way, the missing piece preventing a full-scale AI takeover is all the human background context necessary for relevant data interpretation. If we find a way to teach that to an AI, the world will look vastly different than it does today, but we’re not there yet.