• nectar@lemmy.world · 1 day ago

    I generally agree (especially with the current critique of using up water/power just for one image)

    But I can’t get behind “this tool will make people who don’t use it feel bad”. The same arguments were levied against Photoshop and now it’s a tool in the arsenal. The same arguments were levied against the camera. And I could see the same argument against the printing press (save those poor monks doing calligraphy)

    The goal of “everything shall be AI” is fucked and clearly wrong. That doesn’t mean there isn’t any use for it. People who wanna crank out slop will give up when there’s no money in it and it doesn’t grant them attention.

    And I say this as someone who despises how every website has an AI chatbot pop up when I visit, and how every search engine offloads actually visiting and reading pages to AI summaries.

    • TotallynotJessica@lemmy.blahaj.zone · 1 day ago

      This is where I’m coming from. Generative AI is pretty cool and useful, but it has severe limitations that most people don’t comprehend. Machine learning can automate countless time-consuming tasks. This is especially true in the entertainment industry, where it’s just another tool for production to use.

      What businesses fail to understand is that it cannot perform deductive tasks without inevitably making errors. It can only give probable outputs, not outputs that must be correct given the input. It goes against the very assumptions we make about computer logic, because it doesn’t work by deductive reasoning.
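
      A toy sketch of the difference I mean (completely made-up numbers, nothing like how a real model works internally, just the shape of the problem):

      ```python
      import random

      # Deductive computing: the output is guaranteed by the input.
      def add(a: int, b: int) -> int:
          return a + b  # correct every single time, by construction

      # Toy stand-in for a generative model: it only knows how *likely*
      # each answer looks, so it samples one. The candidates and weights
      # below are invented purely for illustration.
      def generate_answer(prompt: str) -> str:
          candidates = {"4": 0.97, "5": 0.02, "22": 0.01}
          answers, weights = zip(*candidates.items())
          return random.choices(answers, weights=weights)[0]

      print(add(2, 2))                                         # 4, always
      print([generate_answer("2 + 2 = ?") for _ in range(20)])
      # Mostly "4", but occasionally not -- probable, never guaranteed.
      ```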

      Generative AI works by emulating biological intelligence, borrowing principles from neuroscience to solve problems quickly and efficiently. However, that gives AI weaknesses similar to our own minds: it imagines things and bakes in bias. It can never give the accurate summaries Google hopes it can, as it will only ever tell us what it thinks we want to hear. Companies keep misusing it in ways that either waste everyone’s time or do serious harm.
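
      And “baking in bias” isn’t mysterious; mechanically it can be as dumb as this (a tiny invented dataset, purely to show the mechanism):

      ```python
      from collections import Counter

      # Tiny made-up "training set": by chance, most mentions of food trucks
      # in this sample happen to be negative.
      reviews = [
          ("food truck", "bad"), ("food truck", "bad"), ("food truck", "good"),
          ("bistro", "good"), ("bistro", "good"), ("bistro", "bad"),
      ]
      counts = Counter(reviews)

      def predicted_sentiment(venue: str) -> str:
          # The "model" just echoes the statistics it was fed.
          options = {label: n for (v, label), n in counts.items() if v == venue}
          return max(options, key=options.get)

      print(predicted_sentiment("food truck"))
      # "bad" -- a quirk of the sample, not a truth about food trucks
      ```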

      • masterspace@lemmy.ca · 1 day ago

        I’m sorry, but if your argument is that “AI is doomed because current LLMs are only good at fuzzy, probabilistic outcomes,” then you do not understand current AI, computer science, or why computer scientists are impressed by modern AI.

        Discrete, concrete logic is what computers have always been good at. That part is easy. What has been difficult is getting computers to handle fuzzy, pattern-matching, probabilistic problems. The fact that neural networks are good at those is precisely what has computer scientists excited about AI.
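
        Rough illustration of that contrast (the fuzzy side below uses plain old edit distance as a crude stand-in, not a neural net; the point is that “close enough” has no crisp rule, which is exactly the territory neural nets finally cracked):

        ```python
        # The "easy" kind of problem: discrete, exact, solved since forever.
        def is_exact_match(a: str, b: str) -> bool:
            return a == b

        # The "hard" kind: fuzzy similarity. Classic Levenshtein edit distance,
        # shown only as a crude classical attempt at "how close are these?"
        def edit_distance(a: str, b: str) -> int:
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, 1):
                curr = [i]
                for j, cb in enumerate(b, 1):
                    curr.append(min(prev[j] + 1,                 # deletion
                                    curr[j - 1] + 1,             # insertion
                                    prev[j - 1] + (ca != cb)))   # substitution
                prev = curr
            return prev[-1]

        print(is_exact_match("color", "colour"))  # False -- trivially decided
        print(edit_distance("color", "colour"))   # 1 -- "close", but how close counts as the same?
        ```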

        • TotallynotJessica@lemmy.blahaj.zone · 1 day ago

          I’m not saying it’s doomed! I literally said that it’s cool and useful. It’s a revolutionary technology in many respects, but not for everything. It cannot replace the things computers have always been good at, but business people don’t seem to realize that. They assume that it can fix anything, not understanding that it will only make certain things worse. The trade-off is counterproductive for tasks where you need consistent indexing.
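
          That “consistent indexing” point, as a toy sketch (made-up records, and the similarity score is deliberately crude; it’s only meant to show the trade-off):

          ```python
          records = {"invoice-1042": "$310.00", "invoice-1043": "$85.50"}

          # Exact indexing: same query, same answer, forever; misses are honest misses.
          print(records["invoice-1042"])    # "$310.00"
          print("invoice-1044" in records)  # False -- a clean "not found"

          # Toy similarity lookup: a crude stand-in for retrieving "the closest thing".
          def closest(query: str) -> str:
              score = lambda key: sum(a == b for a, b in zip(key, query))
              return records[max(records, key=score)]

          print(closest("invoice-1044"))
          # Confidently returns a value ("$310.00" here) for a record that
          # doesn't exist -- a plausible answer instead of a consistent index.
          ```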

          For instance, Google’s search AI turns primary sources into secondary or tertiary sources by trying to cut corners. I have zero trust in anything it tries to tell me, while all the problems it had before AI have continued to worsen. They could’ve used machine learning to better understand search queries, or diversify results to compensate for vagueness in language, or to fucking combat SEO, but they instead clog up the results with even more bullshit! It’s a war against curiosity at this point! 😫