Over half of all tech industry workers view AI as overrated

  • eestileib@sh.itjust.works · 8 months ago

    Over half of tech industry workers have seen the “great demo -> overhyped bullshit” cycle before.

      • VintageTech@sh.itjust.works · 8 months ago

        Once we’re able to synergize the increased throughput of our knowledge capacity we’re likely to exceed shareholder expectation and increase returns company wide so employee defecation won’t be throttled by our ability to process sanity.

        • Hackerman_uwu@lemmy.world · 8 months ago

          Sounds like we need to align on triple underscoring the double-bottom line for all stakeholders. Let’s hammer a steak in the ground here and craft a narrative that drives contingency through the process space for F24 while synthesising synergy from a cloudshaping standpoint in a parallel tranche. This journey is really all about the art of the possible after all.

    • SineSwiper@discuss.tchncs.de · 8 months ago

      NoSQL, blockchain, crypto, metaverse, just to name a few recent examples.

      AI is overhyped, but so far it’s more useful than any of those other examples.

  • ParsnipWitch@feddit.de · 8 months ago

    It is overrated. At least when people look at AI as some sort of brain crutch that excuses them from learning anything.

    My boss now believes he can “program too” because he lets ChatGPT write scripts for him that, more often than not, are poorly written BS.

    He also pastes chunks of our code into ChatGPT whenever we report bugs or aren’t finished with everything in five minutes, as some kind of “gotcha” moment, ignoring that the solutions he then provides don’t work.

    Too many people treat LLMs as authorities, which they just aren’t…

  • milkjug@lemmy.wildfyre.dev · 8 months ago

    I have a doctorate in computer engineering, and yeah it’s overhyped to the moon.

    I’m oversimplifying, and someone will ackchyually me, but once you understand the core mechanics the magic is somewhat diminished. It’s linear algebra and matrices all the way down.

    We got really good at parallelizing matrix operations and storing large matrices and the end result is essentially “AI”.
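To make the “matrices all the way down” point concrete, here is a toy NumPy sketch (all sizes and variable names invented for illustration, far smaller than any real model) of a neural-network layer reduced to its core: two matrix multiplications and a cheap nonlinearity.

```python
import numpy as np

# A feed-forward neural layer is, at its core, matrix multiplication plus a
# simple nonlinearity. Toy dimensions here; real models use thousands.
rng = np.random.default_rng(0)
x = rng.standard_normal((1, 16))    # one input embedding vector
W1 = rng.standard_normal((16, 64))  # "learned" weight matrix (random here)
W2 = rng.standard_normal((64, 16))  # second weight matrix

hidden = np.maximum(0, x @ W1)      # matrix multiply + ReLU nonlinearity
out = hidden @ W2                   # another matrix multiply
print(out.shape)                    # (1, 16)
```

Stack a few hundred of these (plus attention, which is also matrix products) and parallelize the multiplications across GPUs, and you have the shape of a modern LLM.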

  • Ibaudia@lemmy.world · 8 months ago

    There is a lot of marketing about how it’s going to disrupt every possible industry, but I don’t think that’s reasonable. Generative AI has uses, but I’m not totally convinced it’s going to be this insane omni-tool just yet.

    • Bri Guy @sopuli.xyz · 8 months ago

      Whenever we have new technology, there will always be folks flinging shit at the wall to see what sticks. AI is no exception, and you’re most likely correct that not every problem needs an AI-powered solution.

  • thorbot@lemmy.world · 8 months ago

    That’s because it is overrated, and the people in the tech industry are actually qualified to make that determination. It’s a glorified assistant, nothing more. We’ve had these for years; they’re just getting a little bit better. It’s not gonna replace a network stack admin or a programmer anytime soon.

  • shirro@aussie.zone · 8 months ago

    Many areas of machine learning, particularly LLMs, are making impressive progress, but the usual Y Combinator techbro types are overhyping things again. Same as every other bubble, including the original Internet one, the crypto scams, and half the bullshit companies they run that add fuck-all value to the world.

    The cult of bullshit around AI is a means to fleece investors. Seen the same bullshit too many times. Machine learning is going to have a huge impact on the world, same as the Internet did, but it isn’t going to happen overnight. The only certain thing that will happen in the short term is that wealth will be transferred from our pockets to theirs. Fuck them all.

    I skip most AI/ChatGPT spam in social media with the same ruthlessness I skipped NFTs. It isn’t that ML doesn’t have huge potential but most publicity about it is clearly aimed at pumping up the market rather than being truly informative about the technology.

  • Echo Dot@feddit.uk · 8 months ago

    That is a terrible graph. There’s no y-axis, no indication of the scale, and no information about how many people they asked, who those people were, or which tech companies they worked at.

    Just over 23% believe it is rated fairly, while a quarter of respondents were presumably proponents of the tech as they said it was underrated. However, 51.6% of people said it was overrated.

    That sentence is a fantastic demonstration of how bad this article is. The article says that a quarter of respondents call the technology underrated, but on the graph it looks more like half to me. Not that it matters, because, as I said, the scale is useless. They also quote a figure of 51.6%, and I don’t know how they came up with that number, because we don’t know the total number of respondents, just that it was more than 1,500. You can’t calculate a percentage without knowing the total.

    • SCB@lemmy.world · 8 months ago

      It’s just a clickbait article for people who know next to nothing about AI but have personal investment in calling new technology bad.

  • IchNichtenLichten@lemmy.world · 8 months ago

    On one hand there’s the emergence of the best chat bot we’ve ever created. Neat, I guess.

    On the other hand, there’s VC capital scurrying around for the next big thing to invest in, lazy journalism looking for a source of new content to write about, talentless middle management looking for something to latch on to so they can justify their existence through cost cutting, and FOMO from people who don’t understand that it’s just a fancy chat bot.

    • PlexSheep@feddit.de · 8 months ago

      I agree, but you make it sound like a fancy chat bot can’t do amazing things. I don’t use any OpenAI products for moral reasons, but LLMs in general are amazing tools, and good entertainment.

      • IchNichtenLichten@lemmy.world · 8 months ago

        They’re impressive, no doubt, but the jury is still out on how useful they actually are, given their ability to be confidently incorrect about all kinds of things.

  • Blue and Orange@lemm.ee · 8 months ago

    The best use I’ve found for AI is getting it to write me covering letters for job applications. Even then I still need to make a few small adjustments. But it saves a bit of time and typing effort.

    Other than that, I just have fun with it making stupid images and funny stories based on inside jokes.

    • Humanius@lemmy.world · 8 months ago

      As someone who works in the tech industry and has used AI tools (or more accurately machine learning models), I do think it is overrated.
      That doesn’t mean that I don’t think it can be useful, just that it’s not going to live up to the immense hype surrounding it right now.

      • bassomitron@lemmy.world · 8 months ago

        I work in tech and have used the tools. I am mostly neutral on its prospects. I think it’s somewhat overrated right now for many purposes, but just seeing how rapidly things are progressing gives me pause about outright dismissing its potential for immense utility.

        We have to consider that few saw ChatGPT coming so soon, and even fewer predicted ahead of time that it would work as well as it does. Now that Microsoft is fully bankrolling its development (providing its newly acquired former OpenAI team virtually unlimited resources and bleeding-edge hardware custom-built for its models), I really have no idea how far and how quickly they’ll progress their AGI tech. For all we know right now, in 5+ years LLMs and their ilk could be heralding another tech revolution.

        • dustyData@lemmy.world · 8 months ago

          They probably won’t advance much, because the field currently faces two opposite but equally difficult problems. On the one hand, AI still hasn’t achieved sensor integration: building an ontologically sound world model that includes more than one sensor data stream at a time. Right now a model can be built on one sensor, or one multidimensional array of sensors, but it can’t mediate between models. So you can’t have, say, one single model that can hear, see light, and see radar at the same time. Animal intelligence can self-correct its world model when one sensor says A but another sensor disagrees and says B; current models just hallucinate and go off the deep end catastrophically.

          On the opposite end, if we want them to be products (as seems to be MS’s and Altman’s fixation), then they cannot be black boxes, at least not for the implementers. Only in this past year have there been actual efforts to really see WTF is going on inside the models after they’ve been trained, and how to interpret and manipulate that inner world to achieve effective and intentional results. Even then, progress is difficult because it’s all abstract mathematics, and we haven’t found a translation layer to parse the model’s internal world into something humans can easily interpret.

    • Eccitaze@yiffit.net · 8 months ago

      I’ve had an AI bot trained on our company’s knowledge base literally make up links to nonexistent articles out of whole cloth. It’s so useless I just stopped bothering to ask it anything; I save time looking things up myself.

  • Dewded@lemmy.world · 8 months ago

    I work in an AI company. 99% of our tech relies on tried-and-true standard computer vision solutions instead of machine-learning-based ones. ML is just that unreliable when production use requires pixel precision.

    We might throw a gradient descent here or there, but not for any learning ops.
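For anyone unfamiliar with the term: gradient descent is just iterative error minimization. A purely illustrative toy (not this company’s actual pipeline, and deliberately tiny) that fits the single parameter of a line y = w·x by repeatedly stepping against the error gradient:

```python
# Toy gradient descent: fit y = w*x to data generated with true w = 2,
# by minimizing mean squared error.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]

w, lr = 0.0, 0.01          # initial guess and learning rate
for _ in range(500):
    # d/dw of mean((w*x - y)^2) = mean(2 * (w*x - y) * x)
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad          # step downhill along the gradient
print(round(w, 3))          # converges to 2.0
```

The same update rule, scaled up to billions of parameters and run on GPUs, is how neural networks are trained; used on its own, as described above, it is just a generic optimizer rather than a “learning op.”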