I promise this question is asked in good faith. I do not currently see the point of generative AI and I want to understand why there’s hype. There are ethical concerns, but we’ll ignore ethics for this question.

In creative works like writing or art, it feels soulless and poor quality. In programming at best it’s a shortcut to avoid deeper learning, at worst it spits out garbage code that you spend more time debugging than if you had just written it by yourself.

When I see AI ads directed towards individuals, the selling point is convenience. But I would feel robbed of the human experience by using AI in place of human interaction.

So what’s the point of it all?

  • saigot@lemmy.ca · 9 points · 18 days ago

    Here are some uses:

    • Skin-cancer diagnosis with LLMs has a high success rate at low cost. This was starting to exist with older AI models, but LLMs improve the success rate. source
    • VLC recently unveiled a feature that uses AI to generate subtitles. I haven’t used it, but if it delivers, it’s pretty nice.
    • For code generation, I agree it’s more harmful than useful for generating full programs or functions, but I find it quite useful as a predictive text generator; it saves a few keystrokes. Not a game changer, but nice. It’s also pretty useful at generating test data, so long as the data is hard to create but easy (for a human) to validate; a rough sketch of that generate-then-validate loop follows below.
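
    A minimal sketch of that generate-then-validate loop, assuming the OpenAI Python client; the model name and the record schema here are illustrative, not from the comment:

    ```python
    import json
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Ask for synthetic records that are tedious to write but trivial to validate.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": "Generate 5 JSON user records with fields 'name' (string), "
                       "'age' (integer 18-99), and 'email'. Reply with a JSON array only.",
        }],
    )

    records = json.loads(response.choices[0].message.content)

    # The easy validation step: reject anything malformed before using it in tests.
    for r in records:
        assert isinstance(r["name"], str)
        assert 18 <= r["age"] <= 99
        assert "@" in r["email"]
    ```
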
  • Gravitwell@lemmy.ml · 9 points · 18 days ago

    I have a friend with numerous mental issues who texts long, barely comprehensible messages to update me on how they are doing: no paragraphs, stream-of-consciousness style. So I take those walls of text and tell ChatGPT to summarize them for me, and they go from a mess of words into an update I can actually understand and respond to.

    Another use for me is getting quick access to answers I’d previously have had to spend way more time reading and filtering across multiple forums and Stack Exchange posts to find.

    Basically they are good at parsing information and reformatting it in a way that works better for me.

  • m-p{3}@lemmy.ca · 14 points · 19 days ago

    I treat it as a newish employee. I don’t let it do important tasks without supervision, but it does help build something rough that I can work on.

  • Pup Biru@aussie.zone · 4 points · edited · 18 days ago

    i’ve written bots that filter things for me, or change something to machine-readable formats

    the most successful thing i’ve done is a bot that parses a web page, figures out the date/time in a standard format, geocodes the location if one is listed in the description, and fills a few other fields to make an ical for pretty much any page

    i think the important thing is that gen ai is good at low-risk tasks that reduce but don’t eliminate human effort - changing a job from doing a bunch of data entry into skimming for correctness
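
    A rough sketch of that pipeline; the field names are assumptions, the LLM step is stubbed out, and geocoding uses geopy’s Nominatim:

    ```python
    from datetime import datetime
    from geopy.geocoders import Nominatim

    def extract_event_fields(page_text: str) -> dict:
        # In the real bot this is an LLM call that pulls a title, an ISO
        # date/time, and a location out of raw page text. Hardcoded here
        # so the sketch runs standalone.
        return {
            "title": "Example gig",
            "start": "2025-03-01T19:30:00",
            "location": "Sydney Opera House",
        }

    def to_ical(fields: dict) -> str:
        start = datetime.fromisoformat(fields["start"])
        lines = [
            "BEGIN:VCALENDAR",
            "VERSION:2.0",
            "BEGIN:VEVENT",
            f"SUMMARY:{fields['title']}",
            f"DTSTART:{start.strftime('%Y%m%dT%H%M%S')}",
        ]
        if fields.get("location"):
            lines.append(f"LOCATION:{fields['location']}")
            # geocode the listed location, as the comment describes
            geo = Nominatim(user_agent="event-bot").geocode(fields["location"])
            if geo:
                lines.append(f"GEO:{geo.latitude};{geo.longitude}")
        lines += ["END:VEVENT", "END:VCALENDAR"]
        return "\n".join(lines)

    print(to_ical(extract_event_fields("...raw page text...")))
    ```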

  • peppers_ghost@lemmy.ml · 7 points · 19 days ago

    “at worst it spits out garbage code that you spend more time debugging than if you had just written it by yourself.”

    I’ve not experienced this. Debugging for me is always faster than writing something entirely from scratch.

    • Archr@lemmy.world · 3 points · 19 days ago

      100% agree with this.

      It is so much faster to give the AI the API/library documentation than to figure out how that API works myself. Is it a perfect, drop-in, finished piece of code? No. But that is not what I ask the AI for. I ask it for a simple example, which I can then take, modify, and rework into my own code.
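
      Roughly this pattern, as a sketch; the model name and file name are placeholders, and the OpenAI Python client is an assumption:

      ```python
      from openai import OpenAI

      docs = open("library_docs.md").read()  # the API/library documentation to hand over

      client = OpenAI()
      reply = client.chat.completions.create(
          model="gpt-4o-mini",  # placeholder model name
          messages=[
              {"role": "system", "content": "Answer using this documentation:\n" + docs},
              {"role": "user", "content": "Show a minimal example of opening a "
                                          "connection and sending one request."},
          ],
      )
      print(reply.choices[0].message.content)  # a starting point to modify and rework
      ```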

  • bobbyfiend@lemmy.ml · 5 points · 18 days ago

    I have a very good friend who is brilliant and has slogged away slowly shifting the sometimes-shitty politics of a swing state’s drug, alcohol, and youth-corrections policies from within. She is amazing, but she has a reading disorder and is a bit neuroatypical. Social niceties and honest emails that don’t piss off her bosses or colleagues are difficult for her. She jumped on ChatGPT to write her emails as soon as it was available, and has never looked back. It’s been a complete game changer for her. She no longer spends hours every week trying to craft emails that strike that just-right balance. She uses that time to do her job now.

    • corsicanguppy@lemmy.ca · 2 points (1 down) · 18 days ago

      I hope it pluralizes ‘email’ like it does ‘traffic’ and not like ‘failure’.

  • w3dd1e@lemm.ee · 4 points · 18 days ago

    I need help getting started. I’m not an idea person. I can make anything you come up with but I can’t come up with the ideas on my own.

    I’ve used it for an outline and then I rewrite it with my input.

    Also, I used it to generate a basic UI for a project once. I struggle with the design part of programming so I generated a UI and then drew over the top of the images to make what I wanted.

    I tried to use Figma but when you’re staring at a blank canvas it doesn’t feel any better.

    I don’t think these things are worth the cost of AI (ethically, financially, socially, environmentally, etc.). Theoretically I could partner with someone who is good at that stuff, or practice till I felt better about it.

  • CaptainBlagbird@lemmy.world · 7 points · edited · 19 days ago

    I generate D&D characters and NPCs with it, but that’s not really a strong argument.

    For programming, though, it’s quite handy. Basically a smarter code completion that takes the already-written stuff into account. From machine code through assembly up to higher-level languages, I think it’s a logical next step to be able to tell the computer, in human language, what you actually are trying to achieve. That doesn’t mean it takes over while the programmer switches off their brain, of course, but it has already saved me quite some time.

  • whome@discuss.tchncs.de · 8 points · 19 days ago

    I use it to sort days and create tables, which is really helpful. And the other thing really helped me, and I would never have tried to figure it out on my own:

    I work with the open-source GIS software QGIS. I’m not a cartographer or a programmer but a designer. I had a world map and wanted to create GeoJSON files for each country. So I asked ChatGPT if there was a way to automate this within QGIS, and sure enough it recommended creating a Python script that could run in the software to do just that; after a few tweaks, it worked. That saved me a lot of time and annoyance. Would it be good to know Python? Sure, but I know my brain has a really hard time with code and scripts. It never clicked and likely never will. So I’m very happy with this use case. Creative work could be supported in a drafting phase, but I’m not so sure about that.
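
    A sketch of the kind of script ChatGPT tends to produce for this, run from the QGIS Python console; the layer setup and the "NAME" attribute field are assumptions:

    ```python
    # Exports each country feature of the active layer to its own GeoJSON file.
    import os
    from qgis.core import QgsVectorFileWriter
    from qgis.utils import iface

    layer = iface.activeLayer()  # the world-map layer, selected in QGIS
    out_dir = os.path.expanduser("~/country_geojson")  # hypothetical output folder
    os.makedirs(out_dir, exist_ok=True)

    for feature in layer.getFeatures():
        name = feature["NAME"]  # attribute holding the country name (assumed)
        layer.selectByIds([feature.id()])
        QgsVectorFileWriter.writeAsVectorFormat(
            layer,
            os.path.join(out_dir, f"{name}.geojson"),
            "utf-8",
            layer.crs(),
            "GeoJSON",
            onlySelected=True,
        )
    layer.removeSelection()
    ```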

  • solomon42069@lemmy.world · 8 points · edited · 19 days ago

    There is a legitimate use case in art for drawing on generative AI for concepts, and as a stopgap for smaller tasks that don’t need to be perfect. While art is art, not every designer out there is putting work out for a gallery - sometimes it’s just an ad for a burger.

    However, as the industry has had time to react, I think the business reality of generative AI currently puts it out of reach as a useful tool for artists. Profit-hungry people in charge will always look to cut corners and will lack the nuanced sense of context a worker would have when deciding whether or not to use AI in the work.

    But you could make this argument about any tool, given how fucked up capitalism is. So I guess that’s my 2c - generative AI is a promising tool, but capitalism prevents it from being truly useful anytime soon.

  • CanadaPlus@lemmy.sdf.org · 3 points (1 down) · edited · 15 days ago

    In creative works like writing or art, it feels soulless and poor quality. In programming at best it’s a shortcut to avoid deeper learning, at worst it spits out garbage code that you spend more time debugging than if you had just written it by yourself.

    I’d actually challenge both of these. The property of “soullessness” is very subjective, and AI art has won blind competitions. On programming, it’s been shown empirically to make developers faster by half again, even with the intrinsic requirement for debugging.

    It’s good at generating things. There are some things we want to generate. Whether we actually should, like you said, is another issue, and one that doesn’t impact anyone’s bottom line directly.

    • nairui@lemmy.world · 1 point · 15 days ago

      Winning a competition doesn’t really speak to the purpose of art, which is communication. AI has nothing to communicate; it approximates a mishmash of its dataset to mimic, with great success, the things it has seen, but it is ultimately meaningless in intention. It would be a disservice to muddy the art and writing out in the world, created by and for human beings with a desire to communicate, with algorithmic outputs that have no discernible purpose.

      • CanadaPlus@lemmy.sdf.org · 1 point · 15 days ago

        I feel like the indistinguishability implied by this undercuts the communicative properties of the human art, no? I suppose AI might not be able to make a coherent Banksy, but not every artist is Banksy.

        If you can’t tell whether something was made by Unstable or by Rutkowski, isn’t it fair to say that either neither work has soul (or a message), or both must?

        • nairui@lemmy.world · 1 point · 15 days ago

          That is only if one assumes the purpose of art is its effect on the viewer, which is only one purpose. Think of your favorite work of art, fiction, or music: did it make you feel connected to something, to another person? Imagine a lonely individual who connected with the loneliness in a musical artist’s lyrics; what would be the purpose of that if the artist turned out to be an algorithm?

          Banksy, maybe Rutkowski, and other artists have created a distinct language (in this case visual) that an algorithm can only replicate. Consider the fact that generative AI cannot successfully generate an image of a full glass of wine, since such images are not commonly photographed.

          I do think the technology itself is interesting for those who use it in original works that are intended to be about algorithms themselves, like those surreal videos; I find those really interesting. But in the case of passing off algorithmic output as original art, like the guy who won a competition with an AI-generated image, or when Spotify creates algorithmically generated music, to me that’s not art.

          • CanadaPlus@lemmy.sdf.org · 1 point · edited · 15 days ago

            That reminds me of the Matrix - “You know, I know this steak doesn’t exist. I know that when I put it in my mouth, the Matrix is telling my brain that it is juicy and delicious. After nine years, you know what I realise? Ignorance is bliss”

            Okay, so does it matter if there’s no actual human you’re connecting to, if the connection seems just as real? We’re deep into philosophy there, and I can’t reasonably expect an answer.

            If that’s the whole issue, though, I can be pretty confident it won’t change the commercial realities on the ground. The artist’s studio is then destined to be something that exists only on product labels, along with scenic mixed-animal barnyards. Cypher was unusually direct about it, but comforting lies never went out of style.

            That’s kind of how I’ve interpreted OP’s original question here. You could say that’s not a “legitimate” use even if inevitable, I guess, but I basically doubt anyone wants to hear my internet rando opinion on the matter, since that’s all it would be.

            Consider the fact that generative AI cannot successfully generate an image of a full glass of wine, since such images are not commonly photographed.

            Okay, I have to try this. @[email protected] draw for me a glass of wine.

  • GaMEChld@lemmy.world · 3 points · 18 days ago

    I like using it to help get the ball rolling on stuff and organizing my thoughts. Then I do the finer tweaking on my own. Basically, I use a sliding scale: the longer it takes me to refine an AI output for smaller and smaller improvements, the sooner I switch to doing it manually.

  • Affidavit@lemm.ee · 8 points (2 down) · 19 days ago

    I’d say there are probably as many genuine use-cases for AI as there are people in denial that AI has genuine use-cases.

    Off the top of my head:

    • Text editing. Write something (e.g. e-mails, websites, novels, even code) and have an LLM rewrite it to suit a specific tone and identify errors.
    • Creative art. You claim generative AI art is soulless and poor quality; to me, that indicates a lack of familiarity with what generative AI is capable of. There are tools to create entire songs from scratch, replace one artist’s voice with another’s, remove unwanted background noise from songs, improve the quality of old recordings, separate or add vocal tracks, turn 2D models into 3D models, create images from text, convert simple images into complex ones, fill in missing details, upscale and colourise images, and separate foregrounds from backgrounds.
    • Note taking and summarisation (e.g. summarising meeting minutes or summarising a conversation or events that occur).
    • Video games. Imagine the replay value of a video game if, every time you play, there are different quests, maps, NPCs, unexpected twists, and different puzzles. The technology isn’t developed enough for this at the moment, but I think it’s something we will see in the coming years. Some games (Skyrim and Fallout 4 come to mind) have a mod that gives each NPC AI-generated dialogue that takes into account the NPC’s personality and history.
    • Real-time assistance for a variety of tasks. Consider a call centre environment as one example: a model can be optimised to evaluate calls based on language, empathy, and correctness of information. A model could be set up with a call centre’s knowledge base, listen to the call, locate information based on a caller’s enquiry, and tell an agent where that information is located (or even suggest what to say, though this is currently prone to hallucination); a toy version of that retrieval step is sketched below.
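
    For the knowledge-base lookup in the last item, a toy retrieval sketch; TF-IDF stands in for the embedding model a real system would use, and the articles are made up:

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    kb_articles = [
        "How to reset a customer password",
        "Refund policy for damaged goods",
        "Escalation process for billing disputes",
    ]  # stand-ins for a real knowledge base

    vectorizer = TfidfVectorizer()
    kb_matrix = vectorizer.fit_transform(kb_articles)

    def locate(enquiry: str) -> str:
        """Return the KB article most similar to the caller's enquiry."""
        scores = cosine_similarity(vectorizer.transform([enquiry]), kb_matrix)
        return kb_articles[scores.argmax()]

    print(locate("the customer wants their money back for a broken item"))
    # -> "Refund policy for damaged goods"
    ```
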
  • fmstrat@lemmy.nowsci.com · 4 points · 19 days ago

    Fake frames. Nvidia double benefits.

    Note: Tis a joke, personally I think DLSS frame generation is cool, as every frame is “fake” anyway.

  • Vanth@reddthat.com · 6 points · 19 days ago

    Idea generation.

    E.g., I asked an LLM client for interactive lessons for teaching 4th graders about aerodynamics, especially related to how birds fly. It came back with 98% amazing suggestions that I had to modify only slightly.

    A work colleague asked an LLM client for wedding vow ideas to break through writer’s block. The vows they ended up using were 100% theirs, but the AI spit out something on paper to get them started.

    • Mr_Blott@feddit.uk · 3 points (2 down) · 19 days ago

      Those are just ideas that were previously “generated” by humans though, that the LLM learned

      • TheRealKuni@lemmy.world · 2 points · 19 days ago

        Those are just ideas that were previously “generated” by humans though, that the LLM learned

        That’s not how modern generative AI works. It isn’t sifting through its training dataset to find something that matches your query like some kind of search engine. It’s taking your prompt and passing it through its massive statistical model to come to a result that meets your demand.

        • Iunnrais@lemm.ee · 2 points · 18 days ago

          I feel like “passing it through a statistical model”, while absolutely true on a technical implementation level, doesn’t get to the heart of what it is doing in a way people understand. It uses the math terms, potentially deliberately, to obfuscate and make it seem simpler than it is. It’s like reducing it to “it just predicts the next word”. Technically true, but I could implement a black-box next-word predictor by sticking a real person in the black box and asking them to predict the next word, and it’d still meet that description.

          The statistical model seems to be building some sort of conceptual grid of word relationships that approximates something very much like actually understanding what the words mean and how they are used semantically, with some random noise thrown into the mix at just the right amount to generate surprises that look very much like creativity.
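
          For contrast, here is the kind of trivial “next word predictor” that the phrase also covers, as a toy sketch over a made-up corpus:

          ```python
          from collections import Counter, defaultdict

          # Toy bigram "next word predictor": it technically meets the
          # "just predicts the next word" description, yet is nothing like
          # an LLM's learned model of word relationships.
          corpus = "the cat sat on the mat the cat ate".split()
          model = defaultdict(Counter)
          for a, b in zip(corpus, corpus[1:]):
              model[a][b] += 1

          def predict(word: str):
              following = model[word]
              return max(following, key=following.get) if following else None

          print(predict("the"))  # -> "cat"
          ```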

          Decades before LLMs were a thing, the Zompist wrote a nice essay on the Chinese room thought experiment that I think provides some useful conceptual models: http://zompist.com/searle.html

          Searle’s own proposed rule (“Take a squiggle-squiggle sign from basket number one…”) depends for its effectiveness on xenophobia. Apparently computers are as baffled at Chinese characters as most Westerners are; the implication is that all they can do is shuffle them around as wholes, or put them in boxes, or replace one with another, or at best chop them up into smaller squiggles. But pointers change everything. Shouldn’t Searle’s confidence be shaken if he encountered this rule?

          If you see 马, write down horse.

          If the man in the CR encountered enough such rules, could it really be maintained that he didn’t understand any Chinese?

          Now, this particular rule still is, in a sense, “symbol manipulation”; it’s exchanging a Chinese symbol for an English one. But it suggests the power of pointers, which allow the computer to switch levels. It can move from analyzing Chinese brushstrokes to analyzing English words… or to anything else the programmer specifies: a manual on horse training, perhaps.

          Searle is arguing from a false picture of what computers do. Computers aren’t restricted to turning 马 into “horse”; they can also relate “horse” to pictures of horses, or a database of facts about horses, or code to allow a robot to ride a horse. We may or may not be willing to describe this as semantics, but it sure as hell isn’t “syntax”.