By “good” I mean code that is written professionally and concisely (and obviously works as intended). Apart from personal interest and understanding what the machine spits out, is there any legit reason anyone should learn advanced coding techniques? Specifically from an engineering perspective?

If not, learning how to write code seems a tad trivial now.

  • Emily (she/her)@lemmy.blahaj.zone · 1 month ago

    After a certain point, learning to code (in the context of application development) becomes less about the lines of code themselves and more about structure and design. In my experience, LLMs can spit out well-formatted and reasonably functional short code snippets, with the caveat that they sometimes misunderstand you or, if you’re writing UI code, make very strange decisions (since they have no spatial/visual reasoning).

    Anyone with a year or two of practice can write mostly clean code like an LLM. But most codebases are longer than 100 lines, and your job is to structure that program and introduce patterns to make it maintainable. LLMs can’t do that; only you can (and you can’t skip learning to code to jump straight to architecture and patterns).
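A tiny sketch of that distinction, using made-up handler functions: an LLM can happily emit either version below, but choosing to restructure the flat if/elif chain into a registry that new handlers can plug into, without editing the dispatch function, is the kind of maintainability decision a person has to make.

```python
def handle_v1(event):
    # The kind of clean-but-flat snippet an LLM produces readily.
    if event["type"] == "click":
        return "clicked"
    elif event["type"] == "scroll":
        return "scrolled"
    return "ignored"

# The same behaviour restructured as a registry: new event types are
# added by registering a handler, without touching the dispatch logic.
HANDLERS = {
    "click": lambda event: "clicked",
    "scroll": lambda event: "scrolled",
}

def handle_v2(event):
    handler = HANDLERS.get(event["type"])
    return handler(event) if handler else "ignored"
```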

  • edgemaster72@lemmy.world · 1 month ago

    understanding what the machine spits out

    This is exactly why people will still need to learn to code. It might write good code, but until it can write perfect code every time, people should still know enough to check and correct the mistakes.
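As a hypothetical illustration (both functions are made up for this example): the kind of plausible-looking code an LLM might produce, with a subtle bug that only someone who knows the language is likely to catch and correct.

```python
def pad_list(items, size, fill=None):
    """Pad `items` with `fill` until it has `size` elements."""
    while len(items) < size:
        items.append(fill)  # Bug: silently mutates the caller's list.
    return items

# The correction a reviewer who knows Python would make: build a new
# list instead of mutating the argument in place.
def pad_list_fixed(items, size, fill=None):
    """Return a NEW list padded with `fill` to at least `size` elements."""
    return list(items) + [fill] * max(0, size - len(items))
```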

      • 667@lemmy.radio · 1 month ago

        I used an LLM to write some code I knew I could write, but was a little too lazy to do myself. Coding is not my trade, but I did learn Python during the pandemic. Had I not known how to code, I would not have been able to direct the LLM to make the required corrections.

        In the end, I got decent code that worked for the purpose I needed.

        • Em Adespoton@lemmy.ca · 1 month ago

          I would not trust the current batch of LLMs to write proper docstrings and comments, as the code they are trained on does not have proper docstrings and comments.

          And this means that it isn’t writing professional code.

          It’s great for quickly generating useful and testable code snippets though.
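For contrast, a sketch of what a properly documented function looks like. The function itself is invented for this example, but the docstring spells out arguments, return value, and failure behaviour — the part the comment above argues is missing from most training data.

```python
def mean(values):
    """Return the arithmetic mean of `values`.

    Args:
        values: A non-empty iterable of numbers.

    Returns:
        The sum of the values divided by their count, as a float.

    Raises:
        ValueError: If `values` is empty.
    """
    values = list(values)
    if not values:
        raise ValueError("mean() requires at least one value")
    return sum(values) / len(values)
```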

  • gravitas_deficiency@sh.itjust.works · 1 month ago

    LLMs are just computerized puppies that are really good at performing tricks for treats. They’ll still do incredibly stupid things pretty frequently.

    I’m a software engineer, and I am not at all worried about my career in the long run.

    In the short term… who fucking knows. The C-suite and MBA circlejerk seems to have decided they can fire all the engineers because wE CAn rEpLAcE tHeM WitH AI 🤡 and then the companies will have a couple absolutely catastrophic years because they got rid of all of their domain experts.

  • Rookeh@startrek.website · 1 month ago

    I’ve tried Copilot and to be honest, most of the time it’s a coin toss, even for short snippets. In one scenario it might try to autocomplete a unit test I’m writing and get it pretty much spot on, but it’s also equally likely to spit out complete garbage that won’t even compile, never mind being semantically correct.

    To have any chance of producing decent output, even for quite simple tasks, you will need to give an LLM an extremely specific prompt, detailing the precise behaviour you want and what the code should do in each scenario, including failure cases (hmm…there used to be a term for this…)

    Even then, there are no guarantees it won’t just spit out hallucinated nonsense. And for larger, enterprise scale applications? Forget it.
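The parenthetical above is gesturing at specifications. One way to make the point concrete: the “extremely specific prompt” detailing precise behaviour for each scenario, including failure cases, is exactly what an executable spec captures. Here is a sketch with a made-up `parse_port` function and tests that pin its behaviour down.

```python
import unittest

def parse_port(text):
    """The function under specification; a hypothetical example."""
    value = int(text)              # raises ValueError on non-numeric input
    if not 0 < value < 65536:
        raise ValueError(f"port out of range: {value}")
    return value

class PortSpec(unittest.TestCase):
    """The precise behaviour, failure cases included, written down first."""

    def test_valid_port(self):
        self.assertEqual(parse_port("8080"), 8080)

    def test_rejects_out_of_range(self):
        with self.assertRaises(ValueError):
            parse_port("70000")

    def test_rejects_garbage(self):
        with self.assertRaises(ValueError):
            parse_port("not-a-port")
```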

    • finestnothing@lemmy.world · 1 month ago

      My CTO thoroughly believes that within 4-6 years we will no longer need to know how to read or write code, just how to ask an AI to do it. Coincidentally, he also doesn’t code anymore and hasn’t for over 15 years.

      • recapitated@lemmy.world · 1 month ago

        From a business perspective, no shareholder cares how good an employee is at personally achieving a high degree of skill. They only care about selling and earning, and to a lesser degree about an enduring reputation for longer-term earnings.

        Economics could very well drive this forward. But I don’t think the craft will be lost. People will need to supervise this progress, as well as collaborate with the machines to extend their capabilities and dictate their purposes.

        I couldn’t tell you if we’re talking on a time scale of months or decades, but I do think “we” will get there.

        • whyrat@lemmy.world · 1 month ago

          Hackers and hobbyists will persist despite any economics. Much of what they do I don’t see AI replacing, as AI creates based on what it “knows”, which is mostly things it has previously ingested.

          We are not (yet?) at the point where an LLM does anything other than put together code snippets it has seen or derived. If you ask it to find a new attack vector or write code dissimilar to anything it has seen before, the results are poor.

          But the counterpoint every developer needs to keep in mind: AI will only get better. It’s not going to lose any of the current capabilities to generate code, and very likely will continue to expand on what it can accomplish. It’d be naive to assume it can never achieve these new capabilities… The question is just when & how much it costs (in terms of processing and storage).

  • Jimmycrackcrack@lemmy.ml · 14 days ago

    I don’t know how to program, but to a very limited extent I can sorta kinda almost understand the logic of very short and simplistic code that’s been written for me by someone who can actually code. I tried to get ChatGPT to write a shell script for me to work as part of an Apple shortcut. It had no idea. It was useless and ridiculously inconsistent and forgetful. It was the first and only time I used ChatGPT. Not very impressed.

    Given that it is smart enough to produce output in the general area of correct, albeit still wrong and logically flawed, I would guess it could eventually be carefully prodded into making one small snippet someone might call “good”. But at that point it feels much more like an accident, the same way someone who has memorised a lot of French vocabulary but never actually learned French might produce a coherent sentence once in a while: trying and failing 50 times before succeeding, then failing again immediately after, without ever even knowing the difference.

  • PenisDuckCuck9001@lemmynsfw.com · 1 month ago

    AI is excellent at completing low-effort AI-generated Pearson homework while I spend all the time I saved on real projects that actually matter. My Hugging Face model is probably trained on the same dataset as their bot. It gets it correct about half the time, and another 25% of the time I just have to change a few numbers or brackets around. It takes me longer to read the instructions than it takes the AI bot to spit out the correct answer.

    None of it is “good” code but it enables me to have time to write good code somewhere else.

  • MajorHavoc@programming.dev · 1 month ago

    Great question.

    is there any legit reason anyone should learn advanced coding techniques?

    Don’t buy the hype. LLMs can produce all kinds of useful things but they don’t know anything at all.

    No LLM has ever engineered anything. And there’s only sparse (a concession to a good point made in response) current evidence that any AI ever will.

    Current learning models are like trained animals in a circus. They can learn to do any impressive thing you can imagine, by sheer rote repetition.

    That means they can engineer a solution to any problem that has already been solved millions of times already. As long as the work has very little new/novel value and requires no innovation whatsoever, learning models do great work.

    Horses and LLMs that solve advanced algebra don’t understand algebra at all. It’s a clever trick.

    Understanding the problem and understanding how to politely ask the computer to do the right thing has always been the core job of a computer programmer.

    The bit about “politely asking the computer to do the right thing” makes massive strides in convenience every decade or so. Learning models are another such massive stride. This is great. Hooray!

    The bit about “understanding the problem” isn’t within the capabilities of any current learning model or AI, and there’s no current evidence that it ever will be.

    Someday they will call the job “prompt engineering” and on that day it will still be the same exact job it is today, just with different bullshit to wade through to get it done.

    • ConstipatedWatson@lemmy.world · 1 month ago

      Wait, if you can (or anyone else chipping in), please elaborate on something you’ve written.

      When you say

      That means they can engineer a solution to any problem that has already been solved millions of times already.

      Hasn’t Google already made advances through its AlphaGeometry AI? Admittedly, that’s a geometry setting, which may be easier to code than other parts of math, and there isn’t yet a clear indication AI will ever reach the level of creativity the human mind has, but it might get there by sheer volume of attempts.

      Isn’t this still engineering a solution? Sometimes even researchers reach new results by having a machine verify many cases (see the proof of the Four Color Theorem). It’s true that in the Four Color Theorem researchers narrowed down the cases to try, but maybe a similar narrowing could be done by an AI (sooner or later)?

      I don’t know what I’m talking about, so I should shut up, but I’m hoping someone more knowledgeable will correct me, since I’m curious about this

      • MajorHavoc@programming.dev · 1 month ago

        Isn’t this still engineering a solution?

        If we drop the word “engineering”, we can focus on the point - geometry is another case where rote learning of repetition can do a pretty good job. Clever engineers can teach computers to do all kinds of things that look like novel engineering, but aren’t.

        LLMs can make computers look like they’re good at something they’re bad at.

        And they offer hope that computers might someday not suck at what they suck at.

        But history teaches us probably not. And current evidence in favor of a breakthrough in general artificial intelligence isn’t actually compelling, at all.

        Sometimes even researchers reach new results by having a machine verify many cases

        Yes. Computers are good at that.

        So far, they’re no good at understanding the four color theorem, or at proposing novel approaches to proving it.

        They might never be any good at that.

        Stated more formally, P may equal NP, but probably not.

  • xmunk@sh.itjust.works · 1 month ago

    No, a large part of what “good code” means is correctness. LLMs cannot properly understand a problem, so while they can produce grunt code, they can’t assemble a solution to a complex problem and, IMO, it is impossible for them to overtake humans unless we get really lazy about code expressiveness. And, on that point, I think most companies are underinvesting in code infrastructure right now, and developers are wasting too much time on unexpressive code.

    The majority of work that senior developers do is understanding a problem and crafting a solution appropriate to it - when I’m working my typing speed usually isn’t particularly high and the main bottleneck is my brain. LLMs will always require more brain time while delivering a savings on typing.

    At the moment I’d also emphasize that they’re excellent at popping out algorithms I could write in my sleep, but they require me to spend enough time double-checking their code that it’s cheaper for me to just write it by hand to begin with.

  • nous@programming.dev · 1 month ago

    They can write good short bits of code. But they also often produce bad or even incorrect code. The vast majority of the time, I find it more effort to read and debug their code than to just write it myself to begin with, and overall it wastes more of my time.

    Maybe in a couple of years they might be good enough. But it looks like their growth is starting to flatten off, so it is up for debate whether they will get there in that time.

  • recapitated@lemmy.world · 1 month ago

    In my experience they do a decent job of whipping up mindless minutiae and things that are well-known patterns in very popular languages.

    They do not solve problems.

    I think for an “AI” product to be truly useful at writing code it would need to incorporate the LLM as a mere component, with something facilitating checks through static analysis and maybe some other technologies, maybe even mulling the result through a loop over the components until they’re all satisfied before finally delivering it to the user as a proposal.
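A minimal sketch of that architecture, under stated assumptions: `ask_llm` is a placeholder for the model component (here hard-coded so the sketch runs), and Python’s `ast` module stands in for real static analysis. Candidates loop through generation and checking, with failures fed back, until the checks pass or the budget runs out; only then is the result delivered as a proposal.

```python
import ast

def ask_llm(prompt, feedback=None):
    """Placeholder for the LLM component; returns candidate source code."""
    return "def add(a, b):\n    return a + b\n"

def generate_checked_code(prompt, max_rounds=3):
    """Loop: generate, statically check, feed errors back, retry."""
    feedback = None
    for _ in range(max_rounds):
        candidate = ask_llm(prompt, feedback)
        try:
            ast.parse(candidate)   # stand-in static check: does it parse?
        except SyntaxError as err:
            feedback = f"Syntax error: {err}"  # fed to the next round
            continue
        return candidate           # checks passed: deliver as a proposal
    raise RuntimeError("no candidate passed the checks")
```

A real system would chain more components at the marked check (type checkers, linters, the test suite), but the control flow stays the same.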

    • Croquette@sh.itjust.works · 1 month ago

      It’s a decent starting point for a new language. I had to learn webdev as an embedded C coder, and using an LLM and cross-referencing the official documentation makes a new language much more approachable.

  • Red_October@lemmy.world · 1 month ago

    Technically it’s possible, but it’s not likely, and it’s especially not effective. From what I understand, a lot of devs who do try to use something like ChatGPT to write code end up spending as much or more time debugging it, and generally trying to get it to work, than they would have if they’d just written it themselves. Additionally, you have to know how to code to figure out why it isn’t working, and even when all of that is done, it’s almost impossible to get it to integrate with a larger project without just rewriting the whole thing anyway.

    So to answer the question you intend to ask: no, LLMs will not be replacing programmers any time soon. They may serve as a tool of dubious value, but the idea that programmers will be replaced is only taken seriously by people who manage programmers, not the programmers themselves.

  • Ookami38@sh.itjust.works · 1 month ago

    Of course it can. It can also spit out trash. AI, as it exists today, isn’t meant to be autonomous; you don’t simply ask it for something and ship whatever it spits out. It’s meant to work with a human on a task. Assuming you have an understanding of what you’re trying to do, an AI can probably provide you with a pretty decent starting point. It tends to be good at analyzing existing code as well, so pasting your code into GPT and asking it why it’s doing a thing usually works pretty well.

    AI is another tool. Professionals will get more use out of it than laymen. Professionals know enough to phrase requests that are within the scope of the AI. They tend to know how the language works, and thus can review what the AI outputs. A layman can use AI to great effect, but will run into problems as they start butting up against their own limited knowledge.

    So yeah, I think AI can make some good code, supervised by a human who understands the code. As it exists now, AI requires human steering to be useful.

  • TranquilTurbulence@lemmy.zip · 1 month ago

    Yes and no. GPT usually gives me clever solutions I wouldn’t have thought of. Very often GPT also screws up, and I need to fine tune variable names, function parameters and such.

    I think the best thing about GPT is that it knows the documentation of every function, so I can ask technical questions. For example: can this function really handle dataframes, or will it internally convert the variable into a matrix and then spit out a dataframe as if nothing happened? Such conversions tend to screw up the data, which explains some strange errors I run into. You could read all of the documentation to find out, or you could just ask GPT. Alternatively, you could show it how badly the data got screwed up after a particular function, and GPT would tell you that it’s because the function uses matrices internally, even though it looks like it works with dataframes.
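A contrived illustration of that failure mode (none of these names belong to a real library): a function that advertises table-in, table-out, but internally flattens everything into a homogeneous matrix of strings, silently coercing the column types on the way through.

```python
def normalize_rows(table):
    """Takes a list of dicts, returns a list of dicts... apparently.

    Internally it converts to a matrix of strings — the kind of hidden
    conversion that corrupts data and causes strange downstream errors.
    """
    columns = list(table[0])
    matrix = [[str(row[col]) for col in columns] for row in table]  # coercion
    return [dict(zip(columns, row)) for row in matrix]
```

After `normalize_rows([{"id": 1}])`, the `id` column holds the string `"1"` rather than the integer `1`, even though the output still looks exactly like the input table.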

    I think of GPT as an assistant painter some famous artists had. The artist tells the assistant to paint the boring trees in the background and the rough shape of the main subject. Once that’s done, the artist can work on the fine details, sign the painting, send it to the local king and charge a thousand gold coins.