Don’t learn to code: Nvidia’s founder Jensen Huang advises a different career path

“Don’t learn to code,” advises Jensen Huang of Nvidia. Thanks to AI, everybody will soon become a capable programmer simply by using human language.

  • Sibbo@sopuli.xyz (+260/-2) · 9 months ago

    Founder of a company that makes most of its revenue selling GPUs for machine learning says machine learning is good.

    • Murvel@lemm.ee (+12/-25) · 9 months ago

      Yes, but Nvidia itself relies heavily on programmers. Without them Nvidia wouldn’t have a single product. The fact that he makes these claims despite this is worth taking note of.

      • WhatAmLemmy@lemmy.world (+43/-1) · 9 months ago

        Lol. They’re at the top of the food chain. They can afford the best developers. They do not benefit from competition. As with all leading tech corporations, they are protectionist, and benefit more from stifling competition than from innovation.

        Also, more broadly, the oligarchy doesn’t want the masses to understand programming, because they don’t want them to fundamentally understand logic and how information systems work, because civilization is an information system. It makes more sense when you realize Linux/FOSS is the socialism of computing, and anti-competitive closed-source corporations like Nvidia (notorious for hindering Linux and FOSS) are the capitalist class of computing.

    • hitmyspot@aussie.zone (+3/-25) · 9 months ago

      It doesn’t make him wrong.

      Just like we can now use an LLM to create letters or emails with a given tone, it’s not going to be a big leap to let it do something similar with coding. It’s quite exciting, really. Lots of people have ideas for websites or apps but no technical knowledge to build them. AI may allow it, just like it allows non-artists to create art.

      • TangledHyphae@lemmy.world (+26) · edited · 9 months ago

        I use AI to write code for work every day. Many different models and services, including https://ollama.ai on my own hardware. It’s useful when a developer can take the code and refactor it to fit into large codebases (after fixing its inevitable broken code here and there), but it is by no means anywhere close to successfully writing code all on its own. Eventually, maybe, but nowhere near anytime soon.
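
        For context, this is roughly what “on my own hardware” looks like in practice: a minimal sketch (my own, not from the comment) that asks a locally hosted Ollama model for a code draft, assuming the default server at http://localhost:11434 and a model such as codellama already pulled.

        ```python
        # Minimal sketch: ask a local Ollama model for a code draft, then review
        # and refactor it by hand before it goes anywhere near the codebase.
        import json
        import urllib.request

        def ask_local_model(prompt: str, model: str = "codellama") -> str:
            payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
            req = urllib.request.Request(
                "http://localhost:11434/api/generate",
                data=payload.encode(),
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req) as resp:
                return json.loads(resp.read())["response"]

        draft = ask_local_model("Write a Python function that parses ISO 8601 dates.")
        print(draft)
        ```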

        • Lmaydev@programming.dev (+10) · 9 months ago

          Agreed. I mainly use it for learning.

          Instead of googling and skimming a couple of blogs or Stack Overflow posts, I now just ask the AI. It pulls up the exact info I need and sources it all. And being able to ask follow-up questions is great.

          It’s great for learning new languages and frameworks.

          It’s also very good at writing unit tests.
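
          As an illustration (my own sketch, not the poster’s code), this is the kind of test boilerplate an LLM will happily churn out when pointed at a small helper like a hypothetical slugify():

          ```python
          # Hypothetical example of the unit-test boilerplate an LLM tends to produce.
          import re
          import unittest

          def slugify(text: str) -> str:
              # stand-in for the real function under test
              cleaned = re.sub(r"[^a-zA-Z0-9\s-]", "", text).strip().lower()
              return re.sub(r"\s+", "-", cleaned)

          class TestSlugify(unittest.TestCase):
              def test_lowercases_and_hyphenates(self):
                  self.assertEqual(slugify("Hello World"), "hello-world")

              def test_strips_punctuation(self):
                  self.assertEqual(slugify("Don't Panic!"), "dont-panic")

              def test_empty_string(self):
                  self.assertEqual(slugify(""), "")

          if __name__ == "__main__":
              unittest.main()
          ```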

          Also for recommending frameworks/software for your use case.

          I don’t see it replacing developers, more reducing the number of developers needed. Like Excel did for office workers.

          • TangledHyphae@lemmy.world (+4) · 9 months ago

            You just described all of my use cases. I need to get more comfortable with Copilot and Codeium style services again; I enjoyed them 6 months ago to some extent. Unfortunately my current employer has to be federally compliant with government security protocols and I’m not allowed to ship any code in or out of some dev machines. In lieu of that, I still run LLMs on another machine acting, like you mentioned, as sort of my Stack Overflow replacement. I can describe anything or ask anything I want, and immediately get extremely specific custom code examples.

            I really need to get codeium or copilot working again just to see if anything has changed in the models (I’m sure they have.)

        • hitmyspot@aussie.zone (+4/-1) · 9 months ago

          It can’t yet tell when its output is ridiculous or incorrect for non-coding tasks, but it will get there. Same for coding. It will continue to grow in complexity and ability.

          It will get there, eventually. I don’t think it will be writing complex code any time soon, but I can see it being aware of all the libraries and FOSS that a single person can’t keep across.

          I foresee learning to code becoming similar to learning to do accounting manually. Yes, you’ll still need to understand it to be a coder, but for the average person who can’t code, it will do a good enough job, just as we now use accounting software for taxes or budgets that would have been professionally done before. Complex stuff will be human-done, or human-reviewed, or professional coders giving more technical instructions to the AI. For simple coding, like the trivial Python script you might write now, AI will do it.

        • Jolan@lemmy.world (+1) · 9 months ago

          I think this is going to age really badly. I don’t like LLMs, but I think it will happen soon. People also said that AI as we see it now was decades away, but we got it quite quickly, so I think it’s a very small step to go from writing fully grammatically correct English to fully correct code. It’s basically just another language the AI has to learn. But I guess what do I know. We’ll just have to wait and see.

          • TangledHyphae@lemmy.world (+2) · edited · 9 months ago

            I’ve been doing this for over a year now, started with GPT in 2022, and there have been massive leaps in quality and effectiveness. (Versions are sneaky, even GPT-4 has evolved many times over and over without people really knowing what’s happening behind the scenes.) The problem still remains the “context window.” Claude.ai is > 100k tokens now I think, but the context still limits an entire ‘session’ to only make so much code in that window. I’m still trying to push every model to its limits, but another big problem in the industry now is effectiveness via “perplexity” measurements given a context length.

            https://pbs.twimg.com/media/GHOz6ohXoAEJOom?format=png&name=small

            This plot shows that as the window grows in size (directly proportional to the number of tokens in the code you insert into the window, combined with every token it generates at the same time), everything it produces becomes less accurate and more perplexing overall.
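
            In practice this forces you to budget tokens before every request; a rough sketch (assuming the tiktoken package and a GPT-style tokenizer, with made-up window numbers) looks like:

            ```python
            # Rough sketch: check whether the prompt plus pasted-in source still fits
            # the model's context window before sending it. Numbers are assumptions.
            import tiktoken

            CONTEXT_WINDOW = 128_000      # assumed model limit, varies by model
            RESERVED_FOR_OUTPUT = 4_000   # leave room for the generated code

            def fits_in_context(prompt: str, source_code: str, model: str = "gpt-4") -> bool:
                enc = tiktoken.encoding_for_model(model)
                used = len(enc.encode(prompt)) + len(enc.encode(source_code))
                return used + RESERVED_FOR_OUTPUT <= CONTEXT_WINDOW

            source = "def add(a, b):\n    return a + b\n" * 2000  # stand-in for a big module
            print(fits_in_context("Refactor this module to remove global state.", source))
            ```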

            But you’re right overall, these things will continue to improve, but you still need an engineer to actually make the code function given a particular environment. I just don’t get the feeling we’ll see that within the next few years, but if that happens then every IT worker on earth is effectively useless, along with every desk job known to man, as an LLM would be able to reason about how to automate any task in any language at that point.

      • MartianSands@sh.itjust.works (+15) · 9 months ago

        It might not make him wrong, but he also happens to be wrong.

        You can’t compare AI art or literature to AI software, because the former are allowed to be vague or interpretive while the latter has to be precise and formally correct. AI can’t even reliably do art yet; it frequently requires several attempts or considerable support to get something which looks right, and in software “close” frequently isn’t useful at all. In fact, it can easily be close enough to look right at first glance while actually being catastrophically wrong once you try to use it for real (see: every bug in any released piece of software ever).

        Even when AI gets good enough to reliably produce what it’s asked for first time and every time (which is still a long way off), a sufficiently precise description of what you want is exactly what programmers spend their lives writing. Code is a description of a program which another program (such as a compiler) can convert into instructions for the computer. If someone comes up with a very clever program which can fill in the gaps by using AI to interpret what it’s been given, then what they’ve created is just a new kind of programming language for a new kind of compiler.

        • hitmyspot@aussie.zone (+1/-4) · 9 months ago

          I don’t disagree with your point. I think that is where we are heading. How we interact with computers will change. We’re already moving away from keyboard typing and clicks, to gestures and voice or image recognition.

          We likely won’t even call it coding. “Hey Google, I’ve downloaded all the episodes for the current season of Pimp My PC, can you rename the files by my naming convention and drop them into Jellyfin?” The AI will know to write a Python script to do so. I expect it to be invisible to the user.
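
          Behind the scenes that request boils down to a throwaway script along these lines (a sketch; the naming convention and Jellyfin library path are made up):

          ```python
          # Sketch of the throwaway script an assistant might generate for the request
          # above. The paths and naming convention are made-up examples.
          import re
          import shutil
          from pathlib import Path

          DOWNLOADS = Path.home() / "Downloads"
          SEASON_DIR = Path("/media/jellyfin/shows/Pimp My PC/Season 03")  # assumed path

          SEASON_DIR.mkdir(parents=True, exist_ok=True)
          for src in DOWNLOADS.glob("*.mkv"):
              match = re.search(r"[Ss](\d{2})[Ee](\d{2})", src.name)
              if not match:
                  continue  # skip files that don't look like episodes
              season, episode = match.groups()
              dest = SEASON_DIR / f"Pimp My PC - S{season}E{episode}{src.suffix}"
              shutil.move(str(src), str(dest))
              print(f"{src.name} -> {dest.name}")
          ```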

          So, yes, it is just a different instruction set. But that’s all computers are. Data in, data out.

      • variaatio@sopuli.xyz (+8) · 9 months ago

        Well, the difference is that you have to know coding to know whether the AI produced what you actually wanted.

        Anyone can read a letter and tell whether the AI hallucinated or actually produced what you wanted.

        With code, it might produce something that does what you ask on the first try. However, it turns out the AI hallucinated a bug into the code for some edge or specialty case.

        Hallucinating is not a minor hiccup or minor bug; it is a fundamental feature of LLMs, since they aren’t actually smart. An LLM is a stochastic regurgitator. It doesn’t know what you asked or understand what it is actually doing. It is matching prompt patterns to output. With enough training patterns to match, it statistically usually ends up about right. However, this is not guaranteed, and that is the main weakness of the system. More good training data makes it more likely to produce good results more often. But for business-critical stuff, you aren’t interested in whether it got it about right the other 99 times. It has to get it 100% right this one time, since this code goes into a production business deployment.

        I guess one could write a comprehensive enough verified testing suite, including all the edge cases, and verify the result with that. However, now you have just shifted the job: instead of a programmer programming the program, you have a programmer programming very, very comprehensive testing routines. Which can’t be LLM-done, since the whole point is that the testing routines are there to check for the inherent unreliability of the LLM output.

        It’s a nice toy for someone wanting to make quick and dirty test code (maybe) to do thing X, and then try to find out whether it actually does what was asked or has unforeseen behavior (since I don’t know what the behavior of the code is designed to be; I didn’t write it). Good for toying around and maybe for quick and dirty brainstorming. Not good enough for anything critical that has to be guaranteed to work with a promise of a service contract and so on.

        So the real big job of the future will not be prompt engineers, but quality assurance and testing engineers who have to be around to guard against hallucinating LLMs and similar AIs. Prompts can be gotten from anyone; what is harder is finding out whether the prompt actually produced what it was supposed to produce.
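
        As a toy illustration of that shift (my own sketch, not the commenter’s): the human ends up writing the guard-rail tests, edge cases included, around a hypothetical LLM-written parse_duration() helper imported from the generated code.

        ```python
        # Toy guard-rail tests a human writes around LLM-generated code. The module
        # and function names are hypothetical placeholders for the generated output.
        import pytest
        from generated_code import parse_duration  # hypothetical LLM-written helper

        @pytest.mark.parametrize("text,expected", [
            ("1h30m", 5400),
            ("90m", 5400),
            ("2h", 7200),
            ("0s", 0),
        ])
        def test_happy_path(text, expected):
            assert parse_duration(text) == expected

        @pytest.mark.parametrize("text", ["", "h", "-5m", "1h-30m", "1.5.2h"])
        def test_rejects_garbage(text):
            with pytest.raises(ValueError):
                parse_duration(text)
        ```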

      • SlopppyEngineer@lemmy.world (+7) · 9 months ago

        Until something goes wrong somewhere, the supplier tries “but an AI wrote it” as a defense when the client sues them for not delivering what was agreed upon, that defense gets struck down, and the resulting very expensive compensation spooks the entire industry.

        • hitmyspot@aussie.zone (+6) · 9 months ago

          Air Canada already tried that and lost. They had to refund the customer because the chatbot gave incorrect information.

          • BombOmOm@lemmy.world (+3) · 9 months ago

            Turns out the chatbot gave the correct information. Air Canada just didn’t realize they had legally enabled the AI to set company policy. :)

    • TherouxSonfeir@lemm.ee (+39/-9) · 9 months ago

      I use “AI” when I work. It’s like having a really smart person who knows a bit about everything available 24/7 with useful responses. Sure, it’s not all right, but it usually leads me down the path to solving my problem a lot faster than I could with “Googling.” Remember Google? What a joke.

      • CosmoNova@lemmy.world (+50/-2) · 9 months ago

        I think it’s less of a really smart person and more of a very knowledgeable person with an inflated ego, so you take everything they say with a grain of salt. Useful nonetheless.

    • JackFrostNCola@lemmy.world (+3/-1) · 9 months ago

      Just being a stickler here, but Electronics Engineers, not Electrical. Similar-sounding, but like the difference between a submarine captain and an airplane captain.

    • ___@lemm.ee (+6/-9) · edited · 9 months ago

      This. The technology is here to stay and will literally change the world. In a few years, when the Sora and SD3 models are released and well understood, and desktop GPUs begin offering 24 GB of VRAM on midrange cards out of demand, it will be crazier than we can imagine. LLMs are already near human level given enough compute. As tech gets faster and commoditized, everyone becomes an artist and a programmer. Information will no longer be trusted, and digital verification technology will proliferate.

      Invest now.

      That, and nuclear batteries capable of running Pi-like machines for decades. 1 W is on the horizon from BetaVolt.

      • Nachorella@lemmy.sdf.org (+7/-1) · 9 months ago

        I’m not sure why you’re being downvoted. I don’t think the current technology is going to replace programmers or artists any time soon (speaking as someone who works as an artist and programmer in a field that monitors AI and its uses), but I also acknowledge that my guess is as good as yours.

        I don’t think it’s going to replace artists because, as impressive as the demos we all see are, whenever I’ve done any thorough testing, every AI model inevitably fails at coming up with something new. It’s so held back by what it’s trained on that to contemplate it replacing an artist, who is very capable of imagining new things, seems absurd to me.

        Same with programming - ask for something it doesn’t know about and it’ll lie and make something up and confidently proclaim it as truth. It can’t fact check itself and so I can only see it as a time saving tool for professionals and a really cool way for hobbyists to get results that were otherwise off the table.

        • Womble@lemmy.world (+5) · edited · 9 months ago

          I can’t speak with certainty about generating art. I’m no artist and my experience there is limited to playing around with Stable Diffusion, but it feels like it’s in the same place as LLMs for programming. It’s incredibly impressive at first, but once you’ve used it for a bit the flaws become obvious. It will be a very powerful tool for artists to use, just like LLMs are for programming, and will likely significantly decrease the time needed to produce something, but it is nowhere near replacing a human entirely.

          • Nachorella@lemmy.sdf.org (+4) · 9 months ago

            Yeah, for art it’s similar, you can get some really compelling results, but once tasked with creating something a bit too specific it ends up wasting your time more than anything.

            There’s definitely uses for it and it’s really cool, but I don’t think it’s as close to replacing professionals as some people think.

      • 🅿🅸🆇🅴🅻@lemmy.world (+2) · edited · 9 months ago

        Unique style paintings will become even more valuable in the future. Generative AI only spews “art” based on previous styles it learned / was trained on. Everything will be even more rehashed than it is today (nod to Everything is a Remix). Having a painting made by an actual human hand on your wall will be more ego-boosting than an AI generated one.

        Sure, for general digital art (e.g. logos, game character design, etc.) where uniqueness isn’t really mandatory, AI is a good, very cheap tool.

        As for the “everyone becomes a programmer” part… naah.

        • Rikudou_Sage@lemmings.world (+3) · 9 months ago

          Having a painting made by an actual human hand on your wall will be more ego-boosting

          Nothing really changes, this has always been the case.

  • ThePowerOfGeek@lemmy.world (+60/-2) · 9 months ago

    Having used ChatGPT to try to find solutions to software development challenges, I don’t think programmers will be at much risk from AI for at least a decade.

    Generative AI is great at many things, including assistance with basic software development tasks (like spinning up blueprints for unit tests). And it can be helpful filling in code gaps when provided with a very specific prompt… sometimes. But it is not great at figuring out the nuances of even mildly complex business logic.

    • DacoTaco@lemmy.world (+23) · edited · 9 months ago

      This.
      I got a GitHub Copilot subscription at work and it’s useful for suggesting code in small parts, but I would never let it decide what design pattern to use to tackle the problem we are solving. Once I know the solution, I can use AI and verify its output before it goes into the code.

      • DjMeas@lemm.ee (+3) · edited · 9 months ago

        I’m using it at work as well, and Copilot has been pretty decent at writing out entire methods when I start with the JSDoc or code comments before writing the actual method. It’s now becoming my habit to have it generate some near-working code or decent boilerplate.

        If you haven’t tried it yet, give this a shot!
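
        The same comment-first habit carries over to other languages; in Python (my own illustration), you type the signature and docstring and let the assistant propose a body like this, which you then review:

        ```python
        # Illustration of docstring-first prompting: the signature and docstring are
        # what you write; the body is the sort of completion the assistant fills in.
        from collections import Counter

        def most_common_words(text: str, n: int = 10) -> list[tuple[str, int]]:
            """Return the n most frequent lowercase words in text, ignoring
            punctuation, as (word, count) pairs sorted by count."""
            words = [w.strip(".,!?;:\"'()") for w in text.lower().split()]
            counts = Counter(w for w in words if w)
            return counts.most_common(n)

        print(most_common_words("To be, or not to be: that is the question.", 3))
        ```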

    • Schal330@lemmy.world (+10) · 9 months ago

      I’m a junior dev who has been on the job for ~6 months. I found AI useful for learning when I had to make an application in Swift and had zero experience with the language. It presented me with some turd responses, but from those I got an idea of what to try and what to look into to find answers.

      I find that sometimes AI can present a concept to me in a way I can understand, where blogs can fail. I’m not worried about AI right now, it’s a tool to make our jobs easier!

    • fidodo@lemmy.world (+4) · 9 months ago

      I think it will get good enough to do simple tickets on its own with oversight, but I would not trust it without having it submit the work via a PR for review and iteration.

      I agree, it would take at least a decade for fully autonomous programming, and frankly, by the time it can fully replace programmers it will be able to fully replace every office job, at which point we’re going to have to rethink everything.

  • fidodo@lemmy.world (+59/-1) · 9 months ago

    As a developer building on top of LLMs, my advice is to learn programming architecture. There’s a shit ton of work that needs to be done to get this unpredictable, non-deterministic tech to work safely and accurately. This is like saying get out of tech right before the Internet boom. The hardest part of programming isn’t writing low-level functions, it’s architecting complex systems while keeping them robust, maintainable, and expandable. By the time an AI can do that, all office jobs are obsolete. AIs will be able to replace CEOs before they can replace system architects. Programmers won’t go away, they’ll just have less busywork to do and will instead need to work at a higher level, but the complexity of those higher-level requirements is about to explode and we will need LLMs to do the simpler tasks with our oversight to make sure they get integrated correctly.
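
    For a flavour of that work, here is a stripped-down sketch (my own, not any particular library) of the validate-and-retry plumbing you end up wrapping around a non-deterministic model before its output can be trusted downstream:

    ```python
    # Stripped-down sketch of typical guard rails around an LLM call: ask for JSON,
    # validate the shape, and retry a bounded number of times before giving up.
    import json

    REQUIRED_KEYS = {"title", "priority", "assignee"}

    def call_model(prompt: str) -> str:
        """Placeholder for whatever LLM client is in use (OpenAI, Ollama, ...)."""
        raise NotImplementedError

    def extract_ticket(description: str, max_retries: int = 3) -> dict:
        prompt = (
            "Return ONLY a JSON object with keys title, priority, assignee.\n"
            f"Description: {description}"
        )
        for _ in range(max_retries):
            raw = call_model(prompt)
            try:
                data = json.loads(raw)
            except json.JSONDecodeError:
                continue  # model returned prose or broken JSON; try again
            if isinstance(data, dict) and REQUIRED_KEYS <= data.keys():
                return data
        raise RuntimeError("model never produced a valid ticket object")
    ```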

    I also recommend still learning the fundamentals, just maybe not as deeply as you used to need to. Knowing how things work under the hood still helps immensely with debugging and creating better, more efficient architectures even at a high level.

    I will say, I do know developers who specialized in algorithms and are feeling pretty lost right now, but they’re perfectly capable of adapting their skills to the new paradigm; their issue is more the personal one of deciding what they want to do, since they were passionate about algorithms.

    • gazter@aussie.zone (+2) · 9 months ago

      In my comment elsewhere in the thread I talk about how, as a complete software noob, I like to design programs by making a flowchart first, and how I wish the flowchart itself was the code.

      It sounds like what I’m doing might be (super basic) programming architecture? Where can I go to learn more about this?

      • fidodo@lemmy.world (+2) · 9 months ago

        Look up visual programming languages. When you apply a visual metaphor to programming it really is basically just really detailed and complex flow charts.

  • Wooki@lemmy.world (+61/-8) · edited · 9 months ago

    This overglorified snake oil salesman is scared.

    Anyone who understands how these models work can see plain as day that we have reached peak LLM. It’s enshittifying on itself and we are seeing its decline in real time in the quality of generated content. Don’t believe me? Go follow some senior engineers.

      • thirteene@lemmy.world (+17/-1) · 9 months ago

        There is a reason they didn’t offer specific examples. LLMs can still scale by size, logical optimization, training optimization, and, more importantly, integration. The current implementation is reaching its limits, but the pace of growth is also very fast. AI reduces workload, but it is likely going to require designers and validators for a long time.

        • Wooki@lemmy.world (+4/-1) · edited · 9 months ago

          For sure, evidence is mounting that increasing model size is not returning the quality expected. It has also had the larger net effect of enshittifying itself through negative feedback loops between training data, humans, and back to training, quantified as a large declining trend in quality. It can only get worse as privacy, IP laws, and other regulations start coming into place. The growth this hype master is selling is pure fiction.

          • msage@programming.dev (+2) · 9 months ago

            But he has a lot of product to sell.

            And companies will gobble it all up.

            On an unrelated note, I will never own a new graphics card.

            • Wooki@lemmy.world (+1/-1) · 9 months ago

              Secondhand is better value; new prices right now are nothing short of price fixing. You only need to look at the size reduction in memory since the A100 was released to know what’s happening to GPUs.

              We need serious competition. Hopefully Intel is able to provide it, but foreign competition would be best.

              • msage@programming.dev (+1) · 9 months ago

                I doubt that any serious competitor will bring any change to this space. Why would it - everyone will scream ‘shut up and take my money’.

      • Wooki@lemmy.world (+1/-1) · edited · 9 months ago

        The Fediverse is sadly not as popular as we would like, so sorry, I can’t help here. That said, I follow some researchers’ blogs, and a quick search should land you some good sources depending on your field of interest.

      • Wooki@lemmy.world (+2/-2) · edited · 9 months ago

        You asked a question that has already been answered. Pick your platform and you will find a lot of public research on the topic, especially for programming.

  • some pirate@lemmy.dbzer0.com (+52/-1) · 9 months ago

    Lmao, do the opposite of whatever this guy says; he only wants his 2 trillion dollar stock market bubble not to burst.

  • Eager Eagle@lemmy.world (+48) · edited · 9 months ago

    The day programming is fully automated, so will be other jobs.

    Maybe it’d make more sense if he suggested becoming a blue-collar worker instead.

    • Ghostalmedia@lemmy.world (+21/-1) · 9 months ago

      Humans can probably still look forward to back-breaking careers of manual labor that consist of complex, varied movements!

    • bassomitron@lemmy.world (+10/-1) · edited · 9 months ago

      At best, in the near term (5-10 years), they’ll automate the ability to generate moderate-complexity classes, and it’ll be up to a human developer to piece them together into a workable application, likely having to tweak things to get it working (this is already possible now with varying degrees of success/utter failure, but it’s steadily improving all the time). Additionally, developers do far more than just purely code. Ask any mature dev team: those who have no other competent skills outside of coding aren’t considered good workers/teammates.

      Now, in 10+ years, if progress continues as it has without a break in pace… who knows? But I agree with you, by the time that happens with high complexity/high reliability for software development, numerous other job fields will have already become automated. This is why legislation needs to be made to plan for this inevitability. Whether that’s through UBI or some offshoot of it, or even banning automation from replacing major job fields, it needs to be seriously discussed and acted upon before it’s too little, too late.

  • kescusay@lemmy.world (+48) · 9 months ago

    Well. That’s stupid.

    Large language models are amazingly useful coding tools. They help developers write code more quickly.

    They are nowhere near being able to actually replace developers. They can’t know when their code doesn’t make sense (which is frequently). They can’t know where to integrate new code into an existing application. They can’t debug themselves.

    Try to replace developers with an MBA using a large language model AI, and once the MBA fails, you’ll be hiring developers again - if your business still exists.

    Every few years, something comes along that makes bean counters who are desperate to cut costs, and scammers who are desperate for a few bucks, declare that programming is over. Code will self-write! No-code editors will replace developers! LLMs can do it all!

    No. No, they can’t. They’re just another tool in the developer toolbox.

    • paf0@lemmy.world (+13/-1) · 9 months ago

      I’ve been a developer for over 20 years and when I see Autogen generate code, decide to execute that code and then fix errors by making a decision to install dependencies, I can tell you I’m concerned. LLMs are a tool, but a tool that might evolve to replace us. I expect a lot of software roles in ten years to look more like an MBA that has the ability to orchestrate AI agents to complete a task. Coding skills will still matter, but not as much as soft skills will.

      • kescusay@lemmy.world (+10) · edited · 9 months ago

        I really don’t see it.

        Think about a modern application. Think about the file structure, how the individual sources interrelate, how non-code assets are stored, how applications are deployed, and all the other bits and pieces that go into an application. An AI can’t know any of that without being trained - by a human - on the specifics of that application’s needs.

        I use Copilot for my job. It’s very nice, and makes my job easier. And if my boss fired me and the rest of the team and tried to do it himself, the application would be down in a day, then irrevocably destroyed in a week. Then he’d be fired, we’d be rehired, and we - unlike my now-former boss - would know things like how to revert the changes he made when he broke everything while trying to make Copilot create a whole new feature for the application.

        AI code generation is pretty cool, but without the capacity to know what code actually should be generated, it’s useless.

        • paf0@lemmy.world (+1) · 9 months ago

          It’s just going to create a summary story about the code base and reference that story as it implements features, not that different from a human. It’s not necessarily something it can do now, but it will come. Developers are not special, and I was never talking about Copilot.

          • kescusay@lemmy.world (+1) · 9 months ago

            I don’t think most people grok just how hard implementing that kind of joined-up thinking and metacognition is.

            You’re right, developers aren’t special, except in those ways all humans are, but we’re a very long way indeed from being able to simulate them in AI - especially in large language models. Humans automatically engage in joined-up thinking, second-order logic, and so on, without having to consciously try. Those are all things a large language model literally can’t do.

            It doesn’t know anything. It can’t conceptualize a “summary story,” or understand parts that it might get wrong in such a story. It’s glorified autocomplete.

            And that can be extraordinarily useful, but only if we’re honest with ourselves about what it is and is not capable of.

            Companies that decide to replace their developers with one guy using ChatGPT or Gemini or something will fail, and that’s going to be true for the foreseeable future.

            • paf0@lemmy.world (+1) · 9 months ago

              Try for a second to think beyond what they’re able to do now and think about the future. Also, educate yourself on Autogen and CrewAI, you actually haven’t addressed anything I said because you’re too busy pontificating.

              • kescusay@lemmy.world (+1) · 9 months ago

                Try for a second to think beyond what they’re able to do now and think about the future.

                I am. In the future, they will need to be able to perform tasks using joined-up thinking, second-order logic, and metacognition if they’re going to replace people like me with AI. And that is a very hard goal to achieve. Maybe not P = NP hard, but by no means trivial.

                Also, educate yourself on Autogen and CrewAI, you actually haven’t addressed anything I said because you’re too busy pontificating.

                I have. My company looked at Autogen. We concluded it wasn’t worth it. The solution to AI agents not being able to actually understand what they’re doing isn’t to amplify the problem by creating teams of them.

                Every few years, something new comes along driven by incredible hype, and people declare programming to be dead. They insist a robot will be able to do my job. I have yet to see a technology that will plausibly do that in ten years, let alone now. And all the hype is built on a foundation of ignorance over how complicated a modern, enterprise-ready application is, and how necessary being able to think about its many moving parts is.

                You know who doesn’t suffer from that ignorance? Microsoft, the creators of Autogen. And they’re currently hiring developers, not laying them off and replacing them with Autogen.

      • rottingleaf@lemmy.zip (+1) · 9 months ago

        Well, I sometimes see tools at my job which are supposed to be kinda usable by people like that. In reality they can’t be, 90% of the time.

        That’d be because many people think that engineers only deal in intermediate technical details, and that the general idea is clear to the MBA. In fact it’s not.

  • filister@lemmy.world (+48/-1) · edited · 9 months ago

    Remember when everyone was predicting that we were a couple of years away from fully self-driving cars? We are now a full decade past those couple of years, and I don’t see any fully self-driving cars on the road taking over from human drivers.

    We are now in the honeymoon phase of AI, and I can only assume there will be a huge downward correction in some AI stocks that are overvalued and overhyped, like NVIDIA. They are like crypto stocks: on the moon now, back to Earth tomorrow.

    • SlopppyEngineer@lemmy.world (+18) · edited · 9 months ago

      Two decades. DARPA Grand Challenge was in 2004.

      Yeah, everybody always forgets the hype cycle and the peak of inflated expectations.

    • paf0@lemmy.world (+13) · 9 months ago

      Waymo exists and is now moving passengers around in three major cities. It’s not taking over yet, but it’s here and growing. The timeframe didn’t meet the hype, but the technology is there.

      • filister@lemmy.world (+17) · edited · 9 months ago

        Yes, the technology is there, but it is not Level 5; it is 3.5-4 at best.

        The point with a fully self-driving car is that complexity increases exponentially once you reach 98-99%. The last 1-2% are extremely difficult to crack, because there are so many corner cases and situations you can’t really predict, and you need to make a car that drives more safely than humans if you really want to commercialize this service.

        Same with generative AI: the leap at first was huge, but comparing GPT-3.5 to 4, or even 3 to 4, wasn’t so great. And I can only assume that from now on achieving progress will get exponentially harder, it will require as-yet-unknown algorithms and models, and advances will be a lot more modest.

        And I don’t know about you, but ChatGPT isn’t 100% correct, especially when you ask more niche questions or send more complex queries; it often hallucinates, and sometimes those hallucinations sound extremely plausible.

    • Optional@lemmy.world (+7) · 9 months ago

      Quantum computing is going to make all encryption useless!! Muwahahahahaaa!

      . . . Any day now . . Maybe- ah! No, no, thought this might be the day, but no, not yet.

      Any day now.

        • Podginator@lemmy.world (+1) · 9 months ago

          If you were able to generate near life-like images and simulacra of human speech, why would you tell anyone?

          Money. The answer is money.

          Quantum computing wouldn’t be developed just to break encryption, the exponential increase in compute power would fuel a technological revolution. The encryption breaking would be the byproduct.

  • madcaesar@lemmy.world (+37) · 9 months ago

    This seems as wise as Bill Gates claiming 4 MB of RAM is all you’ll ever need, back in ’98 🙄

  • gornius@lemmy.world (+37) · 9 months ago

    It’s just as crazy as saying “We don’t need math, because every problem can be described using human language”.

    In other words, that might be true as long as your problem is simple enough to be fully described in human language.

    You want to solve a real problem? It’s way more complex, with so many moving parts that you can’t just throw an LLM at it, because solving it takes an actual understanding of the problem.

    • Fandangalo@lemmy.world (+11) · 9 months ago

      Maybe more apt for me would be, “We don’t need to teach math, because we have calculators.” Like…yeah, maybe a lot of people won’t need the vast amount of domain knowledge that exists in programming, but all this stuff originates from human knowledge. If it breaks, what do you do then?

      I think someone else in the thread said good programming is about the architecture (maintainable, scalable, robust, secure). Many LLMs are legit black boxes, and it takes humans to understand what’s coming out, why, and whether it’s valid.

      Even if we have a fancy calculator doing things, there still needs to be people who do math and can check. I’ve worked more with analytics than LLMs, and more times than I can count, the data was bad. You have to validate before everything else, otherwise garbage in, garbage out.

      It sounds like a poignant quote, but it also feels superficial. Like something a smart person would say to a crowd to make them go “Ahh!” but that doesn’t hold water for long.

      • Spiritreader@lemmy.world (+2) · 9 months ago

        And because they are such black boxes, there’s the sector of Explainable AI which attempts to provide transparency.

        However, in order to understand data from explainable AI, you still need domain experts that have experience in interpreting what that data means and how to make changes.

        It’s almost as if any reasonably complex string of operations requires study. And that’s what tech marketing forgets. As you said, it all has to come from somewhere.

    • trolololol@lemmy.world (+10) · 9 months ago

      Ha

      If you ever write code for a living, the first thing you notice is that people can’t explain what they need using natural language (which is what English, Mandarin, etc. are), even when they don’t need to get into details.

      • baldingpudenda@lemmy.world (+3) · 9 months ago

        Also, natural language can be vague and confusing. Look at legalese and law statutes. “When it comes to the law, NOTHING is understood!” -- Dragline

  • Blackmist@feddit.uk (+34) · 9 months ago

    I don’t think he’s seen the absolute fucking drivel that most developers have been given as software specs before now.

    Most people don’t even know what they want, let alone be able to describe it. I’ve often been given a mountain of stuff, only to go back and forth with the customer to figure out what problem they’re actually trying to solve, and then do it in like 3 lines of code in a way that doesn’t break everything else, or tie a maintenance albatross around my neck for the next ten years.

    • jcg@halubilo.social (+2) · edited · 9 months ago

      And that’s really what all these guys saying “AI will take er jobs” don’t understand. Good programmers are not just good coders; coding is really the easy part. They’re also good analysts and listeners. I understand what he’s saying: if you spend time accruing specific domain knowledge instead of computer science, then you can perhaps make better, bespoke solutions because the “coding” can be handled by AI. But in the present day, AI makes garbage code all the time and you’ll be left there not being able to do anything about it because it doesn’t make any sense to you. So who do you call? Someone who can code.

      Even if we get to this hypothetical dream scenario where you tell an AI to do something and it just does it perfectly (gigantic IF), who’s making that AI? The interface for it? The important safety nets to make sure it doesn’t go on a rampage? Itself? Too much context is already lost in conversations between humans, let alone with an AI. I can think of one kind of AI that would be able to do it perfectly though (assuming AIs could be perfected, that is), and that’s an AI pre-equipped with full understanding of the domain. But then, in that case, why do you need the human in the mix at all?

    • I Cast Fist@programming.dev (+2) · 9 months ago

      Yesterday, I had to deal with a client that literally contradicted himself 3 times in 20 minutes, about whether a specific Date field should be obligatory or not. My boss and a colleague who were nearby started laughing once the client went away, partly because I was visibly annoyed at the indecision.

  • howrar@lemmy.ca (+34) · 9 months ago

    I don’t see how it would be possible to completely replace programmers. The reason we have programming languages instead of using natural language is that the latter has ambiguities. If you start having to describe your software’s behaviour in natural language, then one of three things can happen:

    1. either this new natural programming language has to make assumptions about what you intend, and thus will only be capable of outputting a certain class of software (i.e. you can’t actually create anything new),
    2. or you need to learn a new way of describing things unambiguously, and now you’re back to programming but with a new language,
    3. or you spend forever going back and forth with the generator until it gives you the output you want, and this would take a lot longer to do than just having an experienced programmer write it.
    • ReplicaFox@lemmy.world (+16) · 9 months ago
      9 个月前

      And if you don’t know how to code, how do you even know if it gave you the output you want until it fails in production?

    • model_tar_gz@lemmy.world (+7/-4) · edited · 9 months ago

      But that’s not what the article is getting at.

      Here’s an honest take. Let me preface this with some credentials: I’m an AI engineer with many years in the field. Right now I’m directly working on multiple projects that augment and automate code generation, documentation, completion, and even system design/understanding. We’re not there yet. But the pace of progress in how fast we are improving our code AI is astounding. Exponential growth in capability, accuracy, and utility.

      As an anecdotal example: a few years ago I decided I would try to learn Rust (the programming language), because it seemed interesting and we had a practical use case for a performant, memory-efficient compiled language. It didn’t really work out for me, tbh. I just didn’t have the time to get fluent enough with it to be effective.

      Now I’m on a project which also uses Rust. But with ChatGPT and some other models I’ve deployed (Mixtral is really good!) I was basically writing correct, effective Rust code within a week—accepted and merged to main.

      I’m actively using AI code models to write code to train, fine-tune, and deploy AI code models. See where this is going? That’s exponential growth.

      I honestly don’t know if I’d recommend to my young kids programming as a career now even if it has been very lucrative for me and will take me to my retirement just fine. It excites me and scares me at the same time.

      • rolaulten@startrek.website (+11) · edited · 9 months ago

        There is more to a program than writing logic. Good engineers are people who understand how to interpret problems and translate the inherent lack of logic in natural language into something that machines are able to understand (or vice versa).

        The models out there right now can truly accelerate the speed of that translation - but translation will still be needed.

        An anecdote for an anecdote. Part of my job is maintaining a set of EKS clusters where downtime is… undesirable (five nines…). I actively use ChatGPT and Copilot when adjusting the code that describes the clusters; however, these tools are not able to understand and explain the impacts of things like upgrading the control plane. For that you need a human who can interpret the needs/hopes/desires/etc. of the stakeholders.

        • model_tar_gz@lemmy.world (+4) · 9 months ago

          Yeah, I get it 100%. But that’s what I’m saying. I’m already working on and with models that have entire-codebase-level fine-tuning and understanding. The company I work at is not the first pioneer in this space. Problem understanding and interpretation, all of what you said is true: there are causal models being developed (I am aware of one team in my company doing exactly that) to address that side of software engineering.

          So. I don’t think we are really disagreeing here. Yes, clearly AI models aren’t eliminating humans from software today; but I also really don’t think that day is all that far away. And there will always be need for humans to build systems that serve humans; but the way we do it is going to change so fundamentally that “learn C, learn Rust, learn Python” will all be obsolete sentiments of a bygone era.

          • rolaulten@startrek.website (+7) · 9 months ago

            Let’s be clear: current AI models are being used by poor leadership to remove bad developers (good ones don’t tend to stick around). This, however, does place some pressure on the greater tech job market (but I’d argue no different than any other downturn we have all lived through).

            That said, until the issues with being confidently incorrect are resolved (and I bet people a lot smarter than me are tackling the problem), it’s nothing better than a souped-up IDE. Now, if you have a public resource you can point me to that can look at a meta repo full of dozens of tools and help me convert the Python scripts that are wrappers of wrappers (and so on) into something sane, I’m all ears.

            I highly doubt we will ever get to the point where you don’t need to understand how an algorithm works, and for that you need to understand core concepts like recursion and loops. As human brains are designed for pattern recognition, that means writing a program to solve a sudoku puzzle.
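
            The sudoku example is a good one; a compact backtracking sketch (my own) shows exactly the kind of recursion and loops they mean:

            ```python
            # Classic backtracking sudoku solver: the recursion-plus-loops reasoning
            # you still have to be able to follow yourself, AI assistance or not.
            def valid(board, r, c, d):
                if d in board[r]:
                    return False
                if any(board[i][c] == d for i in range(9)):
                    return False
                br, bc = 3 * (r // 3), 3 * (c // 3)
                return all(board[br + i][bc + j] != d for i in range(3) for j in range(3))

            def solve(board):  # board: 9x9 list of lists, 0 marks an empty cell
                for r in range(9):
                    for c in range(9):
                        if board[r][c] == 0:
                            for d in range(1, 10):
                                if valid(board, r, c, d):
                                    board[r][c] = d
                                    if solve(board):
                                        return True
                                    board[r][c] = 0  # backtrack
                            return False  # no digit fits this cell
                return True  # no empty cells left, puzzle solved
            ```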

  • rottingleaf@lemmy.zip (+32/-1) · 9 months ago

    I think this is bullshit regarding LLMs, but making and using generative tools more and more high-level and understandable for users is a good thing.

    Like various visual programming tools, where you sketch something working via connected blocks (like Pure Data for sound), or MATLAB, where I think one can use such constructors to generate code for the specific controllers involved in a scheme, or LabVIEW.

    Or like HyperCard.

    Not that anybody should stop learning anything. There’s a niche for every way to do things.

    I just like that class of programs.

    • gazter@aussie.zone (+2) · 9 months ago

      As someone who’s had a bit of exposure to PLCs and ladder logic, and dabbled in some more ‘programming’ type languages, I would love to find some sort of ‘language’ that fits together like ladder logic, but for more computery type applications.

      I like systems, not programs. Most of my software design is done by building a flowchart, then stumbling around trying to figure out how to write that into code. I feel it would be so much easier if I could just make the flowchart be the code.

      I want a grown-up Scratch.
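
      In the meantime, one low-tech way to get part of the way there (a toy sketch, not a real tool) is to write the flowchart as data and let a tiny loop walk it, so the diagram and the program stay one thing:

      ```python
      # Toy sketch: the flowchart is a dict of named steps; each step does its work
      # and returns the name of the next step, so the chart itself is the program.
      def check_disk(ctx):
          ctx["free_gb"] = 12  # pretend we measured free space here
          return "warn" if ctx["free_gb"] < 20 else "ok"

      def warn(ctx):
          print(f"Low disk space: {ctx['free_gb']} GB free")
          return "done"

      def ok(ctx):
          print("Disk space fine")
          return "done"

      FLOWCHART = {"start": check_disk, "warn": warn, "ok": ok}

      def run(start="start"):
          ctx, step = {}, start
          while step != "done":
              step = FLOWCHART[step](ctx)

      run()
      ```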

      • rottingleaf@lemmy.zip (+2) · 9 months ago

        In some sense this is regressive, but I agree that ladder logic is more intuitive.

        I hated drawing flowcharts in university, but at this point have learned that if you understand what you’re doing, you can draw a flowchart. If you don’t, you shouldn’t have written that program.

        So yeah.

        I think the environment used to program the “Buran” shuttle used such a language (that is, they’d draw flowcharts instead of writing code).