• plyth@feddit.org
    link
    fedilink
    English
    arrow-up
    4
    ·
    1 hour ago

    Explicit programmers are needed because the general public has failed to learn programming. Hiding the complexity behind nice interfaces makes it actually more difficult to understand programming.

    This all comes from programmers using programs to abstract programming away.

    What if the 2030s change the approach and use AI to teach everybody how to program?

    • Luccus@feddit.org
      link
      fedilink
      arrow-up
      1
      ·
      49 seconds ago

      I find this to be a real problem with visual shaders. I know how certain mathematical formulas affect an input, but instead of just pressing Enter and writing it down, I now have to move blocks around, and oh no, they were nicely logically aligned, now one block is covering another block, oh noo, what a mess, and the auto-sort thing messes up the logical ordering completely… well, too bad.

      And I find that most solutions on the internet forget that previous outputs can be reused when using the visual editor. Getting normals from already generated noise without resampling somehow becomes arcane knowledge.
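      For what it's worth, the "arcane knowledge" is just differentiating the heightfield you already sampled. A minimal sketch in Python (the `height` function here is a hypothetical stand-in for whatever noise you've already generated):

```python
import math

def height(x, y):
    # Hypothetical stand-in for noise you have already generated;
    # any scalar field works the same way.
    return math.sin(x) * math.cos(y)

def normal_from_height(x, y, eps=1e-4):
    """Approximate the surface normal of z = height(x, y) with
    central differences, reusing the existing samples instead of
    resampling the noise from scratch."""
    dzdx = (height(x + eps, y) - height(x - eps, y)) / (2 * eps)
    dzdy = (height(x, y + eps) - height(x, y - eps)) / (2 * eps)
    # Heightfield normal: (-dz/dx, -dz/dy, 1), then normalize.
    nx, ny, nz = -dzdx, -dzdy, 1.0
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / length, ny / length, nz / length)

print(normal_from_height(0.5, 0.5))
```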

  • Rusty@lemmy.ca
    link
    fedilink
    English
    arrow-up
    21
    ·
    4 hours ago

    You can add SQL in the 70s to that list. It was created to be human-readable so business people could write SQL queries themselves, without programmers.

    • ChickenLadyLovesLife@lemmy.world
      link
      fedilink
      English
      arrow-up
      11
      arrow-down
      2
      ·
      4 hours ago

      Ironically, one of the universal things I’ve noticed in programmers (myself included) is that newbie coders always go through a phase of thinking “why am I writing SQL? I’ll write a set of classes to write the SQL for me!” resulting in a massively overcomplicated mess that is a hundred times harder to use (and maintain) than a simple SQL statement would be. The most hilarious example of this I ever saw was when I took over a young colleague’s code base and found two classes named “OR.cs” and “AND.cs”. All they did was take a String as a parameter, append " OR " or " AND " to it, and return it as the output. Very forward-thinking, in case the meanings of “OR” and “AND” were ever to change in future versions of SQL.
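      For readers who doubt it: here is a hedged reconstruction of what those two "classes" amounted to, translated to Python from the description above (the names follow the anecdote; the rest is guesswork):

```python
# Each "class" just appends its keyword to the input string.
def OR(clause: str) -> str:
    return clause + " OR "

def AND(clause: str) -> str:
    return clause + " AND "

# All that machinery to do what a string literal already does:
query = AND("WHERE a = 1") + "b = 2"
print(query)  # WHERE a = 1 AND b = 2
```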

      • jacksilver@lemmy.world
        link
        fedilink
        arrow-up
        7
        ·
        3 hours ago

        Object Relational Mapping can be helpful when dealing with larger codebases/complex databases for simply creating a more programmatic way of interacting with your data.

        I can’t say it is always worth it, nor does it always make things simpler, but it can help.
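        As an illustration of the "more programmatic way of interacting with your data" point, here is a minimal sketch, not a real ORM, using only Python's standard library (the `users` table is a hypothetical example):

```python
import sqlite3
from dataclasses import dataclass

# The idea behind an ORM, reduced to its core: the rest of the
# codebase works with typed objects instead of raw rows and SQL.
@dataclass
class User:
    id: int
    name: str

def fetch_users(conn: sqlite3.Connection) -> list[User]:
    rows = conn.execute("SELECT id, name FROM users").fetchall()
    return [User(*row) for row in rows]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'ada')")
print(fetch_users(conn))  # [User(id=1, name='ada')]
```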

        • bort@sopuli.xyz
          link
          fedilink
          arrow-up
          3
          ·
          edit-2
          1 hour ago

          The problem with ORMs is that some people go all-in on them and ignore plain SQL completely.

          In reality, ORMs only work well for fairly simple queries and structures; at some point you will have to write your own queries in SQL. But then you get bonus complexity from two different things filling the same niche. It's still worth it, but there is no free cake.

        • trxxruraxvr@lemmy.world
          link
          fedilink
          arrow-up
          3
          ·
          2 hours ago

          I don’t have a lot of experience with projects that use ORMs, but from what I’ve seen it’s usually not worth it. They tend to make developers lazy and create things where every query fetches half the database when they only need one or two columns from a single row.
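          The failure mode described above is easy to demonstrate. A small sketch (table and columns are made up for illustration) of the difference between lazily fetching everything and asking for exactly what you need:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT, bio TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'ada', 'a very long biography...')")

# Lazy: pulls every column even though only the name is needed.
row = conn.execute("SELECT * FROM users WHERE id = 1").fetchone()

# Deliberate: fetch only the column you actually use.
(name,) = conn.execute("SELECT name FROM users WHERE id = 1").fetchone()
print(name)  # ada
```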

  • nucleative@lemmy.world
    link
    fedilink
    English
    arrow-up
    7
    ·
    4 hours ago

    After shovels were invented, we decided to dig more holes.

    After hammers were invented, we needed to drive more nails.

    Now that vibe coding has been invented, we are going to write more software.

    No shit

  • Imacat@lemmy.dbzer0.com
    link
    fedilink
    arrow-up
    44
    ·
    9 hours ago

    At least it's an improvement over no-code/low-code. You can dig in and unfuck some AI code easily enough, but god help you if your no-code platform has a bug that only their support team can fix. Not to mention the vendor lock-in and licensing costs that come with it.

    • Sunsofold@lemmings.world
      link
      fedilink
      arrow-up
      44
      ·
      10 hours ago

      ‘I want you to make me a Facebook-killer app with agentive AI and blockchains. Why is that so hard for you code monkeys to understand?’

    • madcaesar@lemmy.world
      link
      fedilink
      arrow-up
      10
      arrow-down
      1
      ·
      8 hours ago

      Getting AI to solve a complex problem correctly takes so much detailed explanation that it's quicker to do it myself.

      • TurdBurgler@sh.itjust.works
        link
        fedilink
        arrow-up
        9
        arrow-down
        11
        ·
        edit-2
        8 hours ago

        While it’s possible to see gains in complex problems through brute force, learning more about prompt engineering is a powerful way to save time, money, tokens and frustration.

        I see a lot of people saying, “I tried it and it didn’t work,” but have they read the guides or just jumped right in?

        For example, if you haven't read the Claude Code guide, you might never have set up MCP servers or taken advantage of slash commands.

        Your CLAUDE.md might be trash, and maybe you're using @file wrong, blowing tokens or biasing your context badly.

        LLM context windows can only scale so far before you start seeing diminishing returns, especially if the model or tooling is compacting them.

        1. Plan first, using planning modes to help you, and decompose the plan
        2. Have the model keep track of important context externally (e.g. in markdown files with checkboxes) so it can recover when the context gets fucked up
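        A minimal sketch of what point 2's external context file might look like (filename and tasks are hypothetical):

```
# plan.md — re-read by the agent each turn so it can recover
- [x] Decompose the feature into endpoints
- [x] Write failing tests for /login
- [ ] Implement the /login handler
- [ ] Run the test suite and fix regressions
```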

        https://www.promptingguide.ai/

        https://www.anthropic.com/engineering/claude-code-best-practices

        There are community guides that take this even further, but these are some starting references I found very valuable.

        • jacksilver@lemmy.world
          link
          fedilink
          arrow-up
          4
          ·
          3 hours ago

          While you're right that it's a new technology and not everyone is using it right, if it requires all of that setup and infrastructure to work, are we sure it provides a material benefit? Most projects never get that kind of attention at all; requiring it for AI integration means that, for now, it may be more work than it's worth.

          • expr@programming.dev
            link
            fedilink
            arrow-up
            11
            ·
            6 hours ago

            Yup. It’s insanity that this is not immediately obvious to every software engineer. I think we have some implicit tendency to assume we can make any tool work for us, no matter how bad.

            Sometimes, the tool is simply bad and not worth using.

    • Eager Eagle@lemmy.world
      link
      fedilink
      English
      arrow-up
      14
      ·
      10 hours ago

      Even writing an RFC for a mildly complicated feature to mostly describe it takes so many words and communication with stakeholders that it can be a full time job. Imagine an entire app.

  • w3dd1e@lemmy.zip
    link
    fedilink
    arrow-up
    29
    ·
    12 hours ago

    It doesn't matter whether AI can actually replace coders. If CEOs think it can, it will.

    And right now it's good enough to look like it works, so a CEO can just push the problem down the road and get an instant stock bump.

      • Croquette@sh.itjust.works
        link
        fedilink
        arrow-up
        7
        ·
        7 hours ago

        I don't want to spend my career un-fucking vibe code.

        I want to create something fun and nice. If I wanted to clean other people’s mess, I would be a janitor.

        • pinball_wizard@lemmy.zip
          link
          fedilink
          arrow-up
          6
          ·
          7 hours ago

          If I wanted to clean other people’s mess, I would be a janitor.

          I’ll take your share of the slop cleanup if you don’t want it. I wouldn’t mind twice the slop cleanup extortion salary.

        • marcos@lemmy.world
          link
          fedilink
          arrow-up
          7
          ·
          10 hours ago

          I hope all those companies go bankrupt, the people hiring those CEOs lose everything, and the CEOs never manage to find another job in their lives…

          But that's not a bad second option.

          • LordCrom@lemmy.world
            link
            fedilink
            arrow-up
            7
            ·
            9 hours ago

            The CEOs will get a short-term boost to profits and stock price, and a massive bonus from it. Then, a few years later, before the shit starts blowing up, they'll retire with a nice compensation package, leaving the company, employees, and stockholders up shit creek thanks to their short-sighted plan.

            But the CEOs will be just fine on their yachts, don't worry.

  • ulterno@programming.dev
    link
    fedilink
    English
    arrow-up
    35
    arrow-down
    1
    ·
    13 hours ago

    I don't get how an MDA would translate to "no programmers needed". Maybe they meant "coders"?
    But really, I feel like the people who use this phrase to pitch their product either don't know how many people actually find it difficult to break tasks down into logical components a computer can use, or they're lying.

    • TeamAssimilation@infosec.pub
      link
      fedilink
      arrow-up
      31
      arrow-down
      1
      ·
      12 hours ago

      Software engineering is a mindset, a way of doing something while thinking ahead (and I don't mean just scalability), at least if you want it done with quality. Today you can't vibe code anything but proofs of concept: prototypes that are in no way ready for production.

      I don’t see current LLMs overcoming this soon. It appears that they’ve reached their limits without achieving general AI, which is what truly would obsolete programmers, and humans in general.

      • Valmond@lemmy.world
        link
        fedilink
        arrow-up
        21
        ·
        12 hours ago

        Yeah, why is it always coders who are supposed to be replaced, and not a whole slew of other jobs where a misplaced colon won't break the whole system?

        Like management or the C-suite. Fuck, I'd take ChatGPT as a manager any day.

      • ulterno@programming.dev
        link
        fedilink
        English
        arrow-up
        0
        ·
        12 hours ago

        programmers, and humans in general

        With current levels of technology, they would still require humans for maintenance.
        Not because they couldn't self-replicate, they could build that if they had proper intelligence, but because their energy costs are too high to run AI everywhere.


        OK, so I didn't think that through. They might just end up building robots with expert systems to do the maintenance work, which would avoid wasting resources on "intelligence".

  • Pechente@feddit.org
    link
    fedilink
    arrow-up
    33
    arrow-down
    2
    ·
    edit-2
    13 hours ago

    LLMs often fail at the simplest tasks. Just this week I had one fail multiple times where the solution ended up being incredibly simple, and yet it couldn't figure it out. LLMs also seem to "think" any problem can be solved with more code, thereby making the project much harder to maintain.

    LLMs won't replace programmers anytime soon, but I can see sketchy companies taking on programming projects and scamming their clients by selling them work generated by LLMs. I've heard multiple accounts of this already happening, and similar things happened with no-code solutions before.

    • Rikudou_Sage@lemmings.world
      link
      fedilink
      arrow-up
      7
      ·
      9 hours ago

      Today I removed some functions and moved some code to separate services and being the lazy guy I am, I told it to update the tests so they no longer fail. The idiot pretty much undid my changes and updated the code to something very much resembling the original version which I was refactoring. And the fucker did it twice, even with explicit instructions to not do it.

    • TurdBurgler@sh.itjust.works
      link
      fedilink
      arrow-up
      7
      arrow-down
      27
      ·
      edit-2
      12 hours ago

      Your anecdote isn't helpful without seeing the inputs, prompts, and outputs. What you're describing sounds like not using the right model, not providing good context, or not giving a reasoning model the tools to populate context for you intelligently.

      My own anecdotes:

      In two years we have gone from copy/pasting 50-100 line patches out of ChatGPT, to having agent enabled IDEs help me greenfield full stack projects, or maintain existing ones.

      Our product delivery has been accelerated while maintaining the same quality standards, verified against internal best practices that we've codified as deterministic checks in CI pipelines.

      The power comes from planning correctly. We're in the realm of context engineering now: learning to leverage the right models with the right tools in the right workflow.

      Most novice users have the misconception that you can tell it to "bake a cake" and get the cake you had in your mind. In reality, baking a cake breaks down into a recipe with steps that can be validated. You, as the human-in-the-loop, can guide it to bake your vision, or design your agent so it can infer more about the cake you desire.

      I don't place a power drill on the table and say "build a shelf," expecting it to happen, but the marketing of AI has people believing they can.

      Instead, you give an intern a power drill, a step-by-step plan with all the components, and on-the-job training available on demand.

      If you're already good at the SDLC, you are rewarded. Some programmers aren't good at project management and will find this transition difficult.

      You won't lose your job to AI, but you will lose it to a human using AI correctly. This isn't speculation either; we're already seeing workforce reductions offset by senior developers leveraging AI.

      • TeamAssimilation@infosec.pub
        link
        fedilink
        arrow-up
        23
        arrow-down
        2
        ·
        12 hours ago

        I seriously doubt your quality is maintained when an LLM writes most of your code, unless a human audits every line and understands what it is doing and why.

        If you break the tasks down small enough that you can do this at each step, it's no longer writing a full application; it's writing small snippets, and you're pair-programming with it.

        • TurdBurgler@sh.itjust.works
          link
          fedilink
          arrow-up
          5
          arrow-down
          14
          ·
          edit-2
          7 hours ago

          Great? The business is making money. I already explained that we have human-reviewed PRs on top of full test coverage and other validations.

          We're compliant with our organization's security policies, and we have no trouble maintaining the code we're generating, because it's based on years of well-defined patterns and best practices that we score internally across the entirety of engineering.

          As more examples in the real world:

          Aider has written 7% of its own code (outdated, now 70%) | aider https://aider.chat/2024/05/24/self-assembly.html

          https://aider.chat/HISTORY.html

          LibreChat is largely contributed to by Claude Code, it’s the current best open source ChatGPT client, and they’ve just been acquired by ClickHouse.

          https://clickhouse.com/blog/clickhouse-acquires-librechat

          https://github.com/danny-avila/LibreChat/commits/main/

          Such suffering from the quality! So much worse than our legacy monolith!

          • Clent@lemmy.dbzer0.com
            link
            fedilink
            English
            arrow-up
            3
            ·
            2 hours ago

            Your product is an LLM tool written with LLM tools. That is hilarious.

            If the goal is to see how much middleware you can sell idiots, you’re doing great!

        • TurdBurgler@sh.itjust.works
          link
          fedilink
          arrow-up
          4
          arrow-down
          11
          ·
          12 hours ago

          We have human code review, and our backlog was well curated prior to AI. Strongly defined acceptance criteria, good application architecture, and unit tests with 100% coverage are just a few of the ways we keep things on the rails.

          I don't see what pair coding has to do with this. I never claimed I'm one-shotting agents.

        • TurdBurgler@sh.itjust.works
          link
          fedilink
          arrow-up
          3
          arrow-down
          7
          ·
          edit-2
          11 hours ago

          Cursor and Claude Code are currently top tier.

          GitHub Copilot is catching up, and at a $20/mo price point it's one of the best ways to get started. Microsoft is slow-rolling some feature delivery because they can just steal the ideas from other projects that do them first. VS Code also has extensions worth looking at: Cline and RooCode.

          Claude Code is better than just using Claude in Cursor or Copilot. Claude Code has next-level magic that dispels some of the "AI bad at thing" myths being propagated here, thanks to the strong default prompts and validation built into it. You can say dumb, ignorant shit, and it will implicitly do a better job than other tools given the same commands.

          To REALLY utilize Claude Code, YOU MUST configure MCP tools… context7 is a critical one that avoids one of those footguns: "the model was trained on older versions of these libraries."

          Cursor hosts models with its own secret sauce that improves their behavior. They hard-forked VS Code to make a more deeply integrated experience.

          Avoid antigravity (google) and Kiro (Amazon). They don’t offer enough value over the others right now.

          If you already have an OpenAI account, Codex is worth trying; it's like Claude Code, but not as good.

          JetBrains… not worth it for me.

          Aider is an honorable mention.

      • Eheran@lemmy.world
        link
        fedilink
        arrow-up
        4
        arrow-down
        5
        ·
        12 hours ago

        One of the rare comments here that is not acid spewing rage against AI. I too went from “copying a few lines to save some time” and having to recheck everything to several hundred lines working out of the box.

        • TurdBurgler@sh.itjust.works
          link
          fedilink
          arrow-up
          5
          arrow-down
          9
          ·
          12 hours ago

          I get it. I was a huge skeptic 2 years ago, and I think that’s part of the reason my company asked me to join our emerging AI team as an Individual Contributor. I didn’t understand why I’d want a shitty junior dev doing a bad job… but the tools, the methodology, the gains… they all started to get better.

          I’m now leading that team, and we’re not only doing accelerated development, we’re building products with AI that have received positive feedback from our internal customers, with a launch of our first external AI product going live in Q1.

          • Trail@lemmy.world
            link
            fedilink
            arrow-up
            1
            ·
            33 minutes ago

            Get back to us once you've actually launched and maintained a product for a few months. Because right now, you don't have anything in production.

          • chunkystyles@sopuli.xyz
            link
            fedilink
            English
            arrow-up
            4
            ·
            3 hours ago

            What are your plans when these AI companies collapse, or start charging the actual costs of these services?

            Because right now, you’re paying just a tiny fraction of what it costs to run these services. And these AI companies are burning billions to try to find a way to make this all profitable.

            • Eheran@lemmy.world
              link
              fedilink
              arrow-up
              2
              arrow-down
              1
              ·
              2 hours ago

              What are your plans when the Internet stops existing or is made illegal (same result)? Or when…

              They are not going away. LLMs are already ubiquitous, there is not only one company.

  • Speiser0@feddit.org
    link
    fedilink
    arrow-up
    16
    arrow-down
    7
    ·
    12 hours ago

    Well, have you seen what game engines have done to us?

    When tools become more accessible, it mostly results in more garbage.

    • Grimy@lemmy.world
      link
      fedilink
      arrow-up
      14
      arrow-down
      4
      ·
      10 hours ago

      I’m guessing 4 out of 5 of your favorite games have been made with either unity or unreal. What an absolutely shit take.

      • Clent@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        1
        ·
        2 hours ago

        The fact that they have favorite games using these engines has absolutely nothing to do with there being massive amounts of garbage games. The shit take is yours.

      • Speiser0@feddit.org
        link
        fedilink
        arrow-up
        2
        arrow-down
        1
        ·
        7 hours ago

        Your guess is wrong. :P And anyway, I didn't say all games using an easy-to-use game engine are shit.

        If you use an easy game engine (idk if Unreal would even fit this, btw), it's easier to produce something usable at all. Meanwhile, the effort needed to make the game good (i.e. game design) stays the same. The result is that games reach a publishable state with less effort spent in development.

  • Caveman@lemmy.world
    link
    fedilink
    arrow-up
    7
    arrow-down
    2
    ·
    12 hours ago

    I barely use AI for work, but I gotta say it's the first time I can get some very specific tasks done faster.

    I currently have it write code generators; I fix them up, and after that I have something better at producing boilerplate than the LLMs themselves. Today I had to throw together a bunch of CRUD for a small webapp, and it saved me around 1-2 hours.
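    The generator approach can be sketched like this in Python (the table and field names are made up for illustration):

```python
def make_crud(table: str, fields: list[str]) -> str:
    """Emit Python source for basic CRUD helpers for one table.
    Once the generator is fixed up by hand, regenerating the
    boilerplate is deterministic, unlike asking an LLM each time."""
    cols = ", ".join(fields)
    marks = ", ".join("?" for _ in fields)
    args = ", ".join(fields)
    return (
        f"def create_{table}(conn, {args}):\n"
        f'    conn.execute("INSERT INTO {table} ({cols}) VALUES ({marks})", ({args}))\n'
        "\n"
        f"def read_{table}(conn, row_id):\n"
        f'    return conn.execute("SELECT {cols} FROM {table} WHERE id = ?", (row_id,)).fetchone()\n'
    )

generated = make_crud("note", ["id", "title"])
print(generated)
```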

    • mesa@piefed.social
      link
      fedilink
      English
      arrow-up
      3
      ·
      7 hours ago

      Yeah, for forms and very basic HTML it's good. Anything complex and you have to take over. Great at saving time, like an intern, but a bit worse in that an intern will typically get better, and the AI hasn't really.

    • TurdBurgler@sh.itjust.works
      link
      fedilink
      arrow-up
      2
      arrow-down
      2
      ·
      12 hours ago

      That’s a great methodology for a new adopter.

      Curious: did you read about it somewhere, or arrive at it out of mistrust of the AI?

  • marcos@lemmy.world
    link
    fedilink
    arrow-up
    3
    ·
    10 hours ago

    Early 80s: High level structured languages (Hello COBOL!)

    Late 80s: 4th generation languages

    At least before that, people just assumed everybody who interacted with a computer was a programmer, so managers didn't hear the pitch and get the urge to fire all the programmers.

  • Skullgrid@lemmy.world
    link
    fedilink
    arrow-up
    5
    arrow-down
    1
    ·
    12 hours ago

    help me. I am stuck working in SDET and my job makes me do a cert every 6 months that’s “no code” and I need to transition to writing code.

    I've been an SDET since 2013, in C# and Java. I am so fucking sick of Selenium and getting manual testing dumped in my lap. I led a test team for a Fortune 500 company as a contractor on one project. I can also program in the useless Salesforce stack (Apex, LWC).

    I am the sole breadwinner for my household. I have no fucking idea what to do.

    • TurdBurgler@sh.itjust.works
      link
      fedilink
      arrow-up
      4
      arrow-down
      6
      ·
      edit-2
      7 hours ago

      If you're not already messing with MCP tools that do browser orchestration, you might want to investigate that.

      For example, if you set up Puppeteer, you can have a natural conversation about the website you're working on, and the agent can orchestrate your browser for you. The implication is that the agent can get into a feedback loop on its own to verify the feature you're asking it to build.

      I don’t want to make any assumptions about additional tooling, but this is a great one in this space https://www.agentql.com/