In this post, I take a look at (hopefully) better output parsing.

  • Laser@feddit.orgOP · 2 days ago

    Linux also has some shells that work on structured data (e.g. nu and elvish). But those don’t really help you on machines that don’t have those installed.

    In general, I think PowerShell’s approach is better than the traditional one, but I really dislike the commands in general. It always feels weird using it.
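
    To sketch the difference (just an illustration; in nu, ps, where, select and sort-by are built-ins, while the awk field numbers assume the usual ps aux column layout):

        # nu: the pipeline passes a table of records, not text, so no column slicing needed
        ps | where cpu > 10 | select name cpu | sort-by cpu

        # traditional approach to the same idea: slice text columns by position
        ps aux | awk '$3 > 10 {print $11, $3}' | sort -k2 -n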

    • JWBananas@lemmy.world · 2 days ago

      But those don’t really help you on machines that don’t have those installed.

      Meanwhile the article assumes jq is present, which it hasn’t been by default on any of my servers.

      Off I go down the structured data shells for Linux rabbit hole!
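
      That said, when jq isn’t there, a python3 one-liner can cover the simple cases (rough sketch; the URL and the .users[0].name path are made up for illustration):

          # with jq installed
          curl -s https://example.com/api | jq -r '.users[0].name'

          # fallback using only python3, which is preinstalled far more often
          curl -s https://example.com/api | python3 -c 'import sys, json; print(json.load(sys.stdin)["users"][0]["name"])'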

      • Laser@feddit.orgOP · 2 days ago

        Maybe I should have written it differently: I think people are more willing to install another tool than a whole new shell. The latter would also require you to fundamentally change your scripts, because that’s most often where you need that carefully extracted data; otherwise, manually copying and pasting would suffice.

        I was also thinking about writing a post about nu, but I wasn’t sure whether that would appeal to many people, or whether it should be part of something like a collection of stuff I like, together with fish as an example of a more traditional shell.

        • chonkyninja@lemmy.world · 2 days ago

          Maybe you should just learn the toolsets better. Structured data looks great until you’re tasked with parsing a 20 GB structured file. Each line of text representing a single item in a collection makes total sense, and when you get into things like a list subtype, there’s nothing stopping you from using a char sequence to indicate a child item.
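
          For what it’s worth, that line-per-item shape is what something like NDJSON gives you, and even jq then processes it record by record rather than loading the whole file (rough sketch; huge.ndjson and the level field are made up):

              # one JSON object per line: each record is parsed and discarded in turn,
              # so memory stays flat even for a 20 GB file
              jq -c 'select(.level == "error")' huge.ndjson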

          • Laser@feddit.orgOP · 2 days ago

            One might wonder if, at those file sizes, working with text still makes sense. I think there’s a good reason journald uses a binary format for storing all that data. And “real” databases tend to scale better than text files in a filesystem as well, even though a filesystem is a database.

            Most people won’t have to deal with that amount of text based data ever.