Or don’t, I’m not your father.
The situation

Sometimes, you want to extract a very specific part of a command's output. The classic problem is getting the IP address of a network interface. Even today, the results for these queries on your favorite search engine will most likely involve grep, sed, awk and the like.
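For reference, a traditional pipeline for this task is a sketch like the one below. The sample line and the address in it are made up; in real use you would pipe `ip addr show dev eth0` through `grep 'inet '` first, and the field position depends on your ip version's output format:

```shell
# A sample line as printed by `ip addr show` (hypothetical address)
line='    inet 192.168.1.10/24 brd 192.168.1.255 scope global eth0'
# Classic text scraping: take the second whitespace-separated field,
# then strip the /24 prefix length
addr="$(printf '%s\n' "$line" | awk '{print $2}' | cut -d/ -f1)"
echo "$addr"
```

This works until the output format shifts slightly (an extra field, a renamed label), at which point the positional indices silently point at the wrong data.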
But what if I told you there’s a better way?
Using JSON for structuring data

Some tools can output their data in JSON format. That data can then be processed with the venerable jq utility. You get finer control over the processing, and the chance that the output format changes underneath you is lower. Let's look at what that means in practice.
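As a concrete sketch, assuming iproute2's JSON mode (`ip -j`) and jq are available: the same extraction becomes a query over named fields rather than positional ones. The sample data below is made up and trimmed for readability:

```shell
# Roughly what `ip -j addr show dev eth0` might emit (hypothetical sample)
json='[{"ifname":"eth0","addr_info":[{"family":"inet","local":"192.168.1.10","prefixlen":24},{"family":"inet6","local":"fe80::1","prefixlen":64}]}]'
# Select the IPv4 entry by name, not by field position
addr="$(printf '%s' "$json" | jq -r '.[0].addr_info[] | select(.family == "inet") | .local')"
echo "$addr"
```

On a live system you would replace the sample with the actual `ip -j addr show dev eth0` call; because the query names its fields, it keeps working even when new fields are added to the output.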
Linux also has some shells working on structured data (e.g. nu and elvish). But those don't really help you on machines that don't have them installed.

In general, I think PowerShell's approach is better than the traditional one, but I really dislike the general commands. Always feels weird using it.
Meanwhile, the article assumes jq is present, which it hasn't been by default on any of my servers.

Off I go down the structured data shells for Linux rabbit hole!
Maybe I should have written it differently: I think people are more willing to install another tool than a whole new shell. The latter would also require you to fundamentally change your scripts, because that's most often where you need that carefully extracted data. Otherwise, manually copying and pasting would suffice.
I was thinking about writing a post about nu as well. But I wasn't sure whether that appeals to many people, or whether it should be part of something like a collection of stuff I like, together with fish as an example of a more traditional shell.

Maybe you should just learn the toolsets better. Structured data looks great until you're tasked with parsing a 20 GB structured file. Each line of text representing a single item in a collection makes total sense; then, when you get into things like a list subtype, there's nothing stopping you from using a character sequence to indicate a child item.
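For what it's worth, the line-per-item layout described here is what JSON Lines (NDJSON) gives you, and jq processes such input one object at a time rather than loading the whole file, so large files stay tractable. A small sketch with made-up data:

```shell
# Made-up NDJSON: one JSON object per line, as in a large log file
printf '%s\n' \
  '{"user":"alice","bytes":120}' \
  '{"user":"bob","bytes":300}' > events.ndjson
# jq reads and filters one object at a time, never the whole file at once
jq -r 'select(.bytes > 200) | .user' events.ndjson
```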
One might wonder whether, at those file sizes, working with text still makes sense. I think there's a good reason journald uses a binary format for storing all that data. And "real" databases tend to scale better than text files in a filesystem as well, even though a filesystem is a database too.

Most people won't ever have to deal with that amount of text-based data.