In this video I discuss Ubuntu's decision to switch to Rust implementations of the core utilities (mkdir, ls, cat, etc.) and what it could mean for the broader Linux ecosystem. My merch is ...
It’s been proven faster. That’s all I personally know.
Nothing except binary coding can be faster than C, I think.
Rust is better for writing multithreaded applications, which means that the small number of utilities that can exploit parallelism get a significant speedup. uutils' multithreaded sort was apparently 6x faster than the single-threaded GNU version.
P.S. I strongly doubt handwritten assembly is more efficient than what modern C compilers produce.
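To make the parallel-sort claim above concrete, here is a minimal Rust sketch using the rayon crate's par_sort_unstable. This is only an illustration of the technique, not uutils' actual implementation, and the 6x figure is the commenter's, not reproduced here.

```rust
// Minimal sketch: single-threaded vs. rayon-parallel sort.
// Assumes `rayon = "1"` in Cargo.toml; this is NOT uutils' code.
use rayon::prelude::*;
use std::time::Instant;

fn main() {
    // Pseudo-random-ish data so the sort has real work to do.
    let data: Vec<u64> = (0..10_000_000u64)
        .map(|i| i.wrapping_mul(2_654_435_761))
        .collect();

    let mut a = data.clone();
    let t = Instant::now();
    a.sort_unstable(); // single-threaded baseline
    println!("sequential: {:?}", t.elapsed());

    let mut b = data.clone();
    let t = Instant::now();
    b.par_sort_unstable(); // rayon spreads the work across all cores
    println!("parallel:   {:?}", t.elapsed());

    assert_eq!(a, b);
}
```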
My simple assembly program can run circles around compilers. As long as something is small, it is possible to optimize it better than a C compiler can.
Compilers have a lot of challenges just to compile, let alone optimize. Register allocation alone is a big problem. An inherent problem is that the compiler does not know what the program is supposed to do. Humans still write better assembly than compilers.
The one down arrow on the guy you are responding to is from me, just so everybody knows.
This is just wrong: the compiler (and linker) knows exactly what the program does, as it has the ENTIRE source code available. Compilers have been so good for the last 20 years that it is quite hard to write things faster in assembly/machine code.
One of the harder parts of assembly is keeping track of which registers a subroutine uses and which ones are available; as the program grows larger you might be forced to push/pop to the stack all the time. Inlining code is also difficult in assembler, and the compiler is quite adept at that.
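A small, hedged illustration of the inlining point in Rust (the same applies to C): the optimizer inlines small functions per call site automatically, work a human would have to redo by hand at every call in assembly. The function dot3 here is a made-up example, and #[inline] is only a hint to the compiler, not a command.

```rust
// The optimizer typically inlines small functions like this automatically
// at opt-level 2+; `#[inline]` is only a hint.
#[inline]
fn dot3(a: [f64; 3], b: [f64; 3]) -> f64 {
    a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
}

fn main() {
    let v = [1.0, 2.0, 3.0];
    let w = [4.0, 5.0, 6.0];
    // After inlining there is no call overhead left; doing the same by hand
    // in assembly means re-tracking register use at every call site.
    println!("{}", dot3(v, w));
}
```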
That might have been true up until the 90s, but then compilers started getting so good (Watcom, for instance) that there was rarely any point in writing assembler, unless there was some extremely hardware-specific thing that needed to be done.
The last time I wrote assembly I was making a makeshift sound card with an Arduino. I hooked a speaker up to the GPIO and was toggling the bit.
In large applications maybe not, but in small benchmarks there can be perfectly optimized assembly.
Of course it is for hot paths or small examples, but I doubt it's feasible or maintainable to write a "real" project like the core utilities in assembly.
I'm sure you can do cat in assembly.
As with everything, it all depends.
When writing super-efficient assembly you write for the destination machine and not necessarily to fit higher-level language constructs. There are often ways to cut corners for aspects that aren't needed, reducing instructions and loops, all based on well-designed assembly.
The problem is you aren't going to do that for every single CPU instruction, because it would take forever and not provide a good ROI. It is far more common to write 99% of your system code in C and then write just the parts that really benefit from fine-tuned assembly. And note that unless you're writing for an RTOS or something critically dependent on efficiency, it's going to be even less assembly.
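For scale, following up on the cat remark above: here is a minimal sketch of the core of a cat-style utility in Rust; a hand-written assembly version would be the same read/write loop expressed as raw syscalls. This is an illustration only, not uutils' actual cat.

```rust
// Minimal sketch of a cat-style utility: copy each named file (or stdin
// when no arguments are given) to stdout. Not uutils' real implementation;
// real cat adds flags (-n, -E, ...) and better error reporting.
use std::fs::File;
use std::io::{self, Write};

fn main() -> io::Result<()> {
    let mut out = io::stdout().lock();
    let args: Vec<String> = std::env::args().skip(1).collect();

    if args.is_empty() {
        // No files named: copy stdin straight through.
        io::copy(&mut io::stdin().lock(), &mut out)?;
    } else {
        for path in &args {
            let mut f = File::open(path)?;
            io::copy(&mut f, &mut out)?;
        }
    }
    out.flush()
}
```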
Multithreading isn’t a true efficiency benefit. I was talking about different things there.
Fortran
I’m not sure why people are downvoting you, since Fortran is known to be extremely performant when dealing with multidimensional arrays.