Ah! A really useful bit of research that can pay off huge dividends!
While it has been true for some time that the most expensive part of application development is the human involvement, I think we have used that as an excuse not to prune our code, use the most efficient language for the circumstances, or otherwise optimize the code side of the application. Which is really strange, because we’ll pay big bucks for a faster CPU or a faster bus, and we check the RPM on our hard drives to eke out a bit faster response from our machines, and then we’ll smother that architecture with bloatware.
In a previous life I worked for a company that had a couple of Assembler programmers who wrote and maintained the read and write routines for certain high-traffic databases. As a rookie I used the typical IO routines I was familiar with, the ones that came with the language package. When we ran our stress test on the code changes for the month’s implementation package, my code was the bottleneck, specifically my IO. When we changed the calls from the generic IO routines to the company’s custom Assembler routines, our throughput returned to normal, and I became a believer in efficient code, especially in critical high-volume areas such as key data calls. And those Assembler guys weren’t anachronistic dinosaurs but key members of the Systems group.
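To make the generic-versus-tuned IO point concrete, here is a small Python sketch of my own (not the company’s Assembler routines, and the function names are made up for illustration). It contrasts writing records with buffering disabled, where every write is a separate system call, against the default buffered path, which batches many records per call; in high-volume code the difference in call overhead is exactly the kind of thing a stress test exposes.

```python
import os
import tempfile

def write_unbuffered(path, records):
    # buffering=0 disables Python's buffer: each write() goes to the OS
    with open(path, "wb", buffering=0) as f:
        for rec in records:
            f.write(rec)

def write_buffered(path, records):
    # default buffering batches many small records into few syscalls
    with open(path, "wb") as f:
        for rec in records:
            f.write(rec)

records = [b"x" * 64] * 10_000  # 10,000 small fixed-size records

with tempfile.TemporaryDirectory() as d:
    a = os.path.join(d, "a.bin")
    b = os.path.join(d, "b.bin")
    write_unbuffered(a, records)
    write_buffered(b, records)
    size_a = os.path.getsize(a)
    size_b = os.path.getsize(b)
```

Both functions produce byte-identical output; only the number of trips through the OS differs, which is why the slower path only shows up under load.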
With all the hype the latest cool language gets, whether it is Java or Ruby on Rails, HTML or CSS, it can be easy to lose sight of the fact that these are meta-languages, using high-level concepts to implement in one statement what the old-timers might need 100 or 1,000 lines of COBOL or Fortran to express. A benefit of working in these nth-generation (nGL) languages is that it is easier to get from concept to working application; a potential cost is that the implemented code is, as I discovered with my IO routines, bloated and less efficient.
But how do we know if our code is bloated? It isn’t just the disk space it occupies; it is also runtime efficiency, whether a routine is smothered under 15 loops or gets by with only 2, even if the 2-loop version carries an extra statement or two per pass.
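As a rough illustration of that loop tradeoff, here is a Python sketch of my own (not taken from the article): one version makes three separate passes over the data with very simple statements, the other makes a single pass that carries a few extra statements per iteration. Both compute the same answer; the point is that fewer traversals can win even when each iteration does slightly more work.

```python
def stats_three_passes(xs):
    # three separate loops over the data, each trivially simple
    return min(xs), max(xs), sum(xs)

def stats_one_pass(xs):
    # one loop, with a couple of extra statements per iteration
    lo = hi = xs[0]
    total = 0
    for x in xs:
        if x < lo:
            lo = x
        elif x > hi:
            hi = x
        total += x
    return lo, hi, total

data = [5, -3, 12, 7, 0, 12, -3]
```

Which version is actually faster depends on the language, the data size, and cache behavior, which is exactly why measuring, as the article’s authors did, beats guessing.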
Fortunately, the guy who wrote this article, or at least the guys the article was written about, took the time and energy to delve into this very question. They pulled together a bunch of languages and wrote identical applications in each so we could have something approaching a definitive answer. And the article written by/about them is appropriately called “Speed, size and dependability of programming languages.”