Following on from Sunday’s post, which was really about using overly generic solutions to actually quite specific problems (e.g. a full-featured message bus when all anyone really cares about is data change notification), there are some other things that are taken as gospel in the industry that have been annoying me, for the same reason.
Hungarian notation is the coding convention of prefixing every variable name with a short code indicating its data type. So for example, if you have a length or something and it’s an integer, you might store it in a variable called iLength, or the name of something in a (zero-terminated) string called szName. Why is this silly? Because while it gives code a superficial gloss of “engineering”, it doesn’t actually help! Let’s say that you attempt iLength + szName: what will happen?
- The language will spot that this makes no sense and it will fail to compile, in which case the fancy naming scheme, which must be adhered to manually by every developer, hasn’t actually helped.
- The language will silently coerce iLength into a string (an implicit itoa(), in C) and the result might be “1Gaius”. The fancy naming convention hasn’t helped again, and now you have some dodgy code that compiles and seems to run just fine…
- Now let’s say you have iLength and iWidth. They are both integers, and the compiler can’t stop you adding a length to a width when you meant to multiply them, and the code “looks right” as well. Now what if the length is an integer number of inches but the width, queried from another system, is in centimetres? Think it can’t happen? Really?
But too many developers blindly adhere to this system – and even advocate it – when instead they should be stepping back and taking their thinking to the next level. An ML-derived language simply won’t let you mix and match datatypes without explicit conversion – not even integers and floating point. F# has units of measure.
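To make that concrete, here is a minimal F# sketch of exactly the inches-and-centimetres mix-up above – the inch and cm measures and the numbers are invented for illustration, nothing more:

```fsharp
// A minimal sketch: declare the units ourselves, then let the compiler police them.
[<Measure>] type inch
[<Measure>] type cm

let length = 12.0<inch>                    // length in inches
let width  = 30.0<cm>                      // width in centimetres, from the "other system"

// let broken = length + width            // does not compile: cm will not unify with inch
// let wrong  = length * width            // compiles, but its type is float<cm inch>,
//                                        // so it can never be passed off as square inches

let cmPerInch = 2.54<cm/inch>
let area = length * (width / cmPerInch)    // float<inch^2>, converted explicitly

// The same strictness applies to ordinary numeric types in an ML-family language:
// let n = 1 + 2.0                         // does not compile: no silent int-to-float coercion
```

The annotations are checked at compile time and erased at runtime, and nobody has to remember a naming convention.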
Which leads me onto my next point: unit testing. Now don’t get me wrong: I am a strong believer both in not cutting corners on QA and in automating anything that can be automated. But there is an old saying, why get a dog and bark yourself? Too much unit testing is really just moving features such as strong typing, static analysis and enforced purity out of the language and into the developer’s head and his manually written code – when really the developer should be taking a step back and asking, why am I doing this when I could be using a language that catches many classes of bugs “for free”? ML compilers will prove at compile time that there is no possible code path into a function without well-formed arguments, and no way out of that function without a well-formed return value. This frees the programmer to concentrate on what programming is really about – solving people’s problems – rather than worrying that he’s missed an edge case. I’ve seen projects where the unit tests dwarfed the actual application code; why are organizations still willing to pay for this?†
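As a sketch of what that buys you – the NonEmpty type and the readings value here are hypothetical, purely to illustrate the point – if the type itself cannot represent an empty list, there is simply no “empty input” edge case left to write a unit test for:

```fsharp
// Hypothetical example: a list that cannot be empty, because constructing one
// requires at least one element up front.
type NonEmpty<'a> = NonEmpty of 'a * 'a list

// Total functions: there is no "empty input" code path, so no unit test for it either.
let head (NonEmpty (x, _)) = x

let average (ne: NonEmpty<float>) =
    let (NonEmpty (x, xs)) = ne
    let all = x :: xs
    List.sum all / float (List.length all)

// The only way to obtain a NonEmpty value is to supply that first element.
let readings = NonEmpty (1.2, [3.4; 5.6])
printfn "%f %f" (head readings) (average readings)
```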
And that leads me onto my final rant for this evening: design patterns. Wikipædia defines a design pattern as a description or template for how to solve a problem that can be used in many different situations. But this is merely another example of features that should be in the language instead existing in developers’ heads, and being re-implemented in their code in every project (a point made by the writer Paul Graham). And for this too, it’s the individual developers – the skilled and senior ones anyway, the ones who are paid to think and not just to code – who should be asking:
- Why am I doing another Decorator or Strategy – shouldn’t my language support functional composition?
- Why am I doing another Command pattern – shouldn’t my language support currying?
- Why am I doing another Iterator or Visitor – shouldn’t my language support polymorphic functions?
- You get the picture… (there’s a short F# sketch of all three below)
An enormous amount of effort is expended retrofitting these features onto OO languages; perhaps this is another example of the innovator’s dilemma.
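To put some code behind those three bullets, here is a rough F# sketch – the trim, shout, send and visit functions and the address are all invented for the example, not taken from any real system:

```fsharp
// Decorator / Strategy: behaviours are plain functions, stacked with composition (>>)
let trim  (s: string) = s.Trim()
let shout (s: string) = s.ToUpper()
let decorate = trim >> shout >> sprintf "[LOG] %s"

// Command: currying / partial application bakes the receiver in, no class required
let send (destination: string) (message: string) =
    printfn "to %s: %s" destination message
let sendToOps = send "ops@example.com"          // a ready-made "command"

// Iterator / Visitor: one polymorphic higher-order function walks any list of anything
let visit (f: 'a -> unit) (items: 'a list) = List.iter f items

decorate "  hello  " |> sendToOps
visit sendToOps ["deploy started"; "deploy finished"]
```

Each “pattern” collapses into a line or two of ordinary functions, because composition, partial application and parametric polymorphism are already in the language.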
This is the reason that I am excited about F#, tho’ I still haven’t really gotten to grips with it in the “real work” sense that I have with OCaml – yet. It’s a first-class citizen of Visual Studio now, fully supported by probably the biggest tools vendor on the planet, with the full ecosystem of all the libraries available for .NET, so there’s no reason not to use it (on Windows anyway – I know all about Mono, but on Unix you lose the ecosystem, so I’ll stick with OCaml there, and most of my work is on Unix systems and needs database connectivity). Similarly, on the JVM people are turning to Scala. This is certainly a very interesting time to be a curious and ambitious software engineer…
† I do know the answer to this: because any organization will seek to commoditize the goods and services it relies on; they believe it’s worth being less productive upfront in order to get cheaper maintenance in the future. Of course, neither of those things is actually true! And of course, writing unnecessary unit tests, refactoring code, and building or using over-engineered generic frameworks or “solutions” is indistinguishable from “real work”… That is why I say that ordinary developers have created this comfortable niche for themselves.
I don’t intend this piece to “convert” anyone whose eyes are not yet open; it’s just here to further crystallize some of my thoughts.