Writing software is hard, particularly when the schedules keep programmers “nose to the grindstone.” Every so often, it's important to take a breather and look around at the wider world, because what we find there can often help us write software better. Philosophy seems a strange partner for the software craftsman, but ironically, a brief dip in the waters of abstract thought can hone skills that later prove useful to the craft of code, models, and workable software.
Recently, a friend of mine was presented with an interesting dilemma regarding the replacement of a departed programmer. Stripped down to the essence of the problem, it went something like this: two candidates remained in the interview process, and each was asked a question: “How well do you know the programming language?”
One answered, “I've never had any formal education on it, but I've spent the last five years programming with it.”
The other answered, “I've studied the language grammar, written a compiler for it, and passed every exam ever created for it with a perfect grade.”
Which would you hire?
This is obviously a contrived scenario, but it does highlight a more fundamental problem: the question of what it means to “know” a topic. In the first case, we have a developer who's used the tool but never really studied it; in the second, we have a developer who's studied it but never really used it. Don't answer the question just for itself, though; look deeper. What we're wrestling with is the basic question of which developer knows the language, and that in turn leads to a much more interesting question: which is the better path to knowledge, reason or experience?
What Do You Know?
It turns out we're not the first to explore this little bit of abstract thought; this argument over the path by which we come to “know” things colors much of the history of Western philosophy. Put simply, there are two strands of philosophical thought on this subject: rationalism and empiricism.
The rationalists and empiricists differentiate themselves on three axes: whether knowledge is gained a priori (prior to, and independently of, experience) or a posteriori (only through experience); whether truths are necessary (they could not have been otherwise) or contingent (they happen to be so, but might not have been); and whether statements are analytic (true by virtue of the meanings of the words themselves) or synthetic (true by virtue of how the world actually is).
Although none of the battle-lines are cast in stone, the breakdown seems readily apparent: an analytic statement, if true, is necessarily so and thus can be known a priori; a synthetic statement, on the other hand, is contingently so and therefore can only be known a posteriori.
Back to Software
Bringing this little diversion back to software for a moment, let's consider the basic question again: How do we know a subject? Can a developer know the subject only after having spent a certain amount of time working within the technology? Are experience and empirical observation the only true path to understanding? Or can a developer use a priori knowledge to infer certain truths about the technology without ever having to link or run a program with it?
Not surprisingly, our industry is rather two-faced about the whole subject, borrowing from each camp without reservation, embarrassment, or consistency.
On the one hand, we clearly have a preference for the a priori/necessary/analytic side of the world, particularly when a new technology emerges. Consider, for example, the desperate clamor for “best practices” when something like WCF or Workflow or BizTalk Server comes out of Redmond: this is new, this is something I might be using, and I desperately don't want to use it incorrectly, so give me the “best practices” to follow. In the view of the empiricists, this is clearly fallacious: how could anybody, even the technology's creators, have any knowledge of the technology whatsoever if they've not used it? Remember, as our fearless Editor-in-Chief pointed out in his July/August 2009 CODE Magazine editorial, “10,000 Hours to Graceland,” Malcolm Gladwell suggests that it takes 10,000 hours of practice to become an expert on any subject, which is a clearly empiricist perspective.
Clearly, analytic conclusions can be drawn from any new technology, based on its fundamental precepts. Consider WCF: “Anything that hits the network will be orders of magnitude slower than if it stays entirely in-process. WCF is about crossing boundaries, including machine boundaries. Therefore, WCF will be orders of magnitude slower when it hits the network.” This is not only a fair statement, but also (not surprisingly) a good inference about how WCF should be used: minimize round trips across the network. The same trick works with SQL, by the way, which is why we pull queries back in sets rather than a single column or row at a time, and why we use stored procedures to keep processing close to the data instead of shipping every row to the client. And it doesn't take much to realize that this generalizes well to other things: Web services, REST, Ajax, ASP.NET... the list goes on.
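To make that inference concrete, here is a minimal ADO.NET sketch contrasting the two shapes of data access the reasoning above distinguishes. The Orders table, its columns, and the connection string are purely illustrative assumptions; the point is the difference between one set-based query and one round trip per row.

using System;
using System.Collections.Generic;
using System.Data.SqlClient;

class RoundTrips
{
    // Illustrative connection string; substitute your own server and database.
    const string ConnStr = "Server=.;Database=Sales;Integrated Security=true";

    // One round trip: ask the database for the whole set at once.
    static List<decimal> TotalsForCustomer(int customerId)
    {
        var totals = new List<decimal>();
        using (var conn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand(
            "SELECT Total FROM Orders WHERE CustomerId = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", customerId);
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    totals.Add(reader.GetDecimal(0));
            }
        }
        return totals;
    }

    // N round trips: one query per order, the pattern the a priori
    // reasoning above tells us to avoid before we ever run a benchmark.
    static List<decimal> TotalsOrderByOrder(IEnumerable<int> orderIds)
    {
        var totals = new List<decimal>();
        using (var conn = new SqlConnection(ConnStr))
        {
            conn.Open();
            foreach (var id in orderIds)
            {
                using (var cmd = new SqlCommand(
                    "SELECT Total FROM Orders WHERE OrderId = @id", conn))
                {
                    cmd.Parameters.AddWithValue("@id", id);
                    object result = cmd.ExecuteScalar();
                    if (result != null && result != DBNull.Value)
                        totals.Add((decimal)result);
                }
            }
        }
        return totals;
    }
}

Both methods produce the same kind of answer; the a priori argument simply tells us, before we ever profile anything, which one will suffer the moment the database moves off the local machine.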
Score one for the a priori guys, then?
Not entirely. We've all been there, working with the developer who has a PhD in Computer Science but can't reverse a string, or the compiler-savvy college intern who can talk for days about optimization phases in the language compiler but can't, for the life of him, open a connection to the database. We joke about “PowerPoint Architects” and other similarly qualified folks who “live in the Ivory Tower,” particularly those in the academic and research fields. What do they know? Most of us have anecdotal evidence suggesting that simply learning about a subject doesn't always translate into knowing the subject; a clear point for the empiricists.
But taken too far, this works against us, too. Consider the C# language: when it “hit the streets” in alpha form at the 2000 PDC, nobody outside of Microsoft (and precious few inside Microsoft) had any experience with it. Given that it would ship roughly eighteen months later (in 2002), by the strict empiricist standard nobody could have claimed to “know” C# until 2005 or so.
Yet, not surprisingly, a large amount of information could be inferred from our experience with other sources, namely C#'s “kissin' cousin” on the other side of the technology fence, Java. Those of us who had extensive experience with Java already knew a great deal about what programming on this new platform and language would be like, and that experience translated almost perfectly. In fact, some of that experience translated directly into language changes: the “using” keyword was added to C# 1.0 based on a mailing list thread from Chris Sells, whose brief experience with Java memory management led him to openly question C#'s lack of deterministic finalization (a consequence of garbage collection); and this without any real experience in the C# language yet, mind you.
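For readers who came along after those early betas, a minimal sketch of what the “using” keyword bought us may help. The FileLogger class below is purely hypothetical; the point is that Dispose() runs deterministically at the end of the block, which is exactly the kind of cleanup guarantee that garbage collection alone does not provide.

using System;
using System.IO;

// Hypothetical resource-holding class, invented for illustration.
class FileLogger : IDisposable
{
    private readonly StreamWriter _writer;

    public FileLogger(string path)
    {
        _writer = new StreamWriter(path);
    }

    public void Log(string message)
    {
        _writer.WriteLine(message);
    }

    // Runs at the end of the "using" block, every time,
    // rather than whenever the garbage collector gets around to finalization.
    public void Dispose()
    {
        _writer.Dispose();
    }
}

class Program
{
    static void Main()
    {
        using (var logger = new FileLogger("app.log"))
        {
            logger.Log("This file handle is released the moment the block exits.");
        } // Dispose() is called here, deterministically.
    }
}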
By the way, it's not just in the new-technology space where this breakdown plays out. When a new project comes around, what's more natural than the desire to prototype the final result “so people can get an idea of what it'll be like” (in other words, so they can get some a posteriori knowledge of it, rather than just a priori)? Ironically, that prototype often serves the opposite end, causing end users to draw conclusions from the prototype that may or may not be true of the final product… but that's the whole point of a posteriori knowledge, if you think about it.
Moving Forward
Having invested the half hour or so to read this far, you may be curious where this little bit of high-level waxing philosophical gets us. Given that CODE Magazine readers tend to fall more into the “practitioner” side of the programming world (as opposed to the “academic” bin), several interesting little tidbits emerge from this discussion.
There are likely a few more nuggets of wisdom in there for the taking, but I'll leave those for you to discover for yourself.
Not All Programming Is Computer Science
Programmers frequently forget that a large portion of this business is “soft science”: things they didn't teach you in programming class. Yes, obviously, the ability to think rationally and logically is important, but programming is much more than just if-then-elses and exceptions. Spending a little time studying the non-science-y parts of programming can go a long way, not only toward helping you communicate with your target audience and project champions, but also toward giving you some skills you can turn back on the craft of programming itself.
Now if you'll excuse me, I have some more light reading to do: The Prince is bound to have some good insights on how to deal with management…
(For those interested in a light introduction to philosophical topics, I highly recommend “50 Philosophy Ideas” by Ben Dupré, or any of the “Blackwell Philosophy and Pop Culture” series, such as “Battlestar Galactica and Philosophy” or “Batman and Philosophy.”)
Ted Neward