History teaches us …

I was on a conference call with a group of IT application architects, and, with a few minutes left until the end of the scheduled hour, the summary of the meeting began with that phrase: “History teaches us …”. As an amateur historian, my interest was immediately piqued. The end of the sentence was “that a three-tier architecture is the best architecture.”

Had it not been for that three-word preamble, I would have let the assertion stand unchallenged. But Clio had been summoned — and I had to disagree.


Frederick Lanchester was the first person to try to apply mathematical modeling techniques to warfare. In the early part of the twentieth century, he developed the Lanchester equations to model armed conflict. I came across an article of his as a child in a collection of books (© 1956), still available today, The World of Mathematics. Lanchester looks at the tactics of Napoleon in the Italian campaign against Austria and of Lord Nelson at Trafalgar to illustrate his point. The lesson that I learned from that history (although there are others to be learned) is that the most effective tactic is to split one’s opponent into two nearly equal halves. The reason is Lanchester’s square law: the effectiveness of a fighting force is proportional to the square of its size.

Software complexity, also being proportional to the square of size, could very well learn this lesson of history; in which case the correct number of tiers might well be two.
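
To put rough numbers on it (the figures are mine, not Lanchester’s): a force of 100 that stays united has an effective strength of 100² = 10,000, while the same force split into two halves of 50 has only 50² + 50² = 5,000. The same arithmetic applies to anything whose cost grows as the square of its size:

$$2\left(\frac{N}{2}\right)^2 = \frac{N^2}{2}$$

Splitting something in two cuts its effective strength (if it is a foe), or its complexity (if it is a system), in half.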

William of Ockham (who lived in the fourteenth century) is famous for Occam’s Razor (also known as the “law of parsimony”):

“Entia non sunt multiplicanda praeter necessitatem” (“Entities should not be
multiplied unnecessarily.”)

This is often simplified to “keep it simple, Simon”.

Although famous today for the Razor, in his own time, William was famous for being a nominalist. Nominalism holds that there is no underlying “truth” to be discovered — rather, things mean what we all agree that they mean — and we can choose to agree differently — which will change our perception of reality. In other words: That word does not mean what you think it means.

So, when someone asks the question: “Do you agree that a three-tier architecture is the best architecture?” some people might think that a “tier” has some divinely-inspired (if they are Medieval) or expert-defined (if they are contemporary) meaning, and that the question was meant to elicit a discussion around the variable “three”.

But a nominalist who has read Lanchester’s article in The World of Mathematics might know that the number is a constant; that that constant is “two”; that the variable is “tier”; that the question is wrong; and that the correct question is “What is the meaning of ‘tier’ for which a two-tier architecture is the best architecture?”

Aha! the discerning reader exclaims. Why not assert a priori that “three” is the correct number — having attended Catholic school, this seems perfectly reasonable — and that the correct question is “What is the meaning of ‘tier’ for which a three-tier architecture is the best architecture?” Then I would have agreed a priori with the initial premise, and we would all have moved forward unanimously. But this line of reasoning springs the trap laid by William with his nominalist Razor.

For if it is true that there is a definition of “tier” for which a two-tier architecture is the best architecture, and there is a definition of “tier” for which a three-tier architecture is the best architecture, then by choosing “three” a priori, you have multiplied an entity (entia multiplicanda) needlessly.

Therefore, history teaches us that a two-tier architecture is the best architecture.


The lessons of history are not easy to identify, interpret, or internalize. I love the fact that I work at a place where we try to learn them.


Atualização

I’ve been writing code lately. Makes me feel young again. Unfortunately, something terrible happened.

For some reason (and thereby hangs a tale), I decided to write the application in Brazilian Portuguese. You know, localized for pt-BR only.

Based on my Smalltalk days, I decided to write the “Check for Updates” code first. (A Smalltalk programmer would write a one-line app that did nothing, and then change it until it did what they wanted. If they had to quit and restart just because they had changed something, they lost a point. I guess thereby hangs another tale.) I figured the modern equivalent was to release an app that didn’t do anything except automatically check for updates and update itself. Then one could release, and add functionality as needed — losing a point, of course, every time somebody needed to re-install manually. But I digress.

So, obviously, the first method (I’m using Cocoa on a Mac) I needed to write was buscaAtualização. That’s Portuguese for “fetchUpdate”.

That’s not a valid identifier name in Objective-C. Or more precisely, it might theoretically be a valid name, but gcc doesn’t implement universal character names.
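
For the curious, here is roughly the declaration that gcc chokes on. The class name Atualizador and the surrounding interface are my own hypothetical sketch; only the accented selector matters.

    #import <Foundation/Foundation.h>

    // A hypothetical updater interface. The cedilla and tilde in the selector
    // arrive as UTF-8 bytes, and gcc (which does not implement universal
    // character names in identifiers) rejects them with something like:
    //     error: stray '\303' in program
    @interface Atualizador : NSObject
    - (void)buscaAtualização;
    @end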

Programming languages, it turns out, aren’t localizable. Or localized. If you want to write something in a computer language, you need to write English. I should have suspected something the minute I launched Xcode. The menu bar had a File (instead of Arquivo) menu. Development tools don’t seem to be localized either. Not much point, really, since you’d have to write in English anyway.

Even if I could have named my method buscaAtualização, what is the Portuguese localization of [[NSURLConnection alloc] initWithRequest:aRequest delegate:self]? Well, it turns out that all the library and framework classes, methods, functions, variables, macros, etc. are in English. And none of it is localized.
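
For the record, here is a sketch of what the method in that hypothetical Atualizador class ended up looking like. The URL is a placeholder and the delegate handling is elided; NSURL, NSMutableURLRequest, and NSURLConnection are the stock Cocoa classes.

    // A sketch only. Everything I was allowed to name myself is in Portuguese;
    // everything the framework named is in English.
    - (void)buscaAtualizacao    // sans cedilla and tilde, since gcc insists
    {
        NSURL *url = [NSURL URLWithString:@"http://example.com/atualizacao.plist"];
        NSMutableURLRequest *aRequest = [NSMutableURLRequest requestWithURL:url];
        [[NSURLConnection alloc] initWithRequest:aRequest delegate:self];
        // The delegate callbacks (connection:didReceiveData:, etc.) would
        // compare the fetched version against the running one and download
        // the new bundle if it is newer.
    }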

Even something as ancient as strerror(3) returns the POSIX error messages in English. No localized Portuguese error messages here.
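
You can see it in a few lines of plain C; the locale name below is an assumption, and on my Mac the message comes back in English no matter what.

    #include <errno.h>
    #include <locale.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        // Ask for Brazilian Portuguese messages (assuming the locale is installed).
        setlocale(LC_ALL, "pt_BR.UTF-8");

        // On my Mac this prints "No such file or directory", in English.
        printf("%s\n", strerror(ENOENT));
        return 0;
    }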

This is terrible. I’ve been such a great fan of Literate Programming for so many years — it never occurred to me that for most of the world, even literate programs must necessarily be written in a foreign language.

Even as modern a language as Haskell wouldn’t accept buscaAtualização as a valid identifier.

This is awful. We’re much farther from using programming languages as a form of communication than I had thought. I’ll need some time to digest the implications.

Does anybody have any suggestions on how to write a program in Portuguese?

Using Open Source

In Adobe Illustrator I type the word copyright into the search box in the Help Center. The page that results includes the following paragraph:

This product includes either BSAFE and/or TIPEM software by RSA Data Security, Inc. This product includes cryptographic software written by Eric Young (eay@cryptsoft.com). This software is based in part on the work of the Independent JPEG Group. Portions include technology used under license from Verity, Inc. and are copyrighted. © 1994 Hewlett Packard Company. © 1985, 1986 Regents of the University of California. All rights reserved. Portions of this code are licensed from Apple Computer, Inc. under the terms of the Apple Public Source License Version 2. The source code version of the licensed code and the license are available at http://www.opensource.apple.com/apsl. This product includes PHP, freely available from http://www.php.net. This product includes the Zend Engine, freely available at http://www.zend.com. This product includes software developed by Brian M. Clapper (bmc@clapper.org). © 1991 by the Massachusetts Institute of Technology. © 1996, 1995 by Open Software Foundation, Inc. 1997, 1996, 1995, 1994, 1993, 1992, 1991. All rights reserved.

Many open source advocates, looking inside their own organization (or others), will enumerate the open source software in use to make the case that “everybody” is “using open source”. As I start counting all the open source projects embedded in Illustrator, the tally seems to exceed the amount of open source software reported in many of these “censuses”. By all accounts, Adobe Illustrator is an “open source” product.

Sort of.

And they are not alone. The same exercise with Mathematica leads me to this web page. The list is not as exhaustive as Illustrator’s, but finding GMP there was certainly an eye-opener.

Can we say that Mathematica is open source? Or “open source friendly”?

This chain of reasoning came about because of this blog post that I stumbled across. Seems like Apple was adding OCUnit to Xcode. So I went looking for the equivalent copyright page for Xcode (because I know it also uses gcc and gdb at a minimum). I couldn’t find such a page. The best I could find was something that advised me that this Apple product included some (unspecified) open source software, the source code of which was available here.

So, I got to thinking. Assume rational markets: if I’m selling proprietary software in any particular application domain, and there exists some “attributive-licensed” software (MIT, BSD, Apache, etc.) which is superior in some way (faster, more featureful) to the code I wrote or licensed, wouldn’t I include it in my product? And continue to sell my product as before?

In which case, the distinction (technology-wise) between assembling a custom solution from attributive-licensed open source libraries and buying a commercial product seems ever more evanescent. (Of course, “reciprocal-licensed” (GPL’d) software may keep the distinction sharper — depending on the nature of the integration.)

To Make or To Do

“Did you know,” said Gina, the other day, “that in Spanish, the word meaning to make is the same as the word meaning to do?” I don’t speak Spanish, but it seems that the usage of the Portuguese fazer and the French faire supports this hypothesis. I’m going to have to back up and put this comment in context.

When we met, Gina’s friends were unanimous: I wasn’t her type. And my family was likewise unanimous: Gina wasn’t right for me. Ever since, we have always been on the lookout for proof that this relationship couldn’t possibly work — because we have nothing in common and are completely opposite in every regard. So this rumination about vocabulary was about to become the latest salvo in this decade-long game.

“Because, you see,” she continued, “you are a maker whilst I am a doer. Proving, once again, that we have nothing in common.”

Setting aside the interesting conundrum of a pair of verbs which are arguably opposites in one language whilst being the same word in another, I’d like to ponder the significance of this observation for information technology. More specifically: software. The question that suggests itself is: is coding (or cutting code, as Amir would have it) doing or making? I used to think it was making — but that, of course, is a product-centric view. Software-as-a-service needs to take the world view that the production of software is doing: there is no such thing as finishing.

The real import of this question, of course, is that doers and makers are different kinds of people; if in fact this essential nature of software is changing, then the people who participate in the activity and enjoy it will also change.

Unless they speak a Romance language — in which case there doesn’t seem to be any difference.

Professionalization

I met and had breakfast last week with Ben Hyde. A really smart and interesting fellow — I wish I had recorded our conversation. We had fascinating and wide-ranging discussions all morning, but as with trying to remember the really hilarious stand-up routine you heard last night, you can only recall one or two jokes. So it is with that morning.

We did talk about my earlier post comparing the open source movement to the labor movement. Ben agreed that there was merit to an analogy around the idea of “principle for organizing the labor pool”, but suggested that professionalization was a better analogy. My initial reaction was that I liked the idea — we both grew up in an era before software engineering was invented. Back then, it was an art.

When I speak about computer programming as an art, I am thinking primarily of it as an art form, in an aesthetic sense. The chief goal of my work as an educator and author is to help people learn how to write beautiful programs…My feeling is that when we prepare a program, the experience can be just like composing poetry or music…Some programs are elegant, some are exquisite, some are sparkling. My claim is that it is possible to write grand programs, noble programs, truly magnificent ones!…computer programming is an art, because it applies accumulated knowledge to the world, because it requires skill and ingenuity, and especially because it produces objects of beauty. Programmers who subconsciously view themselves as artists will enjoy what they do and will do it better.

— D. Knuth, “Computer Programming as an Art,” Turing Award Lecture, 1974

What the discussion highlighted was the way in which the open source movement falls short of both of these models for organizing labor. To become a union movement, open source developers would need to start paying dues to fund open source organizations, to negotiate with their employers for “free software rights” as part of their employment agreements, and to impose free-software sanctions against organizations that balk.

To become a professional movement, there needs to be some form of accreditation with a governing body (like the AMA, or Bar Association) in order to be permitted to “practice open source programming”. Presumably, only accredited practitioners would be allowed access to the source code.

Neither one of these scenarios seems very likely.

Who is a programmer?

After work, the question got asked. It came up in the context of another discussion about the relevance of Free/Open Source Software. Availability of the source code is probably only relevant to computer programmers. After all, if you aren’t a programmer, what would you do with source code? In which case, a freely copyable binary would be equivalent to freely copyable source code. The ability to do something with the source code (i.e., to create a derivative work) is something only a programmer could do. That strikes me as the definition of a programmer. Yes, I know that benefits might accrue to the non-programmer indirectly, but conceding that there are no direct benefits to most people doesn’t seem like a great debating point.

We know that only 2.4% of the population are employed in “computer and mathematical occupations”. Which would seem to put an upper bound on the number of people to whom Free and Open Source Software would be relevant. And any movement which can only possibly be relevant to such a small fraction of the population is going to have difficulty garnering widespread support, or even interest. Assuming, of course, that we restrict ourselves to professional programmers. There might be amateur programmers.

And so, we come to the real questions: who should be a programmer? Who should be considered a programmer? Is the correct analogy that the skill of programming is like the skill of reading and writing? An esoteric skill for most of the world’s history — practiced only by specialists — professional scribes — until, in the last few hundred years, we came to expect that everybody ought to be a scribe, or at least literate. Even if only a relatively small number of people read or write for a living?

Or, is the correct analogy that being a programmer is more like being a radio technician and learning Morse code? An esoteric skill which remains esoteric.

Is it more like being a driver (chauffeur)? Or a pilot?

Because I fear that if it is the latter, then the Free and Open Source Software movement has more in common with the national association for Amateur Radio than with the National Institute for Literacy.

This idea of universal computer literacy has deep roots. The work that led to the desktop computing environments we use today was motivated by that vision. Alan Kay talks about it at length here. An excerpt:

It started to hit home in the Spring of ’74 after I taught Smalltalk to 20 PARC nonprogrammer adults. They were able to get through the initial material faster than the children, but just as it looked like an overwhelming success was at hand, they started to crash on problems that didn’t look to me to be much harder than the ones they had just been doing well on. One of them was a project thought up by one of the adults, which was to make a little database system that could act like a card file or rolodex. They couldn’t even come close to programming it. I was very surprised because I “knew” that such a project was well below the mythical “two pages” for end-users we were working within. That night I wrote it out, and the next day I showed all of them how to do it. Still, none of them were able to do it by themselves. Later, I sat in the room pondering the board from my talk. Finally, I counted the number of nonobvious ideas in this little program. They came to 17. And some of them were like the concept of the arch in building design: very hard to discover, if you don’t already know them.

The connection to literacy was painfully clear. It isn’t enough to just learn to read and write. There is also a literature that renders ideas. Language is used to read and write about them, but at some point the organization of ideas starts to dominate mere language abilities. And it helps greatly to have some powerful ideas under one’s belt to better acquire more powerful ideas [Papert 70s]. So, we decided we should teach design….

In a more contemporary vein, this post talks about the importance of having designers and programmers working on the same text, and the author asserts that

Designers are perfectly capable of understanding and manipulating constructs like <% for person in @post.whos_talking %> or <% if @person.administrator? %>. While they will rarely be the originator for these fragments, they’ll surely be the manipulators of them.

Doesn’t manipulating code fragments make you a programmer? Of a sort?

So, even if we aren’t there yet, shouldn’t the Free/Open Source Software movements aspire to universal programming literacy?

I do.