How can you measure how far a given complex System is from the simplest System it can be? Here's a hint:
distance = sqrt( Tp^2 + Tm^2 + Tv^2 + Tl^2 )
Now some basic, if not fabricated, definitions. We can call these "Data State Transitions".
A "Phase Transition" (Tp) is when dynamic, malleable, or fluid data becomes static, stiff, or frozen, through serialization for example. Or frozen data becomes fluid, that is, it is de-serialized into a runtime data structure. The direction of the transition is irrelevant, it is still costly.
A "Model Transition" (Tm) is when data is contained in one meta-model and translated into another. For example, you convert a DOM instance into a set of POJO instances.
A "Value Transition" (Tv) is when the data itself is converted to new data through the application of business logic. 2 + 2 = 4, or fullName = firstName + lastName. This of course assumes the model and phase remain the same.
A "Location Transition" (Tl) is when data moves from one component of a system, to another component. Where components are separated by some boundary that prevents direct memory access between them. For example, separate processes on the same machine, or on different machines. This transition can get real sticky, so I will leave it out of most of the discussion.
In the context of the Data State Transition terms, the core value of many typical Computing Systems is their ability to transition data from one 'value' to another (Tv); model (Tm), phase (Tp), and location (Tl) transitions are purely implementation details.
So, the simplest valuable thing a typical Computing System can be is one that transitions less valuable data into new, more valuable data. The simplest realistic system is one that reads data off the disk, modifies it, and writes it back, hopefully with no model transitions. And the simplest practical system is one that likely also has to put the data into a new model for the consuming System. Here "realistic" means something that could work, and "practical" means something that works and is useful.
Actually, a System that does nothing is the simplest thing it can be; it just wouldn't be valuable.
- simplest (does nothing): Tm = Tp = Tv = 0
- simplest valuable: Tm = Tp = 0, Tv = 1
- simplest realistic: Tm = 0, Tp = Tv = 1
- simplest practical: Tm = Tp = Tv = 1
Thus, I propose that to find the distance between the simplest thing a System can be and what it currently is, you simply apply this equation:
distance = sqrt( Tp^2 + Tm^2 + Tv^2 + Tl^2 )
Obviously, for this to work, you need to add up every type of transition your system uses.
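In code, the bookkeeping is trivial. A minimal sketch of the scoring idea (the tallies passed in below are just the 'simplest practical' System from the list above):

```java
public class SimplicityScore {
    // Euclidean distance from the do-nothing System (all transitions zero).
    static double distance(double tp, double tm, double tv, double tl) {
        return Math.sqrt(tp * tp + tm * tm + tv * tv + tl * tl);
    }

    public static void main(String[] args) {
        // The simplest practical System: Tm = Tp = Tv = 1, ignoring Tl.
        System.out.println(distance(1, 1, 1, 0)); // sqrt(3) ~= 1.73
    }
}
```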
Using RMI? Tp = 2. Marshaling to and from the wire. Streaming data would be Tp = 0.
Using an RDBMS with O/R mapping into POJO instances? Tm = 1 and likely Tp = 2. This assumes the ResultSet is strictly similar to the model the DB is pushing out over the wire (rows and columns).
Reading XML off disk into the DOM? Tm = 0 and Tp = 1. The model is the same for both, it is just that the data is unfrozen.
Reading XHTML file bytes from disk and pushing them over the wire to a web browser? Tp = 1 and Tm = 0, since browsers consume XHTML and use a DOM. Up to the browser the data is streaming; the browser adds the Tp + 1 by creating the DOM. It could be said that a web-server serving static XML has zero complexity overhead (except you have at least Tl + 1).
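Putting a couple of these together as a worked tally (the pairing is hypothetical, just to show the arithmetic): an RMI front end (Tp = 2) over an O/R-mapped RDBMS (Tm = 1, Tp = 2) applying one business rule (Tv = 1) gives Tp = 4, Tm = 1, Tv = 1, and, ignoring Tl:
distance = sqrt( 4^2 + 1^2 + 1^2 ) = sqrt( 18 ) ~= 4.24
Compare that to the sqrt( 3 ) ~= 1.73 of the simplest practical System.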
Location transitions are a whole story in themselves. In theory, you add up all the 'sockets' and 'streams' you need to employ to complete a request to get your Tl.
But what about network routers, clustered tiers, and caching proxies? And saving bytes to local disk, SAN, or NAS?
My initial thoughts are that clustered tiers, in many cases, can be considered one logical entity. As an entity, regardless of the number of machines, it may add Tl + 1. But within the cluster, depending on the protocol and its size, you may get Tl + N where N is the number of machines, and possibly Tv + N, since the business of the cluster might be to compute who the authority is for the request. Like I said, Tl is sticky.
A flat load-balanced set of caching proxies with a 100% hit rate can be effectively transparent, Tl + 0 in steady state. But in reality, Tl + (1 - hit-rate); a 90% hit rate, for example, adds Tl + 0.1. The value here is that such a proxy can 'hide' the other transition types from the request-handling process.
Anyways, you get the point, even if the reasoning immediately above isn't completely together.
Now it could be argued that a System composed of many computers and processes could be so optimized that it may have a really low 'simplicity' score (a short distance to simple) due to caching, clustering, etc. But such a thing really isn't simple, is it?
So let's introduce two more definitions:
"Static View" (Vs) is the set of all paths defined between components of a System. Like what a class is to an instance. It's the Model.
"Dynamic View" (Vd) is the actual paths used in an executing System, where some paths may be traversed multiple times during a request/event process. An instance of a Static View, if you will.
Once we started discussing Tl, we muddied the discussion by talking about hit-rate and steady-state. These things manifest during runtime but can be accounted for during design.
The bottom line is this. In the context of the Static View of a System, many machines and processes equals a lot of complexity. In the context of a Dynamic View, you can employ tricks here and there to 'simplify' the system during runtime.
You will notice that as you add more tricks to 'simplify' the runtime system, you make the static system more complex. So make the static System as simple as possible, and if you have bottlenecks, throw in a few tricks here and there. And if you are smart, you will employ technologies in your static system that let the tricks add complexity only incrementally.
For example, use XML, and web and proxy servers, where applicable. XML is streamable and can be very efficient, especially when the Tp component is StAX. And if you give up the notion that data must be Objects (read: POJOs), a simple language like E4X will let you apply Tv rules/logic without Tm overhead.
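As an illustration of the streaming point, here is a minimal StAX sketch; the file name prices.xml and the &lt;price&gt; element are made up for the example. There is one phase transition (bytes to parse events), no model transition into POJOs, and the Tv logic is applied in-stream:

```java
import java.io.FileInputStream;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class StreamingSum {
    public static void main(String[] args) throws Exception {
        XMLInputFactory factory = XMLInputFactory.newInstance();
        XMLStreamReader reader =
                factory.createXMLStreamReader(new FileInputStream("prices.xml"));
        double total = 0;       // the Tv result
        boolean inPrice = false;
        while (reader.hasNext()) {
            switch (reader.next()) {
                case XMLStreamConstants.START_ELEMENT:
                    inPrice = "price".equals(reader.getLocalName());
                    break;
                case XMLStreamConstants.CHARACTERS:
                    // Tv: apply the business rule to each value as it streams past.
                    if (inPrice && !reader.isWhiteSpace()) {
                        total += Double.parseDouble(reader.getText().trim());
                    }
                    break;
                case XMLStreamConstants.END_ELEMENT:
                    inPrice = false;
                    break;
            }
        }
        reader.close();
        System.out.println("total = " + total);
    }
}
```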
Or just use data structures that mirror your data more closely than an RDBMS does. Note that Nutch uses Lucene for index storage (Tm = 0), not an RDBMS, and they build other efficient data structures for other data types (Tm = 0). Personally, I think Tm is a killer for both running systems and developers. Having to think in one data model but program against another is costly and error-prone.
Simplicity is about subtracting the obvious, and adding the meaningful. The point of "creating" the Data State Transition terms is so that they can be identified and accounted for, so that unnecessary transitions can be strongly scrutinized and subsequently eliminated, or replaced by transitions that don't cascade compensating transitions down the line.
Anyways, I hope all this makes sense.