Just like the notion of distance in the artifact world is based on the hierarchy of artifacts, the notion of distance in the run-time world is based on a hierarchy of locations, that is, the places where executable knowledge resides at any given time. So, given two pieces of executable knowledge P1 and P2, we can define an ordinal scale:
P1 and P2 are inside the CPU registers or the CPU execution pipeline - that's minimum distance.
P1 and P2 are inside the same L1 cache line
P1 and P2 are inside the L1 cache
… etc
For the full scale, see the (updated) Summary at the Physics of Software website.
Dude, I don't care about cache lines!
Yeah, well, but the processor does, and I'm looking for a good model of the real world, not for an abstract model of computing disconnected from reality (which would be more like the math of software, not the physics).
You may be writing your code in Java or C#, or even in C++, and be blissfully unaware of what is going on under the hood. But the caches are there, and their effect is absolutely visible. For a simple, experimental, and well-written summary, see Igor Ostrovsky's Gallery of Processor Cache Effects (the source code is in C#, but results wouldn't be different in Java or C++).
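If you want to see the effect with your own eyes, here is a minimal C++ sketch in the spirit of the experiments in that gallery (the array size, the strides, and the timing are my own arbitrary choices, not Ostrovsky's):

```cpp
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <vector>

int main() {
    std::vector<int> data(64 * 1024 * 1024);   // much larger than any cache

    for (int stride : {1, 2, 4, 8, 16, 32, 64, 128}) {
        auto start = std::chrono::steady_clock::now();
        // Touch one int every 'stride' elements. Up to stride = 16 (that is,
        // up to one 64-byte line of 4-byte ints) every cache line gets loaded
        // anyway, so running time stays roughly flat; past that, it drops.
        for (std::size_t i = 0; i < data.size(); i += static_cast<std::size_t>(stride))
            data[i] *= 3;
        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                      std::chrono::steady_clock::now() - start).count();
        std::printf("stride %3d: %lld ms\n", stride, static_cast<long long>(ms));
    }
}
```

The staircase in the timings is the cache line making itself visible, whether you asked for it or not.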
Interestingly enough, most algorithms and data structures are not optimized for modern processors with N levels of cache. There is, however, an active area of research on cache-oblivious algorithms, which, despite the name, are designed to perform well with any cache line size and across any number of cache levels (you can find a few links to specialized algorithms here, but you'll have to work around broken links).
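To give a concrete flavor of the idea (my own rough sketch, not code from the linked material): a recursive, divide-and-conquer matrix transposition never mentions a line size or a number of cache levels, yet the recursion keeps the working set small at every level of the hierarchy:

```cpp
#include <cstddef>
#include <vector>

// Transpose the r x c block of 'src' starting at (row, col) into 'dst'.
// Both matrices are n x n, row-major.
void transpose_rec(const std::vector<double>& src, std::vector<double>& dst,
                   std::size_t n, std::size_t row, std::size_t col,
                   std::size_t r, std::size_t c) {
    const std::size_t cutoff = 16;             // arbitrary base-case size
    if (r <= cutoff && c <= cutoff) {
        for (std::size_t i = 0; i < r; ++i)
            for (std::size_t j = 0; j < c; ++j)
                dst[(col + j) * n + (row + i)] = src[(row + i) * n + (col + j)];
    } else if (r >= c) {                       // split the taller dimension
        transpose_rec(src, dst, n, row,         col, r / 2,     c);
        transpose_rec(src, dst, n, row + r / 2, col, r - r / 2, c);
    } else {                                   // split the wider dimension
        transpose_rec(src, dst, n, row, col,         r, c / 2);
        transpose_rec(src, dst, n, row, col + c / 2, r, c - c / 2);
    }
}
```

Calling transpose_rec(a, b, n, 0, 0, n, n) transposes the whole matrix; no cache parameter ever appears in the code.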
What about virtual memory? Again, we can ignore the magic most of the time, but when we're looking for high-performance solutions, we have to deal with it. Unconvinced? Take a look at You're Doing It Wrong, where Poul-Henning Kamp explains (perhaps with a bit too much "I know it all and you don't" attitude :-)) why textbooks are not really talking about real-world computers [anymore].
Consequences
What happens when run-time distance grows? We're bound to see a staircase-like behavior, as in the second picture in the gallery above, just with more risers/treads. When you move outside your process, you have context switching. When you move outside your computer, you have network latency. When you move outside your LAN, you also have name lookup and packet hops. We'll understand all this stuff much better as we get to the concept of friction.
There is more. When talking about artifact distance, I said that coupling (between artifacts) should decrease as distance increases. In the run-time world, coupling to the underlying platform should decrease as distance increases. This must be partially reflected in the artifacts themselves, but it is also a language / platform / transformation concern.
Knowledge at short distance can be tightly coupled to a specific hw / sw platform. For instance, all code inside one component can be tightly bound to:
- An internal object model, say the C++ object model of a specific compiler version, or the C# or Java object model for a specific version of the virtual machine.
- A specific operating system, if not virtualized (native code).
- A specific hardware, if not virtualized.
This is fine at some level. I can even accept the idea that all the components inside a single application have to share some underlying assumptions. Sure, it would be better if components were relatively immune to binary compatibility issues (a plague in the C++ world). But overall (depending on the size of the application) I can "control" things and make sure everything is aligned.
But when I'm talking to another service / application over the network, my degree of control is much smaller. If everything is platform-dependent (with a broad definition of platform, mind you: Java is a platform), we're in for major deployment / maintenance issues. Even worse, it would represent a huge platform lock-in (I can't use a different technology for a new service). Things only get worse at global network scale. This is why XML took the world by storm, why web services have been rather successful in the real world, and so on. This is also why I like technologies / languages that take integration with other technologies / languages seriously, and not religiously.
As usual, the statement above is bi-directional. That is, it makes very little sense to pursue strong decoupling from the underlying platforms at the micro level. Having a class talk to itself in XML is not a brilliant strategy. Again, design is about balance: in this case, balance between efficiency and convenience on one side, and flexibility and evolvability on the other. Balance is obtained when you can depend on your platform locally, and be increasingly independent as you move farther away.
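Here is a deliberately tiny C++ sketch of that balance (the names Order and to_xml are made up for illustration): inside the component, the object is bound to the local object model; only when it crosses the boundary is it flattened into something platform-neutral.

```cpp
#include <sstream>
#include <string>

struct Order {            // used "as is" everywhere inside the component:
    int         id;       // ordinary C++ object, fully bound to the local
    std::string customer; // compiler's object model
    double      total;
};

// Crossing the boundary: no memory layout, no vtable, no compiler-specific
// detail leaks outside; just text that any platform can parse.
// (XML escaping omitted for brevity.)
std::string to_xml(const Order& o) {
    std::ostringstream out;
    out << "<order id=\"" << o.id << "\">"
        << "<customer>" << o.customer << "</customer>"
        << "<total>" << o.total << "</total>"
        << "</order>";
    return out.str();
}
```

Nobody in their right mind would call to_xml to pass an Order between two functions in the same component; at the service boundary, it's exactly the right move.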
Run-time Distance is not a constant
Not necessarily, anyway; it depends on small-scale technical choices. In C++, for instance, once two objects end up in the same cache line, they will share that line for their entire life, because identity in C++ is address-based and objects never move. In a garbage-collected environment, this is no longer true: objects can move freely during collection.
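A trivial C++ illustration of the first half of that claim (the adjacency of the two stack objects is typical, not guaranteed):

```cpp
#include <cstdio>

struct Point { double x, y; };

int main() {
    Point a{0, 0};
    Point b{1, 1};   // typically adjacent to 'a' on the stack (not guaranteed)

    const Point* before = &a;
    // ... any amount of work in between ...
    const Point* after = &a;

    // Always true in C++: an object is never relocated during its lifetime.
    std::printf("same address: %s\n", before == after ? "yes" : "no");
    std::printf("distance in bytes: %td\n",
                reinterpret_cast<const char*>(&b) - reinterpret_cast<const char*>(&a));
}
```

Under a compacting collector the second line would be meaningless: the collector is free to move b away from a, or closer to it, at every collection.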
Moreover, once we move from the cache line to the entire cache, things come and go; they become near and distant over time. This contributes to complex performance patterns, and indeed, modern hardware makes accurate performance prediction almost impossible. I'm pretty sure there are some interesting phenomena to be studied here - a concept of oscillating distance, perhaps the equivalent of a performance beat when two concurrent threads have slightly different oscillation frequencies, and so on - but I'm not currently investigating any of this - it's just too early.
At some point, distance becomes more and more "constant". Sure, a local service may migrate to LAN and then to WAN, but usually it does so because of human intervention (decisions!), and may require changes on the artifact side as well. Short-range distance is a fluid notion, changing as executable knowledge is, well, executed :-).
By the way: distance in the artifact world is not a constant, either. It is constant when the code is frozen. As soon as we change it, we change some distance relationships. In other words, when the computer is processing executable knowledge, run-time distance changes. When we process encoded knowledge (artifacts), artifact distance changes.
Distance-preserving transformations
Knowledge encoded in artifacts can be transformed several times before becoming executable knowledge. Most of those transformations are distance-preserving, that is, they map nearby knowledge to nearby knowledge (although with some jumps here and there).
For instance, pure sequences of statements (without choices, iterations, calls) are "naturally" converted into sequential machine-level instructions that not only will likely sit in the same cache line, but won't break the prefetch pipeline either. Therefore, code that is near in the artifact world will end up near in the run-time world as well.
In the C/C++ family, data that is sequentially declared in a structure (POD) is laid out sequentially in memory as well (give or take some padding). Therefore, there is a good chance of sharing the same cache line if you keep your structures small.
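A quick way to check that on your own compiler (the 64-byte line size is an assumption; it is typical, not universal):

```cpp
#include <cstddef>
#include <cstdio>

struct Particle {       // POD: sequential layout, possibly with some padding
    float x, y, z;
    float vx, vy, vz;
    int   id;
};

int main() {
    std::printf("offset of x:  %zu\n", offsetof(Particle, x));
    std::printf("offset of vx: %zu\n", offsetof(Particle, vx));
    std::printf("offset of id: %zu\n", offsetof(Particle, id));
    std::printf("sizeof(Particle): %zu (<= 64: fits in one typical cache line)\n",
                sizeof(Particle));
}
```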
Conversely, data and code in different components are normally mapped to different pages (in virtual memory systems). They won't share the same cache line (they may compete for the same cache line, but won't be present in the same line at once). So distant things will be distant.
Even "recent" advances in processor architecture are increasing the similarity between the artifact and run-time class of distance. Consider predicated execution: it's commonly used to remove branches at machine-level for short sequence of statements in if/else conditionals (see Fig. 1 in the linked paper). In terms of distance, it allows nearby code in the artifact space to stay close in the run-time space, by eliminating branches and therefore maximizing proximity in the execution pipeline.
Some transformations, however, are not distance-preserving. Inlining of code (think of C++ inline functions, for instance) will shrink distance while moving from the artifact world to the run-time world.
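A familiar C++ example (whether the compiler actually inlines is, of course, up to the optimizer):

```cpp
// The function body lives in its own spot in the artifact world...
inline double squared_length(double x, double y) {
    return x * x + y * y;
}

double scale_if_long(double x, double y) {
    // ...but with optimization enabled, most compilers will substitute the
    // body right here: at run time there is no call, no separate code
    // location, no distance left between caller and callee.
    if (squared_length(x, y) > 1.0)
        return 0.5 * x;
    return x;
}
```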
Aspect Oriented Programming is particularly interesting from the point of view of distance. On the artifact side, aspects allow us to isolate cross-cutting concerns. Therefore, they increase the distance between the advice, which is factored out, and the join points. A non-distance-preserving transformation (weaving) brings the two back together as we move toward execution.
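C++ has no standard weaver, but a small template wrapper gives a rough analogy of the effect on distance (this is not AspectJ-style weaving, just an illustration; the names are made up): the advice is written far away from the business logic, yet at run time they execute back to back.

```cpp
#include <chrono>
#include <cstdio>
#include <utility>

// The "advice": timing, written once, far from any business logic.
template <typename F, typename... Args>
auto with_timing(const char* label, F&& f, Args&&... args) {
    auto start = std::chrono::steady_clock::now();
    auto result = std::forward<F>(f)(std::forward<Args>(args)...);  // the join point
    auto us = std::chrono::duration_cast<std::chrono::microseconds>(
                  std::chrono::steady_clock::now() - start).count();
    std::printf("%s took %lld us\n", label, static_cast<long long>(us));
    return result;
}

int compute(int n) { return n * n; }   // business logic, unaware of the advice

int main() {
    // "Weaving" by hand: advice and join point end up adjacent at run time.
    int r = with_timing("compute", compute, 21);
    std::printf("result: %d\n", r);
}
```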
Curiously enough, some non-preserving transformations work in the opposite way: they allow things to be near in the artifact space, yet be distant in the run-time world. Consider the numerous technologies (dating back to remote procedure calls) that allow you to code as if you were invoking a local function, or a method of a local object, while in fact you are executing a remote function, or a method of a remote object (through some kind of local proxy). This is creating an illusion of short distance (in the artifact world) while in fact maintaining high distance in the run-time world. As usual, whenever we create software over a thin layer of illusion, there is a potential for problems. When you look at The 8 fallacies of distributed computing, you can immediately recognize that dealing with remote objects as if they were local objects can be rather dangerous. Put another way, the illusion of short distance is a leaky abstraction. More on distributed computing when I get to the concept of friction.
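A bare-bones sketch of that illusion (the names and the send_request transport are hypothetical stand-ins): the call site cannot tell a local object from a remote one, yet the run-time distance is wildly different.

```cpp
#include <cstdio>
#include <string>

class Quotes {                        // the abstraction the caller sees
public:
    virtual ~Quotes() = default;
    virtual double price(const std::string& symbol) = 0;
};

class LocalQuotes : public Quotes {   // short run-time distance: a memory lookup
public:
    double price(const std::string&) override { return 42.0; }   // stand-in value
};

class RemoteQuotes : public Quotes {  // long run-time distance: a network round trip
public:
    double price(const std::string& symbol) override {
        // In a real proxy: marshal, send over the wire, wait, unmarshal.
        // Latency, timeouts, partial failure - none of it visible in the signature.
        return send_request("GET /price/" + symbol);
    }
private:
    double send_request(const std::string&) { return 42.0; }     // imagine a socket here
};

double portfolio_value(Quotes& q) {
    // Written "as if" the object were local; the artifact cannot tell the difference.
    return 100 * q.price("ACME");
}

int main() {
    LocalQuotes  local;
    RemoteQuotes remote;
    std::printf("%f %f\n", portfolio_value(local), portfolio_value(remote));
}
```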
A closing remark: in my previous post, I said that gravity (in the artifact world) tends to increase performance (a run-time property). We can now understand that better, and say that it is largely (not entirely) because:
- the most common transformations are distance-preserving.
- performance increases as the run-time distance decreases.
Again, friction is also at play here, but I have yet to introduce the concept.
Addenda on the artifact side
Some concepts on the artifact side are meant to give the illusion of a shorter distance, while maintaining separation. Consider extension methods in .NET or the more powerful concept of categories in Objective-C. They both give the illusion of being very close to a class (when you use them) while in fact they are just as distant as any other class. (By the way: I've been playing with extension methods [in C#] as a way to get something like partial specialization in C++; it kinda works, but not inside generics, which is exactly where I would need it).
Distance in the Decision Space
While thinking about distance in the artifact and in the run-time world, I realized that the very first notion of distance I introduced was in the decision space. Still, I haven't defined that notion at the level of detail (or lack thereof :-) at which I've defined distance in the artifact and run-time worlds. I have a few ideas, of course, the simplest definition being "the number of decisions that must be undone + the number of [irreversible?] decisions that must be taken to move your artifacts". Those decisions would involve some mass of code. Moving that mass across that "space" would give a notion of the necessary work. Anyway, it's probably too early to say more, as I have to understand the decision space better.
Coming soon, I hope, the notion of friction.