Mechanical devices require maintenance, some more than others. Maintenance is intended to keep them working, because mechanical parts tend to wear out. Maintenance is required so that they can still work "as new", that is, do the same thing they were built to do.
Software programs require maintenance, some more than others. Maintenance is intended to keep them useful, because software parts don't wear out; they become obsolete. They solve yesterday's problems, not today's problems. Or perhaps they didn't even solve yesterday's problems right (bugs). Maintenance is required so that software can do something different from what it was built to do.
Looking from a slightly different perspective, software is encoded knowledge, and the value of knowledge is constantly decaying (see the concept of half-life of knowledge). Maintenance is required to restore value. Energy (human work) must be supplied to keep encoded knowledge current, that is, valuable.
Either way, software developers spend a large amount of their time changing stuff.
Change
Ideally, changes would be purely additional. We would write a new software entity (e.g. a class) inside a new artifact (a source file). We would transform that artifact into something executable (if needed; interpreted languages don't need this step). We would deploy just the new executable artifact. The system would automagically discover the new artifact and start using it.
Although it's possible to create systems like that (I've done it many times through plug-in architectures), most programs are not written this way from the ground up. Plug-ins, assuming they exist at all, usually appear at a relatively coarse granularity. Below that level, maintenance is rarely additional. Besides, additional maintenance is only possible when the change request is aligned with the underlying architecture.
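To make the additive ideal concrete, here is a minimal plug-in sketch in Java, using the standard ServiceLoader discovery mechanism (the Plugin interface and the Host class are hypothetical names, not taken from any specific system):

```java
import java.util.ServiceLoader;

// The extension point the host publishes. A new feature is a new class
// inside a new jar; the host code below never changes.
public interface Plugin {
    String name();
    void execute();
}

class Host {
    public static void main(String[] args) {
        // ServiceLoader scans META-INF/services entries in every jar on the
        // classpath and instantiates whatever implementations it finds there.
        for (Plugin p : ServiceLoader.load(Plugin.class)) {
            System.out.println("discovered: " + p.name());
            p.execute();
        }
    }
}
```

Deploying a new feature then amounts to dropping a new jar on the classpath: purely additional change, at least as long as the change request fits the Plugin contract.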
So, the truth is, most often we can't simply add new knowledge to the system. We also have to change and remove existing parts. A single change request then results in a wave of changes, propagating through the system. Depending on some factors (that I'll discuss later on), the wave could be dampened pretty soon, or even amplified. In the latter case, we'll have to change more and more parts as the wave propagates through the system. If we change a modularized decision, the change wave is dampened at the module boundary.
The nature of change
Consider a system with two hardware nodes. The nodes are based on completely different hardware architectures: for instance, node 1 is a regular PC, node 2 is a DSP+ASIC embedded device. They are connected through standard transport protocols. Above transport, they don't share a single line of code, because they don't calculate the same function.
Yet, if you change the code deployed in node 1, you have to change some (different) code in node 2 as well, or the system won't work. Yikes! Aren't those two nodes loosely coupled according to traditional coupling theory? Sure. But still, changes need to happen on both sides.
The example above may seem paradoxical, or academic, but it is not. Most likely, you own one of those devices. It's a decoder, like those for DVB-T, sat TV, or inside DivX players. The decoder is not computing the same function as the encoder (in a sense, it's computing the inverse function). The decoder may not share any code at all with the encoder. Still, they must be constantly aligned, or you won't see squat.
This is the real nature of change in software: you change something, and a number of things must be changed at the same time to preserve correctness. In a sense, change affects very distant, even physically disconnected components, and must be simultaneous: change X, and Y, Z, K must all change, at once.
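As a toy version of the encoder/decoder case, consider this Java sketch (a hypothetical one-byte-length framing rule, nothing like a real video codec): the two classes share no code, yet the rule lives in both, so changing it on one side requires an immediate, matching change on the other.

```java
// Two components that share no code, yet are entangled: the framing rule
// is duplicated in both. Change the encoder's rule and the decoder must
// change at once, or the stream becomes garbage.
class Encoder {
    // wire rule: 1 length byte, then the payload
    byte[] encode(byte[] payload) {
        byte[] frame = new byte[payload.length + 1];
        frame[0] = (byte) payload.length;
        System.arraycopy(payload, 0, frame, 1, payload.length);
        return frame;
    }
}

class Decoder {
    // computes the inverse function; must mirror the encoder's rule exactly
    byte[] decode(byte[] frame) {
        int len = frame[0] & 0xFF;
        byte[] payload = new byte[len];
        System.arraycopy(frame, 1, payload, 0, len);
        return payload;
    }
}
```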
At this point, we may think that all the physical analogies are breaking down, because this is not how the physical world behaves. Except it does :-). You just have to look at the right scale, and move from the familiar Newtonian physics to the slightly less comfortable (for me, at least) quantum physics.
Quantum Entanglement
Note: if you really, really, really hate physics, you can skip this part. Still, if you have never come across the concept of Quantum Entanglement, you may find it interesting. When I first heard of it, I thought it was amazing (though I didn't see the connection with software back then).
You probably know about the Heisenberg Uncertainty Principle. Briefly, it says that when you go down to particles like photons, you can't measure (e.g.) both position and wavelength (hence momentum) at arbitrarily high precision. It's not a limitation of measurement techniques. It's the nature of things.
Turns out, however, that we can create entangled particles. For instance, we can create two particles A and B that share the same exact wavelength. Therefore, we can try to circumvent the uncertainty principle by measuring particle A's wavelength and particle B's position. Now, and this is absolutely mind-blowing :-), it does not work. As soon as you try to measure particle B's position, particle A reacts by collapsing its wave function, immediately, even at an arbitrary distance. You cannot observe one without changing the other.
Quoting wikipedia: Quantum entanglement [..] is a property of certain states of a quantum system containing two or more distinct objects, in which the information describing the objects is inextricably linked such that performing a measurement on one immediately alters properties of the other, even when separated at arbitrary distances.
Replace measurement with change, and that's exactly like software :-).
Software Entanglement
The parallel between entangled particles and entangled information is relatively simple. Even more interesting, it works both in the artifact world and in the run-time world, at every level in the corresponding hierarchy (the run-time/artifact distinction is becoming more and more central, and was sorely missing in most previous works on related concepts). On the other hand, the concept of entanglement has far-reaching consequences, and is also a springboard to new concepts. This post, therefore, is more of a broad introduction than an in-depth scrutiny.
Borrowing from quantum physics, we can define software entanglement as follows:
Two clusters of information are entangled when performing a change on one immediately requires a change on the other.
A simple example in the artifact space is renaming a function. All (by-name) callers must be changed, immediately. Callers are entangled with the callee. A simple example in the run-time space is caching. Caching requires cache coherence mechanisms, exactly because of entanglement.
Note 1: I said "by name" because callers by reference are not affected. This is strongly related to the idea of dampening, which we'll explore in a future post.
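A minimal Java sketch of the difference (fastSqrt, applyTwice, and RenameDemo are hypothetical names): renaming the function forces an edit at every by-name call site, while code that merely holds a reference survives untouched.

```java
import java.util.function.DoubleUnaryOperator;

class RenameDemo {
    static double fastSqrt(double x) { return Math.sqrt(x); } // rename me...

    public static void main(String[] args) {
        // by-name caller: entangled with the name, must be edited on rename
        double a = fastSqrt(2.0);

        // by-reference caller: only this one binding site names the function;
        // everything downstream of 'op' survives the rename untouched
        DoubleUnaryOperator op = RenameDemo::fastSqrt;
        double b = applyTwice(op, 2.0);
        System.out.println(a + " " + b);
    }

    static double applyTwice(DoubleUnaryOperator f, double x) {
        return f.applyAsDouble(f.applyAsDouble(x)); // no name here: dampened
    }
}
```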
Note 2: for a while, I leaned toward using "tangling" instead of "entanglement", as it is a more familiar word. So, in some previous posts, you'll find mentions of "tangling". In the end, I decided to go with "entanglement" because "tangling" has already been used (with a different meaning) in the AOP literature, and also because entanglement, although perhaps less familiar, is simply more precise.
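Back to the caching example above, here is a minimal write-through sketch in Java (WriteThroughCache is a hypothetical name; real coherence mechanisms are far more involved). The cached copy is entangled with the backing store: every write must change both, at once, or reads go stale.

```java
import java.util.HashMap;
import java.util.Map;

// Run-time entanglement: the cached copy and the backing store describe
// the same information, so a change to one requires an immediate change
// to the other. Drop the coherence step in put() and get() goes stale.
class WriteThroughCache {
    private final Map<String, String> store = new HashMap<>(); // stand-in for a slow backend
    private final Map<String, String> cache = new HashMap<>();

    String get(String key) {
        return cache.computeIfAbsent(key, store::get);
    }

    void put(String key, String value) {
        store.put(key, value);
        cache.put(key, value); // the entangled twin must change at once
    }
}
```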
Not your grandpa's coupling
Coupling is a time-honored concept, born in the 70s and still with us today. Originally, coupling was mostly concerned with data. Content coupling took place when one module tweaked another module's internal data; common coupling was about sharing a global variable; etc. Some forms of coupling were considered stronger than others.
Most of those concepts can be applied to OO software as well, although most metrics for OO coupling take a more simplified approach and mostly treat dependencies as coupling. Still, some forms of dependency are considered stronger than others. Inheritance is considered stronger than composition; dependency on a concrete class is considered stronger than dependency on an interface; etc. Therefore, the mere presence of an interface between two classes is assumed to reduce coupling. If class A doesn't talk to class B, and doesn't even know about class B's existence, they are considered uncoupled.
Of course, lack of coupling in the traditional sense does not imply lack of entanglement. This is why so many attempts to decouple systems through layers are so ineffective. As I have said many times, all those layers in business systems usually can't dampen the wave of changes coming from the (seemingly) innocent need to add a field to a database table. In every layer, we usually have some information node that is entangled with that table. Data access, business logic, user interface: they all need to change, instantly, so that the new field can be put to use. That's entanglement, and it's not going away by layering.
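A sketch of that wave in Java (CustomerDto, CustomerService, and CustomerForm are hypothetical names for the usual three-layer slice): add an email column, and all three layers must change at once, interfaces or not.

```java
// A hypothetical three-layer slice over one database table. Layer the
// system all you want: each layer names the same information, so a new
// column forces a simultaneous edit in every one of them.

class CustomerDto {                         // data access layer
    long id;
    String name;
    // new column -> new field here
}

class CustomerService {                     // business logic layer
    CustomerDto load(long id) {
        CustomerDto c = new CustomerDto();
        c.id = id;
        c.name = "stub";  // stand-in for "SELECT id, name FROM customer":
                          // a by-name column list that must also change
        return c;
    }
}

class CustomerForm {                        // user interface layer
    void render(CustomerDto c) {
        System.out.println("Name: " + c.name);  // new column -> new widget here
    }
}
```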
So, isn't entanglement just coupling? No, not really. Many systems that would be defined as "loosely coupled" are indeed "heavily entangled" (the example above with the encoder/decoder is the poster child of a loosely coupled / heavily entangled system).
To give honor to whom honor is due, the closest thing I've encountered in my research is the concept of connascence, introduced by Meilir Page-Jones in a little-known book from the mid-90s ("What Every Programmer Should Know About Object-Oriented Design"). Curiously enough, I came to know connascence after I conceived entanglement, while looking for previous literature on the subject. Still, there are some notable differences between connascence and entanglement, which I'll explore in forthcoming posts (for instance, Page-Jones didn't consider the run-time/artifact separation).
Implications
I'll explore the implications of entanglement in future posts, where I'll also try to delve deeper into the nature of change, as we can't fully understand entanglement until we fully understand change.
Right now, I'd like to highlight something obvious :-), that is, having distant yet entangled information is dangerous, and having too much entangled information is either maintenance hell or performance hell (in the artifact and run-time spaces, respectively).
Indeed, it's easy to classify entanglement as an attractive force. This is an oversimplification, however, so I'll leave the real meat for my next posts.
Beyond Good and Evil
There is a strong urge in human beings to classify things as "good" and "bad". Within each category, we further classify things by degree of goodness or badness. Unsurprisingly, we bring this habit into computer science.
Coupling has long been sub-classified by "strength", with content coupling being stronger than stamp coupling, in turn stronger than data coupling. Even connascence has been classified by strength, with e.g. connascence of position being stronger than connascence of name. The general consensus is that we should aim for the weakest possible form of coupling / connascence.
I'm going to say something huge now, so be prepared :-). This is all wrong. When two information nodes are entangled, changes will propagate, period. The safest entanglement is the one least likely to be triggered by an actual change. Of course, that likelihood must be weighed against the number of entangled nodes. It's very similar to risk exposure, which is the probability of a loss multiplied by the size of that loss (believe it or not, I couldn't find a decent link explaining the concept to the uninitiated :-).
There is more. We can dampen the effects of some forms of entanglement. That requires human work, that is, energy. It's usually upfront work (sorry guys :-), although that's not always the case. Therefore, even if I came up with some classification of good/bad entanglement, it would be rather pointless, because dampening is totally context-dependent.
I'll explore dampening in future posts, of course. But just to avoid excessive vagueness, think about polymorphism: it's about dampening the effect of a change (which change? which other, similar change is not dampened at all by polymorphism?). Think about the concept of reference: it's about dampening the effect of a change (in the run-time space). Etc.
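A possible answer to those rhetorical questions, sketched in Java (Shape, Circle, Square, and Report are hypothetical names): polymorphism dampens the addition of a new subclass, but not the addition of a new operation.

```java
// Polymorphism dampens one kind of change: adding a new Shape subclass
// leaves totalArea() untouched. It does nothing for the dual change:
// adding a new operation (say, perimeter()) forces an edit in every subclass.
interface Shape {
    double area();
}

class Circle implements Shape {
    final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

class Square implements Shape {
    final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}

class Report {
    static double totalArea(Shape[] shapes) {
        double sum = 0;
        for (Shape s : shapes) sum += s.area(); // new subclasses: no change here
        return sum;
    }
}
```

Adding a Triangle leaves Report untouched; adding perimeter() changes every subclass. Same mechanism, two very different waves.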
Your system is not really aligned with the underlying forcefield unless you have strategies in place to dampen the effect of entanglement with high risk exposure. Conversely, a deep understanding of entanglement may highlight different solutions, where entangled information is kept together so as to minimize the cost of change. Examples will follow.
What's next?
Before we can talk about dampening, we really need to understand the nature of change better. As we do that, we'll learn more about entanglement as well, and how many programming and design concepts have been devised to deal with entanglement, while some concepts are still missing (but theoretically possible). It's a rather long trip, but the sky is clear :-).
4 comments:
Is the example of function calls by name and by reference comparable with the entanglement of business systems? I.e., if each layer relies on knowing the number and names of the fields in a table, they are entangled; but if we assume there are *some* fields, and that we can retrieve their properties in some way, and design our systems accordingly, maybe it's less entangled, even if we fuse UI and business logic (don't kill me, it's just an example :))
Fulvio: as you already know :-), "going meta" can be an effective strategy to dampen the wave of changes. Of course, the strategy is only effective when the meta-level information is rich enough for your purpose. This is not always the case, but again, thinking this way may lead to a different allocation of responsibilities (more aligned with the actual forcefield, perhaps less aligned with the common way of thinking).
Carlo, can you please provide some pointers on "going meta"? (old blog posts, web sites, books...)
Daniele: in this context, by "going meta" I meant primarily the use of reflection/introspection. There is no lack of literature on that, but here are a few pointers:
https://ecs.victoria.ac.nz/twiki/pub/Courses/COMP205_2009T1/LectureSchedule/27-Reflection.pdf
simple classroom slides; they mention the GUI builder, which is really a classic among reflection-based solutions. Understanding the architecture of design-time GUI configuration inside modern IDEs is now a prerequisite :-) to creating your own modern, extensible UIs.
http://books.google.com/books?id=dCpsE2b26oAC&pg=PR6&lpg=PR6&dq=reflection+%22software+design%22&source=bl&ots=_m5qL0XQPM&sig=AHUEID558pd_Hfk6Vu3jbG7997Y&hl=en&ei=RzH2TIDtAc30sgbC26zaBA&sa=X&oi=book_result&ct=result&resnum=9&ved=0CGEQ6AEwCA#v=onepage&q=reflection%20%22software%20design%22&f=false
more academically oriented papers, unfortunately not fully available for free; if you have never applied reflection as a design technique, you may find some of those papers interesting. Look around; many have been made available by the authors.
http://spectrum.library.concordia.ca/1484/1/MQ64060.pdf
a thesis-tutorial on reflection and patterns; this is a wide area where you can find many other papers.
As usual, once you "get" the concepts, the best you can do is learn more about some real-world application (I would start with that property grid stuff in your IDE of choice) and then play with the idea in your own field.
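Just as a taste, here is a tiny property-grid sketch in Java (Customer and PropertyGrid are hypothetical names): the rendering code enumerates fields through reflection instead of naming them, so adding a field to Customer does not require touching renderProperties at all.

```java
import java.lang.reflect.Field;

// A minimal property-grid sketch: the "UI" enumerates fields through
// reflection instead of naming them, so the meta-level dampens the wave
// of changes coming from new fields.
class Customer {
    String name = "Alice";
    String email = "alice@example.com"; // added later: no UI change needed
}

class PropertyGrid {
    static void renderProperties(Object bean) throws IllegalAccessException {
        for (Field f : bean.getClass().getDeclaredFields()) {
            f.setAccessible(true);
            System.out.println(f.getName() + " = " + f.get(bean));
        }
    }

    public static void main(String[] args) throws Exception {
        renderProperties(new Customer());
    }
}
```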
As an aside, if you're stuck with C++ you'll find that it's lacking reflection, so your options would be rather limited...