We have a long history of manipulating physical materials. Cavemen built primitive tools out of rocks and sticks. I would speculate :-) that they didn't know much about physics, yet through observation, serendipity and good old trial and error they managed to survive pretty well, or we wouldn't be here today talking about software.
For centuries, we only had empirical observations and haphazard theories about the real nature of the physical world. The Greek classical elements were Earth, Water, Air, Fire, and Aether, which is not exactly the way we look at materials today. Yet the Greeks managed to build large and incredibly beautiful temples.
Of course, today we know quite a bit more. Take any kind of physical material, like a copper wire. In the real world, you can:
- apply thermal stress to the material, e.g. by warming it. Different materials react differently.
- apply mechanical stress to the material, e.g. by pulling it on both ends. Different materials react differently.
- apply electromagnetic forces, e.g. by moving the wire inside a magnetic field. Different materials react differently.
- etc.
To get just a little more into specifics, if we restrict ourselves to mechanical stress, and focus on so-called static loading, we end up with just a few kinds of stress: tension, compression, bending, torsion, and shear (see here and also here).
Although people have built large, beautiful structures without any formal knowledge of material science, it is hard to deny the huge progress made possible by a deeper understanding of what is actually going on with materials. Knowledge of forces and reactions allows us to choose the best material, the best shape, and even to create new, better materials.
The software world
Although we can build large, even beautiful systems, it is hard to deny that we are still in the dark ages. We have vague principles that usually can't be formally defined. We have different materials, like statically typed and dynamically typed languages, yet we don't have a sound theory of forces, so we end up with the usual evangelists trying to convert the world to their material of choice. It's kinda funny, because it's just as sensible as having a Brick Evangelist trying to convince you to use bricks instead of glass for your windows ("it's more robust!"), and a Glass Evangelist trying to convince you to build your entire house out of glass ("you'll get more light!").
Indeed, a key realization in my exploration of the nature of software was that we completely lack the notion of force. Sure, we use a bunch of terms like "flexibility", "heavy load", "fast", "fragile", etc., and they all seem somehow inspired by physics, but in the end it's just vernacular usage without any kind of theory behind it.
Once I realized that, I began looking for a small, general, yet useful set of "software forces". Nothing too fancy. Tension, compression, bending, torsion, and shear are not fancy, but they are incredibly useful for physical materials (no, I wasn't looking for similar concepts in software, but for something just as useful).
Initially, I focused my attention on artifacts, because design is very much about form, and form is made visible through artifacts. I got many ideas and explored several avenues, but most seemed more like ad hoc concepts or turned out to be dead ends. Eventually, I realized we already knew the fundamental forces, though we never used the term "force" to characterize them.
Interestingly, those forces are better known in the run-time world, and are usually restricted to data. Upon closer scrutiny, it turned out they also apply to procedural knowledge, and to the artifact world as well. Also, it turned out that those forces were intimately connected with the concept of entanglement, and could actually shed some light on entanglement itself. More than that, they could be used to find entangled information in practice; no more philosophy, thank you :-).
So, let's take the easy road and see what we can do with run-time knowledge expressed through data, and build from there.
It's almost too simple
Think about a simple data-oriented application, like a book library. What can you do with a Book record? It's rather obvious: create, read, update, delete. That's it. CRUD. (I'm still thinking about the need for an identity-preserving Move outside the database realm, but let's focus on CRUD right now). CRUD for run-time data is trivial, and is a well-known concept.
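Just to make the four operations concrete, here's a minimal sketch; the Book record and the in-memory BookRepository are hypothetical names of mine, of course, and any data store would do:

```csharp
using System.Collections.Generic;

// Run-time CRUD on plain data: nothing more than the four obvious operations.
record Book(int Id, string Title, string Author);

class BookRepository
{
    private readonly Dictionary<int, Book> books = new();

    public void Create(Book book) => books[book.Id] = book;                    // C
    public Book? Read(int id) => books.TryGetValue(id, out var b) ? b : null;  // R
    public void Update(Book book) => books[book.Id] = book;                    // U
    public void Delete(int id) => books.Remove(id);                            // D
}
```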
CRUD in the artifact world is also relatively simple and intuitive:
Create: we create a new artifact, be it a component, a class, or a function.
Read: this got me thinking for a while. In the end, my best shot is also the simplest: we (the contingent interpreter) read the function, as an act of understanding. More on the idea of contingent/essential interpreter in a rather whimsical :-) presentation by Richard Gabriel, which I already suggested a few years ago.
Update: we change an artifact; for instance, we change the source code of some function.
Delete: we remove an artifact; for instance, we remove a member function from a class.
Note that given the fractal nature of software, some operations are recursively identical (by deleting a component we delete all the sub-artifacts), while others change nature as we move into the fractal hierarchy (by creating a new class inside a component we're also updating the component). This is true in the run-time world of data as well.
Extending CRUD to run-time procedural knowledge is not necessarily trivial. Can we talk about CRUD for functions (as run-time, not compile-time, concepts)? We usually don't, but that's because the most common material and tools (programming languages) we're using are largely based on the idea that procedural knowledge is created once (through artifacts), then compiled and executed.
However, interpreted languages have always had the possibility of creating, updating and deleting functions at run-time. Some compiled languages allow dynamic function creation (like C#, through Reflection.Emit; there's a small sketch after the summary below) but not update/delete. The ability to update a function through reflection would be an important step toward AOP-like possibilities, but most languages have no support for that. This is fine, though: liquids can't be compressed, but that didn't prevent us from defining the concept of compression. I'm looking for concepts that apply in both worlds (artifacts and run-time) but not necessarily to all materials (languages, etc.). So, to summarize:
- Create, Update, Delete retain their usual meaning for the run-time concept of functions as well. It means we're literally creating, updating and deleting a function at run-time. The same applies to the (run-time) concept of class (not object). We can Create, Update, Delete classes at run-time as well (given the appropriate language/material).
- Read, in the artifact world, is the human act of reading, or better, is the contingent interpreter (us) giving meaning to (interpreting) the function. At run-time, the equivalent notion is execution, with the essential interpreter (the machine) giving meaning to the function. Therefore, reading a function in the run-time world simply means executing the function. Executing a function may also update some data, but this is fine: the function is read, data are updated; there is no contradiction.
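To make the run-time Create and Read (execute) concrete, here's a minimal C# sketch using Reflection.Emit's DynamicMethod, as mentioned above; the specific function being built (a doubling function) is just an arbitrary illustration of mine:

```csharp
using System;
using System.Reflection.Emit;

class RuntimeFunctionDemo
{
    static void Main()
    {
        // Create: a new function (x => x * 2) comes into existence at run-time,
        // with no corresponding artifact in the source code.
        var doubler = new DynamicMethod("Doubler", typeof(int), new[] { typeof(int) });
        var il = doubler.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0);   // push the argument
        il.Emit(OpCodes.Ldc_I4_2);  // push the constant 2
        il.Emit(OpCodes.Mul);       // multiply
        il.Emit(OpCodes.Ret);       // return the result

        // Read (in the run-time sense): the essential interpreter (the CLR)
        // gives meaning to the function by executing it.
        var f = (Func<int, int>)doubler.CreateDelegate(typeof(Func<int, int>));
        Console.WriteLine(f(21));   // 42

        // Update/Delete: no direct support here; we can drop the delegate,
        // but we can't patch the emitted function in place.
    }
}
```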
Where do we go from here?
I'm trying to keep this post short, so I'll necessarily leave a lot of stuff for next time. However, I really want to give a preview of something. In my previous post I said:
Two clusters of information are entangled when performing a change on one immediately requires a change on the other.
I can now state that better as:
Two clusters of information are entangled when a C/R/U/D on one immediately requires a C/R/U/D on the other.
Yes, the two definitions are not equivalent, as Read is not a Change. The second definition, however, is better, for reasons we'll explore in the next few posts. One reason is that it provides guidance on the kind of "change" you need to consider. Just restrict yourself to CRUD, and you'll be fine. As we'll see next, we can usually group CD and RU, and in fact I've defined two concepts representing those groupings.
Entanglement through CD and/or RU is not necessarily a bad thing. It means that the entangled clusters are attracting each other, and, as far as that specific force is concerned, rejecting others. In other words, they want to form a single cluster, possibly a composite cluster at the next hierarchical level (like two functions attracting each other and forming a class). Of course, it might also be the case that the two clusters should simply be merged into a single cluster.
We'll see more about the different forms of CRUD-induced entanglement in the next few posts. We'll also see how many concepts (from references to database normalization) are actually just ways to deal with the many forms of entanglement, or to move entanglement from the artifact space to the run-time space. Right now, I'll use my remaining time to explain polymorphism through the entanglement perspective.
Polymorphism explained
Consider the usual idea of multiple shapes (circle, triangle, rectangle, etc.) that can be drawn on a screen. In the pre-object era (say, in ANSI C) we could have defined a ShapeType enum, a Shape struct with a ShapeType field and possibly a ShapeData field, where ShapeData would have probably been a union (with all fields for all different shapes). Draw would have been implemented as a switch/case over the ShapeType field.
We have U/U entanglement between ShapeType and ShapeData, and between ShapeType and Draw. U/U means that whenever we update ShapeType we need to update ShapeData as well. Say I add a Spiral type: I need to add the relevant fields to the ShapeData union, and add logic to Draw. We also have U/U entanglement between ShapeData and Draw: if we change the way we store shape data, we have to change the Draw function, as it depends on those data. We could live with that, but it's not pretty.
What if we have another function that depends on ShapeType, like Area? Well, we need another switch-case, and Area will have the same kind of U/U entanglement with ShapeType and ShapeData as Draw. It is that entanglement that is bringing all those concepts together. ShapeType, ShapeData, Draw and Area want to form a cluster.
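Here's a minimal sketch of that pre-object structure. The text above describes it in ANSI C; I'm rendering it in C# to stay consistent with the class-based sketch further down, flattening the union into plain fields (the field names are mine, for illustration). The comments mark every place a new ShapeType value, say Spiral, would force an Update:

```csharp
using System;

enum ShapeType { Circle, Triangle, Rectangle }   // adding Spiral: update here (U)...

struct ShapeData                                  // ...and here (the union's fields)...
{
    public double Radius;                         // circle
    public double Base, Height;                   // triangle
    public double Width, Length;                  // rectangle
}

struct Shape
{
    public ShapeType Type;
    public ShapeData Data;
}

static class ShapeOps
{
    public static void Draw(Shape s)              // ...and here...
    {
        switch (s.Type)
        {
            case ShapeType.Circle:    /* draw using s.Data.Radius */         break;
            case ShapeType.Triangle:  /* draw using s.Data.Base, Height */   break;
            case ShapeType.Rectangle: /* draw using s.Data.Width, Length */  break;
        }
    }

    public static double Area(Shape s)            // ...and in every other switch.
    {
        switch (s.Type)
        {
            case ShapeType.Circle:    return Math.PI * s.Data.Radius * s.Data.Radius;
            case ShapeType.Triangle:  return 0.5 * s.Data.Base * s.Data.Height;
            case ShapeType.Rectangle: return s.Data.Width * s.Data.Length;
            default:                  return 0.0;
        }
    }
}
```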
Note that without polymorphism (language-supported or simulated through function pointers) the code mass is organized into a potentially degenerative form: every function (Draw, Area, etc.) is U/U-entangled with ShapeType, and is therefore attracting new procedural knowledge related to each and every ShapeType value. The forcefield will make those functions bigger and bigger, unless of course ShapeType is no longer extended.
Object-oriented polymorphism "works" because it changes the gravitational centers. Each concrete class (Circle, Triangle, etc) becomes a gravitational center for procedural and structural knowledge entangled with a single (ex)ShapeType value. In fact, ShapeType no longer needs to exist at all. There is still U/U-entanglement between the (ex)ShapeData portion related to Circle and the procedural knowledge in Circle.Draw and Circle.Surface (in a Java/C#ish syntax). But this is fine, because they have now been brought together into a new cluster (the Circle class) that then rejected the other artifacts (the rest of the ShapeData fields, the other functions). As I said in my previous post, entanglement between distant element is bad.
Note that we don't really change the mass of code. We just organize it differently, around different gravitational centers. If you draw functions as tall boxes in the switch-case form, you can see that classes are just "wide" boxes cutting through functions. The area of the boxes (the amount of code) does not change.
Object-oriented polymorphism in a statically-typed language also requires [interface] inheritance. This brings in a new form of U/U-entanglement, this time between the interface and each and every concrete class. Whenever we change the interface, we have to change all the concrete classes.
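In the same Java/C#ish spirit, here's a minimal sketch of the polymorphic reorganization (the IShape name and member signatures are my own guess, for illustration): each concrete class clusters its (ex)ShapeData portion with the procedural knowledge entangled with it, while the interface becomes the new source of U/U-entanglement with every concrete class:

```csharp
using System;

// The interface: changing it (U) forces an Update (U) on every concrete class.
interface IShape
{
    void Draw();
    double Surface();
}

// Each concrete class is a new gravitational center: the (ex)ShapeData portion
// for the circle and the circle-specific procedural knowledge now live together.
class Circle : IShape
{
    private readonly double radius;
    public Circle(double radius) { this.radius = radius; }

    public void Draw() { /* draw using radius */ }
    public double Surface() => Math.PI * radius * radius;
}

class Rectangle : IShape
{
    private readonly double width, length;
    public Rectangle(double width, double length) { this.width = width; this.length = length; }

    public void Draw() { /* draw using width, length */ }
    public double Surface() => width * length;
}

// Adding Spiral now means adding one new class, not touching every switch.
```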
Interestingly, some of those functions might be entangled in the run-time world: they tend to be called together (R/R-entanglement in the run-time space). That would suggest yet another clustering (in the artifact space), like separating all the drawing operations from the geometrical operations into different interfaces.
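If that R/R-entanglement is really there, the corresponding artifact-space clustering might look like this (again, the interface names are mine, just a sketch):

```csharp
using System;

interface IDrawable     // drawing operations: typically Read (executed) together
{
    void Draw();
}

interface IGeometry     // geometrical operations: Read together in other contexts
{
    double Surface();
}

class Circle : IDrawable, IGeometry
{
    private readonly double radius;
    public Circle(double radius) { this.radius = radius; }

    public void Draw() { /* draw using radius */ }
    public double Surface() => Math.PI * radius * radius;
}
```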
Note that the polymorphic solution is not necessarily better: it's just a matter of probability. If the set of functions is unstable, plain polymorphism is not really a good answer. We may want to explore a more complex solution based on the Visitor pattern, or some other clustering of artifacts. Understanding entanglement for Visitor is indeed an interesting exercise.
Coming soon (I hope :-)
CD- and RU-entanglement (with their real names :-), paving our way to the concept of isolation, or shielding, or whatever I'm gonna settle on. Oh guys, by the way: happy new year!