We used to be taught that, by spending enough time thinking about a problem, we would come up with a "perfect" model, one that embodied many interesting properties (often disguised as principles). One of those properties was stability: most individual abstractions didn't need to change as requirements evolved. In other words, change was local, or even better, change could be dealt with by adding new, small things (like new classes), not by patching old ones.
That school didn't last; some would say it failed (as in "objects have failed"). At some point, another school prevailed, claiming that thinking too far into the future was bad, that it could lead to the wrong model anyway, and that you'd better come up with something simple that solves today's problems, keeping code quality high so that you can safely evolve it later, protected by a net of unit tests.
As is common, each school tended to mischaracterize the other, usually by pushing things to the extreme through some cleverly designed argument, and then claiming generality. It's easy to do so when talking about software, as we lack sound theories and tangible forces.
Consider this picture instead:

[figure: a ball on a potential-energy landscape]
Even if you don't know squat about potential energy, local minima, and local maxima, is there any doubt that the ball is going to fall easily?