One of my coding catch-phrases is Aggregation Trumps Inheritance. By that, I mean that combining small, simple objects is a more powerful technique than inheritance.
I didn't always think this; coming out of the Objective-C/NeXTSTEP camp, I was used to using lots of inheritance. In fact, for a long time, I thought a framework was a set of base classes for me to subclass. You can see this in the implementation of Tapestry, where you start with Tapestry base classes.
Even at the time, I was concerned that the Three Amigos had a problem with UML. Namely, that when doing a sequence diagram, it was very, very awkward to show the flow from an object to a super-class implementation of a method. There simply was no natural direction in which to draw the line. This seemed to be a problem ... UML didn't handle inheritance very well, and that made it difficult to diagram some of my designs ... especially those that involved overriding a base class implementation of a method.
In retrospect, the idea of a framework as a set of base classes is a bit flawed. It made sense in Objective-C land, due to the lack of a garbage collector. Memory management was a buggy, agonizing process (remember retain cycles, anyone?), so you wanted to minimize the number of objects allocated. Therefore, it was better to subclass and allocate a single object than to allocate several related objects and have to manage who-owns-who.
That isn't the approach I take any longer; if you look at Tapestry or HiveMind, you'll see how I trust the garbage collector and use large numbers of really small objects ... objects that may implement an interface or two, but are otherwise unencumbered by inheritance ... you know, POJOs (plain old Java objects). And if I'm inclined to diagram in UML, it works fine ... no ambiguity about which object owns which implementation of which method. HiveMind especially exposes very few base classes or interfaces to your code, which is the way it should be.
As I take Separation of Concerns ever more seriously, I see more places where I was using inheritance out of inertia or reflex, and I can code better using smaller objects. Most often, I'm using the GoF Strategy pattern.
For example, I was adding a simple expression parser to HiveMind. I started thinking about a PropertyToken and a ClassNameToken as the leaves of my AST (Abstract Syntax Tree), with AndToken, OrToken and NotToken classes to add structure. Each node would have an evaluate() method that would return true or false. The leaf tokens would do some real work (see if a JVM System Property is true, or see if a class exists) and the other tokens would combine those values together. Real CompSci Parser 101 stuff.
But then I noticed that I really had two concerns here: the structure of the AST, and the way each node is evaluated. Using inheritance, I would inherit the AST structural behavior from some AbstractToken base class, and the subclasses would each provide their own evaluate() method implementation (as well as any additional properties).
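Roughly, the inheritance version would have looked something like this (a hypothetical sketch; the class names come from my description above, and the details are purely illustrative):

```java
// Structure and evaluation tangled together in one hierarchy.
public abstract class AbstractToken
{
    private AbstractToken _left;
    private AbstractToken _right;

    public AbstractToken(AbstractToken left, AbstractToken right)
    {
        _left = left;
        _right = right;
    }

    public AbstractToken getLeft()
    {
        return _left;
    }

    public AbstractToken getRight()
    {
        return _right;
    }

    // Each subclass bakes its evaluation rule into the same class
    // hierarchy that defines the tree structure.
    public abstract boolean evaluate();
}

public class AndToken extends AbstractToken
{
    public AndToken(AbstractToken left, AbstractToken right)
    {
        super(left, right);
    }

    public boolean evaluate()
    {
        return getLeft().evaluate() && getRight().evaluate();
    }
}
```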
Once I saw it that way, I realized that the evaluation part was completely separate and could be factored out. My final solution for the AST uses a Node class and an Evaluator interface. Each Node owns a left and right child node, and an evaluator. The Node class is about the structure of the AST ... the evaluator is about how the Node evaluates to true or false. The Node.evaluate() method internally delegates to the Node's evaluator (the Node passes itself as a parameter).
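Here's a minimal sketch of the shape of that design; the actual HiveMind code differs in its details, but the division of labor is the same:

```java
public interface Evaluator
{
    boolean evaluate(Node node);
}

public class Node
{
    private final Node _left;
    private final Node _right;
    private final Evaluator _evaluator;

    public Node(Node left, Node right, Evaluator evaluator)
    {
        _left = left;
        _right = right;
        _evaluator = evaluator;
    }

    public Node getLeft()
    {
        return _left;
    }

    public Node getRight()
    {
        return _right;
    }

    // The Node owns the structure; the Evaluator owns the true/false logic.
    // The Node passes itself in so the Evaluator can reach its children.
    public boolean evaluate()
    {
        return _evaluator.evaluate(this);
    }
}
```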
The end result was much less code to write and test. First I tested the Node class to make sure that structure and evaluation worked correctly. Then I defined Evaluator implementations (AndEvaluator, OrEvaluator, etc.). Ultimately, the And, Or, and Not evaluators were completely stateless ... so I made them singletons. Breaking the code apart this way also made it easier to mock Evaluators when I was testing Nodes, and vice versa.
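The combining evaluators end up looking something like this (again, an illustrative sketch rather than the real source):

```java
// Stateless, so one shared instance of each is enough.
public class AndEvaluator implements Evaluator
{
    public static final Evaluator SHARED = new AndEvaluator();

    public boolean evaluate(Node node)
    {
        return node.getLeft().evaluate() && node.getRight().evaluate();
    }
}

public class NotEvaluator implements Evaluator
{
    public static final Evaluator SHARED = new NotEvaluator();

    public boolean evaluate(Node node)
    {
        // A "not" node only uses its left child.
        return !node.getLeft().evaluate();
    }
}
```

Testing a Node with a mock Evaluator, or an Evaluator against a hand-built Node, falls out naturally from the split.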
I'm tackling a similar problem in Tapestry now: reworking the way page recorders work. In Tapestry 3.0, a page recorder is responsible for persisting certain page properties into the HttpSession as attributes, and for restoring those properties from the attributes in later requests.
The 3.0 code is broken in a couple of ways; the page recorders are owned by the engine, not the request cycle, which can cause conflicts when you build a Tapestry application using frames (updating the frames causes race conditions as different threads use and update the page recorders in different ways).
It was always my intention to allow different implementations of IPageRecorder, so that other schemes could be used, such as storing data in HTTP Cookies ... but that never happened.
With HiveMind providing configuration and infrastructure, it will be much more reasonable to make this pluggable. In the long term, I want to support more complex life cycles for page data ... such as properties that stay persistent until you navigate to some other page in the application.
So, I'm finding that in the new code, the page recorder is a thin buffer between the page instance and a PropertyPersistenceStrategy object that does the actual work ... and the strategy object is dynamically looked up (using a name stored in the page specification), which is the key to pluggability.
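To make that concrete, here's roughly the shape of the split; the PropertyPersistenceStrategy name is the one I'm using, but the method signatures and the recorder shown here are just a sketch, not the final API:

```java
// The strategy does the real work of storing and restoring property values.
public interface PropertyPersistenceStrategy
{
    void storeProperty(String pageName, String propertyName, Object value);

    Object restoreProperty(String pageName, String propertyName);
}

// The recorder is a thin buffer between the page and the strategy;
// the strategy instance is looked up by the name in the page specification.
public class PageRecorder
{
    private final PropertyPersistenceStrategy _strategy;

    public PageRecorder(PropertyPersistenceStrategy strategy)
    {
        _strategy = strategy;
    }

    public void observeChange(String pageName, String propertyName, Object newValue)
    {
        _strategy.storeProperty(pageName, propertyName, newValue);
    }
}
```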
The page recorders can now be lightweight, created as part of a request and discarded afterwards; goodbye thread contention, and both the life cycle and the IPageRecorder interface get simpler.
Anyway, back to the moral: if you can subdivide an object into smaller pieces ... do it! Any time you can change an "is-a" relationship to a "has-a" relationship, you are going to find advantages in coding, testing, the works!