Wednesday, June 23, 2004

Clever Jetty Hack: Dynamically merging directories inside a web application context

I'm working on converting an existing web site from homebrew servlets and JSPs to Tapestry. What's interesting is that the application in question is deployed into different hosts using different "skins". Certain stylesheets and images are different, and there are minor differences in layout and behavior.

So, I'm working with their workspace directory layout; there's a common directory that contains the main content (images, stylesheets, Tapestry artifacts). Then there are additional directories, such as defaultskin, that contain skin-specific assets and artifacts.

When building and deploying, the content is merged together to form a composite web application context.

For development purposes, I want to leave the directory structure intact, but that makes it impossible to run the application (the assets and artifacts in the defaultskin directory simply aren't visible). What I need is a virtual directory within my web application context that points to an entirely different directory; I want to map /skin in my context to the defaultskin folder, so I can use URLs like http://.../skin/images/about.gif.

I did a bit of research and realized that the correct approach was to create my own implementation of Jetty's WebApplicationContext that included the necessary hooks:

package portal.jetty;

import java.io.IOException;

import org.mortbay.jetty.servlet.WebApplicationContext;
import org.mortbay.util.Resource;

/**
 * Used only during testing using Jetty, this allows resources "within"
 * a web application context to be retrieved from an entirely different
 * directory.
 *
 * @author Howard Lewis Ship
 */
public class SkinContext extends WebApplicationContext
{
    private String _skinURI;
    private String _skinPath;
    private Resource _skinResource;

    /**
     * Standard constructor; passed a path or URL for a war (or exploded war
     * directory).
     */
    public SkinContext(String war)
    {
        super(war);
    }

    public String getSkinURI()
    {
        return _skinURI;
    }

    public void setSkinURI(String string)
    {
        _skinURI = string;
    }

    /**
     * If the path falls within the skin URI, the resource is resolved
     * against the skin directory; otherwise, the normal web application
     * resource lookup occurs.
     */
    public Resource getResource(String contextPath) throws IOException
    {
        if (_skinURI != null && contextPath.startsWith(_skinURI))
        {
            String postPrefixPath = contextPath.substring(_skinURI.length());

            return _skinResource.addPath(postPrefixPath);
        }

        return super.getResource(contextPath);
    }

    public String getSkinPath()
    {
        return _skinPath;
    }

    public void setSkinPath(String string)
    {
        _skinPath = string;
    }

    public void start() throws Exception
    {
        super.start();

        _skinResource = Resource.newResource(_skinPath);
    }

}

This gets combined with a specialized jetty.xml (startup configuration file):

<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE Configure PUBLIC "-//Mort Bay Consulting//DTD Configure 1.2//EN" "http://jetty.mortbay.org/configure_1_2.dtd">

<!-- At deployment time, the WAR will be built properly, so that content under 
  "skin" is just appropriate to the type of site. During development, we don't want to have
  to copy files around unnecessarily, so we alias the web/defaultskin directory as "/skin". -->
   
<Configure class="org.mortbay.jetty.Server">
  <Call name="addListener">
    <Arg>
      <New class="org.mortbay.http.SocketListener">
        <Set name="port">8080</Set>
      </New>
    </Arg>
  </Call>
  
  <Call name="addContext">
    <Arg/>
    <Arg>
      <New class="portal.jetty.SkinContext">
        
          <Arg>web/common</Arg>
        
        <Set name="contextPath">/</Set>
        <Set name="skinURI">/skin/</Set>
        <Set name="skinPath">web/defaultskin/</Set>
      </New>
    </Arg>
  </Call>
  
</Configure>

Viva open source! I didn't even bother pulling down the Jetty source to figure this out; I just browsed Jetty's CVS repository online to find the few code snippets I needed. Of course, I don't think this solution is ideal for deployment, but that's not the point ... it's about making my development time efficient, and I'm quite happy with it.

Tuesday, June 15, 2004

A little EasyMock trick

I had a problem with EasyMock ... I had to validate a call into a method that was passed an exception. Exception instances don't compare well to each other, so this was causing problems.

Fortunately, EasyMock has an escape clause for this purpose: you can override how it compares expected vs. actual arguments for a particular method. Thus:

public class ExceptionAwareArgumentsMatcher extends AbstractMatcher
{
  protected boolean argumentMatches(Object expected, Object actual)
  {
    // Compare exceptions by type only; exception instances rarely
    // implement equals(), so an instance comparison would always fail.
    // The null check guards against a missing actual argument.
    if (expected instanceof Throwable)
      return actual != null && expected.getClass().equals(actual.getClass());

    return super.argumentMatches(expected, actual);
  }
}

This just checks that the exception passed into the mock object is the right type, not that it is any particular instance.

In use, it looks like this:

MockControl c = MockControl.createStrictControl(ErrorHandler.class);
ErrorHandler eh = (ErrorHandler) c.getMock();

eh.error(LOG,
  "Unable to order cartoon character 'wilma' due to dependency cycle:"
  + " A cycle has been detected from the initial goal [wilma]",
  null,
  new CyclicGoalChainException(new Goal("")));

c.setMatcher(new ExceptionAwareArgumentsMatcher());

c.replay();

HiveMind Framework Stack

I've created a new diagram to illustrate the HiveMind framework stack:

I think this is a good, at-a-glance view of what HiveMind is doing for you; the Application has pulled a couple of facade services out of the HiveMind Registry, but the implementations of those services are dependent on many other services and configurations inside HiveMind, and even on external entities (such as a session EJB inside the J2EE area).

Saturday, June 12, 2004

Order Up at the Tapestry Deli

MindBridge has just put up a new Tapestry web site: http://www.t-deli.com.

Not only does it contain a working version of the Tapestry Workbench, but it contains Tapestry-related Ant tasks and components. A lot of the stuff people are always looking for, like smarter versions of Tapestry's Conditional and Foreach components that not only have more sensible names, but also automatically adapt when they are inside a Form.

Occasionally I feel the crushing weight of Tapestry and the needs of the ever-growing community bearing down on me. It is such a relief to see the other committers sharing the load, especially in such a visible way. Thanks MindBridge!

Friday, June 11, 2004

Boston NEJUG Tapestry Presentation

Last night I gave my Tapestry presentation to the Boston NEJUG. Good crowd, about 100 people, at Sun's campus in Burlington, MA. Amazingly, I didn't have my expected projector problems; I was able to face the audience, and type, and see what was getting projected. Makes things so much easier.

It started a bit slow, and I was concerned that I was going to run out of material before I ran out of time. What I didn't realize was that these folks were simply too polite to interrupt with questions; once I asked for questions, I got deluged. There were minor hiccups in terms of getting the examples put together (I'm not quite Steve Jobs when it comes to live demos), but those hiccups do demonstrate the support that Spindle and Tapestry give you when developing, so it's not a bad thing.

Dan Jacobs (who runs the WebTech Users Group) had a few challenges for me ... such as ripping out the entire Form from the AddAddress page and moving it into its own component. No problem (OK, minor problem; I should have watched the error line reported by Tapestry), and I got it working reasonably well in just a couple of minutes.

I've put up an HTML version of my presentation, by request. As my presentations get better, my slide sets get smaller, and that's very good! I'll get a copy of the Eclipse project up shortly.

More on Tapestry and JSF

A recent article at OnJava, Improving JSF by Dumping JSP has been the source of some interesting discussions. The author, Hans Bergsten, is on the JSF expert group and is quite aware of Tapestry ... he even modified the Hangman example from chapter two of Tapestry in Action for this article.

Limitations of JSF/JSP

Interestingly, the JSF component tree is built initially during the JSP render, the first time a page is accessed. That means components early in a page may not be able to reference or interact with components later in the page on that first render, which results in some oddities. He gives an example of a label component and an input field component, where the label does not appear the first time the page is rendered. Whoops! That's a violation of the consistency principle, to me.

Then it turns out that JSF mandates lots of limitations on how you write your JSP. For example, in most cases, you must use an <h:outputText> tag rather than literal text to output literal text ... otherwise the text tends to end up in the wrong place!

Almost two years ago, I was on a quest to mix and match Tapestry and JSP, such that a Tapestry page could use a JSP instead of an HTML template to render its response. I made a lot of progress but hit some complex cases where I could not merge the two models together, and I instead went outside the box to create Tapestry 3.0 (which has JSP-like features, such as implicit components).

Apparently the JSF team hit the same snags and, because they are blessed by the powers that be, simply said "Thou shalt not do the things that make JSF break. If you do, it is your fault, not ours." Wish I had it so easy!

Now, as much as the JSF leaders might say that the limitations discussed in Hans' article are merely the result of using JSPs, and that JSF transcends JSPs ... well, any non-JSP implementation of JSF is going to be proprietary and non-portable, or will lack the IDE tool support that justifies JSF in the first place. This should sound familiar; it's the same set of promises that came out five years ago for EJBs (tools will fix everything!), and the same pattern of complexity, lock-in, and inefficiency that resulted.

Stepping closer to Tapestry

Hans' article continues with a discussion of how to use JSF with a custom ViewHandler that mimics Tapestry's HTML templates and page specifications. That's great, I suppose ... but what I'd rather see is a comparison to chapter six, which rebuilds the simple Hangman example using custom components. Creating new components in JSF is very involved ... lots of classes, lots of XML, lots of fiddly bits. Components in Tapestry are streamlined and easy ... and powerful, in that they can have their own HTML templates. So as complimentary as this article is to Tapestry, it is also an unfair comparison.

More observations

A JSF application can act as an inversion-of-control container, creating and configuring managed beans. This is a feature of Tapestry as well, primarily through the <bean> element of the page and component specifications. I think this is a better solution, to keep the beans associated with the page (or component) and for the names to be local to the page (or component). A lot of thought in Tapestry has gone into allowing different developers to work on different parts of an application without conflict. Anything global is a measured risk ... in Tapestry, the only true globals are the names of the pages.

It was interesting to see the <component> tags for selectForm and selections, where selections was nested inside selectForm (this also meant that, in the template, the selections component was referenced as selectForm:selections). In Tapestry, the two components would be peers (both children of the containing page), and the relationship (that the form encloses the selections) would be determined at runtime, during the render. This is important, because in Tapestry a Form cannot know statically about all the form element components that will be rendered inside its body ... those components may be in the body of the Form (the most typical case), or inside components within the body of the Form ... or, with clever use of Block and RenderBlock, potentially on an entirely different page of the application! Tapestry doesn't care, which is why artifacts such as form control ids must be allocated dynamically.
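That last point can be sketched in a few lines. This is hypothetical code, loosely in the spirit of Tapestry's runtime id allocation, not its actual implementation: each form control asks for an id during the render, and repeat requests for the same base id receive a numbered suffix so every allocated id stays unique.

```java
import java.util.HashMap;
import java.util.Map;

public class IdAllocatorDemo
{
    private final Map<String, Integer> _counts = new HashMap<String, Integer>();

    public String allocate(String baseId)
    {
        Integer count = _counts.get(baseId);

        if (count == null)
        {
            // First request for this id: return it unchanged.
            _counts.put(baseId, Integer.valueOf(0));
            return baseId;
        }

        // Subsequent requests: append a unique numeric suffix.
        _counts.put(baseId, Integer.valueOf(count.intValue() + 1));
        return baseId + "_" + count;
    }

    public static void main(String[] args)
    {
        IdAllocatorDemo allocator = new IdAllocatorDemo();

        System.out.println(allocator.allocate("input")); // input
        System.out.println(allocator.allocate("input")); // input_0
        System.out.println(allocator.allocate("email")); // email
    }
}
```

Because ids are handed out as components render, the Form never needs static knowledge of which controls will end up inside it.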

Now I may be nitpicking here, and I'm not sure which of these issues is related to the JSF standard, and which to Hans' ViewHandler implementation. The HTML template includes the same kind of previewable HTML as Tapestry, but it's not clear how, or if, it is discarded. Additionally, the javax.faces.Command/javax.faces.Link combination (used to render the link around each letter) has a rendered parameter similar to Tapestry's disabled parameter ... but necessitates a duplication of the javax.faces.Graphic to cover the cases where the link is not rendered (because that particular letter has already been guessed).

Summary

I just don't see JSF aspiring to any of Tapestry's guiding principles: Simplicity, Consistency, Efficiency, or Feedback. It's very gratifying that the JSF experts are looking to Tapestry for inspiration ... this is quite similar to the situation with the EJB 3.0 specification and Spring.

There has been some discussion of what Tapestry can do that JSF can't. That's a silly question ... there's nothing Tapestry can do that JSF can't. There's nothing either framework can do that a servlet can't, or a Perl script for that matter. It's a question of how easy it is to get it working, and how well it works at runtime, and how maintainable it is in the long run.

Based on everything I've seen, JSF still faces an uphill battle in these areas. Yes, FUD. Yes, I'm biased. But last night at the NEJUG, I coded two simple applications while explaining Tapestry and fielding questions and where I messed up, Spindle or Tapestry itself helped me fix my problems. I doubt I could have accomplished the same things using the JSF RI (even if I had the necessary experience) in the time available. I suspect, over time, we'll be seeing some better shoot-outs between Tapestry, WebWork and JSF. Struts, at this time, is turning into an also-ran kept around for legacy purposes. In the meantime, I repeat the rallying cry: Results Not Standards.

Wednesday, June 09, 2004

New in HiveMind: PipelineFactory

Just finished checking in the hivemind.lib.PipelineFactory service into HiveMind.

This is neat stuff; pipelines are a close cousin to interceptors, but tend to be things that are a) coded manually and b) specific to a particular service. Think Servlets and Servlet Filters.

The PipelineFactory is a service implementation factory that constructs a pipeline ... a series of filters that call each other, then eventually call an underlying service. It looks like this at runtime:

The bridge classes are fabricated at runtime and connected together with the filters. Each filter is passed the next bridge as a parameter (the last filter gets the terminator instead), and is free to invoke the bridge before or after doing some kind of work, and to change the parameters as it sees fit.
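To make the bridge/filter pattern concrete, here is a hand-coded sketch. The Greeter names are hypothetical, and the real bridge classes are fabricated at runtime rather than written by hand, but the wiring is the same: each filter receives the next link in the chain, and a bridge adapts a filter back to the service interface.

```java
// Hypothetical service interface; PipelineFactory works against any interface.
interface Greeter
{
    String greet(String name);
}

// The matching filter interface: same method, plus a trailing parameter
// for the next link in the chain.
interface GreeterFilter
{
    String greet(String name, Greeter next);
}

// A bridge adapts a filter back to the service interface by capturing
// the next link in the chain.
class GreeterBridge implements Greeter
{
    private final GreeterFilter _filter;
    private final Greeter _next;

    GreeterBridge(GreeterFilter filter, Greeter next)
    {
        _filter = filter;
        _next = next;
    }

    public String greet(String name)
    {
        return _filter.greet(name, _next);
    }
}

public class PipelineDemo
{
    public static void main(String[] args)
    {
        // The terminator: the underlying service at the end of the chain.
        Greeter terminator = new Greeter()
        {
            public String greet(String name)
            {
                return "Hello, " + name;
            }
        };

        // A filter that changes the parameters before invoking the bridge.
        GreeterFilter upcase = new GreeterFilter()
        {
            public String greet(String name, Greeter next)
            {
                return next.greet(name.toUpperCase());
            }
        };

        // A filter that does additional work after invoking the bridge.
        GreeterFilter exclaim = new GreeterFilter()
        {
            public String greet(String name, Greeter next)
            {
                return next.greet(name) + "!";
            }
        };

        // The last filter is connected to the terminator; earlier filters
        // are connected, via bridges, to the filters that follow them.
        Greeter pipeline =
            new GreeterBridge(exclaim, new GreeterBridge(upcase, terminator));

        System.out.println(pipeline.greet("world")); // prints Hello, WORLD!
    }
}
```

The caller only ever sees the Greeter service interface; the filters and bridges are invisible behind it.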

A minimal use of this shows up as a service, a configuration, and some contributions to the configuration:

service-point (id=MyPipeline interface=mypackage.MyService)
{
  invoke-factory (service-id=hivemind.lib.PipelineFactory)
  {
    create-pipeline (filter-interface=mypackage.MyFilter configuration-id=MyPipeline)
  }
}

configuration-point (id=MyPipeline schema-id=hivemind.lib.Pipeline)

contribution (configuration-id=MyPipeline)
{
  filter (service-id=FrobFilter)
  filter (service-id=BazFilter)
  filter (service-id=FooFilter after="*")
}

This will be very useful in Tapestry 3.1, because many different services (not just Tapestry engine services, but all kinds of infrastructure) will be implemented as pipelines of simpler services. These pipelines will be exposed as configurations that will be pluggable. Got a particular transaction management strategy that you want to apply to all requests? Plug into the correct pipeline. Want to add some authorization checks to some kinds of requests? Plug into those services' pipelines.

This concept is demonstrating two key selling points of HiveMind:

  • Libraries can provide configuration points that most applications can ignore, but certain applications can plug into.
  • Mixing services and configurations (services passed as data, or services defined in terms of contributed data) extends the power of both the service model and the configuration model.

Once again, this underscores the difference in philosophies between HiveMind and Spring ... in Spring you still code against the metal. It's a much better metal to code against than the raw Java APIs or the raw J2EE APIs, but to make use of a service provided by Spring, you need to know a lot of details about how and when to configure and instantiate it ... it all goes into your springbeans.xml file.

HiveMind's philosophy is that just by having a framework available on the classpath, it will provide services and configurations that you don't need to know about. You can find out about them via HiveDoc (and other documentation) and take advantage of them as needed.

Take the example above: you don't need to know about the implementation of the PipelineFactory:

service-point (id=PipelineFactory interface=org.apache.hivemind.ServiceImplementationFactory)
{
  invoke-factory (service-id=hivemind.BuilderFactory)
  {
    construct (class=org.apache.hivemind.lib.pipeline.PipelineFactory service-id-property=serviceId)
    {
      set-service (property=classFactory service-id=hivemind.ClassFactory)
      set-service (property=defaultImplementationBuilder service-id=DefaultImplementationBuilder)
    }
  }
}

This configuration, and its dependencies on the ClassFactory and DefaultImplementationBuilder services, are the PipelineFactory's concern. It might change at some point due to some refactoring within the HiveMind framework ... and it won't affect your code at all. A hypothetical third-party framework might make use of PipelineFactory in its own module descriptor, and you wouldn't have to know about that, either.

This is not to say you couldn't create some kind of pipeline factory in Spring. You absolutely could ... but it's a lot of work, and you'd probably have to code the "bridge" classes manually. In practice, it would be easier to define a bunch of individual beans and wire them to each other rather than add that extra abstraction ... the abstraction doesn't save you anything, because in Spring you are responsible for defining and configuring every bean in your application, even when the code comes out of a third-party framework.

Creating a pipeline like this in HiveMind is a bit of effort, a new recipe to follow. However, contributing into your own pipeline, or a pipeline provided by another framework, is very streamlined. The way HiveMind can mix and match configurations and services, and the way it can accumulate contributions from multiple modules, is the key to this ease of use.

Saturday, June 05, 2004

Getting ready for pipelines

In between some yardwork around the homestead, I've been coding up a storm on the HiveMind front. This morning, I hit all the remaining code inside the framework to handle the new localized message strategy (more on that in a later post). I then refactored ClassFab and friends to make them easier to use outside of HiveMind. ClassFab is my attempt to "tame" Javassist, the powerful bytecode enhancement framework used by both Tapestry and HiveMind. ClassFab streamlines the work of creating new classes at runtime ... the heavy lifting is done by Javassist, of course, but ClassFab is a prettier face.

The refactoring allowed me to create a better test suite for ClassFab; previously, the tests were a bit indirect, more like integration tests, where ClassFab was used to create a running HiveMind Registry. I've been trying to get away from that as the primary method of testing in HiveMind (and, some day, Tapestry), based largely on listening to Dave Thomas and reading Pragmatic Unit Testing. I now try to test directly, instantiating the objects inside my unit tests (assisted by EasyMock). I then do an additional integration test or two to verify that my module deployment descriptors are correct. This has a lot of advantages; it's much easier to set up exceptional cases directly than through the integration approach, so this more disciplined approach yields better confidence (and code coverage!).

The main effort was this evening, where I created a pair of related services. The first is passed an interface and it creates a default (or placeholder) implementation of that interface. More Javassist, of course. The second service is a service implementation factory, allowing you to easily create placeholder services.
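The placeholder idea can be sketched with the JDK's dynamic proxies. HiveMind actually fabricates a real class via Javassist, and the Worker interface below is purely hypothetical, but the effect is similar: given an interface, produce an implementation whose methods do nothing.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class DefaultImplementationDemo
{
    /**
     * Creates a placeholder implementation of the given interface: every
     * method does nothing, returning false, zero, or null as appropriate
     * to its declared return type.
     */
    public static Object defaultImplementation(Class<?> serviceInterface)
    {
        return Proxy.newProxyInstance(
            serviceInterface.getClassLoader(),
            new Class<?>[] { serviceInterface },
            new InvocationHandler()
            {
                public Object invoke(Object proxy, Method method, Object[] args)
                {
                    Class<?> type = method.getReturnType();

                    if (type == boolean.class) return Boolean.FALSE;
                    if (type == char.class) return Character.valueOf((char) 0);
                    if (type == byte.class) return Byte.valueOf((byte) 0);
                    if (type == short.class) return Short.valueOf((short) 0);
                    if (type == int.class) return Integer.valueOf(0);
                    if (type == long.class) return Long.valueOf(0L);
                    if (type == float.class) return Float.valueOf(0f);
                    if (type == double.class) return Double.valueOf(0d);

                    // void and object return types
                    return null;
                }
            });
    }

    // A hypothetical service interface to demonstrate the placeholder.
    public interface Worker
    {
        void run();
        int count();
    }

    public static void main(String[] args)
    {
        Worker worker = (Worker) defaultImplementation(Worker.class);

        worker.run();                       // does nothing
        System.out.println(worker.count()); // prints 0
    }
}
```

A generated class avoids the reflection overhead of a proxy, which matters when the placeholder stands in for a real service on a hot path.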

This is leading up to something very powerful, that builds upon the concept in HiveMind of mixing configurations and services on an equal footing: processing pipelines. Think in terms of servlets and servlet filters. Some series of filters will call each other, and the last filter will call into the servlet itself.

The end result will be a way to easily define a service in terms of a configuration point and a pair of interfaces: the service interface and the related filter interface. The configuration will define a series of filters (with all of the neat ordering power that's already built into HiveMind). HiveMind will dynamically create the necessary classes and such that will make the pipeline operate efficiently. This will be great for Tapestry, as it will break apart a number of monolithic classes and code blocks into tiny pieces, and give end users the ability to easily hook into many different places to provide application specific extensions.

Thursday, June 03, 2004

HiveMind vs. Spring: Philosophy

I was struck this morning with just the right way to explain the difference between Spring and HiveMind.

Spring is based on a recipe philosophy. If you want a particular kind of behavior, there's a recipe for getting it. You want declarative transactions? You look up the recipe for it ... you define your bean implementation (in Spring terminology, a "target bean"), the other beans you need (a transaction manager bean, in this case), and then combine them into a final bean, ready for use in the application.

Along the way, you may have to cut-and-paste some details, particularly the beans (transactionManager and friends) provided by Spring. On the other hand, you have a single place (your beans XML file) to look for all the details.

HiveMind uses a very different philosophy: a component oriented approach. In HiveMind, modules (that is, JARs packaged with a HiveMind module deployment descriptor) provide services (and configurations) using well-known names (those fully qualified service and configuration ids). Just by having the JARs on the classpath, the services are defined and waiting to be used, directly or indirectly, in your application.

Is this difference that important? I think so. HiveMind has a greater opportunity to be an integration platform than Spring. Underlying frameworks change over time, and it's easy to imagine a number of situations where a commonly used bean, such as a transaction manager in Spring, would change across releases. These changes could be something as simple as adding new properties or methods, or something more dramatic such as renaming a class or moving it to a new package.

Under the Spring model, upgrading from one release to another may break existing bean configurations.

Under the HiveMind model, the provided service would still have the same fully qualified id and service interface even if its implementation is radically changed ... and any such changes are restricted to the JAR and its module deployment descriptor.

In practice, this is not such a big deal; people don't go willy-nilly changing versions of important frameworks in their applications without the expectation that they'll need to do a good amount of testing and perhaps some tinkering.

In fact, the core difference in the philosophies is not about managing change over time, but about managing configurations across different simultaneous deployments. For Vista, the product that birthed HiveMind, there were pending requirements to support multiple databases (Oracle, MS SQL Server, perhaps PostgreSQL) with a single code base. The idea was that the application code would call into a data access object layer (implemented as HiveMind services), and those services would, in turn, call into a database abstraction layer to do the real work. The abstraction layer would also be HiveMind services, with one set specific to Oracle, another to SQL Server, and so on.

One option was to isolate the different abstraction layers into different JARs and just control which JARs were put into deployment.

A pending (and only potential) requirement for Vista was to be able to provide differently scaled versions of the product to different clients. So, a small school might get the JBoss version with a small number of Vista tools. The Vista application is divided into a large number of different tools, about 30, each focused on a particular part of the application, such as Mail, Discussions, Quizzes, Lab Assignments, and so forth.

A large implementation of Vista might want the full suite of tools, and be set up to deploy into WebLogic and Oracle.

The HiveMind vision is that the installer tool for Vista could simply decide which of the modules should be deployed based on licensing information. In this way, rather than maintain different installers and different configurations of the product (with potential duplication of code across source trees), we rely on HiveMind to integrate the application on the fly, at startup.

Alas, I switched over to consulting long before I had a chance to see any part of this vision take root at WebCT ... but if you are curious why HiveMind is the way it is, that vision is the root cause.

Wednesday, June 02, 2004

HiveMind is one year old!

Just checked the CVS access logs; I did the first check-in of HiveMind code into the Apache CVS on May 30, 2003. HiveMind is now over one year old.

Tuesday, June 01, 2004

Spring Integration in HiveMind

I've added some basic integration with Spring to HiveMind. You can now make Spring responsible for creating a service and, effectively, link to it from within HiveMind. That is, the SpringLookupFactory can create a core service implementation by accessing a bean defined within a Spring BeanFactory. This opens up access to all sorts of Spring functionality (especially Hibernate integration) but keeps the syntax simple (and in SDL). Rod has suggested that HiveMind somehow "use" Spring's transaction interceptor, but that's a project for another day.

We'll have to see if the Spring guys follow suit and provide some integration in the other direction; I would love to see something symmetric, where a Spring factory could defer bean creation to a HiveMind service. That would allow Spring beans to tap into HiveMind's much more advanced configuration management.

I've also been wasting time fighting with Forrest to get the menus for the HiveMind site generated properly. It's doing the right thing for non-index.html pages. All the index.html pages keep getting all the menus, not just the menus for the selected tab. Probably a bug deep, deep, deep inside Forrest.