Sunday, December 19, 2004

Groovy, Junit, Eclipse

So, during my trip to Javapolis, I hit that point where I had typed one parenthesis too many. You know the one: it involves class casts, the very ones the compiler could so easily do for us automatically. Mine typically look like:

    protected IPage newPage(String name)
    {
        MockControl control = newControl(IPage.class);
        IPage result = (IPage) control.getMock();

        result.getPageName();
        control.setReturnValue(name);

        return result;
    }

That's from a bit of a unit test, where I'm using EasyMock to create unit test fixtures on the fly.

Regardless, for my unit tests, I really get sick of all the casting and types. So I'd like to start using Groovy. Recoding the above code snippet into Groovy should end up looking like:

    protected newPage(name)
    {
        control = newControl(IPage.class)
        result = control.getMock()

        result.getPageName()
        control.setReturnValue(name)

        return result
    }

No return types. No variable types. Fewer parentheses (the remaining ones are there for aesthetic reasons). Ultimately, I think this is easier to read.

Is Groovy ready for prime time? Not for me. It's very important for my development cycle to be able to hit the Run... button in Eclipse and see the clean, green sweep of the progress bar in the JUnit view.

No such luck with Groovy. I created the simplest test case I could think of:

package com.examples;

import groovy.util.GroovyTestCase

class TestStuff extends GroovyTestCase
{
    void testSomething() {
        assertEquals(true, true)
    }
}

I then used the JUnit launch configuration to find all of my tests in my project ... which should just be this TestStuff class. Instead:

java.lang.RuntimeException: No filename given in the 'test' system property so cannot run a Groovy unit test
	at groovy.util.GroovyTestSuite.loadTestSuite(GroovyTestSuite.java:97)
	at groovy.util.GroovyTestSuite.suite(GroovyTestSuite.java:85)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:324)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.getTest(RemoteTestRunner.java:364)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:398)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:305)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:186)
Failed to invoke suite(): java.lang.RuntimeException: Could not create the test suite: java.lang.RuntimeException: No filename given in the 'test' system property so cannot run a Groovy unit test

OK. From what I can tell in the documentation, this means that Groovy has a placeholder TestCase class that is dependent on a JVM system property to identify the actual, single (!) script to execute. Fine; we'll narrow the focus and run the tests in just my src folder instead (already less than ideal, because I often have several folders of unit test classes).

Boom. JUnit sees no unit tests in the source folder.

The only way I've found to get my Groovy unit test to execute is to use Run ... -> Groovy. That's great for running one set of tests, available in a single unit test class. HiveMind has 523 tests, Tapestry has over 580 ... these tests are spread across a large number of individual unit test classes. I don't maintain a JUnit test suite any more because JUnit support in both Eclipse and Ant will "scan" for my test cases. When I make changes, I need to easily run all of my tests.

I don't doubt I could cobble something together using Ant to get my tests to execute, but my preferred work style is to stay in Eclipse. Shuttling back and forth between Eclipse and the command line, or even running Ant from inside Eclipse, is not acceptable. The JUnit support inside Eclipse blows away what's available at the command line ... the direct access to code lines, the ability to get a diff on failed assertions (something you may have missed ... double click on the mismatch message in the stack trace and a diff window pops up to show you exactly what didn't match). And the general pleasantness of the green bar. I don't want to run my unit tests outside of Eclipse ... but, for the moment, Groovy is trying to force me to.

What I did eventually find was that I needed to generate a TestSuite for my Groovy tests:

package com.examples;

import junit.framework.Test;
import junit.framework.TestCase;
import junit.framework.TestSuite;

/**
 * @author Howard M. Lewis Ship
 */
public class GroovySuite extends TestCase
{
    public static Test suite() throws Exception
    {
        TestSuite suite = new TestSuite("Groovy Tests");

        suite.addTestSuite(Class.forName("com.examples.TestStuff"));

        return suite;
    }
}

This works, but is not my ideal. I have to use the fully qualified class name as a string, otherwise Eclipse's builder considers this an error (the Groovy builder doesn't seem to be well integrated into Eclipse, so Eclipse has no knowledge of Groovy classes). I also have to maintain this file as I add new Groovy test classes. Further, this must be coded in Java (so Eclipse knows about it). I'll probably bite the bullet, and make do with this, but it's not as nice as simply writing Groovy tests and seeing them run.

Along the way, I found out that the Groovy plugin for Eclipse is quite primitive. Syntax or other errors in the Groovy code do not display in the editor, the tasks view or the problems view. Further, the old .class file is left behind, which further muddies the water. Basically, I'm left without a lot of confidence that what I've typed into the editor is what's running ... any errors in my Groovy code and some earlier version of the code runs instead. In unit testing terms, that means the potential for a lot of false positives on test runs!

The prevailing wisdom in the Groovy community is that the best way to get your feet wet with Groovy is to start using it for unit tests. Sounds like a great idea, but I think the Groovy team needs to focus on this use case, especially with respect to Eclipse and other IDEs.

Saturday, December 18, 2004

Servlet mapping limitations

Had a bit of a disappointment this week, while working (from my hotel room) on Tapestry.

I've been blissfully unaware of some limitations of the URL mappings for servlets in the web.xml file. Tapestry has always had a single, simple mapping, typically to /app and everything else went into query parameters. That's changing for 3.1 and now I'm seeing some limitations in what I can do.

I'd like to support some mappings, such as /Home.direct/border.link. That is, the direct service, the Home page, and the component border.link.

Alas, this can't be done. The web.xml mapping <url-pattern>*.direct</url-pattern> will not match the above path. You'd think it would (and set the path info to /border.link) but it simply doesn't match at all.
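Spelled out (the servlet name here is the same /app servlet mentioned above), the failing combination looks like this in web.xml:

    <servlet-mapping>
      <servlet-name>app</servlet-name>
      <url-pattern>*.direct</url-pattern>
    </servlet-mapping>

    <!-- A request for /Home.direct/border.link does not match this pattern at all,
         even though you'd expect a match with path info of /border.link. -->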

Given how rich Apache mod_rewrite is (despite the limitation of being written in C), you'd think there would be more flexibility for matching paths to servlets in the Servlet API. I haven't read the 2.4 specs yet; do they address this? Regardless, Tapestry has an explicit goal to stay compatible with Servlet API 2.2. I suspect this could be overcome with container-specific extensions, but that's another thing to avoid, since it always leaves some people out in the cold.

One possibility is that I'll support a pattern of /direct and create a URL like /direct/Home/border.link, but given that the page name may itself have slashes (i.e., admin/AdminMenu or some such), this introduces unwanted ambiguities.

Alternately, map /Home and use a URL of /Home/direct/border.link ... but suddenly, we start having a mapping for each page in the application!

In fact, I wish there was some delegate functionality in the servlet API, where application code could take over the identification of servlets for incoming paths. This would allow much richer paths. Alternately, it would be nice if the information provided in web.xml could be augmented, from Servlet.init(ServletContext) with additional information provided by the application. This would allow us to not repeat ourselves, by having Tapestry provide mappings based on the available EngineServiceEncoder mappings.

I have no taste for bureaucracy, but I may yet need to find a place on the Servlet API expert group.

Sunday, December 12, 2004

Updated PresentationExamples.zip

I've uploaded a new-and-improved PresentationExamples.zip. This is the file containing the source code from the examples I demonstrate at various user sessions. This update includes a bunch of new ideas from a recent road trip, and includes the Tapestry (and HiveMind) libraries. It's a zip of my local Eclipse workspace.

Tapestry 3.1 update

I've been squeezing in time to do a lot of updates to Tapestry 3.1 over the last few days, including on my flights to and from shopping.com.

I've been updating the Wiki with details on some of these changes ... at a high level the additions are:

  • Friendly URLs (!)
  • Modularity (put your pages in subdirectories)
  • Lots of new binding prefixes: asset:, component:, bean:, and listener:

Tapestry 3.1 continues to shape up as a radical rethinking of Tapestry 3.0 ... while staying about 95% backwards compatible. It will be both more efficient and more expressive (not to mention supremely extensible). Remember: Less is More!

Tapestry @ JavaPolis on Wednesday

I can't believe it, but I am flying all the way to Europe and back (planes, trains and automobiles!) for a one-hour Tapestry session at JavaPolis. I'm presenting on Tapestry from 16:55 to 17:55 on Wed December 15th 2004.

Back from shopping.com / Defer component

Just got back from a three day Tapestry training session with Jordan, Matt and the crew at shopping.com out in San Francisco. This was a fun, exhausting trip but everyone (myself included) got a lot out of it. Things I learned:

  • Be emphatic. The labs are good, even if they appear tedious. Nobody wanted to do them, and everyone was glad that they did.
  • Let Matt explain the problem before giving him the (wrong) solution.
  • Have a template Tapestry/Intellij project ready to go.
  • Flash memory keys are cool (we passed lots of files around, without having to get involved with network setup).

Along the way, we solved (to a degree) some problems with input validation (in Tapestry 3.0). If you've used validation, you know that you can get field labels to be decorated, along with the fields themselves. However, when using a loop around the input fields, there's an off-by-one error that causes the wrong label to be decorated.

The problem is that the FieldLabel component relies on the ValidField having the correct value in its name property when the FieldLabel renders, so that it can work with the validation delegate to determine if and how to decorate the label. However, the name property isn't set until the ValidField renders itself (that's when it obtains its name from the Form component, and registers itself with the validation delegate as the active component).

So if the FieldLabel and the ValidField are in a loop that executes three times, and the 2nd rendering of the ValidField is the one that's in error, it's the 3rd rendering of the FieldLabel that gets decorated. Whoops.

So, the trick is ... we need to render the ValidField before the FieldLabel, but the output from that rendering must still occur after the FieldLabel's output. Why that output order? Because that's how typical, western-language forms are laid out, with FieldLabels rendering first (and thus, to the left of) ValidFields.

So, at the customer site, I created a component I call Defer. Defer takes a Block as a parameter. It renders the Block into a buffer, then renders its body, then outputs the buffered content from the Block. Something like:

public abstract Block getBlock();

protected void renderComponent(IMarkupWriter writer, IRequestCycle cycle)
{
  // Buffer the Block's output in a nested writer ...
  IMarkupWriter nested = writer.getNestedWriter();

  getBlock().renderBody(nested, cycle);

  // ... render the Defer's own body directly to the main writer ...
  renderBody(writer, cycle);

  // ... then close the nested writer, which flushes the buffered Block output after the body.
  nested.close();
}

How is this used? The FieldLabel is enclosed inside a Defer, and the ValidField is enclosed inside a Block.

  <span jwcid="@Defer" block="ognl:components.fieldBlock">
    <span jwcid="@FieldLabel" field="ognl:components.inputName"/>:
  </span>

  <span jwcid="fieldBlock@Block">
    <input jwcid="inputName" ... />
  </span>

So the inputName component renders first, then the FieldLabel renders, then the HTML from the inputName component is output. This is one of the many things I love about Tapestry ... it's not just a stream of text, it's actual objects. Once you see past the false reality of a text stream to the Matrix of the Tapestry component object model, you are free to twist time (or at least, control the order of rendering).

It's far from perfect ... as with using a ListEdit/ListEditMap combination inside a Form, it forces too much of Tapestry's internals into your lap ... but it is workable, and it is only needed for special cases where a ValidField component's name changes while the form renders -- which is to say, only when using the FieldLabel/ValidField combination inside a loop.

Monday, December 06, 2004

HiveMind and EasyMock

I've become a big fan of EasyMock. EasyMock is a way to create mock implementations of interfaces, for use in your unit tests. This fits in really well with the overall IoC concept, since when testing class A you can inject mock versions of interfaces B and C.

With EasyMock, you create a control for your interface. The control is an instance of MockControl. From the control, you can get the mock object itself. You then train the mock (I call it a "zombie"). As you invoke methods on the zombie, the sequence of operations is observed by the control.

When you invoke a non-void method, you tell the control what value to return. You can also have any method throw an exception.

You then switch the zombie into replay mode and plug it into the class you are testing. That code interacts with the zombie as if it was, well, whatever it should be.
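In raw EasyMock terms, the whole cycle looks something like this (a minimal sketch inside a JUnit test method, assuming a hypothetical Greeter interface with a single greet() method; MockControl is org.easymock.MockControl):

    MockControl control = MockControl.createControl(Greeter.class);
    Greeter zombie = (Greeter) control.getMock();

    // Training: invoke the expected method on the zombie, then tell the control what to return.
    zombie.greet("world");
    control.setReturnValue("Hello, world");

    // Switch to replay mode, then exercise the code under test (here, just the zombie directly).
    control.replay();

    assertEquals("Hello, world", zombie.greet("world"));

    // Verify that every trained method call actually occurred.
    control.verify();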

HiveMind includes a HiveMindTestCase base class that improves this somewhat. Using EasyMock out-of-the-box, you write a lot of test code that just manages the controls. HiveMindTestCase does this for you, which makes it more practical when you have half a dozen pairs of controls and zombies.

For example, here's part of the test for Tapestry's ExternalService:

    public void testService() throws Exception
    {
        MockControl cyclec = newControl(IRequestCycle.class);
        IRequestCycle cycle = (IRequestCycle) cyclec.getMock();

        IExternalPage page = (IExternalPage) newMock(IExternalPage.class);

        Object[] serviceParameters = new Object[0];

        cycle.getParameter(ServiceConstants.PAGE);
        cyclec.setReturnValue("ActivePage");

        cycle.getPage("ActivePage");
        cyclec.setReturnValue(page);

        LinkFactory lf = newLinkFactory(cycle, serviceParameters);

        cycle.setServiceParameters(serviceParameters);
        cycle.activate(page);
        page.activateExternalPage(serviceParameters, cycle);

        ResponseOutputStream ros = new ResponseOutputStream(null);

        ResponseRenderer rr = (ResponseRenderer) newMock(ResponseRenderer.class);

        rr.renderResponse(cycle, ros);

        replayControls();

        ExternalService es = new ExternalService();
        es.setLinkFactory(lf);
        es.setResponseRenderer(rr);

        es.service(cycle, ros);

        verifyControls();
    }

This code tests the main code path through this method:

   public void service(IRequestCycle cycle, ResponseOutputStream output) throws ServletException,
            IOException
    {
        String pageName = cycle.getParameter(ServiceConstants.PAGE);
        IPage rawPage = cycle.getPage(pageName);

        IExternalPage page = null;

        try
        {
            page = (IExternalPage) rawPage;
        }
        catch (ClassCastException ex)
        {
            throw new ApplicationRuntimeException(EngineMessages.pageNotCompatible(
                    rawPage,
                    IExternalPage.class), rawPage, null, ex);
        }

        Object[] parameters = _linkFactory.extractServiceParameters(cycle);

        cycle.setServiceParameters(parameters);

        cycle.activate(page);

        page.activateExternalPage(parameters, cycle);

        _responseRenderer.renderResponse(cycle, output);
    }

Back in the test code, the newControl() method creates a new MockControl. The newMock() method creates a control and returns its mock ... in our unit test, we create an IExternalPage mock instance to stand in for a page named "ActivePage" and ensure that it is passed to IRequestCycle.activate().

The replayControls() and verifyControls() methods come from HiveMindTestCase; they invoke replay() and verify() on each control created by newControl() or newMock().

While replaying, the zombie and the control work together to ensure that each method is invoked in sequence and with the correct parameters.

The verifyControls() at the end is very important; an incorrect or out-of-sequence method call will be picked up in-line (an exception is thrown), but an omitted method call can only be discovered by verifying the mock.

This technique takes a bit of getting used to; this "training" stage can easily throw you the first time through. Alternatives to EasyMock, such as jMock, generally employ a different model, where you train the control (not the zombie), passing in method names and identifying arguments or other expectations in various ways. I suspect jMock is ultimately more powerful and expressive, but I find EasyMock more effective. Your mileage may vary.

Regardless of which framework you choose, this technique works really well! In fact, it is often useful to use this technique to define the interface; this is best explained by example. I was recently recoding how Tapestry performs runtime bytecode enhancement of component classes. The old code was monolithic, one big class. I broke that up into a bunch of "workers". Each worker was passed an object, an EnhancementOperation, that was a facade around a lot of runtime bytecode machinery. EnhancementOperation has methods such as addInterface() and addMethod() that are directly related to code generation, and a number of other methods, such as claimProperty(), that were more about organizing and verifying the whole process.
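As a rough sketch (the EnhancementWorker interface and this particular worker are hypothetical; only the EnhancementOperation methods named above come from the actual code), one of those workers might look like:

    public class HypotheticalSerializableWorker implements EnhancementWorker
    {
        public void performEnhancement(EnhancementOperation op)
        {
            // Each worker has one narrow responsibility; this one just adds an interface
            // to the enhanced class, working through the facade rather than raw Javassist calls.
            op.addInterface(java.io.Serializable.class);
        }
    }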

What's fun is that I coded and tested each worker, extending the EnhancementOperation interface as needed. Once all the workers were tested, I wrote the implementation of EnhancementOperation, and tested that. The final bit was an integration test to verify that everything was wired together properly.

Now, if I had sat down (as I might have done in the past) and tried to figure out all of, or even most of, the EnhancementOperation interface first, I doubt I would have done as good a job. Further, having EnhancementOperation be an un-implemented interface meant that it was exceptionally fluid ... I could change the interface to my heart's content, without having to keep an implementation (and that implementation's tests) in sync.

In sum, EnhancementOperation was defined through a process of exploration; the process of building my workers drove the contents of the interface and the requirements of the implementation. And all of this was possible because of the flexibility afforded by EasyMock.

Out of the box, EasyMock only supports interfaces; behind the scenes, it uses JDK proxies for the zombies. There's an extension that uses bytecode enhancement to allow arbitrary classes to be instrumented as mock object zombies.

This is cool as I rework the Tapestry test suite; often components collaborate and there is no interface between them; I still want to be able to mock up the other component, so I need to use the EasyMock enhancements.

Ultimately, I extended HiveMindTestCase's newControl() method to allow an interface or a class to be specified, and to do the right thing for each. Here's an example, where I'm testing how a Block and RenderBlock component work together:

    public void testNonNullBlock()
    {
        Creator c = new Creator();

        MockControl bc = newControl(Block.class);
        Block b = (Block) bc.getMock();

        RenderBlock rb = (RenderBlock) c.newInstance(RenderBlock.class, new Object[]
        { "block", b });

        IMarkupWriter writer = newWriter();
        IRequestCycle cycle = newRequestCycle();

        b.getInserter();
        bc.setReturnValue(null);

        b.setInserter(rb);

        b.renderBody(writer, cycle);

        b.setInserter(null);

        replayControls();

        rb.render(writer, cycle);

        verifyControls();
    }

I'm testing RenderBlock, so I use the Creator (an improved version of Tapestry TestAssist) to create and initialize a RenderBlock instance (even though RenderBlock is an abstract class).

The Block, which will have methods invoked on it by the RenderBlock, is a mock object (a zombie). Notice that it's created exactly the same way as a mock for an interface.

This is a huge improvement over the old approach for testing in Tapestry: build a simple application inside the integration test framework. The problem is, the integration test framework is very slow (several seconds per test) and cranky. A small amount of integration testing is important, but it is cumbersome for the number of tests a product as sophisticated as Tapestry requires (at the time of this writing, Tapestry has approximately 545 tests, including about 30 integration tests). Unit tests, by definition, are more precise about failures.

I predict that the number of tests in Tapestry may well double before Tapestry 3.1 leaves beta! But then again, I'm pretty well test infected!

Tapestry URLs: Half way there

Did a lot of work this weekend on the Tapestry URL front. If you are familiar with Tapestry, you know that nobody likes the way the URLs are formatted. That's an issue, because you don't have control over it ... Tapestry is responsible for building the URLs (a very good thing) and does all the dispatching and so forth (again, very good thing). But Tapestry's butt ugly URLs are a problem for many.

  • All URLs are built off a single servlet, typically /app. This defeats J2EE declarative security, which is path based.
  • The URLs are longish. For example,
    /app?service=direct/0/Home/$Border.login&sp=T
    

    There's a rhyme and a reason to all that, but it's very much oriented around the code and not the users.

  • The use of the slash ("/") character in the URLs is an impediment to breaking the application into modules (i.e., putting the admin pages into an "admin" folder).
  • The emphasis on query parameters means that most of an application, after the home page, will be "off limits" to any kind of web spider, such as Google.

I spent a large portion of the last few days working on this, and I'm halfway there. Of course, you'd hardly know it ... if you download Tapestry from CVS and built it right now, you'd see that the URLs have gotten longer! That service query parameter has been broken up into several smaller variables: service, page, component and, where necessary, container. Some Tapestry engine services add others (for example, the asset service has a path query parameter).

That means that the example URL from before would look like:

/app?component=$Border.login&page=Home&service=direct&sp=T

The next step (the work not yet done) is more fun ... what if we converted the page query parameter into more path info, and the service into an extension? Then our URL is much friendlier:

/Home.direct?component=$Border.login&sp=T

If we map the .html extension to the home service, then a Tapestry page looks like a normal HTML page:

/Home.html

From a client web browser, that's a reference to a Tapestry page, even though it looks like just the page's HTML template. With this kind of mapping, you might not even use the PageLink component in your templates, just ordinary HTML links:

<a href="misc/About.html">About</a>

Instead of:

<a jwcid="@PageLink" page="misc/About">About</a>

Getting all of this to work for your own application will require:

  • Adding path mappings to your web.xml:

    <servlet-mapping>
      <servlet-name>app</servlet-name>
      <url-pattern>*.html</url-pattern>
    </servlet-mapping>
    
  • Adding configuration data (HiveMind contributions) to your application, to tell Tapestry about those mappings:
    <map-extension extension="html" service-name="page"/>
    

Hopefully, I can get more of this going in the next couple of days (I have a long flight out to San Francisco this week).

Still, even with these improvements, the URLs aren't perfect. There's some dissatisfaction with bookmarking URLs with the external service; any extra data is encoded into the sp parameter, which has a prefixing system to identify the type of data. For example, "S" for string, "d" for double, "O" for serialized object.

Talking with Erik, I had the idea that we could simplify things by creating a service that bookmarked some set of page properties into the URL. So you might have a ShowAccount page, with an accountId property of type long. The accountId property gets converted into a string as part of the URL. Later, when the request is submitted, the accountId query parameter is converted back to a long and plugged into the page's accountId property. The end result is a "pretty" URL:

/ShowAccount.html?accountId=972

This is much prettier than what you might accomplish in Tapestry 3.0 using the external service:

/app?service=external/ShowAccount&sp=l972

I think Erik wants to go further though ... he wants something like:

/ShowAccount/972

I think we can get there. I think we can allow one extension to map to different services (here, the extra query parameter hints that this is a bookmark service, not a page service, request). It's a matter of flexible interfaces and flexible HiveMind contributions.

Wednesday, December 01, 2004

Experiments with Tapestry and JDO

I'm working on some new examples to replace (or give context to) my existing set. I've decided that it is more useful to do some hand waving and build examples that hit a database than it is to stick to simple examples with no back end at all.

To that end, I'm building a new example application, ePluribus, a survey site. I'll let you decide for yourself if the name is punny enough. ePluribus will be based on Tapestry 3.0.1 and HiveMind 1.0 (for the moment), and I'm using JDO for the back end. Once Tapestry 3.1 is stable, I will heave a great sigh of relief and switch over to that (the connection between Tapestry and HiveMind is somewhat jury rigged in Tapestry 3.0.1). I expect that ePluribus will be more relevant than the aging Virtual Library (discussed in the last chapters of Tapestry in Action).

I was very excited by JDO when I first heard about it back at JavaOne 1999. I've never understood why Sun didn't get fully behind JDO ... maybe because it doesn't require an application server. Talk about write once, run anywhere! Political questions aside, JDO seems ever more ready for prime time.

I had a good experience on TheServerSide.com, integrating Steve and Bruce's Kodo JDO code with the new Tapestry front end. Kodo is very clean, very well documented, and provided some great support to me. I've heard rumbles about performance, but TheServerSide.com is running faster than ever since the changeover (something I attribute to Kodo's query caching combined with Coherence's cluster-wide cache).

However, for a new, redistributable chunk of example code, Kodo (a proprietary product) is not the solution. Currently, I've turned to JPOX, a very well spoken of open-source product. It appears to use a variation of the Apache Software License 1.0.

They already have a port of the Virtual Library to JPOX. If they can handle the Virtual Library, they can handle ePluribus.

I've already implemented basic infrastructure, such as HiveMind interceptors to manage transactions, and injecting the PersistenceManager into other services. In fact, HiveMind's threaded service model is perfect for this kind of thing ... a fixed proxy is injected into other services. Invoking methods on the proxy delegates out to a per-thread implementation. This is what HiveMind is all about ... the other services can use the services provided by the PersistenceManager without worrying at all about its life cycle.

The JDOTransactionInterceptor ensures that transactions are committed after each method invocation (unless specifically told not to) and that any thrown runtime exception rolls back the current transaction.

Think about the amount of code you would typically write ... get the PersistenceManagerFactory, get the PersistenceManager from it, start and commit a transaction, close the PM. Don't forget rolling back transactions when exceptions are thrown. That's lots of code around your individual service methods that isn't necessary when using a HiveMind-based IoC approach.
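For comparison, a rough sketch of that boilerplate (assuming a pmf variable holding the javax.jdo.PersistenceManagerFactory):

    PersistenceManager pm = pmf.getPersistenceManager();
    Transaction tx = pm.currentTransaction();

    try
    {
        tx.begin();

        // ... the actual work of the service method goes here ...

        tx.commit();
    }
    finally
    {
        // Roll back if the work threw an exception before the commit.
        if (tx.isActive())
            tx.rollback();

        pm.close();
    }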

As a side benefit, a PersistenceManager is only obtained from the PersistenceManagerFactory, and a transaction is only started, the first time (per request) that a method is invoked on the PersistenceManager proxy. The current production code on TheServerSide may start and commit half a dozen transactions per request. When that code is refactored around a "thin stack", there will never be more than one transaction per request for the majority of pages (that only read, not update, information).

JPOX is responsible for transactions and JDBC connection pooling ... and suddenly, there's no need for an application server at all. That's "thin stack" ... less code, same (or more) functionality.

So the question is ... why not Hibernate? I actually started this project a ways back, on Hibernate. In theory, JDO and Hibernate are largely equivalent ... in practice, I've just enjoyed JPOX and Kodo more than Hibernate. The documentation has been easier to follow, the XML configuration has been more natural to me, the error reporting is a notch better (but certainly not state of the art). I've been more effective, faster, with JPOX than I was with Hibernate, and whether the difference is in the two packages, or just the experience I've gained elsewhere in the last couple of months, I don't know or care. Viva choice! Viva Open Source!

Wednesday, November 24, 2004

Paint Shop Pitiful HiveMind Example

I surprised folks at ApacheCon by using a radically different example than what was shown in the printed handouts ... PaintShopPitiful, a very simple SWT application that reads in an image file and can perform some simple manipulations on it. The HiveMind part is all about the separation between the GUI and the filter objects that do the manipulations, as well as the HiveMind services that connect the two together (creating an SWT Menu in the process).

Anyway, the source code (pitiful as it is) is now up in the downloads section as http://howardlewisship.com/downloads/paint-shop-pitiful.zip.

Friday, November 19, 2004

Latest Tapestry documentation, Death to Parameter Direction

After a fairly long struggle with Forrest, I've finally been able to get the Tapestry 3.1 documentation built properly. The upgrade from Forrest 0.5 to 0.6 has been very painful and, in fact, getting the documentation to build correctly requires the very, very latest from Forrest's SVN repository (what will become Forrest 0.7).

The results are pretty though (they're in a temporary location while the details get filled out, especially the new component reference).

In other news, parameter directions are dead. This is huge news if you use Tapestry 3.0 today ... understanding parameter directions was a real pain. Basically, you had to give Tapestry hints about how to synchronize the properties of your component against the properties of the page, effectively saying "I only access this parameter when rendering" vs. "I might access this parameter at any time." (vs. "Just let me deal with this in lots of Java code, please!").

That's completely gone in Tapestry 3.1. The new accessor methods for component parameter properties are now super-smart: they do what the old direction auto did, but handle optional parameters, type conversion, and really smart data caching. Less is more!

Tuesday, November 16, 2004

Rumor Control

So I'm here at ApacheCon and people keep coming up to me and asking me about my new job. And that's weird because I don't have one. Apparently, there's a strong rumor that JBoss is about to announce an existing Apache project moving under their umbrella, and from the hints, people think it's either Tapestry or HiveMind. How odd! Nice to be noticed, but there's no shred of truth there. I won't fan the flames of rumor with any speculations ... I guess we'll all see what's up in a day or two.

Thursday, November 11, 2004

A door closes, a door opens?

All good things must come to an end; alas, my work for The Middleware Company is rapidly drawing to a close, as part of the overall picture with the sale of TMC to TechTarget. That is unfortunate, because the Tapestry conversion of theserverside.com, while quite successful for all involved, was only the opening moves in a longer game that may not be played.

I was hoping that I could continue as I have for the last few months: working from home on TSS, speaking at JUGS and conferences, and building up a nest egg/warchest. I'm now back to doing things I should have been actively doing for the past few months ... calling back my contacts and working to find new ones.

So let's be clear ... if you are starting a Java web application project, you should be using Tapestry. It's just going to save you an awful lot of grief and worry. If you need help getting started, need professional support, or need an authoritative voice to help you convince your management (or all three at the same time) ... well folks, that's what I do for a living.

Knut's All Groovy on That HiveMind Thang

Knut Wannheden has just checked in Groovy support for HiveMind. The idea is that you can define your module descriptor using Groovy builder syntax rather than XML. I guess there are advantages when doing a lot of similar services. Anyway, that's one way of limiting the amount of XML.

Monday, November 08, 2004

Comparisons, Comparisons

Let's see, Mike Spille has done a fairly detailed comparison of HiveMind, Spring and PicoContainer. From a pure IoC (Inversion of Control) point of view, he likes HiveMind the most, which is heartening:

"All in all HiveMind looks to implement IoC in a very solid way, and generally only implement IoC and surrounding infrastructure (like configuration), and based on everything I've read is based on real project requirements (and I believe this shows). At the same time, some bits of HiveMind still seem a bit rough and appear to need polishing."

Meanwhile, Matt Raible has previewed his slides for his ApacheCon talk comparing web frameworks (here's a PDF link). He critiques Tapestry's documentation ... does nobody get that I spent most of a year of my life working on the Tapestry documentation ... beg, borrow or buy a copy, but don't claim there isn't documentation or good examples. Meanwhile, the community is creating its own sets of tutorials and you can find them all on the Wiki. Matt has a copy of the book but hasn't read it yet, and I find that unfair.

Then there's the statement about Tapestry being impossible to test. That is simply not true, but the very question obscures a more relevant issue. You can test the methods, but there really is no such thing as unit testing a web application. Remember, you are supposed to test behaviors, not methods, and behaviors of web applications (or GUIs for that matter) are emergent from many things besides the raw code.

Given that, you had better have a plan for integration testing your application. There's a number of open-source and proprietary tools out there for this purpose, though I have yet to find one that uses a simulated servlet container that can be effectively run from within a unit test suite without any external requirements or servers.

For a servlet or Struts application, I have very, very little confidence that the application as a whole works, even if individual methods are tested and work. There's so much else that can go wrong. Tapestry is a much safer bet (because so much of the machinery is general purpose and tested) ... but still, what you are mostly testing in any kind of web or GUI application is that all the bits and pieces are configured correctly and hooked together just right ... things that don't show up in unit tests.

The best course of action for any kind of GUI is to embrace a proper separation of concerns and move important logic out of the presentation layer and into a more test-friendly environment. In Tapestry terms, keep those listener methods really small, and delegate out as much logic as possible into business objects. This will be a snap in Tapestry 3.1 ... just move all the logic into HiveMind and have the HiveMind services injected into your pages and components.
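For illustration only (the page, property, and OrderService names here are hypothetical), a Tapestry 3.x listener that stays thin by delegating:

    public abstract class CheckoutPage extends BasePage
    {
        public abstract OrderService getOrderService();

        public abstract long getOrderId();

        public void placeOrder(IRequestCycle cycle)
        {
            // All the real logic lives in a testable business service.
            getOrderService().placeOrder(getOrderId());

            cycle.activate("Confirmation");
        }
    }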

Matt does see something of worth in Tapestry:

"After working with Tapestry and JSF, I can see how component-based frameworks will be the wave of the future. I think as you develop more and more components, the code you write becomes less and less."

I maintain that this is much more dramatically true for Tapestry than for JSF, and Tapestry 3.1 will extend that difference. Further, I think Tapestry is the wave of the now. Anyway, look for the sparks to fly when I sit in on Matt's session at ApacheCon!

Friday, November 05, 2004

HiveMind AdapterRegistryFactory

Just did a little burst of work on HiveMind and added the AdapterRegistryFactory. What's it all about?

It's an implementation of the adapter pattern. The idea is that you have some common operation that should apply to all sorts of different types; perhaps it's code that is used to output an XML representation of different objects, or performs some other common operation. In Tapestry, an example of this is the way different objects are evaluated as conditions for the Conditional component: a java.lang.Boolean is obvious, but others require some work: a java.lang.String if it contains a non-whitespace character; any Number if the value is non-zero; any java.util.Collection if not empty; etc.

With an AdapterRegistry, you define an interface for your adapters. The first parameter of each service method will be used to select the adapter.

public interface ConditionEvaluator
{
  public boolean evaluate(Object value);
}

This idiom reflects that the adapters are singletons into which the object to operate upon is provided as a method parameter. This differs somewhat from the Gang of Four usage, where a specific adapter instance is created for a specific object instance.

You define a configuration and contribute classes and adapters:

<contribution configuration-id="ConditionEvaluators">
  <adapter class="java.lang.Boolean" object="instance:BooleanAdapter"/>
  <adapter class="java.lang.Number" object="instance:NumberAdapter"/>
  . . .
</contribution>

Your adapter implementations are simple:

public class BooleanAdapter implements ConditionEvaluator
{
  public boolean evaluate(Object value)
  {
    Boolean b = (Boolean)value;
 
    return b.booleanValue();
  }
}

Lastly, you define your service point:

<service-point id="ConditionEvaluator" interface="ConditionEvaluator">
  <invoke-factory service-id="hivemind.lib.AdapterRegistryFactory">
    <construct configuration-id="ConditionEvaluators"/>
  </invoke-factory>
</service-point>

At this point, you can reference this service and invoke methods on it, passing different instances into it. Internally, the service implementation will locate the matching adapter (BooleanAdapter for java.lang.Boolean, NumberAdapter for java.lang.Integer and friends) and let the adapter do the work. What's powerful is that your code just sees the one service proxy ... HiveMind connects the dots to get the correct adapter implementation invoked when you invoke a method on the proxy. This is similar to how the threaded and pooled service models expose just the one proxy. There are no questions such as "is this service threaded?" or "is this service an adapter?" ... it all just works.
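Client code never even knows the adapters exist; usage is something like this (a sketch; registry here is the HiveMind Registry, and the lookup assumes the service point above, with the module id omitted):

    ConditionEvaluator evaluator = (ConditionEvaluator) registry.getService(
            "ConditionEvaluator", ConditionEvaluator.class);

    evaluator.evaluate(Boolean.TRUE);     // dispatched to BooleanAdapter
    evaluator.evaluate(new Integer(42));  // dispatched to NumberAdapter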

For the moment, you are not allowed to pass null (you'll get a runtime exception). Still pondering the right approach to handling nulls.

Much like the PipelineFactory, this is a quick way to assemble a pretty sophisticated apparatus ... that all hides behind a single service and a single interface. I often talk about HiveMind's power coming from the mix of services and configurations. In a way that is familiar to Lisp hackers and the like, code (in HiveMind terms, services) gets all mixed up with data (configurations) to allow elegant, powerful solutions to arise.

Updated: Changed the naming from "Adaptor" to "Adapter".

Thursday, November 04, 2004

Tapestry 3.1 and HiveMind 1.1 work accelerating

I've been able to sneak in a little bit of work on Tapestry 3.1 and HiveMind 1.1 over the last week or so. It's been fun stuff; on the Tapestry side, I've been re-working the enhancement subsystem.

Enhancement is the process of taking your classes (which are usually abstract) and the corresponding page or component specification, and building a subclass that fills in all the Tapestry details. It is the sub-class that gets instantiated. The class is abstract because Tapestry needs to fill in some critical details on each property, to make the resulting page or component work properly within Tapestry's page pool.

In fact, the whole process has gotten much, much smarter for Tapestry 3.1. There's the <inject> element, which allows HiveMind objects to be pulled out of the registry and plugged into a page (or component) as a read-only property. Also, any abstract property on a page (or component --- jeez that gets repetitive; pages are components) will turn into a transient property automatically.

The <property> element is now only needed for persistent properties, for properties that want to set an initial value, or for properties that aren't referenced in Java code (but are simply used to move data between components).
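So a 3.1 page class can be as simple as the following sketch (the class and property names are hypothetical); the abstract accessors are all Tapestry needs to see:

    public abstract class ShowAccount extends BasePage
    {
        // Becomes a transient property automatically; no <property> element required.
        public abstract long getAccountId();

        public abstract void setAccountId(long accountId);
    }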

The approach, using a configuration of worker services, each with a specific responsibility, works great. It will be easy to extend it in the future with new properties; for example, I want to be able to assign a property name to a component or asset, and have a property, of the correct type, show up.

Some things are getting more efficient as well. In Tapestry 3.0, each specified property must have an object implementing PageDetachListener, whose job is to reset the property back to its default value. This is needed even if there isn't an OGNL expression for the property's initial value. For the moment, that hasn't changed ... for specified properties. For unspecified abstract properties, a more sophisticated change takes place: the class is made to implement PageDetachListener (if it did not already), and the pageDetached() method is created or overridden to take care of resetting the properties; in addition, the finishLoad() method is overridden to snapshot the initial values of the properties (each property gets a pair of instance variables).

So we've broken down a complex job into several much simpler jobs; each individual worker is fully tested (100% code coverage). They each work with an EnhancementOperation object that encapsulates most of the work of analyzing the existing class and constructing the enhanced class.

By plugging more workers into the configuration, even application-specific ones, new types of enhancements will be added. It will be possible, for example, to create a library that understands some form of JDK 1.5 annotation and have it plug into the configuration ... the Tapestry framework can continue to be compiled against JDK 1.3 while using JDK 1.5 features. How's that for late binding?

The big payback for all of this is coming up: elimination of parameter direction. The definition of dread is when I get to that part of my Tapestry presentation. Parameter direction is a hint, given to Tapestry, that describes when OGNL expressions should be evaluated to move data out of a container (typically, the page) and into a contained component's properties. The problem is, you have to either understand a lot of what Tapestry is doing under the covers to decide what the right direction should be, or at least, do certain things by rote ("if I access that parameter property in a listener method, then it must be direction auto").

Yuck. It should just work ... and in Tapestry 3.1 it just will. The generated code for parameter properties will be more involved ... it will have to be aware of whether the component is currently rendering or not, whether a cached value for the property is available, and some tricky logic about converting types as needed.

That's going to take a little work to get right, which is why I flipped back to the HiveMind side. HiveMind has the ClassFactory service, which puts a relatively pretty face on the arcane aspects of Javassist. However, given how much runtime bytecode enhancement is going to occur, and how much of it will be distributed across many objects, ClassFactory and ClassFab had a weakness: they did not implement toString(). Now they do; I spent some time last night getting them to generate a reasonable string representation that looks kind of like Java pseudocode, so that you can see what's going to be generated by the time the class is created from the ClassFab. The end result for an enhanced class can look like:

ClassFab[
public class $ExceptionDisplay_4 extends org.apache.tapestry.html.ExceptionDisplay
  implements org.apache.tapestry.event.PageDetachListener

private org.apache.tapestry.util.exception.ExceptionDescription[] _$exceptions;

private int _$index;

private int _$index$defaultValue;

private int _$count;

private int _$count$defaultValue;

public org.apache.tapestry.util.exception.ExceptionDescription[] getExceptions()
return _$exceptions;

public void setIndex(int $1)
_$index = $1;

public int getCount()
return _$count;

public void finishLoad(org.apache.tapestry.IRequestCycle $1, org.apache.tapestry.engine.IPageLoader $2, org.apache.tapestry.spec.IComponentSpecification $3)
{
  super.finishLoad($$);
  _$index$defaultValue = _$index;
  _$count$defaultValue = _$count;
}


public void setCount(int $1)
_$count = $1;

public void pageDetached(org.apache.tapestry.event.PageEvent $1)
{
  _$index = _$index$defaultValue;
  _$count = _$count$defaultValue;
}


public int getIndex()
return _$index;

public void setExceptions(org.apache.tapestry.util.exception.ExceptionDescription[] $1)
_$exceptions = $1;

]

It's not Java syntax, and it's not even quite Javassist syntax ... but it certainly identifies what the new class is all about and that's what counts!

Tuesday, November 02, 2004

Back from the polls

It's 8:00am eastern time and I'm back from the polls. I think it is vitally important that everyone votes today (not just this election, but every election). I hear a lot of grousing about how unlikeable either candidate is, but that's just a smokescreen for neglect and laziness. Voting isn't just a right and a privilege, it is a responsibility. The more people vote, the more that special interests' power gets diluted, and that is good for democracy, good for America.

Saturday, October 23, 2004

Upgraded HiveMind to Forrest 0.6

The Forrest team recently announced Forrest 0.6. When I first switched HiveMind away from Maven, a key concern was documentation, and Forrest provides much more and better functionality than Maven.

However, like Maven, with Forrest you venture forth from the narrow blessed path at your own risk. I had much trouble getting HiveMind's documentation to build to my liking under Forrest. The tool is complex, with many, many, moving parts (including Cocoon !) and will often create wrong output, rather than report an error.

In 0.5, I discovered that I had to include <index href="index.html"/> in my site.xml file to get the tabbed navigation views to work properly.

The upgrade to 0.6 is supposed to be simple but I found a number of problems:

  • The imported Ant script doesn't work (details in the forrest-user mailing list). Even if it did, it would pollute my project with many additional targets. My workaround was to re-invoke Ant using the patched Ant libraries shipped with Forrest (!)
  • Had to completely rebuild the skinconf.xml file
  • cli.xconf had to be changed, and it was necessary to create a project.configfile entry in my forrest.properties file
  • Forrest 0.6 kept emitting the exact same file as index.html at each level of my project. This was not only the wrong information for the hivemind, hivemind-lib and hivemind-examples folders, but didn't render because relative paths to the style sheets were broken. The only workaround I found, and this is very ugly, is to change all the document hrefs in my site.xml to use complete paths to each file (i.e., <index href="hivemind/index.html"/>).
  • I've been getting various errors related to wholesite.html and wholesite.pdf, so I turned that off.

All told, I spent quite a few hours getting this working to this level, most of it the kind of blind thrashing that's necessary when tools don't validate properly, or report problems usefully. Many times I strongly considered working backwards to Forrest 0.5.

The results so far are a cut above what we had before. The smaller fonts are good (as is the JavaScript-based font-size control). There are a number of layout glitches (at least under Firefox) related to long entries in the left-side menu. I wish I had more control over what menu items start expanded (maybe I do have that, but Forrest documentation is paradoxically pretty poor and I've yet to find it).

Forrest 0.6 also gives you more control over style, by making it easy to integrate some custom CSS styles into the default stylesheet (this is part of skinconf.xml) and allowing all elements to accept a class attribute.

Overall, I'd give Forrest a C+ as a grade; the final results once working are a good B+/A- ... but the process for getting there (especially when factoring in the pain of an upgrade) has to be considered. The cost of adoption can be extraordinarily high, especially if you have an existing project and existing documentation.

Fortunately, unlike the Maven team, the Forrest team recognizes this at least as far as their version numbers; 0.6 indicates that it is still an alpha or beta release and they know they have some distance to go before they can call it a true product.

[FIXED] Abandoned by Google Mail

It's funny how quickly you can become dependent on a service ... and if it's a free service, with no support, that can be a problem. In my case, it's Google Mail, which no longer lets me log in (it hangs for a while with the "Loading ..." message, then pops up a window about trying again later).

I've let myself get dependent on GMail for nearly all of my Tapestry and HiveMind correspondence but there seems to be no support outside of this support group. Is it moderated? My posting about this problem hasn't shown up. I'm not the only one with this problem, but since Google Mail doesn't have any accessible support, I'm a bit hosed!

If and when I get this resolved, I may rethink my usage of GMail. It's fast, convenient, and searchable ... but none of that is helpful if you can't depend on it!

Updated 10/26: And GMail is back again. I added a bug to their support system (only accessible once logged in) complaining that ... their bug system is only accessible once logged in!

Thursday, October 21, 2004

Speaking at NoFluffJustStuff this weekend

I'll be speaking at the Boston No Fluff Just Stuff this weekend (Oct. 22 - 24th). My three sessions (Tapestry forms, Tapestry components, HiveMind) are all on Sunday, but I'll be around the entire weekend. Jay has been experimenting with alternatives to the expert panel, such as "Birds Of A Feather" or other smaller, more focused interactions, which I like and am more comfortable with.

Tuesday, October 05, 2004

Now that's handy

Clover has always been useful (and Cenqua has always been generous with licenses for open source projects), but I finally got around to installing their Eclipse plugin. It's not perfect, but it is simple and it does work. My old procedure was to run my tests using Ant to build the code coverage report, then switch back and forth between a web browser and my source code. This is much, much easier (though the markers for lines that have not executed are a bit too subtle ... I'd prefer a change in background color, as with the HTML report).

Meanwhile, it's time to fill in some of the gaps in my code coverage test suite!

Monday, October 04, 2004

Worst Eclipse Crash --- Ever

Somehow, after years of flawless service, my Eclipse workspace just self-destructed. Don't know what caused it, it just freaked out while I was switching between projects. My .log file says:

!SESSION Oct 04, 2004 18:44:07.894 ---------------------------------------------
eclipse.buildId=I200406251208
java.version=1.4.1_01
java.vendor=Sun Microsystems Inc.
BootLoader constants: OS=win32, ARCH=x86, WS=win32, NL=en_US

!ENTRY org.eclipse.osgi Oct 04, 2004 18:44:07.894
!MESSAGE An error occured while automatically activating bundle org.eclipse.core.resources (27).
!STACK 0
org.osgi.framework.BundleException: Exception in org.eclipse.core.internal.compatibility.PluginActivator.start() of bun
        at org.eclipse.osgi.framework.internal.core.BundleContextImpl.startActivator(BundleContextImpl.java:975)
        at org.eclipse.osgi.framework.internal.core.BundleContextImpl.start(BundleContextImpl.java:937)
        at org.eclipse.osgi.framework.internal.core.BundleHost.startWorker(BundleHost.java:421)
        at org.eclipse.osgi.framework.internal.core.AbstractBundle.start(AbstractBundle.java:293)
        at org.eclipse.core.runtime.adaptor.EclipseClassLoader.findLocalClass(EclipseClassLoader.java:110)
        at org.eclipse.osgi.framework.internal.core.BundleLoader.findLocalClass(BundleLoader.java:371)
        at org.eclipse.osgi.framework.internal.core.BundleLoader.requireClass(BundleLoader.java:336)
        at org.eclipse.osgi.framework.internal.core.BundleLoader.findRequiredClass(BundleLoader.java:914)
        at org.eclipse.osgi.framework.internal.core.BundleLoader.findClass(BundleLoader.java:399)
        at org.eclipse.osgi.framework.adaptor.core.AbstractClassLoader.loadClass(AbstractClassLoader.java:93)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:255)
        at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:315)
        at org.eclipse.ui.internal.ide.model.WorkbenchAdapterFactory.(WorkbenchAdapterFactory.java:26)
        at org.eclipse.ui.internal.ide.model.WorkbenchAdapterBuilder.registerAdapters(WorkbenchAdapterBuilder.java:33)
        at org.eclipse.ui.internal.ide.IDEWorkbenchAdvisor.initialize(IDEWorkbenchAdvisor.java:155)
        at org.eclipse.ui.application.WorkbenchAdvisor.internalBasicInitialize(WorkbenchAdvisor.java:165)
        at org.eclipse.ui.internal.Workbench.init(Workbench.java:789)
        at org.eclipse.ui.internal.Workbench.runUI(Workbench.java:1325)
        at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:254)
        at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:141)
        at org.eclipse.ui.internal.ide.IDEApplication.run(IDEApplication.java:96)
        at org.eclipse.core.internal.runtime.PlatformActivator$1.run(PlatformActivator.java:335)
        at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:273)
        at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:129)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:324)
        at org.eclipse.core.launcher.Main.basicRun(Main.java:183)
        at org.eclipse.core.launcher.Main.run(Main.java:644)
        at org.eclipse.core.launcher.Main.main(Main.java:628)
Caused by: org.eclipse.core.internal.resources.ResourceException: The resource tree is locked for modifications.
        at org.eclipse.core.internal.resources.WorkManager.checkIn(WorkManager.java:93)
        at org.eclipse.core.internal.resources.Workspace.prepareOperation(Workspace.java:1628)
        at org.eclipse.core.internal.resources.Workspace.close(Workspace.java:298)
        at org.eclipse.core.resources.ResourcesPlugin.shutdown(ResourcesPlugin.java:324)
        at org.eclipse.core.internal.compatibility.PluginActivator.start(PluginActivator.java:52)
        at org.eclipse.osgi.framework.internal.core.BundleContextImpl$1.run(BundleContextImpl.java:958)
        at java.security.AccessController.doPrivileged(Native Method)
        at org.eclipse.osgi.framework.internal.core.BundleContextImpl.startActivator(BundleContextImpl.java:954)
        ... 30 more
Root exception:
org.eclipse.core.internal.resources.ResourceException: The resource tree is locked for modifications.
        at org.eclipse.core.internal.resources.WorkManager.checkIn(WorkManager.java:93)
        at org.eclipse.core.internal.resources.Workspace.prepareOperation(Workspace.java:1628)
        at org.eclipse.core.internal.resources.Workspace.close(Workspace.java:298)
        at org.eclipse.core.resources.ResourcesPlugin.shutdown(ResourcesPlugin.java:324)
        at org.eclipse.core.internal.compatibility.PluginActivator.start(PluginActivator.java:52)
        at org.eclipse.osgi.framework.internal.core.BundleContextImpl$1.run(BundleContextImpl.java:958)
        at java.security.AccessController.doPrivileged(Native Method)
        at org.eclipse.osgi.framework.internal.core.BundleContextImpl.startActivator(BundleContextImpl.java:954)
        at org.eclipse.osgi.framework.internal.core.BundleContextImpl.start(BundleContextImpl.java:937)
        at org.eclipse.osgi.framework.internal.core.BundleHost.startWorker(BundleHost.java:421)
        at org.eclipse.osgi.framework.internal.core.AbstractBundle.start(AbstractBundle.java:293)
        at org.eclipse.core.runtime.adaptor.EclipseClassLoader.findLocalClass(EclipseClassLoader.java:110)
        at org.eclipse.osgi.framework.internal.core.BundleLoader.findLocalClass(BundleLoader.java:371)
        at org.eclipse.osgi.framework.internal.core.BundleLoader.requireClass(BundleLoader.java:336)
        at org.eclipse.osgi.framework.internal.core.BundleLoader.findRequiredClass(BundleLoader.java:914)
        at org.eclipse.osgi.framework.internal.core.BundleLoader.findClass(BundleLoader.java:399)
        at org.eclipse.osgi.framework.adaptor.core.AbstractClassLoader.loadClass(AbstractClassLoader.java:93)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:255)
        at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:315)
        at org.eclipse.ui.internal.ide.model.WorkbenchAdapterFactory.(WorkbenchAdapterFactory.java:26)
        at org.eclipse.ui.internal.ide.model.WorkbenchAdapterBuilder.registerAdapters(WorkbenchAdapterBuilder.java:33)
        at org.eclipse.ui.internal.ide.IDEWorkbenchAdvisor.initialize(IDEWorkbenchAdvisor.java:155)
        at org.eclipse.ui.application.WorkbenchAdvisor.internalBasicInitialize(WorkbenchAdvisor.java:165)
        at org.eclipse.ui.internal.Workbench.init(Workbench.java:789)
        at org.eclipse.ui.internal.Workbench.runUI(Workbench.java:1325)
        at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:254)
        at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:141)
        at org.eclipse.ui.internal.ide.IDEApplication.run(IDEApplication.java:96)
        at org.eclipse.core.internal.runtime.PlatformActivator$1.run(PlatformActivator.java:335)
        at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:273)
        at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:129)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:324)
        at org.eclipse.core.launcher.Main.basicRun(Main.java:183)
        at org.eclipse.core.launcher.Main.run(Main.java:644)
        at org.eclipse.core.launcher.Main.main(Main.java:628)

!ENTRY org.eclipse.osgi Oct 04, 2004 18:44:07.924
!MESSAGE Application error
!STACK 1
java.lang.NoClassDefFoundError: org/eclipse/core/resources/IProject
        at org.eclipse.ui.internal.ide.model.WorkbenchAdapterFactory.(WorkbenchAdapterFactory.java:26)
        at org.eclipse.ui.internal.ide.model.WorkbenchAdapterBuilder.registerAdapters(WorkbenchAdapterBuilder.java:33)
        at org.eclipse.ui.internal.ide.IDEWorkbenchAdvisor.initialize(IDEWorkbenchAdvisor.java:155)
        at org.eclipse.ui.application.WorkbenchAdvisor.internalBasicInitialize(WorkbenchAdvisor.java:165)
        at org.eclipse.ui.internal.Workbench.init(Workbench.java:789)
        at org.eclipse.ui.internal.Workbench.runUI(Workbench.java:1325)
        at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:254)
        at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:141)
        at org.eclipse.ui.internal.ide.IDEApplication.run(IDEApplication.java:96)
        at org.eclipse.core.internal.runtime.PlatformActivator$1.run(PlatformActivator.java:335)
        at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:273)
        at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:129)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:324)
        at org.eclipse.core.launcher.Main.basicRun(Main.java:183)
        at org.eclipse.core.launcher.Main.run(Main.java:644)
        at org.eclipse.core.launcher.Main.main(Main.java:628)

And that's no help.

After an hour or more of careful experimentation, I determined that deleting the file C:\workspace\.metadata\.plugins\org.eclipse.core.resources\.snap lets Eclipse start back up ... but loses all my projects. This includes my current project (no uncommitted changes) as well as my jakarta-tapestry project (yes, uncommitted changes!).

Sometimes the computer knows when you are having a bad day, and decides to make it worse!

Friday, October 01, 2004

Royalty Check

Wow! Tapestry In Action has sold enough copies over the last six months to not only pay back the original advance Manning gave me, but also earn me a modest royalty check on top of it. I'd forgotten that could even happen. With some big announcements in the works for Tapestry and for myself, all hopefully feeding into book sales, I bet this isn't the last check I see from them!

Competition is good, right?

Just noticed on the Google ad sidebar of my own blog that ArcMind (Rick Hightower's consulting/training company) is offering Tapestry training. The outline is exactly what I'd like to be presenting ... unfortunately, I've been so bottled up working on a large Tapestry project that I've made no progress on my own equivalent course.

I've talked with Rick a few times and I've worked with Drew Davidson (who's also associated with ArcMind, and is the creator of OGNL). These are guys who have used Struts, JSF and Tapestry in anger (that is, as part of real production work). I've been encouraging Drew to write a fair comparison of Tapestry and JSF; I think it would be a valuable document for JSF and Tapestry users alike, as well as providing the Tapestry team with even more direction.

Hopefully, ArcMind's offering will be a success, and will grow the Tapestry community by more than it takes away from me :-).

Using Tapestry's Table component

Tapestry's Table component is very powerful and very flexible. It can draw information from a variety of sources ... everything from in-memory collections to an external database via SQL, and everything in between. The component can handle column sorting automatically, can work inside a Form, and can be broken apart into smaller pieces so that you can provide your own look and feel for parts of it (such as the navigation control for moving between pages of results). You can configure it using a short string, or provide any of several modeling objects to control pagination, data access, column content and order, and so forth. Whew!

All that power and flexibility makes it hard to pick up; John Reynolds has taken up the gauntlet and come up with a kind of interactive tutorial and guide to this important Tapestry component.

Maybe he'll follow up with a guide to the Tree component? We can only hope!

Tuesday, September 28, 2004

[Off Topic] Roller Coaster Tycoon 3 demo

I downloaded the Roller Coaster Tycoon 3 demo this morning. I've always been a big fan of the original Roller Coaster Tycoon (RC1), even though I never came close to finishing it. I just like building the parks and letting the little people run through it.

I've always thought it was marvelous that a game that was fully creative, non-violent (well, except for dropping the peeps in the pond) and intricate was so popular.

It looks like they've retained the essential nature of the original. You are still building your park on a grid, but your eye, the camera, can roam much more freely ... rotating and zooming to your heart's content.

The ability to get a peep's eye view of any ride, not just roller coasters, is wonderful.

The demo is a major power hog; my laptop (a Dell Inspiron 8200, 1GB of RAM, 2GHz P4, GeForce 4 440 Go) is somewhat taxed ... it will run full screen at 800x600, but only just. I suspect larger parks will be a problem. This game is going to cost me a lot of cash for a new system.

I haven't found where you get to review all your peeps' thoughts (I hope that's still in there). They've also made the concession stands overly complex (you get to define a whole menu, control how many pickles are put on each burger, and a bunch of other stuff nobody will ever do).

The biggest win is that you can pause the game to do construction. That was always my biggest beef with RC1 ... constant interruptions when building your big coaster.

The peeps are much more involved; different sexes and ages. There's some kind of group (or family) system.

Parks also simulate day and night as well as season; the park looks terrific at night, and there are fireworks shows!

Alas, back to work for now!

Wednesday, September 22, 2004

Feedback++: Going beyond Line Precise Error Reporting

This I like ... for Laurent Etiemble, even line precise error reporting is not enough, so he has made some improvements to display the actual text that is in error. Details, and a screenshot, are in his blog.

This is certainly something that could show up in Tapestry 3.1 (either by duplicating Laurent's work, or getting him to properly donate it). Additionally, Geoff has talked about some kind of hook that would make the stack trace clickable links ... clicking the links would open the correct file inside Eclipse.

HiveMind 1.0 final tomorrow!

The votes have been cast and, predictably, the HiveMind 1.0 final release has been approved. The web site has been updated, and the Maven repository has been seeded with the 1.0 final jars. The distributions have been uploaded to Apache, and the only thing left is to wait about 24 hours for the Apache mirrors to pick up the files. Tomorrow morning I'll update the Jakarta web site and submit the news item to TheServerSide.

And still, so much more to do in HiveMind 1.1 and in Tapestry 3.1.

Thursday, September 16, 2004

Tapestry 3.1 and backwards compatibility

I've gotten a couple of concerned notes about backwards compatibility from Tapestry 3.1 back to Tapestry 3.0. People are concerned they may have to throw away some or all of their 3.0 applications when they upgrade to 3.1.

Tapestry has historically had a fuzzy distinction between APIs (application programming interfaces, intended for exposure to, and use by, end-user developers) and SPIs (service provider interfaces, or at the very least, internal interfaces that are expected to change between releases). The interfaces in the org.apache.tapestry package are generally APIs, and the interfaces and classes in other packages are more often SPIs.

Despite the large amount of refactoring so far, nothing I'd consider an API has changed, though some SPIs have changed.

I'm working to make the upgrade path from 3.0 to 3.1 pretty painless for 90%+ of users.

  • It is possible that the <extension> element will no longer be honored (you'll get a runtime warning).
  • It is likely that the <service> element will no longer be honored.
  • It is possible that subclassing BaseEngine will no longer be allowed. Certainly the AbstractEngine and BaseEngine classes have changed significantly, and will likely be merged together.
  • Parameter directions from 3.0 DTD specs will be ignored; parameters will always be implemented using the new smart-caching connected properties code (yet to be written).

I think there will be a reasonable upgrade path from 3.0 to 3.1. For simple applications and pages, nothing will be necessary. For a minority of applications, there will be some localized changes.

It would be very nice to more cleanly separate internal interfaces and classes from the public APIs. However, doing this properly will certainly break backwards compatibility.

Part of the problem is the use of inheritance in Tapestry. In Tapestry 3.x, your pages and components must subclass Tapestry base classes. This exposes many internal implementation details of those classes, dragging things that should be firmly in the "internals" category out into the open.

Part of my vision for Tapestry 4.0 (besides stripping the leading "I" off any remaining interfaces) is to break the inheritance requirement. In that release, your pages and components will be true POJOs. This will be an even better separation than is possible today. Something very much like today's IPage/IComponent hierarchy will exist ... but that will largely be internal to Tapestry. The classes you write will be peers of the actual components. In general, your code will not access the component directly, but will instead have information, properties and services from the component injected into your classes.
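
To make that concrete, a page class in that future world might look something like this. This is a purely hypothetical sketch of the vision, not a real Tapestry API; every name in it is made up:

    // Hypothetical sketch only: what a POJO page might look like once
    // inheritance is no longer required. None of these names are real.
    interface AccountService
    {
        void delete(long accountId);
    }

    public class ShowAccountPage
    {
        private AccountService _accountService; // injected by the framework
        private long _accountId;                // pushed in from a page property

        public void setAccountService(AccountService accountService)
        {
            _accountService = accountService;
        }

        public void setAccountId(long accountId)
        {
            _accountId = accountId;
        }

        // An ordinary method acting as a listener; no base class, no casts.
        public void deleteAccount()
        {
            _accountService.delete(_accountId);
        }
    }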

I'm very excited by this potential, but first things first ... have to get 3.1 up and running and not leave all the 3.0 users behind!

Wednesday, September 15, 2004

Whew! Full speed ahead!

The vast refactoring of Tapestry continues. Today I completed my first pass. All the major subsystems of Tapestry (such as ISpecificationSource, ITemplateSource, DataSqueezer, IScriptSource, etc.) are now HiveMind services. Yes ... everything still runs pretty much identically, but under the covers the code is evolving into smaller, simpler classes.

In addition, the first real changes (still not visible, though) are now in place. There is now a collection of BindingFactory implementations, one for each type of binding (OGNL expressions, message keys, and literals). There's a BindingSource service that knows how to take a string (from an HTML attribute value) and find the correct BindingFactory, based on prefix, to generate an IBinding instance from it.

This is now being used for all bindings created by the SpecificationSource and the ComponentTemplateLoader. It's driven by a configuration, which means that it will be super easy to add additional binding prefixes (even application-specific ones). This is a first, approachable step along the path to what will be Tapestry 3.1.
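
Roughly, the dispatch on prefix looks like this. This is just a simplified sketch of the idea; the real interfaces take components, locations and so forth, and these signatures are not the actual Tapestry 3.1 ones:

    import java.util.HashMap;
    import java.util.Map;

    // Simplified sketch of prefix dispatch; not the actual Tapestry 3.1 interfaces.
    interface BindingFactory
    {
        Object createBinding(String path);
    }

    class BindingSource
    {
        private final Map _factoriesByPrefix = new HashMap(); // prefix -> BindingFactory
        private final BindingFactory _defaultFactory;

        BindingSource(BindingFactory defaultFactory)
        {
            _defaultFactory = defaultFactory;
        }

        void addFactory(String prefix, BindingFactory factory)
        {
            _factoriesByPrefix.put(prefix, factory);
        }

        Object createBinding(String attributeValue)
        {
            int colonx = attributeValue.indexOf(':');

            if (colonx > 0)
            {
                BindingFactory factory =
                    (BindingFactory) _factoriesByPrefix.get(attributeValue.substring(0, colonx));

                if (factory != null)
                    return factory.createBinding(attributeValue.substring(colonx + 1));
            }

            // No recognized prefix: fall back to the default factory.
            return _defaultFactory.createBinding(attributeValue);
        }
    }

The real version receives its map of prefix-to-factory mappings as a HiveMind configuration contribution, which is what makes new prefixes so cheap to add.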

It's a great, iterative process. New code I create is truly testable ... I hope to start retiring many of the integration tests (the ones driven by XML script files) since those are pretty darn slow to execute compared to focused unit tests.

I had one great headache; I converted a utility class (BaseComponentTemplateLoader) into a service (ComponentTemplateLoader). BaseComponent used to create one instance of this every time a component loaded its template. I converted this to a threaded service, because it had some internal state.

After many headaches, I realized that BaseComponent was doing it right ... the state wasn't merely threaded, it was reentrant. Loading a component will often cause a new component to be created ... and if that new component has a template, the process goes recursive. I didn't realize that, and started getting bizarre, impossible results. Eventually, I realized what was going on ... and my final solution is very similar to where I started. ComponentTemplateLoaderImpl is a normal service (not threaded), but delegates just about all of its behavior to ComponentTemplateLoaderLogic, which is created (as before) for each component and discarded immediately.
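
In outline, that final shape is something like the following (a sketch with simplified, hypothetical signatures; the real classes deal in component and specification objects):

    // Sketch: the singleton service stays stateless; all mutable state lives in a
    // logic object created per invocation, so recursive calls are safe.
    class ComponentTemplateLoaderImpl
    {
        public void loadTemplate(Object component)
        {
            // A fresh logic instance per call, discarded when the call completes.
            new ComponentTemplateLoaderLogic().loadTemplate(component);
        }
    }

    class ComponentTemplateLoaderLogic
    {
        // Per-operation state goes here: the component being loaded, the queue of
        // embedded components still to process, and so on.

        void loadTemplate(Object component)
        {
            // Parsing the template may load embedded components, which may recursively
            // call back into ComponentTemplateLoaderImpl ... and each recursive call
            // gets its own logic instance, so nothing is trampled.
        }
    }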

This makes me think that HiveMind needs a reentrant service model, where each method invocation is handled by a fresh implementation instance. I think Spring has something similar with its "prototype" beans, where each request for the bean yields a new instance. That would be easy enough to do in HiveMind.

It has been very slick seeing the mix of service implementations; most of the Tapestry services are ordinary singletons. A few, which hold client-specific state for the duration of a request, are threaded. However, when these are combined, the singleton and threaded services don't have to know anything about each other when they invoke each other ... it's just objects and interfaces, and the mechanics of locating a thread-specific instance to process a particular service method are entirely hidden.

So, Tapestry is getting more powerful and simpler at the same time. That's hard to beat. Not sure when I'll have time for more or what I'll hit next ... I'm following a kind of round-robin approach, where even the code I've created recently is ready to be refactored into smaller and smaller pieces. Probably next up is to start thinking about engine services. There's also refactoring around OGNL 3.0. Or getting ready for the major rework for modularity (splitting the application across multiple folders). It's all good!

Tuesday, September 07, 2004

[Off Topic] VoteOrNot.org

Everyone knows it's important to vote. The stakes are extraordinarily high for the current presidential election ... but even so, voter turnout in this country is abysmally low.

I'm proud to say that I've voted in every presidential election since I was 18, including most primaries. I've also voted in every (I think) state-wide election. Too many people, though, let themselves get sidetracked.

How about if someone offered you $100,000 to vote? Oh, and $100,000 to a friend who got you to vote? That's what VoteOrNot.org is doing. Go to their site, promise to vote, and possibly win after the election.

Saturday, September 04, 2004

NEJUG - Erik Hatcher on Lucene

Erik Hatcher will be in town next week to present on the Lucene text searching/indexing tool to the NEJUG (North East Java User's Group). This is occurring at 5:30pm on September 16th. If you can snag a reservation, you should ... Erik's a great presenter and Lucene is a great (and largely unrecognized) technology.

Progress on all fronts

Tapestry 3.1 is really getting into gear now; I've been working my way forward from the servlet, converting everything into HiveMind services, pipelines and configurations. I've also said sayonara to revision histories and been moving source code into "standard" directories (that is, src/java). I can see switching to Subversion for source control someday, just because it does a proper job of moves and renames -- history on files is maintained even when the file is renamed or moved to a new directory. It seems like CVS is pretty antagonistic towards refactoring, especially in the context of the XP mantra of fearless, merciless, constant refactoring.

Tapestry's bug list is now under JIRA, which is so much easier, faster and more sensible than Bugzilla.

I've taken a pause from working on the Tapestry code to work on the Tapestry documentation. The awkward DocBook stuff is being phased out ... all the documentation will be converted to Forrest format, which means better and more consistent navigation. I realized that unless I had the Tapestry 3.0 documentation in a ready to edit form, I would not update it as I was making the real changes in the 3.1 code. As I found during Tapestry 3.0's too-long development cycle, if you don't keep the docs up to date as you work, your problems just multiply. With HiveMind, I've worked diligently to keep docs up to date at all times, and I want to start on the right footing for Tapestry 3.1 as well.

James Carman looks likely to be the newest HiveMind committer; the vote is in progress, but opposition is unlikely. Meanwhile, I've just fixed a few nasty little bugs in HiveMind, in prep for release candidate 2. Nasty, of course, always means class loaders and/or thread concurrency. Now, throw dynamic class fabrication into the mix.

And I signed a contract for three months of work, possibly more (details to come). So the good news is that I have money coming in and am doing Tapestry work full time. The bad news is I don't have huge amounts of time to do things like build out the Tapestry lab course I've been dreaming of, or do the Hibernate research I've been hankering for (until and unless that becomes part of my contract). I've actually been full time with this client for almost two months; I've been letting others pick up the slack in terms of answering questions on the mailing lists while I do more and more heavy lifting on the code front (especially on the Tapestry side).

Much more urgent is the need to get out of the house and hit the beach! Summer's ending and I'm still pretty pasty-white. Time to break out the boogie boards and hit the waves.

Saturday, August 28, 2004

ScrapeSDL -- Convert SDL back to XML

With the release of HiveMind 1.0-rc-1, I've managed to "strand" a number of HiveMind users who had adopted SDL (Simple Data Language). SDL is no longer supported.

ScrapeSDL.zip is an Eclipse project containing code that can convert an SDL file into an XML file. It's what I used to convert all the SDL in HiveMind itself back into XML (though I did have to do a bit of manual reformatting).

Tapestry and HiveMind at ApacheCon

Last year, I did a short, 60-minute session on Tapestry at ApacheCon ... and I really think that launched things for me. I got a lot of good feedback on Tapestry and on my speaking style, and it's opened a number of doors for me.

This year at ApacheCon, I'll be doing a long session on Tapestry on Sunday that will cover most of the material in my two normal Tapestry sessions (form building and component building). Additionally, I'll have a HiveMind session late on Wednesday. Details may someday be posted on the ApacheCon schedule.

Erik Hatcher, Matt Raible, and many others will be there. It was a good time last year (of course, I did bring my wife, Suzanne).

I'm spending too much time in Vegas, for someone who doesn't gamble. I was there in April for TheServerSide Symposium, I'll be there in October for a friend's birthday, and back in November for ApacheCon. Thankfully, I have my rolling laptop case (what a difference that makes when your laptop weighs in at about 15 pounds!).

Wednesday, August 25, 2004

HiveMind 1.0-rc-1

Following a successful vote (notable only for the absence of a vote from Knut), I'm building out HiveMind 1.0-rc-1 at this very moment. That's Release Candidate 1 ... and I expect to be rolling to a final 1.0 release very quickly.

There are a boat-load of plans for 1.1 and I hope that, unlike Tapestry's lethargic pace, we can get a 1.1 out in just a matter of months or even weeks.

"Invite 6 friends to GMail" -- It's open season!

I've built up six GMail invites ... drop me an email at hlship AT gmail DOT com if you'd like one. First come, first served.

Update: Sorry, all gone!

Monday, August 16, 2004

Dependency Injection -- the mirror of Garbage Collection

A ways back, the idea of a garbage collected language was tantamount to heresy. "It'll be slow." "It isn't necessary." "How hard is malloc()?" "Reference counting is all you need." etc.

Nowadays, garbage collection is the norm and there's no going back. By freeing us of concerns about allocating and freeing memory, new development techniques became possible. For me, personally, it was very liberating ... I've always coded in terms of small modules working together. Fifteen years ago I was writing almost-object-oriented code in a heavy duty procedural language (PL/1) ... it was just much, much harder.

Garbage collection liberates me to solve problems in my way ... using lots of small, well understood objects working together. When you have to be concerned about every object creation and destruction, you get miserly about them and before you know it, you have these big, monolithic objects that break all the now accepted rules about separation of concerns. You saw this inside the otherwise stellar NeXTSTEP libraries ... there were quite a few kitchen-sink objects and your main extension point was to subclass an existing class.

I used to be dismayed that the UML sequence diagrams did a really poor job of illustrating inheritance. It was really hard to properly diagram an object invoking a method in itself that's implemented in a super-class. Nowadays, I realize that the Amigos were ahead of the curve: aggregation trumps inheritance. And don't think I haven't been thinking about this in terms of Tapestry, which mandates inheritance (you must subclass AbstractComponent, BaseComponent or BasePage). I have been, and some future release of Tapestry will break that requirement.

But the story doesn't end with garbage collection (even if the individual objects do). Garbage collection is the last stage of an object's life cycle, but there's just as much going on at the start of the object's life cycle. That's why component frameworks and dependency injection containers (such as HiveMind, Spring, Picocontainer and Avalon) are so important.

"Why do we need an external descriptor?" "It just isn't necessary." "How hard is it to new Foo()?" Sound familiar?

Once you start thinking in terms of large numbers of objects, and a whole lot of just-in-time object creation and configuration, the question of how to create a new object doesn't change (that's what new is for) ... but the questions when and who become difficult to tackle. Especially when the when is very dynamic, due to just-in-time instantiation, and the who is unknown, because there are so many places a particular object may be used.

HiveMind didn't spring up out of thin air; it was based on my seeing an awful lot of code, production code, that was either a) inefficient, b) unnecessary, or c) incorrect (often due to errors related to class loaders and multi-threading). Too often it has been option d) all of the above.

Dependency injection acknowledges an initial state for objects ... the construction state, where objects are created and configured before being put into production. With HiveMind, you get the benefits of simplicity: your services are fundamentally POJOs. But you also get all the life cycle benefits: just-in-time, thread safe construction. Any reasonably sized system is going to need those benefits ... they are a cornerstone of providing the efficiency and robustness your clients and customers require. If you don't get them from a container such as HiveMind, you'll be writing that code yourself. Again and again, introducing bugs all the way.

You're willing to let something else manage the death of your objects because it, the Garbage Collector, can do a better job than you can. Likewise, you should accept that a container, whose only responsibility is to construct and configure your objects, will do a better job of it than your own code. Embrace that fact ... and get back to interesting work!

Monday, August 09, 2004

Groovy and Tapestry: Followup

As a follow up to my earlier post about combining Tapestry and Groovy, the folks involved (Michael Henderson and Richard Hensley) have combined their implementations and set up a SourceForge project.

It looks pretty sweet; you create a script, and static methods in the script can be listener methods. Also, by using the correct method names (say, pageBeginRender), your script's static methods will be invoked at the right times.

I think Hani had a diatribe not long ago about flexibility. However, one of the core features of Tapestry is how it is divided into subsystems, precisely to provide this kind of flexibility ... such as changing the very nature of the framework! For the vast majority of users, that flexibility is masked as unnecessary complexity (which is probably the root of Hani's blog-arrhea). But often, that complexity is a reflection of a far-reaching vision. Tapestry has always supported the notion that some pages or components would not be files in the WAR, but would be located from an external source, or even created dynamically at runtime. The bridge code that allows Groovy scripts to act like pages is one offshoot of that vision.

Friday, August 06, 2004

Atlanta Java User's Group Appearance

I'll be demonstrating Tapestry and Spindle on Aug. 17th at the Atlanta Java User's Group. Thanks to Jay Zimmerman of No Fluff Just Stuff for setting it up!

HiveMind happenings

So, by developer vote, SDL (Simple Data Language) has been removed, and the ocean may return to a gentle simmer. I learned a bit about JavaCC with SDL and was excited by it, but I'd rather concentrate on what HiveMind does really well, and not be sidetracked. Instead, we're looking for ways to further reduce the size and complexity of HiveMind's XML module deployment descriptors, and reduce the amount of code in general.

Something else I've been working on is the ServicePropertyFactory. The idea here is that sometimes service A needs a property from service B to do its work. One approach would be to use service-property:B:foo to inject property foo from service B into service A.

But what if the value can change over time? In Tapestry, examples of this will be access to the HttpServletRequest, HttpServletResponse and HttpSession objects for the current user and current thread. These objects are specific to one thread and change on every request.

Your code could hold a reference to service B and get the property from B every time it is needed. But that's more code, more code paths, more Law of Demeter violations.

Wouldn't it be nice if that property of B could be bound into service A? That's what ServicePropertyFactory does. It creates a service implementation, a proxy. The proxy keeps a reference to service B and, on each method invocation, gets the property from B and re-invokes the method on the property. This is service C, the binding between A and B, and is what gets injected into A.
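
The concept is easy to sketch with a JDK dynamic proxy. This is just an illustration of the idea, not HiveMind's actual implementation (which fabricates a class at runtime rather than reflecting on every call); the builder class and its signature here are made up:

    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.Method;
    import java.lang.reflect.Proxy;

    // Sketch of the service-property proxy idea; not HiveMind's real code.
    class ServicePropertyProxyBuilder
    {
        // Returns a proxy implementing serviceInterface; every invocation re-reads
        // the named property from the source service and delegates to its value.
        static Object build(final Object sourceService, final String propertyName,
                Class serviceInterface)
        {
            InvocationHandler handler = new InvocationHandler()
            {
                public Object invoke(Object proxy, Method method, Object[] args)
                        throws Throwable
                {
                    // Fetch the current (possibly thread-specific) property value ...
                    String getterName = "get"
                            + Character.toUpperCase(propertyName.charAt(0))
                            + propertyName.substring(1);
                    Method getter = sourceService.getClass().getMethod(getterName, new Class[0]);
                    Object target = getter.invoke(sourceService, new Object[0]);

                    // ... and re-invoke the original method on that value.
                    return method.invoke(target, args);
                }
            };

            return Proxy.newProxyInstance(serviceInterface.getClassLoader(),
                    new Class[] { serviceInterface }, handler);
        }
    }

In practice the getter would be looked up just once, since only the property's value changes from call to call, not its type.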

This allows the code for A to treat a property (say, the HttpServletRequest) as if it were unchanging when, in fact, it is constantly changing. There are pitfalls ... the cost of invoking methods on the proxy is higher than on an ordinary object (it has to go get the real value from the underlying service), and you wouldn't want service A to pull any information out of the property and store it, since that kind of data will likely not be valid after the current request.

What this does do is allow the more dynamic parts of the HiveMind application to be isolated in their own services and allows the rest of the code to stay simple.

Friday, July 30, 2004

Ant best practice: One Compile, One Directory, One JAR

Do yourself a favor. When you are laying out your source tree, follow this rule: One Compile, One Directory, One JAR. Just because Ant supports includes and excludes on its <javac> and <jar> tasks doesn't mean you should actually use them!

A proper source tree structure may have multiple source code roots, and will run javac multiple times -- but each execution will compile all files under a particular source directory into its own, individual classes directory. The contents of that directory (perhaps combined with some deployment descriptors or other resources) will form a JAR. If anything depends on that code, it should add the JAR (not the classes directory, and certainly not the source directory) to its compile time classpath.
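
In Ant terms, that discipline boils down to something like the following. This is only a sketch; the directory names and target names are placeholders for whatever your project actually uses:

    <!-- Sketch: one compile, one classes directory, one JAR per module. -->
    <target name="compile-framework">
        <mkdir dir="target/framework/classes"/>
        <javac srcdir="framework/src/java" destdir="target/framework/classes"/>
    </target>

    <target name="jar-framework" depends="compile-framework">
        <jar destfile="target/framework.jar" basedir="target/framework/classes"/>
    </target>

    <!-- Dependent code compiles against the JAR, never against the classes
         directory or the source directory. -->
    <target name="compile-webapp" depends="jar-framework">
        <mkdir dir="target/webapp/classes"/>
        <javac srcdir="webapp/src/java" destdir="target/webapp/classes">
            <classpath>
                <pathelement location="target/framework.jar"/>
            </classpath>
        </javac>
    </target>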

Why is the compile-and-split approach dangerous? It makes the build process and build scripts more complicated -- possibly lethally so. It ensures that the build process will not generate class files the same way (and into the same directories) as the IDE. It increases the likelihood that some classes will be duplicated (compiled multiple times to multiple directories and packaged into multiple JARs), which can easily lead to mysterious ClassCastExceptions at runtime.

A particularly important case involves deployment into an application server. Once you get into complex EAR deployments (with multiple EJB JARs and WARs), the class loader issues can quickly get out of control. By using multiple compiles, you can cleanly ensure that the libraries available at build time match the libraries available to the runtime classloader for that artifact (that EJB JAR or WAR). Without this simple bit of discipline and organization, you can lose big chunks of time to tracking down ClassNotFoundExceptions.

I'm always amazed when I see this kind of thing in otherwise sane code and build environments. I've also seen just how far wrong this approach can go ... such as one source tree, one compile, 40+ JAR files, 10,000+ lines of build scripts!!

.Net vs. Struts ... and the winner is Tapestry!

Just something I noticed on the Cardsharp on Software blog:

Tapestry from the Jakarta Project. Yes, it's not JSF. Yes it's "non-standard." Yes, it requires you to learn a new library and a new way of doing things. But it's better. Far less configuration than Struts, and what is there makes sense. Instead of a jumble of disconnected concepts, the whole library is built around an event-driven mental model that just makes sense. Things that would take me 5 lines of code in Struts take one or two in Tapestry. Imagine being able to tie an event handler in a Java class to a link or a button with no configuration file changes! Think about forwarding to a new page and passing objects to the "action" by setting properties on it instead of hucking everything into the request or session. Think about how much you like Tiles and imagine what it would be like if everything was a tile, but didn't require all the verbose <tiles:put> lines just to set parameters on your tiles. Think about all those times you'd wished you could put custom configuration information into your <ACTION> tags in the struts-config.xml (especially if you've tried to customize it using the <set-property> nonsense!). Think about how nice your life would be if your ActionForm and your Action were actually in the same class in a way that made sense, and think about how nice it'd be if the framework implemented all the getters and setters for you. Think about implementing complex interactions without jumping through all kinds of hoops to manage the state of your page.

And I'm only scratching the surface here.

Thursday, July 29, 2004

This Great Hacker Does Use Java

Contrary to Paul Graham's misguided treatise on Great Hackers, this great hacker (by his definition; I prefer the term "developer") absolutely does use Java and loves it. A you-can-pry-it-out-of-my-cold-dead-hands kind of love.

Side note: coming from an Objective-C background, I was initially very dismissive of Java myself. I still prefer many aspects of the Objective-C syntax, and Objective-C has killer runtime libraries ... but I've grown to love and rely upon Java's type safety and garbage collection. Both are prerequisites for solving problems in an object-oriented way: using large numbers of small, simple objects.

I feel there is a kind of insipid game out there I call "chase the gigahertz". Every time the language (COBOL, then C, then C++, then Java, now Ruby and friends) catches up to the available computing power and the available libraries are in a state where you can actually get something accomplished, there's a lemming-like urge to jump ship and start all over again in the latest and greatest ... with a less efficient, less optimized environment that pushes the performance barrier back up and out of reach.

Wait. Stop. Ask yourself "why?". As the differences between generations of languages become more and more subtle, you have to ask yourself ... what do I gain by switching? What cost do I pay? Am I throwing out the baby with the bathwater? Sure, you'll be incrementally more effective at doing the kinds of things that look good in demos and tutorials ... but then comes the moment you reach for your handy XML parser and it isn't there, or for a friendly GUI toolkit and you have to start from scratch. Database access. Command line parsing. Transaction management. Yes, analogues exist for all of these, but you'll be starting from scratch to learn their APIs, quirks, bugs and workarounds. Just how compelling is that nifty bit of syntax, those couple of characters you don't have to type, that convenient built-in function?

For me, Java hits a terrific sweet spot. Vast numbers of open-source libraries. Very competent language. Minimal amounts of stupid syntax. Of course, there are things I'd change ... but in my analysis, it's not about the language syntax, it's about how you use the language. I'm a framework guy; when I get annoyed at writing dumb, repetitive code, I build a framework to take care of that dumb, repetitive stuff for me. I write it once, test it, forget how it works. Tapestry saves me a lot of trouble when dealing with the ugly intricacies of web applications. HiveMind saves me a lot of trouble when building complex apps from lots of simple little pieces. I don't throw out years of effort and hope some new language will solve my problems for me ... I face up to my issues and fix them right here, right now, in Java, and be done with it.

Java is good enough. Sometimes "good enough" today is better than "ideal" tomorrow. I can get insanely great things done in Java today, and tomorrow, and the day after.

I'm not saying you should be complacent. Dave Thomas recommends learning a new language at least every year, to give you more experience, more insight into new ways of accomplishing things in your language of choice. I hope the Java language designers do the same, and give us some stuff we could use (such as continuations and closures). As a software craftsman, your technique should transcend any one programming language.

But back to Paul Graham; he's making that tired claim about productivity, as if productivity were simply a matter of characters typed. So typing productivity trumps everything else? As you can see, I don't think so. I think he's fallen prey to a bad analogy ... that because some great hackers chose to use scripting languages such as Perl, Python and Ruby, all productive developers should use those languages for all chores, great and small. He's decided that one language (or class of language) is universally correct, and he's using that as a litmus test for being a great hacker.

Whereas I believe in "the appropriate tool for the job": scripting languages for simple, scripting things; power languages, such as Java, for long-term, high-quality, maintainable development. Individuals can move mountains if they have the right tools. For the applications and frameworks I build, the right tools are Java, the Java runtime, and the innumerable Java libraries.