Tapestry Training -- From The Source

Let me help you get your team up to speed in Tapestry ... fast. Visit howardlewisship.com for details on training, mentoring and support!

Sunday, December 19, 2004

Groovy, Junit, Eclipse

So, during my trip to Javapolis, I hit that point where I had typed one parenthesis too many. You know the one; it involves class casts, the very ones the compiler could so easily do for us automatically. Mine typically look like:

    protected IPage newPage(String name)
    {
        MockControl control = newControl(IPage.class);
        IPage result = (IPage) control.getMock();

        return result;
    }

That's from a bit of a unit test, where I'm using EasyMock to create unit test fixtures on the fly.

Regardless, for my unit tests, I really get sick of all the casting and types. So I'd like to start using Groovy. Recoding the above code snippet into Groovy should end up looking like:

    protected newPage(name) {
        control = newControl(IPage.class)
        result = control.getMock()

        return result
    }

No return types. No variable types. Fewer parentheses (the remaining ones are there for aesthetic reasons). Ultimately, I think this is easier to read.

Is Groovy ready for prime time? Not for me. It's very important for my development cycle to be able to hit the Run... button in Eclipse and see the clean, green sweep of the progress bar in the JUnit view.

No such luck with Groovy. I created the simplest test case I could think of:

package com.examples;

import groovy.util.GroovyTestCase

class TestStuff extends GroovyTestCase {

    void testSomething() {
        assertEquals(true, true)
    }
}

I then used the JUnit launch configuration to find all of my tests in my project ... which should just be this TestStuff class. Instead:

java.lang.RuntimeException: No filename given in the 'test' system property so cannot run a Groovy unit test
	at groovy.util.GroovyTestSuite.loadTestSuite(GroovyTestSuite.java:97)
	at groovy.util.GroovyTestSuite.suite(GroovyTestSuite.java:85)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:324)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.getTest(RemoteTestRunner.java:364)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:398)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:305)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:186)
Failed to invoke suite(): java.lang.RuntimeException: Could not create the test suite: java.lang.RuntimeException: No filename given in the 'test' system property so cannot run a Groovy unit test

OK. From what I can tell in the documentation, this means that Groovy has a placeholder TestCase class that is dependent on a JVM system property to identify the actual, single (!) script to execute. We'll just focus in, and run tests in my src folder instead (already less than ideal, because I often have several folders of unit test classes).

Boom. JUnit sees no unit tests in the source folder.

The only way I've found to get my Groovy unit test to execute is to use Run ... -> Groovy. That's great for running one set of tests, available in a single unit test class. HiveMind has 523 tests, Tapestry has over 580 ... these tests are spread across a large number of individual unit test classes. I don't maintain a JUnit test suite any more because JUnit support in both Eclipse and Ant will "scan" for my test cases. When I make changes, I need to easily run all of my tests.

I don't doubt I could cobble something together using Ant to get my tests to execute, but my preferred work style is to stay in Eclipse. Shuttling back and forth between Eclipse and the command line, or even running Ant from inside Eclipse, is not acceptable. The JUnit support inside Eclipse blows away what's available at the command line ... the direct access to code lines, the ability to get a diff on failed assertions (something you may have missed ... double click on the mismatch message in the stack trace and a diff window pops up to show you exactly what didn't match). And the general pleasantness of the green bar. I don't want to run my unit tests outside of Eclipse ... but, for the moment, Groovy is trying to force me to.

What I did eventually find was that I needed to generate a TestSuite for my Groovy tests:

package com.examples;

import junit.framework.Test;
import junit.framework.TestCase;
import junit.framework.TestSuite;

/**
 * @author Howard M. Lewis Ship
 */
public class GroovySuite extends TestCase
{
    public static Test suite() throws Exception
    {
        TestSuite suite = new TestSuite("Groovy Tests");

        suite.addTestSuite(Class.forName("com.examples.TestStuff"));

        return suite;
    }
}

This works, but is not my ideal. I have to use the fully qualified class name as a string, otherwise Eclipse's builder considers this an error (the Groovy builder doesn't seem to be well integrated into Eclipse, so Eclipse has no knowledge of Groovy classes). I also have to maintain this file as I add new Groovy test classes. Further, this must be coded in Java (so Eclipse knows about it). I'll probably bite the bullet, and make do with this, but it's not as nice as simply writing Groovy tests and seeing them run.

Along the way, I found out that the Groovy plugin for Eclipse is quite primitive. Syntax or other errors in the Groovy code do not display in the editor, the tasks view or the problems view. Further, the old .class file is left behind, which further muddies the water. Basically, I'm left without a lot of confidence that what I've typed into the editor is what's running ... any errors in my Groovy code and some earlier version of the code runs instead. In unit testing terms, that means the potential for a lot of false positives on test runs!

The prevailing wisdom in the Groovy community is that the best way to get your feet wet with Groovy is to start using it for unit tests. Sounds like a great idea, but I think the Groovy team needs to focus on this use case, especially with respect to Eclipse and other IDEs.

Saturday, December 18, 2004

Servlet mapping limitations

Had a bit of a disappointment this week, while working (from my hotel room) on Tapestry.

I've been blissfully unaware of some limitations of the URL mappings for servlets in the web.xml file. Tapestry has always had a single, simple mapping, typically to /app and everything else went into query parameters. That's changing for 3.1 and now I'm seeing some limitations in what I can do.

I'd like to support some mappings, such as /Home.direct/border.link. That is, the direct service, the Home page, and the component border.link.

Alas, this can't be done. The web.xml mapping <url-pattern>*.direct</url-pattern> will not match the above path. You'd think it would (and set the path info to /border.link) but it simply doesn't match at all.

Given how rich Apache mod_rewrite is (despite the limitation of being written in C), you'd think there would be more flexibility for matching paths to servlets in the Servlet API. I haven't read the 2.4 specs yet; do they address this? Regardless, Tapestry has an explicit goal to stay compatible with Servlet API 2.2. I suspect this can be overcome with container-specific extensions, but that's another thing to avoid, since it always leaves some people out in the cold.

One possibility is that I'll support a pattern of /direct and create a URL like /direct/Home/border.link, but given that the page name may itself contain slashes (i.e., admin/AdminMenu or some such), this introduces unwanted ambiguities.

Alternately, map /Home and use a URL of /Home/direct/border.link ... but suddenly, we start having a mapping for each page in the application!

In fact, I wish there were some delegate functionality in the servlet API, where application code could take over the identification of servlets for incoming paths. This would allow much richer paths. Alternately, it would be nice if the information provided in web.xml could be augmented, from Servlet.init(ServletContext), with additional information provided by the application. This would allow us to not repeat ourselves, by having Tapestry provide mappings based on the available EngineServiceEncoder mappings.
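For illustration, here's a sketch of the kind of path analysis such a delegate might perform on a path like /Home.direct/border.link. The class and method names (ServiceTarget, parse) are invented for this sketch; they are not Tapestry or Servlet API.

```java
// Hypothetical sketch (not Tapestry or Servlet API): decompose a path
// like "/Home.direct/border.link" into page, service, and component.
public class ServiceTarget {
    public final String page;      // e.g. "Home"
    public final String service;   // e.g. "direct"
    public final String component; // e.g. "border.link", or null

    ServiceTarget(String page, String service, String component) {
        this.page = page;
        this.service = service;
        this.component = component;
    }

    public static ServiceTarget parse(String path) {
        if (path.startsWith("/"))
            path = path.substring(1);

        // The first segment holds the page name and service, split on
        // the last '.'; anything after the next '/' is the component.
        int slash = path.indexOf('/');
        String first = slash < 0 ? path : path.substring(0, slash);
        String rest = slash < 0 ? null : path.substring(slash + 1);

        int dot = first.lastIndexOf('.');
        String page = dot < 0 ? first : first.substring(0, dot);
        String service = dot < 0 ? null : first.substring(dot + 1);

        return new ServiceTarget(page, service, rest);
    }
}
```

This is exactly the matching the `*.direct` url-pattern refuses to do once extra path info follows the extension.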

I have no taste for bureaucracy, but I may yet need to find a place on the Servlet API expert group.

Sunday, December 12, 2004

Updated PresentationExamples.zip

I've uploaded a new-and-improved PresentationExamples.zip. This is the file containing the source code from the examples I demonstrate at various user sessions. This update includes a bunch of new ideas from a recent road trip, and includes the Tapestry (and HiveMind) libraries. It's a zip of my local Eclipse workspace.

Tapestry 3.1 update

I've been squeezing in time to do a lot of updates to Tapestry 3.1 over the last few days, including on my flights to and from shopping.com.

I've been updating the Wiki with details on some of these changes ... at a high level the additions are:

  • Friendly URLs (!)
  • Modularity (put your pages in subdirectories)
  • Lots of new binding prefixes: asset:, component:, bean:, and listener:

Tapestry 3.1 continues to shape up as a radical rethinking of Tapestry 3.0 ... while staying about 95% backwards compatible. It will be both more efficient and more expressive (not to mention supremely extensible). Remember: Less is More!

Tapestry @ JavaPolis on Wednesday

I can't believe it, but I am flying all the way to Europe and back (planes, trains and automobiles!) for a one-hour Tapestry session at JavaPolis. I'm presenting on Tapestry from 16:55 to 17:55 on Wed December 15th 2004.

Back from shopping.com / Defer component

Just got back from a three day Tapestry training session with Jordan, Matt and the crew at shopping.com out in San Francisco. This was a fun, exhausting trip but everyone (myself included) got a lot out of it. Things I learned:

  • Be emphatic. The labs are good, even if they appear tedious. Nobody wanted to do them, and everyone was glad that they did.
  • Let Matt explain the problem before giving him the (wrong) solution.
  • Have a template Tapestry/Intellij project ready to go.
  • Flash memory keys are cool (we passed lots of files around, without having to get involved with network setup).

Along the way, we solved (to a degree) some problems with input validation (in Tapestry 3.0). If you've used validation, you know that you can get field labels to be decorated, along with the fields themselves. However, when using a loop around the input fields, there's an off-by-one error that causes the wrong label to be decorated.

The problem is that the FieldLabel component relies on the ValidField having the correct value in its name property when the FieldLabel renders, so that it can work with the validation delegate to determine if and how to decorate the label. However, the name property isn't set until the ValidField renders itself (that's when it obtains its name from the Form component, and registers itself with the validation delegate as the active component).

So if the FieldLabel and the ValidField are in a loop that executes three times, and the 2nd rendering of the ValidField is the one that's in error, it's the 3rd rendering of the FieldLabel that gets decorated. Woops.
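The off-by-one can be simulated in a few lines. Everything here is invented for illustration; it's a toy model of the render order, not Tapestry code.

```java
// Toy simulation of the off-by-one: the label renders first, reading
// the field's *current* name, which is still the name assigned during
// the previous loop iteration.
public class OffByOneDemo {
    // Returns the loop iteration (1-based) whose label gets decorated
    // when the field in iteration errorIteration fails validation.
    public static int decoratedLabelIteration(int iterations, int errorIteration) {
        String activeFieldName = null;   // set when a field renders
        String erroredFieldName = null;
        int decorated = -1;

        for (int i = 1; i <= iterations; i++) {
            // FieldLabel renders first, seeing the stale name
            if (activeFieldName != null && activeFieldName.equals(erroredFieldName))
                decorated = i;

            // ValidField renders afterwards and only now gets its name
            activeFieldName = "field" + i;
            if (i == errorIteration)
                erroredFieldName = activeFieldName;
        }
        return decorated;
    }
}
```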

So, the trick is ... we need to render the ValidField before the FieldLabel, but the output from that rendering must still occur after the FieldLabel renders. Why that output order? Because that's how typical, western language forms are output, with FieldLabels rendering first (and thus, to the left of) ValidFields.

So, at the customer site, I created a component I call Defer. Defer takes a Block as a parameter. It renders the Block into a buffer, then renders its body, then outputs the buffered content from the Block. Something like:

public abstract Block getBlock();

protected void renderComponent(IMarkupWriter writer, IRequestCycle cycle)
{
    IMarkupWriter nested = writer.getNestedWriter();

    getBlock().renderBody(nested, cycle);

    renderBody(writer, cycle);

    // closing the nested writer outputs the buffered Block content
    nested.close();
}


How is this used? The FieldLabel is enclosed inside a Defer, and the ValidField is enclosed inside a Block.

  <span jwcid="@Defer" block="ognl:components.fieldBlock">
    <span jwcid="@FieldLabel" field="ognl:components.inputName"/>:
  </span>

  <span jwcid="fieldBlock@Block">
    <input jwcid="inputName" ... />
  </span>

So the inputName component renders first, then the FieldLabel renders, then the HTML from the inputName component is output. This is one of the many things I love about Tapestry ... it's not just a stream of text, it's actual objects. Once you see past the false reality of a text stream to the Matrix of the Tapestry component object model, you are free to twist time (or at least, control the order of rendering).

It's far from perfect ... as with using a ListEdit/ListEditMap combination inside a Form, it forces too much of Tapestry's internals into your lap ... but it is workable, and it is only needed for special cases where a ValidField component's name changes while the form renders -- which is to say, only when using the FieldLabel/ValidField combination inside a loop.

Monday, December 06, 2004

HiveMind and EasyMock

I've become a big fan of EasyMock. EasyMock is a way to create mock implementations of interfaces, for use in your unit tests. This fits in really well with the overall IoC concept, since when testing class A you can inject mock versions of interfaces B and C.

With EasyMock, you create a control for your interface. The control is an instance of MockControl. From the control, you can get the mock object itself. You then train the mock (I call it a "zombie"). As you invoke methods on the zombie, the sequence of operations is observed by the control.

When you invoke a non-void method, you tell the control what value to return. You can also have any method throw an exception.

You then switch the zombie into replay mode and plug it into the class you are testing. That code interacts with the zombie as if it was, well, whatever it should be.

HiveMind includes a HiveMindTestCase base class that improves this somewhat. Using EasyMock out of the box, you write a lot of test code that just manages the controls. HiveMindTestCase does this for you, which makes it more practical when you have half a dozen pairs of controls and zombies.

For example, here's part of the test for Tapestry's ExternalService:

    public void testService() throws Exception
    {
        MockControl cyclec = newControl(IRequestCycle.class);
        IRequestCycle cycle = (IRequestCycle) cyclec.getMock();

        IExternalPage page = (IExternalPage) newMock(IExternalPage.class);

        Object[] serviceParameters = new Object[0];

        // (training of the cycle control elided)

        LinkFactory lf = newLinkFactory(cycle, serviceParameters);

        page.activateExternalPage(serviceParameters, cycle);

        ResponseOutputStream ros = new ResponseOutputStream(null);

        ResponseRenderer rr = (ResponseRenderer) newMock(ResponseRenderer.class);

        rr.renderResponse(cycle, ros);

        replayControls();

        ExternalService es = new ExternalService();

        es.service(cycle, ros);

        verifyControls();
    }

This code tests the main code path through this method:
    public void service(IRequestCycle cycle, ResponseOutputStream output) throws ServletException,
            IOException
    {
        String pageName = cycle.getParameter(ServiceConstants.PAGE);
        IPage rawPage = cycle.getPage(pageName);

        IExternalPage page = null;

        try
        {
            page = (IExternalPage) rawPage;
        }
        catch (ClassCastException ex)
        {
            throw new ApplicationRuntimeException(EngineMessages.pageNotCompatible(
                    IExternalPage.class), rawPage, null, ex);
        }

        Object[] parameters = _linkFactory.extractServiceParameters(cycle);

        page.activateExternalPage(parameters, cycle);

        _responseRenderer.renderResponse(cycle, output);
    }

Back in the test code, the newControl() method creates a new MockControl. The newMock() method creates a control and returns its mock ... in our unit test, we create an IExternalPage mock instance to stand in for a page named "ActivePage" and ensure that it is passed to IRequestCycle.activate().

The replayControls() and verifyControls() methods come from HiveMindTestCase; they invoke replay() and verify() on each control created by newControl() or newMock().

While replaying, the zombie and the control work to ensure that each method is invoked in sequence and with the correct parameters.

The verifyControls() at the end is very important; an incorrect or out of sequence method call will be picked up in line (an exception is thrown), but an omitted method call can only be discovered by verifying the mock.

This technique takes a bit of getting used to; this "training" stage can easily throw you the first time through. Alternatives to EasyMock, such as jMock, generally employ a different model, where you train the control (not the zombie), passing in method names and identifying arguments or other expectations in various ways. I suspect jMock is ultimately more powerful and expressive, but I find EasyMock more effective. Your mileage may vary.

Regardless of which framework you choose, this technique works really well! In fact, it is often useful to use this technique to define the interface; this is best explained by example. I was recently recoding how Tapestry performs runtime bytecode enhancement of component classes. The old code was monolithic, one big class. I broke that up into a bunch of "workers". Each worker was passed an object, an EnhancementOperation, that was a facade around a lot of runtime bytecode machinery. EnhancementOperation has methods such as addInterface() and addMethod() that are directly related to code generation, and a number of other methods, such as claimProperty(), that were more about organizing and verifying the whole process.

What's fun is that I coded and tested each worker, extending the EnhancementOperation interface as needed. Once all the workers were tested, I wrote the implementation of EnhancementOperation, and tested that. The final bit was an integration test to verify that everything was wired together properly.

Now, if I had sat down (as I might have done in the past) and tried to figure out all of, or even most of, the EnhancementOperation interface first, I doubt I would have done as good a job. Further, having EnhancementOperation be an un-implemented interface meant that it was exceptionally fluid ... I could change the interface to my heart's content, without having to keep an implementation (and that implementation's tests) in synch.

In sum, EnhancementOperation was defined through a process of exploration; the process of building my workers drove the contents of the interface and the requirements of the implementation. And all of this was possible because of the flexibility afforded by EasyMock.

Out of the box, EasyMock only supports interfaces; behind the scenes, it uses JDK proxies for the zombies. There's an extension that uses bytecode enhancement to allow arbitrary classes to be instrumented as mock object zombies.
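The JDK-proxy technique looks roughly like this hand-rolled miniature. It's a toy, not EasyMock's actual implementation: it records method names only, and skips training of return values and argument matching.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

// An interface to mock, standing in for something like IPage.
interface Page {
    void activate();
}

// Toy version of the proxy-based "zombie": method calls made before
// replay() are recorded as expectations; calls made after replay()
// are checked against them via verify().
public class TinyMock implements InvocationHandler {
    private final List expected = new ArrayList();
    private final List actual = new ArrayList();
    private boolean replaying = false;

    public Object invoke(Object proxy, Method method, Object[] args) {
        (replaying ? actual : expected).add(method.getName());
        return null; // sufficient for void methods in this sketch
    }

    public void replay() { replaying = true; }

    // an omitted call shows up only here, not as an in-line failure
    public boolean verify() { return expected.equals(actual); }

    public Object createMock(Class iface) {
        return Proxy.newProxyInstance(
                iface.getClassLoader(), new Class[] { iface }, this);
    }
}
```

Training calls record expectations; after replay(), the same calls must occur, or verify() reports the mismatch, which is exactly why the final verification step matters.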

This is cool as I rework the Tapestry test suite; often components collaborate and there is no interface; I still want to be able to mock up the other component, so I need to use the EasyMock enhancements.

Ultimately, I extended HiveMind's newControl() method to allow an interface or a class to be specified, and to do the right thing for each. Here's an example, where I'm testing how a Block and RenderBlock component work together:

    public void testNonNullBlock()
    {
        Creator c = new Creator();

        MockControl bc = newControl(Block.class);
        Block b = (Block) bc.getMock();

        RenderBlock rb = (RenderBlock) c.newInstance(RenderBlock.class, new Object[]
        { "block", b });

        IMarkupWriter writer = newWriter();
        IRequestCycle cycle = newRequestCycle();

        b.renderBody(writer, cycle);

        replayControls();

        rb.render(writer, cycle);

        verifyControls();
    }


I'm testing RenderBlock, so I use the Creator (an improved version of Tapestry TestAssist) to create and initialize a RenderBlock instance (even though RenderBlock is an abstract class).

The Block, which will have methods invoked on it by the RenderBlock, is a mock object (a zombie). Notice that it's created exactly the same way as creating a mock for an interface.

This is a huge improvement over the old approach for testing in Tapestry: build a simple application inside the integration test framework. The problem is, the integration test framework is very slow (several seconds per test) and cranky. A small amount of integration testing is important, but cumbersome for the number of tests a product as sophisticated as Tapestry requires (at the time of this writing, Tapestry has approximately 545 tests, including about 30 integration tests). Unit tests, by definition, are more precise about failures.

I predict that the number of tests in Tapestry may well double before Tapestry 3.1 leaves beta! But then again, I'm pretty well test infected!

Tapestry URLs: Half way there

Did a lot of work this weekend on the Tapestry URL front. If you are familiar with Tapestry, you know that nobody likes the way the URLs are formatted. That's an issue, because you don't have control over it ... Tapestry is responsible for building the URLs (a very good thing) and does all the dispatching and so forth (again, very good thing). But Tapestry's butt ugly URLs are a problem for many.

  • All URLs are built off a single servlet, typically /app. This defeats J2EE declarative security, which is path based.
  • The URLs are longish. For example,

    There's a rhyme and a reason to all that, but it's very much oriented around the code and not the users.

  • The use of the slash ("/") character in the URLs is an impediment to breaking the application into modules (i.e., putting the admin pages into an "admin" folder).
  • The emphasis on query parameters means that most of an application, after the home page, will be "off limits" to any kind of web spider, such as Google.

I spent a large portion of the last few days working on this, and I'm halfway there. Of course, you'd hardly know it ... if you download Tapestry from CVS and built it right now, you'd see that the URLs have gotten longer! That service query parameter has been broken up into several smaller variables: service, page, component and, where necessary, container. Some Tapestry engine services add others (for example, the asset service has a path query parameter).

That means that the example URL from before would look like:


The next step (the work not yet done) is more fun ... what if we converted the page query parameter into more path info, and the service into an extension? Then our URL is much friendlier:


If we map the .html extension to the home service, then a Tapestry page looks like a normal HTML page:


From a client web browser, that's a reference to a Tapestry page, even though it looks like just the page's HTML template. With this kind of mapping, you might not even use the PageLink component in your templates, just ordinary HTML links:

<a href="misc/About.html">About</a>

Instead of:

<a jwcid="@PageLink" page="misc/About">About</a>

Getting all of this to work for your own application will require:

  • Adding path mappings to your web.xml:

  • Adding configuration data (HiveMind contributions) to your application, to tell Tapestry about those mappings:
    <map-extension extension="html" service-name="page"/>
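For context, a sketch of what those two pieces might look like together; the servlet-name "app" and the exact contribution syntax are assumptions here, not final 3.1 syntax:

```xml
<!-- web.xml: route *.html requests to the Tapestry servlet
     (servlet-name "app" is an assumption in this sketch) -->
<servlet-mapping>
    <servlet-name>app</servlet-name>
    <url-pattern>*.html</url-pattern>
</servlet-mapping>

<!-- HiveMind contribution: the extension maps to the page service -->
<map-extension extension="html" service-name="page"/>
```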

Hopefully, I can get more of this going in the next couple of days (I have a long flight out to San Francisco this week).

Still, even with these improvements, the URLs aren't perfect. There's some dissatisfaction with bookmarking URLs with the external service; any extra data is encoded into the sp parameter, which has a prefixing system to identify the type of data. For example, "S" for string, "d" for double, "O" for serialized object.
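The prefixing scheme can be sketched as a simple encode/decode pair. The 'S' and 'd' prefixes follow the post; everything else is illustrative, not Tapestry's actual implementation (which handles many more types).

```java
// Illustrative sketch of a type-prefix squeezer like the one behind
// the "sp" parameter: a one-character prefix tags the type so the
// value can be decoded later.
public class Squeezer {
    public static String squeeze(Object value) {
        if (value instanceof String)
            return "S" + value;
        if (value instanceof Double)
            return "d" + value;
        throw new IllegalArgumentException("unsupported type");
    }

    public static Object unsqueeze(String encoded) {
        char prefix = encoded.charAt(0);
        String body = encoded.substring(1);
        switch (prefix) {
            case 'S': return body;
            case 'd': return Double.valueOf(body);
            default: throw new IllegalArgumentException("unknown prefix: " + prefix);
        }
    }
}
```

The price of this generality is URLs full of opaque tagged values, which is what makes external-service URLs unpleasant to bookmark.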

Talking with Erik, I had the idea that we could simplify things by creating a service that bookmarked some set of page properties into the URL. So you might have a ShowAccount page, with an accountId property of type long. The accountId property gets converted into a string as part of the URL. Later, when the request is submitted, the accountId query parameter is converted back to a long and plugged into the page's accountId property. The end result is a "pretty" URL:


This is much prettier than what you might accomplish in Tapestry 3.0 using the external service:


I think Erik wants to go further though ... he wants something like:


I think we can get there. I think we can allow one extension to map to different services (here, the extra query parameter hints that this is a bookmark service, not a page service, request). It's a matter of flexible interfaces and flexible HiveMind contributions.

Wednesday, December 01, 2004

Experiments with Tapestry and JDO

I'm working on some new examples to replace (or give context to) my existing set. I've decided that it is more useful to do some hand waving and build examples that hit a database than it is to stick to simple examples with no back end at all.

To that end, I'm building a new example application, ePluribus, a survey site. I'll let you decide for yourself if the name is punny enough. ePluribus will be based on Tapestry 3.0.1 and HiveMind 1.0 (for the moment), and I'm using JDO for the back end. Once Tapestry 3.1 is stable, I will heave a great sigh of relief and switch over to that (the connection between Tapestry and HiveMind is somewhat jury rigged in Tapestry 3.0.1). I expect that ePluribus will be more relevant than the aging Virtual Library (discussed in the last chapters of Tapestry in Action).

I was very excited by JDO when I first heard about it back at JavaOne 1999. I've never understood why Sun didn't get fully behind JDO ... maybe because it doesn't require an application server. Talk about write once, run anywhere! Political questions aside, JDO seems ever more ready for prime time.

I had a good experience on TheServerSide.com, integrating Steve and Bruce's Kodo JDO code with the new Tapestry front end. Kodo is very clean, very well documented, and provided some great support to me. I've heard rumbles about performance, but TheServerSide.com is running faster than ever since the changeover (something I attribute to Kodo's query caching combined with Coherence's cluster-wide cache).

However, for a new, redistributable chunk of example code, Kodo (a proprietary product) is not the solution. Currently, I've turned to JPOX, a very well spoken for open-source product. It appears to use a variation of the Apache Software License 1.0.

They already have a port of the Virtual Library to JPOX. If they can handle the Virtual Library, they can handle ePluribus.

I've already implemented basic infrastructure, such as HiveMind interceptors to manage transactions, and injecting the PersistenceManager into other services. In fact, HiveMind's threaded service model is perfect for this kind of thing ... a fixed proxy is injected into other services. Invoking methods on the proxy delegates out to a per-thread implementation. This is what HiveMind is all about ... the other services can use the services provided by the PersistenceManager without worrying at all about its life cycle.
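The threaded-service idea can be sketched with a JDK proxy over a ThreadLocal. All names here are invented; HiveMind's real proxies are generated and considerably more sophisticated.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Sketch of the threaded service model: a single fixed proxy
// delegates each call to a per-thread implementation held in a
// ThreadLocal, so callers never manage the real object's life cycle.
public class ThreadedProxy implements InvocationHandler {
    // demo service interface, invented for this sketch
    public interface Service {
        String name();
    }

    private final ThreadLocal perThread;

    public ThreadedProxy(ThreadLocal perThread) {
        this.perThread = perThread;
    }

    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        // look up this thread's implementation and delegate to it
        return method.invoke(perThread.get(), args);
    }

    public static Object create(Class iface, ThreadLocal perThread) {
        return Proxy.newProxyInstance(iface.getClassLoader(),
                new Class[] { iface }, new ThreadedProxy(perThread));
    }
}
```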

The JDOTransactionInterceptor ensures that transactions are committed after each method invocation (unless specifically told not to) and that any thrown runtime exception rolls back the current transaction.

Think about the amount of code you would typically write ... get the PersistenceManagerFactory, get the PersistenceManager from it, start and commit a transaction, close the PM. Don't forget rolling back transactions when exceptions are thrown. That's lots of code around your individual service methods that isn't necessary when using a HiveMind-based IoC approach.
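That boilerplate looks something like the following sketch, with a stand-in Txn interface rather than the real JDO API (which would involve the PersistenceManagerFactory and PersistenceManager as well).

```java
// The begin/commit/rollback dance every service method would repeat
// without an interceptor. Txn is a stand-in for a real transaction
// API (e.g. JDO's); this is an illustrative sketch only.
public class ManualTxnService {
    public interface Txn {
        void begin();
        void commit();
        void rollback();
        boolean isActive();
    }

    // simple stub used to demonstrate the flow
    public static class CountingTxn implements Txn {
        public int commits, rollbacks;
        private boolean active;
        public void begin() { active = true; }
        public void commit() { active = false; commits++; }
        public void rollback() { active = false; rollbacks++; }
        public boolean isActive() { return active; }
    }

    private final Txn txn;

    public ManualTxnService(Txn txn) { this.txn = txn; }

    public String doWork() {
        txn.begin();
        try {
            String result = "ok"; // the actual business logic
            txn.commit();
            return result;
        } finally {
            if (txn.isActive()) // an exception skipped the commit
                txn.rollback();
        }
    }
}
```

The interceptor approach factors this wrapper out of every method, leaving just the business logic.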

As a side benefit, a PersistenceManager is only obtained from the PersistenceManagerFactory, and a transaction is only started, the first time (per request) that a method is invoked on the PersistenceManager proxy. The current production code on TheServerSide may start and commit half a dozen transactions per request. When that code is refactored around a "thin stack", there will never be more than one transaction per request for the majority of pages (that only read, not update, information).

JPOX is responsible for transactions and JDBC connection pooling ... and suddenly, there's no need for an application server at all. That's "thin stack" ... less code, same (or more) functionality.

So the question is ... why not Hibernate? I actually started this project a ways back, on Hibernate. In theory, JDO and Hibernate are largely equivalent ... in practice, I've just enjoyed JPOX and Kodo more than Hibernate. The documentation has been easier to follow, the XML configuration has been more natural to me, the error reporting is a notch better (but certainly not state of the art). I've been more effective, faster, with JPOX than I was with Hibernate, and whether the difference is in the two packages, or just the experience I've gained elsewhere in the last couple of months, I don't know or care. Viva choice! Viva Open Source!