Tapestry Training -- From The Source

Let me help you get your team up to speed in Tapestry ... fast. Visit howardlewisship.com for details on training, mentoring and support!

Friday, June 11, 2004

More on Tapestry and JSF

A recent article at OnJava, Improving JSF by Dumping JSP, has been the source of some interesting discussions. The author, Hans Bergsten, is on the JSF expert group and is quite aware of Tapestry ... he even modified the Hangman example from chapter two of Tapestry in Action for this article.

Limitations of JSF/JSP

Interestingly, the JSF component tree is built initially during the JSP render, the first time a page is accessed. That means components early in a page may not be able to reference or interact with components later in the page on that first render, which results in some oddities. He gives an example of a label component and an input field component, where the label does not appear the first time the page is rendered. Whoops! That looks like a violation of the consistency principle to me.

Then it turns out that JSF mandates lots of limitations on how you write your JSP. For example, in most cases, you must use an <h:outputText> tag rather than literal text to output literal text ... otherwise the text tends to end up in the wrong place!

Almost two years ago, I was on a quest to mix and match Tapestry and JSP, such that a Tapestry page could use a JSP instead of an HTML template to render its response. I made a lot of progress but hit some complex cases where I could not merge the two models together, and I instead went outside the box to create Tapestry 3.0 (which has JSP-like features, such as implicit components).

Apparently the JSF team hit the same snags and, because they are blessed by the powers that be, simply said "Thou shalt not do the things that make JSF break. If you do, it is your fault, not ours." Wish I had it so easy!

Now, as much as the JSF leaders might say that the limitations discussed in Hans' article are merely the result of using JSPs, and that JSF transcends JSPs ... well, any non-JSP implementation of JSF is going to be proprietary and non-portable, or lack the IDE tool support that justifies JSF in the first place. This should sound familiar; it's the same set of promises that came out five years ago for EJBs (tools will fix everything!), and the same pattern of complexity, lock-in, and inefficiency that resulted.

Stepping closer to Tapestry

Hans' article continues with a discussion of how to use JSF with a custom ViewHandler that mimics Tapestry's HTML templates and page specifications. That's great, I suppose ... but what I'd rather see is a comparison to chapter six, which rebuilds the simple Hangman example using custom components. Creating new components in JSF is very involved ... lots of classes, lots of XML, lots of fiddly bits. Components in Tapestry are streamlined and easy ... and powerful, in that they can have their own HTML templates. So as complimentary as this article is to Tapestry, it is also an unfair comparison.

More observations

A JSF application can act as an inversion-of-control container, creating and configuring managed beans. This is a feature of Tapestry as well, primarily through the <bean> element of the page and component specifications. I think this is a better solution, to keep the beans associated with the page (or component) and for the names to be local to the page (or component). A lot of thought in Tapestry has gone into allowing different developers to work on different parts of an application without conflict. Anything global is a measured risk ... in Tapestry, the only true globals are the names of the pages.

It was interesting to see the <component> tags for selectForm and selections, where selections was nested inside selectForm (this also meant that in the template, the selections component was referenced as selectForm:selections). In Tapestry, the two components would be peers (both children of the containing page) and the relationship (that the form encloses the selections) would be determined at runtime, during the render. This is important, because in Tapestry a Form cannot know statically about all the form element components that will be rendered inside its body ... those components may be in the body of the Form (the most typical case), or inside components within the body of the Form ... or, with clever use of Block and RenderBlock, potentially on an entirely different page of the application! Tapestry doesn't care, which is why artifacts such as form control ids must be allocated dynamically.

Now I may be nitpicking here, and I'm not sure which of these is related to the JSF standard, and which to Hans' ViewHandler implementation. The HTML template includes the same kind of previewable HTML as Tapestry, but it's not clear how or if it is discarded. Additionally, the javax.faces.Command/javax.faces.Link combination (used to render the link around each letter) has a rendered parameter similar to Tapestry's disabled parameter ... but necessitates a duplication of the javax.faces.Graphic to cover the cases where the link is not rendered (because that particular letter has already been guessed).

Summary

I just don't see JSF aspiring to any of Tapestry's guiding principles: Simplicity, Consistency, Efficiency, or Feedback. It's very gratifying that the JSF experts are looking to Tapestry for inspiration ... this is quite similar to the situation with the EJB 3.0 specification and Spring.

There has been some discussion of what Tapestry can do that JSF can't. That's a silly question ... there's nothing Tapestry can do that JSF can't. There's nothing either framework can do that a servlet can't, or a Perl script for that matter. It's a question of how easy it is to get it working, and how well it works at runtime, and how maintainable it is in the long run.

Based on everything I've seen, JSF still faces an uphill battle in these areas. Yes, FUD. Yes, I'm biased. But last night at the NEJUG, I coded two simple applications while explaining Tapestry and fielding questions, and where I messed up, Spindle or Tapestry itself helped me fix my problems. I doubt I could have accomplished the same things using the JSF RI (even if I had the necessary experience) in the time available. I suspect, over time, we'll be seeing some better shoot-outs between Tapestry, WebWork and JSF. Struts, at this time, is turning into an also-ran kept around for legacy purposes. In the meantime, I repeat the rallying cry: Results Not Standards.

Wednesday, June 09, 2004

New in HiveMind: PipelineService

Just finished checking in the hivemind.lib.PipelineFactory service into HiveMind.

This is neat stuff; pipelines are a close cousin to interceptors, but tend to be things that are a) coded manually and b) specific to a particular service. Think Servlets and Servlet Filters.

The PipelineFactory is a service implementation factory that constructs a pipeline ... a series of filters that call each other, then eventually call an underlying service. At runtime, it forms a chain of filters connected by fabricated bridges, ending in a terminator.

The bridge classes are fabricated at runtime and connected together with the filters. Each filter is passed the next bridge as a parameter (the last filter gets the terminator instead), and is free to invoke the bridge before or after doing some kind of work, and to change the parameters as it sees fit.

A minimal use of this shows up as a service, a configuration, and some contributions to the configuration:

service-point (id=MyPipeline interface=mypackage.MyService)
{
  invoke-factory (service-id=hivemind.lib.PipelineFactory)
  {
    create-pipeline (filter-interface=mypackage.MyFilter configuration-id=MyPipeline)
  }
}

configuration-point (id=MyPipeline schema-id=hivemind.lib.Pipeline)

contribution (configuration-id=MyPipeline)
{
  filter (service-id=FrobFilter)
  filter (service-id=BazFilter)
  filter (service-id=FooFilter after="*")
}
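
To make the shape of a pipeline concrete, here is a sketch of what the hypothetical mypackage.MyService and mypackage.MyFilter from the example might look like (the names and methods are invented for illustration, and each type would live in its own source file). The filter interface mirrors the service interface, with one extra parameter: the next MyService in the pipeline, which is either a fabricated bridge or the terminator.

public interface MyService
{
    void process(String message);
}

public interface MyFilter
{
    void process(String message, MyService next);
}

public class FrobFilter implements MyFilter
{
    public void process(String message, MyService next)
    {
        // Work before invoking the rest of the pipeline ...
        String frobbed = message.trim();

        // ... pass control down the chain (to a bridge, or to the terminator) ...
        next.process(frobbed);

        // ... and, if desired, do more work here after the downstream service returns.
    }
}

The terminator at the end of the chain is simply another implementation of the service interface; the bridges that glue filter to filter are the classes fabricated at runtime.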

This will be very useful in Tapestry 3.1 because many different services (not just Tapestry engine services, but all kinds of infrastructure) will be implemented as pipelines of simpler services. These pipelines will be exposed as configurations that will be pluggable. Got a particular transaction management strategy that you want to apply to all requests? Plug into the correct pipeline. Want to add some authorization checks to some kinds of requests? Plug into those services' pipelines.

This concept is demonstrating two key selling points of HiveMind:

  • Libraries can provide configuration points that most applications can ignore, but certain applications can plug into.
  • Mixing services and configurations (services passed as data, or services defined in terms of contributed data) extends the power of both the service model and the configuration model.

Once again, this underscores the difference in philosophies between HiveMind and Spring ... in Spring you still code against the metal. It's a much better metal to code against than the raw Java APIs or the raw J2EE APIs, but to make use of a service provided by Spring, you need to know a lot of details about how and when to configure and instantiate it ... it all goes into your springbeans.xml file.

HiveMind's philosophy is that just by having a framework available on the classpath, it will provide services and configurations that you don't need to know about. You can find out about them via HiveDoc (and other documentation) and take advantage of them as needed.

Take the example above: you don't need to know about the implementation of the PipelineFactory:

service-point (id=PipelineFactory interface=org.apache.hivemind.ServiceImplementationFactory)
{
  invoke-factory (service-id=hivemind.BuilderFactory)
  {
    construct (class=org.apache.hivemind.lib.pipeline.PipelineFactory service-id-property=serviceId)
    {
      set-service (property=classFactory service-id=hivemind.ClassFactory)
      set-service (property=defaultImplementationBuilder service-id=DefaultImplementationBuilder)
    }
  }
}

This configuration, and its dependencies on the ClassFactory and DefaultImplementationBuilder services, are the PipelineFactory's concern. It might change at some point, due to some refactoring within the HiveMind framework ... and it won't affect your code at all. A third-party framework might make use of PipelineFactory in its own module descriptor, and you wouldn't have to know about that, either.

This is not to say you couldn't create some kind of pipeline factory in Spring. You absolutely could ... but it's a lot of work. It would be easier to create a bunch of individual beans and wire them to each other ... adding the extra abstraction doesn't save you much effort (you'd probably have to code a "bridge" class manually), because you are still responsible for defining and configuring every bean in your application, even when the code comes out of a third-party framework.

Creating a pipeline like this in HiveMind is a bit of effort, a new recipe to follow. However, contributing into your own pipeline, or a pipeline provided by another framework, is very streamlined. The way HiveMind can mix and match configurations and services, and the way it can accumulate contributions from multiple modules, is the key to this ease of use.

Saturday, June 05, 2004

Getting ready for pipelines

In between some yardwork around the homestead, I've been coding up a storm on the HiveMind front. This morning, I hit all the remaining code inside the framework to handle the new localized message strategy (more on that in a later post). I then refactored ClassFab and friends to make them easier to use outside of HiveMind. ClassFab is my attempt to "tame" Javassist, which is a powerful bytecode enhancement framework used by Tapestry and HiveMind. ClassFab streamlines the work of creating new classes at runtime ... the heavy lifting is done by Javassist, of course, but ClassFab is a prettier face.
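
For a sense of what ClassFab is taming, here is roughly what fabricating a class looks like in raw Javassist (the class and method names are invented for the example, and the API shown is the one in recent Javassist releases, so treat this as a sketch rather than gospel):

import javassist.ClassPool;
import javassist.CtClass;
import javassist.CtNewMethod;

public class JavassistSketch
{
    public static void main(String[] args) throws Exception
    {
        ClassPool pool = ClassPool.getDefault();

        // Fabricate a brand-new class at runtime.
        CtClass fabricated = pool.makeClass("mypackage.GreeterImpl");

        // Add a method to it, compiled from source text.
        fabricated.addMethod(CtNewMethod.make(
            "public String greet() { return \"Hello from a fabricated class\"; }",
            fabricated));

        // Turn it into a live java.lang.Class and instantiate it.
        Class clazz = fabricated.toClass();
        Object greeter = clazz.newInstance();

        System.out.println(clazz.getMethod("greet", new Class[0]).invoke(greeter, new Object[0]));
    }
}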

The refactoring allowed me to create a better test suite for ClassFab; previously, the tests were a bit indirect, more like integration tests, where ClassFab was used to create a running HiveMind Registry. I've been trying to get away from that as the primary method of testing in HiveMind (and, some day, Tapestry), based largely on listening to Dave Thomas and reading Pragmatic Unit Testing. I now try to test directly, instantiating the objects inside my unit tests (assisted by EasyMock). I then do an additional integration test or two to verify that my module deployment descriptors are correct. This has a lot of advantages; it's much easier to set up exceptional cases than it is with the integration approach, so this more disciplined approach yields better confidence (and code coverage!).

The main effort was this evening, where I created a pair of related services. The first is passed an interface and it creates a default (or placeholder) implementation of that interface. More Javassist, of course. The second service is a service implementation factory, allowing you to easily create placeholder services.
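
Here is the gist of that first service, sketched with JDK dynamic proxies rather than Javassist (HiveMind fabricates a real class instead; this only illustrates the concept, and the class name is invented):

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class PlaceholderSketch
{
    public static Object buildPlaceholder(Class serviceInterface)
    {
        InvocationHandler handler = new InvocationHandler()
        {
            public Object invoke(Object proxy, Method method, Object[] args)
            {
                // A placeholder does nothing; returning null is fine for void and
                // object-returning methods (primitive return types would need an
                // appropriate zero value instead).
                return null;
            }
        };

        return Proxy.newProxyInstance(
            serviceInterface.getClassLoader(),
            new Class[] { serviceInterface },
            handler);
    }
}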

This is leading up to something very powerful, that builds upon the concept in HiveMind of mixing configurations and services on an equal footing: processing pipelines. Think in terms of servlets and servlet filters. Some series of filters will call each other, and the last filter will call into the servlet itself.

The end result will be a way to easily define a service in terms of a configuration point and a pair of interfaces: the service interface and the related filter interface. The configuration will define a series of filters (with all of the neat ordering power that's already built into HiveMind). HiveMind will dynamically create the necessary classes and such that will make the pipeline operate efficiently. This will be great for Tapestry, as it will break apart a number of monolithic classes and code blocks into tiny pieces, and give end users the ability to easily hook into many different places to provide application specific extensions.

Thursday, June 03, 2004

HiveMind vs. Spring: Philosophy

I was struck this morning with just the right way to explain the difference between Spring and HiveMind.

Spring is based on a recipe philosophy. If you want a particular kind of behavior, there's a recipe for getting it. You want declarative transactions? You look up the recipe for it ... you define your bean implementation (in Spring terminology, a "target bean"), the other beans you need (a transaction manager bean in this case), and then combine them into a final bean, ready for use in the application.

Along the way, you may have to cut-and-paste some details, particularly the beans (transactionManager and friends) provided by Spring. On the other hand, you have a single place (your beans XML file) to look for all the details.

HiveMind uses a very different philosophy: a component oriented approach. In HiveMind, modules (that is, JARs packaged with a HiveMind module deployment descriptor) provide services (and configurations) using well-known names (those fully qualified service and configuration ids). Just by having the JARs on the classpath, the services are defined and waiting to be used, directly or indirectly, in your application.

Is this difference that important? I think so. HiveMind has a greater opportunity to be an integration platform than Spring. Underlying frameworks change over time, and it's easy to imagine a number of situations where a commonly used bean, such as a transaction manager in Spring, would change. These changes could be something as simple as adding new properties or methods, or something more dramatic such as renaming a class or moving it to a new package.

Under the Spring model, upgrading from one release to another may break existing bean configurations.

Under the HiveMind model, the provided service would still have the same fully qualified id and service interface even if its implementation is radically changed ... and any such changes are restricted to the JAR and its module deployment descriptor.

In practice, this is not such a big deal; people don't go willy-nilly changing versions of important frameworks for their applications without the expectation that they'll need to do a good amount of testing and perhaps some tinkering.

In fact, the core difference in the philosophies is not about managing change over time, but in managing configurations across different simultaneous deployments. For Vista, the product that birthed HiveMind, there were pending requirements to support multiple databases (Oracle, MS SQL Server, perhaps PostgreSQL) with a single code base. The idea was that the application code would call into a data access object layer (implemented as HiveMind services), and those services would, in turn, call into a database abstraction layer to do the real work. The abstraction layer would also be HiveMind services, with a set specific to Oracle, to SQL Server, etc.

One option was to isolate the different abstraction layers into different JARs and just control which JARs were put into deployment.

A pending (and only potential) requirement for Vista was to be able to provide differently scaled versions of the product to different clients. So, a small school might get the JBoss version with a small number of Vista tools. The Vista application is divided up into a large number (about 30) of different tools, each focused on a particular part of the application, such as Mail, Discussions, Quizzes, Lab Assignments, and so forth.

A large implementation of Vista might want the full suite of tools, and be set up to deploy into WebLogic and Oracle.

The HiveMind vision is that the installer tool for Vista could simply decide which of the modules should be deployed based on licensing information. In this way, rather than maintain different installers and different configurations of the product (with potential duplication of code across source trees), we rely on HiveMind to integrate the application on the fly, at startup.

Alas, I switched over to consulting long before I had a chance to see any part of this vision take root at WebCT ... but if you are curious why HiveMind is the way it is, that vision is the root cause.

Wednesday, June 02, 2004

HiveMind is one year old!

Just checking the CVS access logs; I did the first check-in of HiveMind code into the Apache CVS on May 30, 2003. HiveMind is over one year old now.

Tuesday, June 01, 2004

Spring Integration in HiveMind

I've added some basic integration with Spring to HiveMind. You can now make Spring responsible for creating a service and, effectively, link to it from within HiveMind. That is, the SpringLookupFactory can create a core service implementation by accessing a bean defined within a Spring BeanFactory. This opens up access to all sorts of Spring functionality (especially Hibernate integration) but keeps the syntax simple (and in SDL). Rod has suggested that HiveMind somehow "use" Spring's transaction interceptor, but that's a project for another day.

We'll have to see if the Spring guys follow suit and provide some integration in the other direction; I would love to see something symmetric, where a Spring factory could defer bean creation to a HiveMind service. That would allow Spring beans to tap into HiveMind's much more advanced configuration management.

I've also been wasting time fighting with Forrest to get the menus for the HiveMind site generated properly. It's doing the right thing for non-index.html pages, but all the index.html pages keep getting all the menus, not just the menus for the selected tab. Probably a bug deep, deep, deep inside Forrest.

Friday, May 28, 2004

HiveMind home page updated

I've put up the updated HiveMind home page, which now has the Forrest look to it, replacing the old Maven look.

Forrest has its own issues, but at least those issues are localized to documentation, mostly navigation. For example, I just could not get the tabbing thing working; I'd like each of the modules (hivemind and hivemind.lib) to be its own tab, but after struggling for a long time, I've decided to punt. Forrest falls very far short on simplicity, consistency, efficiency and especially feedback ... but at least I can get pretty much the results I want, the way I want them, which I could not manage using Maven. The Maven guys blame Jelly but it's all the same to me!

Despite some pointers to other sets of Ant build tools, I've continued to develop my own home brew stuff, and it's fitting my needs quite well. Even when the rest of HiveMind goes into beta, the build scripts will still be alpha for a while.

Thursday, May 27, 2004

Tapestry Test Assist

A frequent criticism of Tapestry, from the point of view of the Test Driven Development crowd, is that Tapestry is too hard to test ... because all your classes are abstract.

As a stop-gap measure, I've finally gotten around to creating Tapestry Test Assist. This is a simple class, that can be used inside test suites, to instantiate abstract Tapestry pages and components (or any other abstract class, for that matter).

Like Tapestry itself, the AbstractInstantiator will create new fields and methods in the subclass. Unlike Tapestry, it isn't driven by an external specification; it just finds each property that is abstract (i.e., has an abstract getter and/or setter method) and implements the property in a subclass, with a field and pair of accessor methods. Unlike Tapestry, these accessors are very simple, with no hooks into Tapestry persistent page property logic ... and that's fine for testing.
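
For example, a typical abstract Tapestry property looks like this (in a real page the class would extend BasePage; the instantiator usage shown in the comment is illustrative, so check the Test Assist source for the actual method signature):

public abstract class Login
{
    // Tapestry -- or, in a test, the instantiator -- supplies the field and the
    // concrete accessor implementations for this abstract property.
    public abstract String getUserName();

    public abstract void setUserName(String userName);
}

// Inside a test case (hypothetical usage):
//
//     Login page = (Login) new AbstractInstantiator().instantiate(Login.class);
//     page.setUserName("hlship");
//     assertEquals("hlship", page.getUserName());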

The source code is available as a zipped-up Eclipse workspace. The easiest thing is just to copy the couple of source files into your own test suite. Eventually, this will be part of the actual Tapestry distribution.

Tapestry in Action on java.net

Coincidentally, java.net is also running a discussion of Tapestry in Action. I don't think they are giving away a copy of the book. I'll be monitoring this forum as well as the JavaRanch. The java.net discussion runs for a month.

Examples from NFJS Denver

I finally remembered to upload the examples from my presentations at NFJS. This is a ZIP file of the Eclipse project. Expect this to change over time as I add more examples for different situations (I also use this application with clients).

Tuesday, May 25, 2004

JavaRanch Radio - Giveaway of "Tapestry In Action"

Just a note: JavaRanch Radio - Giveaway of "Tapestry In Action".

I'm monitoring the forum and answering questions. Any additional help and postings by the Tapestry "faithful" would be most welcome.

Monday, May 24, 2004

Tapestry at NFJS Denver

Just got back from the Denver No Fluff Just Stuff, where I gave two Tapestry presentations and a HiveMind presentation. Matt Raible attended the basic Tapestry session and was impressed with Tapestry.

I didn't stick to my presentation at all; before the session, I took my finished examples application, gutted the two pages (Login and AddAddress) back down to plain HTML, and either trimmed or removed the page specifications and the Java classes. I then put them back together, live, during the session!

Most people really liked this; they saw how quickly the exception report page gets you to fix problems, and just how little goes into the HTML (and the page specification and the Java class, for that matter). One evaluation claimed that "watching someone code is boring", but that's the exception to the rule ... the clueful people could see how easily Tapestry would fit into their development cycle, which is the whole point.

Also got to demo some great features in Spindle while I was at it. I did have some stumbling points ... mostly the same problem getting my Dell laptop to work with Jay's projectors (I can't synchronize my screen to the projected view, so I have to code while staring up at the screen). Matt also suggests creating templates for the code, rather than laboriously typing in everything, and that's a great idea ... I'll just create templates for each method I'll add to each of the classes.

Had a little fun on the expert panel ... David Geary was pretty cantankerous about JSF vs. Tapestry, and brought out that tired line about "It's a Standard". Results Not Standards folks! More comments on this subject later ...

My other two sessions were underattended ... right now, competing against sessions on Spring and Groovy is a non-starter. In fact, for the Tapestry components session, we just gathered around my laptop, which was a lot of fun.

I'm going to be retooling my presentations and hopefully will have a Tapestry and Hibernate session ready soon. In addition, we will probably combine the two Tapestry sessions together into a "Tapestry Kickstart" double (three hour) session.

Friday, May 21, 2004

JAM -- Another alternative to Maven

I've been learning a lot about Ant 1.6 features, but others may have beaten me to the punch: JAM seems to do all the things I'm planning and, of course, it already exists. Need to check it out a bit more carefully, but if it's less work ... it's less work!

Moving Away from Maven

I've gotten some comments asking why I'm moving away from Maven.

I started to use Maven initially as part of the HiveMind experiment. Fundamentally, I liked two specific features:

  • Automatic downloading of project dependencies
  • Generation of the project documentation and web site

Maven does those two things pretty well, though the documentation part has a large number of bugs that make keeping the documentation up-to-date problematic.

Anyway, measured by my Four Principles, Maven falls short:

  • Simplicity - this isn't even on the horizon in Maven-land. If you deviate even a tiny bit from Maven's one-true-path, you are lost! The complexity of plugins, class-loaders, XML documents that are Jelly programs in disguise, lack of documentation, etc., etc. means that doing something trivial can take hours of guesswork. Some parts of this "release candidate" have obviously not even been given a cursory test. Certainly the internals of most plugins are a maze of Ant, Jelly, properties and such, seemingly without end.

    This could be somewhat addressed by documentation, but there is a pitiful amount and what there is, is out of date. Understanding multi-project (the whole point of Maven you would think) is a total challenge, addressed only by endless experimentation.

  • Consistency - I guess this is hard to gauge; do you write your own plugin? Write ad-hoc Jelly script? Write and use an Ant task?
  • Efficiency - Maven is sluggish, chews memory, and tends to repeat operations needlessly. I could comment on the volume of downloads that occur (for plugins, and for the libraries the plugins depend on), but that actually isn't an issue after you run Maven the first time. However, for HiveMind, I've taken to leaving the room while I perform my dist build ... and it's only two small projects.

    One concrete example: when using Maven, I could only get my unit tests to work by turning fork on. With Ant, I'm able to run the unit tests without forking. That's a huge difference.

  • Feedback - Can you say "NullPointerException"?

What I've accomplished in two days using Ant 1.6 will serve me well for HiveMind and for Tapestry ... and beyond. I think we'll be able to get a significant amount of Maven's functionality in a small, finite, understandable package, and be able to use it on the vast majority of pure Java projects.

For example, here's the build.xml for the framework:

<project name="HiveMind Framework" default="jar">

	<property name="jar.name" value="hivemind"/>
	<property name="javadoc.package" value="org.apache.hivemind.*"/>

	<property name="root.dir" value=".."/>
	<import file="${root.dir}/common/jar-module.xml"/>
	<import file="${common.dir}/javacc.xml"/>								
					
	<target name="compile">
		<ibiblio-dependency jar="commons-logging-1.0.3.jar" group-id="commons-logging"/>
		<ibiblio-dependency jar="javassist-2.6.jar" group-id="jboss"/>
		<ibiblio-dependency jar="werkz-1.0-beta-10.jar" group-id="werkz"/>
		<ibiblio-dependency jar="servletapi-2.3.jar" group-id="servletapi"/>				
		<ibiblio-dependency jar="oro-2.0.6.jar" group-id="oro"/>
		<ibiblio-dependency jar="log4j-1.2.7.jar" group-id="log4j"/>
				
		<ibiblio-dependency jar="easymock-1.1.jar" group-id="easymock" use="test"/>
			
		<run-javacc input="${javacc.src.dir}/SimpleDataLanguage.jj" package-path="org/apache/hivemind/sdl/parser"/>
		
		<default-compile/>
	</target>

</project>

And here are the project.xml, project.properties, and maven.xml.

I hate to bash the Maven project ... but what I see in Maven is a good, simple, core idea that's spiralled down the wrong path by trying to be everything to everyone. That's a lesson I'm taking to heart as I build something that suits my needs better.

Thursday, May 20, 2004

Maven-like downloads for Ant

One of the key features of Maven that I like is that it will download dependencies for you automatically.

I'm just starting to convert HiveMind from Maven back to Ant, and this was very hard to do: I want to download a file only if it doesn't exist locally (or is out of date), compute the MD5 sum while it downloads, and compare that to an MD5 sum stored on the server.

There was an existing project, greedo, that may have done some or all of that ... but it has stalled in that "nearly-done" open source state so many projects reach. No activity in the last nine months. Broken home page. No documentation. I got it to build, but I had to hack their broken Ant build files. Also, I think the <macrodef> features of Ant 1.6 trump a lot of the functionality in greedo, and I wanted more flexibility with respect to where I get files and how they are stored.

Anyway, I took a peek at the existing Get task of Ant and created a Grabber task. That's a start and it will be necessary in order to build HiveMind in the future. For the moment, it is available as http://howardlewisship.com/downloads/AntGrab.zip. This includes source and a JAR, ant-grabber.jar, that must be placed in ANT_HOME/lib.

A request has come in to discuss how it is used. Now, Grabber is super-alpha, but here's a portion of my Ant-based build environment to demonstrate how it is used:

	<available classname="org.apache.ant.grabber.Grabber" property="grabber-task-available"/>
	<fail unless="grabber-task-available" message="Grab task (from ant-grabber.jar) not on Ant classpath."/>
	
	<taskdef classname="org.apache.ant.grabber.Grabber" name="grabber"/>

	<!-- macro for downloading a JAR from maven's repository on ibiblio. -->
	
	<macrodef name="download-from-ibiblio">
		<attribute name="jar" description="The name of the JAR to download."/>
		<attribute name="group-id" description="The Maven group-id containing the JAR."/>
		
		<sequential>
			<mkdir dir="${external.lib.dir}"/>

			<grabber
				dest="${external.lib.dir}/@{jar}"
				src="${maven.ibiblio.url}/@{group-id}/jars/@{jar}" 
				md5="${maven.ibiblio.url}/@{group-id}/jars/@{jar}.md5"
				/>


		</sequential>
	</macrodef>

Later I use the macro as follows:

		<download-from-ibiblio jar="commons-logging-1.0.3.jar" group-id="commons-logging"/>
		<download-from-ibiblio jar="javassist-2.6.jar" group-id="jboss"/>
		<download-from-ibiblio jar="xml-apis-1.0.b2.jar" group-id="xml-apis"/>
		<download-from-ibiblio jar="servletapi-2.3.jar" group-id="servletapi"/>
		<download-from-ibiblio jar="werkz-1.0-beta-10.jar" group-id="werkz"/>
		<download-from-ibiblio jar="oro-2.0.6.jar" group-id="oro"/>
		<download-from-ibiblio jar="easymock-1.1.jar" group-id="easymock"/>
		<download-from-ibiblio jar="log4j-1.2.7.jar" group-id="log4j"/>

Wednesday, May 19, 2004

Comments enabled for the blog

I've enabled comments on my blog ... I'm not exactly sure how Blogger implements comments, so we'll see what happens! I think you need to be registered with Blogger to post (I'd really hate to see my blog filled up with penis-enlargement ads).

Why separate bin and src distributions?

Something that struck me as I was preparing the latest HiveMind release just now. Why do we in the open-source world bother with separating the binary and source distributions?

Take HiveMind. The binary distribution follows standard procedure: it includes all sorts of documentation. Because of the use of Maven, the documentation set is out of control, but even so, what we have is a 281KB (uncompressed) JAR distributed inside 16,526KB (uncompressed) of documentation. Meanwhile, the source code is just another 1,257KB (uncompressed).

The binary distributions are 3.1MB/1.5MB (.zip vs. .tar.gz) and the source distributions are 556KB/229KB. In other words, adding the source to the binary distribution would not be particularly noticeable ... just an additional second or two at broadband speeds.

If I had my say (which, come to think of it, I largely do) I would produce a combined binary/src distribution and have the documentation as the add-on. A combined binary/source distribution would be approximately 50%/100% larger (since the JAR file is already itself compressed). If you assume that most people download the binaries and source together but largely read the documentation on-line (at least until they get serious about a package) ... then a combined bin/src distro is a win.

Certainly when I've used other packages, I've wasted a lot of time unpacking the binary distribution, using the jar, then having to get the source jar and connect it up inside Eclipse so I could actually debug code that uses the library.

This approach would be better for slow connection users as well; they would get what they need to work (the binary and the source) and could cherry pick the documentation they need from a live web site. Certainly, anyone serious about a package would want the full documentation on their own hard drive ... but why pay that cost just to take a peek? Distributing binaries with (full) documentation makes every user pay that download cost ... or keeps some users from bothering to evaluate the package at all.

It's open-source. The point is to buck tradition and think for ourselves.

HiveMind 1.0-alpha-5

I've just tagged the release, and will have downloads available shortly.

Lots of cool stuff between alpha-4 and alpha-5.

  • Simple Data Language
  • Improved HiveDoc
  • Initializable interface is gone, replaced with an initialize-method attribute on the construct element passed to BuilderFactory
  • Some minor renames and refactorings ... more work to separate the "public face" (in org.apache.hivemind) from internals that user code shouldn't care about
  • Ability to define service models via hivemind.ServiceModels configuration point
  • Ability to define translators via hivemind.Translators configuration point
  • hivemind.Startup extension point for executing code when Registry is constructed
  • hivemind.EagerLoad extension point for forcing services to be instantiated early
  • Registry.cleanupThread() as a convenience for invoking ThreadEventNotifier

I believe HiveMind is ready to go forward; I would like to see a short beta period and a ramp up to GA release. During that period I hope to devote some time to converting from Maven to Ant and Forrest. Fixing up various link errors in the Javadoc would be nice as well. Generally, documentation is in excellent shape.

HiveMind will currently do everything I need it to do for Tapestry 3.1. That's *my* standard.

There are still a few debates out there; I've seen people strongly pro-and-con SDL, XML, and scripting ... but nobody has stepped up to the plate to do any work or even provide a really solid proposal. I'm still strongly in the declarative vs. procedural camp (i.e., no scripting) and vastly prefer SDL syntax to XML syntax.

Learning to love EasyMock

I've finally started using EasyMock with the HiveMind testing ... that's a huge amount of power in a tiny, little package!

If you haven't heard of this, the idea is that you can create mock implementations of services easily. First you create your control, and obtain the mock from it.

Next, you "train" your mock object, by invoking methods on it. The mock object and the control work together to remember the order of methods you invoke, and the argument values passed in. You use the control to specify return values.

Finally, you use the control to put the mock into replay mode, and then test against it like a real object.

Here's an example for the HiveMind suite:

    public void testSetModuleRule()
    {
        // Create the control and obtain the mock from it.
        MockControl control = MockControl.createStrictControl(SchemaProcessor.class);
        SchemaProcessor p = (SchemaProcessor) control.getMock();

        Module m = new ModuleImpl();
        Target t = new Target();

        // "Train" the mock: invoke the methods the rule is expected to call,
        // and use the control to specify the return values.
        p.peek();
        control.setReturnValue(t);

        p.getContributingModule();
        control.setReturnValue(m);

        // Switch from training mode to replay mode.
        control.replay();

        SetModuleRule rule = new SetModuleRule();

        rule.setPropertyName("module");

        rule.begin(p, null);

        assertSame(m, t.getModule());

        // Verify that all trained methods were actually invoked.
        control.verify();
    }

Here I'm testing a SetModuleRule, which is dependent on the SchemaProcessor. SchemaProcessors are complex to create, and tied into the whole framework ... a lot of work for the two methods that the SetModuleRule will invoke on it!

This is great stuff, because it lets me easily mock up parts of the framework that are normally pretty inaccessible. Some of my tests use two or three mock/control pairs. This is still a big improvement over my existing approach, which is to feed a HiveMind module descriptor into the framework and test that it does the right thing. That's important, but it's more of an integration test than a unit test ... it can be hard to tell precisely what failed.

Monday, May 17, 2004

HiveMind -- ready for beta?

I've done some moderately involved refactorings of HiveMind lately and, in my opinion, everything is just about in place for HiveMind to go beta. I want to clean up some stuff in the Registry, RegistryInternal, Module triad of interfaces, and that will allow me to add a configuration point for eagerly (instead of lazily) initializing services. But once that's in place, I think it's finally time to move from rapidly adding features to finding gaps and fixing any holes. I don't think there are going to be too many (famous last words), but really, I've been writing tests and keeping documentation 95% up to date right on through the process. I want a stable HiveMind so that I can get more work done on Tapestry 3.1.

HiveMind work

Squeezed around the edges of my work in Germany, I got a bunch of work on HiveMind done. I've been doing a bit of refactoring, moving code around and splitting the Registry interface into two interfaces (Registry and RegistryInternal).

I changed <configuration-point> and <service-point> to not take a <schema> (or <parameters-schema>) element, but instead have a schema-id and parameters-schema-id attribute. I then made <schema> top-level only, and made its id attribute required. I found that a bit ponderous, though, and made changes to allow <schema> inside <configuration-point> (without an id attribute), and likewise for <service-point>/<parameters-schema>. So you can do it "in place" or "top level", but not mix the two.

Some big improvements to HiveDoc as well. The new HiveDoc splits the documentation across more files; separate files for each top-level schema, each service-point and each configuration-point, as well as for each module. Much less cluttered.

I also did an experiment; I copied-and-pasted the hivemind.sdl descriptor 26 times (as a.hivemind.sdl, b.hivemind.sdl, etc.) to see how well the XSLT would cope with a fairly large input. The combined registry.xml (built by reading and combining all the descriptors) was half a megabyte, but the generation of HTML was still under five seconds (to generate about 2.5 MB of HTML).

Since I was on the road, only some of this has been checked in. I'm in the middle of adding some more AOP-lite functionality; the ability to choose, with the LoggingInterceptor, which methods get logging. It'll look something like:

interceptor (service-id=hivemind.LoggingInterceptor)
{
  include (method="get*")
  exclude (method="*(foo.bar.Baz,int)")
  exclude (method="set*(2)")
  include (method="set*")
  exclude (method="*")
}

This will cause all methods with names starting with "get" to be logged, as well as most methods starting with "set". Methods with certain parameters, or a certain number of parameters, will be excluded.

Back from Germany

Just back from a quick visit to Germany and startext, an IT shop that is getting heavily into Tapestry. They brought me out for 3 1/2 days of training and mentoring and it was a blast. We went through my available presentations quickly, but the fun started with live coding ... by them and by me. They learned a lot about Tapestry and I learned a lot about teaching Tapestry ... such as, dive right into the code as fast as possible!

We hit a lot of subjects quickly, getting right into things like creating new components, and generating JavaScript dynamically. They had some interesting requirements, such as having disabled text fields submit anyway (we had to hook the form's onsubmit event handler to re-enable the fields just before the form submitted). Then things got even wackier when we tried to combine that, with ValidFields using client-side validation, and a drop-down list that forced a page refresh (and caused a second drop-down list to update to a different set of values). In fact, some of the stuff I learned can be rolled into Tapestry 3.1.

I felt bad that I didn't have a lot of time to study up on Tree and Table; as it turns out, they really liked seeing me puzzle it out as I went, and they picked up from me some tips about how to do it themselves. All in all, a successful trip, but getting to Bonn and back (via train and jet) was brutal (and took longer than I had power in my iPod).

Now I'm home for a couple of days to try and resynchronize my internal clock, then off to Denver.

By Request: How Line Precise Error Reporting is implemented

Hi Howard,

I read your blog and I'd like to make a request.  I'd really like to read 
more technical details on techniques you might use to get line precise 
error reporting.

This is one of the best features of tapestry and if you can pass on some of 
the details of how to go about it and get people excited about doing it 
themselves that could only be a good thing.  This is typically an area 
where most other open source projects fail badly.  The normal error 
reporting seems to be the null pointer exception.

Regards,


Glen Stampoultzis
gstamp@iinet.net.au
http://www.jroller.com/page/gstamp

I agree with all of this, but it's not just open source projects which fall flat in this area ... and this is a vitally important area: Feedback, one of my four key aspects of a useful framework (along with Simplicity, Efficiency and Consistency). Without good feedback, the developer will be faced with a challenging, time-consuming puzzle every time something goes wrong in the framework code. It's not enough to say "garbage in, garbage out" ... if the framework makes getting the job done harder, or even just makes it seem harder, then it won't get used, regardless of what other benefits it provides.

Line precise error reporting is not magic, but it is a cross-cutting concern (Dion is thinking about how to make it an aspect), so it touches a lot of code.

It starts with the Resource interface, which is an abstraction around files; files stored on the file system, at a URL, or within the classpath. Tapestry extends Resource further, adding the concept of a file within a web application context. The Location interface builds on this, combining a resource with lineNumber and columnNumber properties.
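
Stripped down to the essentials, these interfaces look something like the following (simplified; the real interfaces carry additional methods):

public interface Resource
{
    // Abstraction over a file: on the file system, at a URL, on the classpath,
    // or (in Tapestry) within the web application context. Methods omitted
    // from this sketch.
}

public interface Location
{
    Resource getResource();

    int getLineNumber();

    int getColumnNumber();
}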

The XML, HTML and SDL parsers used by both HiveMind and Tapestry carefully track the location (in most cases, by making use of the SAX Locator to figure out where in a file the parser currently is).
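
The SAX side of this is straightforward: the parser hands your handler a Locator before parsing begins, and you sample it as each element is encountered. A minimal sketch:

import org.xml.sax.Attributes;
import org.xml.sax.Locator;
import org.xml.sax.helpers.DefaultHandler;

public class LocationTrackingHandler extends DefaultHandler
{
    private Locator _locator;

    // SAX invokes this once, up front; the same Locator instance is updated
    // as parsing proceeds.
    public void setDocumentLocator(Locator locator)
    {
        _locator = locator;
    }

    public void startElement(String uri, String localName, String qName, Attributes attributes)
    {
        // At this moment, the locator reflects the position of the current element;
        // this is the point where a Location would be built and attached to the
        // descriptor object under construction.
        int line = _locator.getLineNumber();
        int column = _locator.getColumnNumber();

        System.out.println(qName + " at line " + line + ", column " + column);
    }
}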

All the various descriptor (in HiveMind) and specification (in Tapestry) objects implement the Locatable interface (having a readable location property), or even the LocationHolder interface (having a writable location property), typically by extending from BaseLocatable. As the parsers create these objects, they are tagged with the current location provided by the parser.

Later, runtime objects (services and such in HiveMind, components and such in Tapestry) are created from the descriptor/specification objects. (You can see how my naming has been evolving: descriptor is the better term; I don't remember where "specification" came from, but it's now entrenched in Tapestry terminology.) The runtime objects also implement LocationHolder, and the location of the descriptor object is copied into the location property of the runtime object.

The next piece of the puzzle is that exceptions need to have a location as well! When an exception occurs in a runtime object, the runtime object throws an ApplicationRuntimeException that includes the correct location.

There are a couple of utility methods on the HiveMind class used to help determine what the correct location is:

    /**
     * Selects the first {@link Location} in an array of objects.
     * Skips over nulls.  The objects may be instances of
     * Location or {@link Locatable}.  May return null
     * if no Location can be found. 
     */

    public static Location findLocation(Object[] locations)
    {
        for (int i = 0; i < locations.length; i++)
        {
            Object location = locations[i];

            Location result = getLocation(location);

            if (result != null)
                return result;

        }

        return null;
    }

    /**
     * Extracts a location from an object, checking to see if it
     * implement {@link Location} or {@link Locatable}.
     * 
     * @return the Location, or null if it can't be found
     */
    public static Location getLocation(Object object)
    {
        if (object == null)
            return null;

        if (object instanceof Location)
            return (Location) object;

        if (object instanceof Locatable)
        {
            Locatable locatable = (Locatable) object;

            return locatable.getLocation();
        }

        return null;
    }

The findLocation() method is particularly handy to the ApplicationRuntimeException class, since it may want to draw the location from an explicit constructor parameter, from a nested exception, or from an arbitrary "component" associated with the exception:

    public ApplicationRuntimeException(
        String message,
        Object component,
        Location location,
        Throwable rootCause)
    {
        super(message);

        _rootCause = rootCause;
        _component = component;

        _location = HiveMind.findLocation(new Object[] { location, rootCause, component });
    }

That's pretty much all there is to it ... but it's all for naught if all this location information is not presented to the user. The location will generally "bubble up" to the top level exception, but you still want to be able to see that information. Tapestry's exception report page does a great job of this, as it displays the properties of each exception (including the location property), and then tunnels down the stack of nested exceptions (this is actually encapsulated inside the ExceptionAnalyzer class).

Line precise error reporting isn't the end all of Feedback. Malcolm Edgar, a one-time Tapestry committer, has been working on his own web framework (I believe for internal use at his company) ... it goes one step further, actually displaying the content of his equivalent to an HTML template and highlighting the line that's in error. That's raising the bar, but perhaps Tapestry will catch up to that some day.

Further, simply reporting locations isn't enough. If I pass a null value into a method that doesn't allow null, I want to see a detailed exception (You must supply a non-null value for parameter 'action'.) rather than a NullPointerException. A detailed exception gives me, the developer, a head start on actually fixing the problem. Explicit checking along these lines means that the location that's actually reported will be more accurate as well, especially considering that there's no way to attach a location to a NullPointerException.
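
That kind of check doesn't need framework support; a tiny guard method (hypothetical, not an actual HiveMind API) is enough:

public class Defense
{
    /**
     * Throws a descriptive exception immediately, rather than letting a
     * NullPointerException surface later, far from the actual mistake.
     */
    public static void notNull(Object value, String parameterName)
    {
        if (value == null)
            throw new IllegalArgumentException(
                "You must supply a non-null value for parameter '" + parameterName + "'.");
    }
}

// Usage (hypothetical), at the top of any method with a required parameter:
//
//     public void setAction(Object action)
//     {
//         Defense.notNull(action, "action");
//         ...
//     }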

Monday, May 10, 2004

Thinking about Flex

I was very impressed by the Flex presentation at TheServerSide.

Flex is a rich client toolkit. Flash and ActionScript on the client. Java and XML on the server. RMI or Web Services in between. The client contains all the state, the server is wonderfully stateless.

I first learned a bit about Flex from Matt Horn, who works for Macromedia; they hired a bunch of J2EE developers in one of their acquisitions, and none of them took to Flash. Flex is their take on how to have their cake and eat it too. It's another XML-based user interface scripting language but seems to have a lot more going on inside, especially with respect to data binding and client-server communication.

Here's a good intro to it: Building Rich Internet Applications with Macromedia Flex: A Flash Perspective.

I've been browsing the docs and playing with the samples. It was easy enough to set up inside Eclipse; I just created a new project, created a content folder, and expanded the flex.war into the content folder; this gives a web.xml, and the necessary libraries and such. Next, I used Geoff's Jetty Launcher to serve up the context folder and could start creating the .mxml files. Flex uses servlet filters to convert .mxml into .swf (Flash movies) in much the same way that .jsp files are converted into compiled Java classes.

Running this on my laptop (Dell Inspiron 8200, 512MB RAM, Pentium 4 2GHz) generally worked well; many key presses had a noticeable (though minute) delay ... but at the same time, there was also a good amount of fading, zooming, and sliding images. It's as if user input in Flash is given lower priority. Anyway, it was all still quite impressive and I expect to do more experimentation.

The documentation I've read so far was very good. Rich hypertext and PDF and lots of detail and examples. If I can get my head around the data binding and communication to the server, I could probably put stuff together right now. The default skin is clean and simple. The default components do a better, easier job of creating simple, clean interfaces than Swing or AWT (though the XUL variants there probably help). Again, it isn't just Flash and ActionScript and XML ... the data binding was given significant thought (I'll be able to tell if that thought was worthwhile at some point soon).

My initial reservations:

  • All your client-side logic is written in ActionScript (really, ECMAScript). Unit testing this is going to be at least as much of a challenge as unit testing Tapestry pages.
  • There's some form of debugger for ActionScript, but it doesn't look integrated ... it looks like it might be command-line oriented.
  • Applications are ultimately monolithic; you can break your .mxml files into smaller pieces, and create components as well (a clever use of XML namespaces), but there's just the one application object. How long is the initial creation of the .swf file for complex apps? How much of that .swf must be downloaded before the user sees the initial page? How will teams of developers work together?

So why is the Tapestry guy interested in this stuff? Because HTML is, ultimately, a dead end. Tapestry squeezes the most out of HTML that can be done, but I firmly believe that in three to five years, some form of rich client will supplant HTML for the zero-delivery-cost web applications that are currently created using Tapestry, JSP, ASP or whatnot. I predict that a lot of stuff now done exclusively with HTML will be done using a mix of HTML and Flex (or whatever client-side technology emerges, should Flex fail). This could be good news for Tapestry ... the sites I envision dominating the market will consist of "boutique" HTML (HTML created by non-Java developers), and Tapestry shines at integrating that kind of HTML into a dynamic application. The HTML parts of applications will be much more focused on readable documentation (news sites, some forms of community sites, blogs and the like) where the ability to print and bookmark are important. The kind of applications currently shoe-horned into the HTML world (help desks, all kinds of corporate infrastructure, CRUD applications) will be easier and better using a rich-client alternative.

At TSS, people were dismissive because of licensing costs ($25K per server, give or take). Well, licensing costs come down (think WebObjects, which went from $50,000 to $750). And software licensing is the smallest piece of the development cost compared to developer time and hardware.

More of concern is the single-vendor aspect. This is anathema in the Java world ... and there's the looming possibility of Microsoft buying Macromedia and killing Java support within it. I don't know what the solution to this is ... perhaps Macromedia needs to open-source Flex with a mixed GPL/proprietary license like Sleepycat's ... for-profit users pay Macromedia, non-profit don't. Alternately, seed flexible APIs into the product to ensure that third-parties will be able to provide Java support regardless of ownership of Macromedia and Flex.

In any case, the concept is exciting, regardless of which vendor finally makes it all work. Factoring out the presentation layer from web-deployed applications would be a great thing in differentiating J2EE from .Net. A stateless server-side (mated to a richly stateful client-side) means that simple, efficient solutions (based on HiveMind, Spring, and Hibernate) will have their power multiplied.

A short break between TSS and Germany

So I'm back from TheServerSide Symposium and have just enough time to catch my breath before heading out to a multi-day engagement in Germany.

I was "in character" even before I arrived in Las Vegas, kibitzing with a group of developers across the aisle from me on the flight in.

My sessions, as well as a "TSS tech talk" video shoot, went pretty well. The HiveMind presentation still went a bit roughly; I'm beginning to think that, up against the Goliath of Spring, my little David had better differentiate itself quickly ... the distributed configuration (for both data and services) is the key distinguishing feature, and a solution that mixes HiveMind with Spring is a likely winner.

Much more interest in the Tapestry presentation, which is the advanced one covering component creation. Again, this is really an area where Tapestry differentiates itself most strongly from the other similar frameworks (if such things exist). I did a bit more live presentation, which was a chance to show and discuss Spindle and the great Tapestry exception reporting page (and line-precise error reporting).

Both sessions filled the small room I was in; I counted about 55 attendees in each session. The Tapestry session might have overflowed that room had it not been up against the very contentious "What's in EJB 3.0?" talk.

I thought the sessions I attended were very good. The keynotes were, alas, ignorable (except for the Flex presentation, which rocked). Most presenters were quite good; I think Rod Johnson did a very good job and was quite gracious towards me personally and towards HiveMind, to the point of quoting my HiveMind presentation during his "J2EE without EJB" session. We also talked frequently between sessions; overall, we respect that our frameworks address different needs and that combining them should be made as painless as possible. Meanwhile, I'm jealous that he's in a position to use his framework in production, something that identifies problems and limitations fast. It really underscores how I was languishing at WebCT; I need to get involved in some real work on a real project and be in the technical architecture driver's seat again; it's been too long.

I talked to so many people over the course of a few days and lost track of it all quickly ... I have to start jotting down notes on business cards. I know I promised copies of the book to Jason Carreira and Kito Mann; anybody else had better send me a reminder! Also, a lot of people are really pushing for Tapestry to support the Portal API somehow, someway.

People have been talking themselves blue over everything that went on and I don't have any additional, deep insights to add. Supposed "thought leaders" (like myself) have (publicly and privately) questioned the status quo for quite a while, challenging the usefulness of EJBs and the practicality of separating the layers so profoundly, and identifying the needless complexity as a platform-threatening problem (Rod has examples of major banks throwing away $20 million investments in Java and J2EE in favor of .Net). This philosophy has now gone completely mainstream. One has to question the relevance of EJB 3.0 at this time (and more so when it is ready for release). A phrase I came up with while talking to Rod is "Results Not Standards". Tapestry users are getting great results, even though Tapestry is not a standard (though it is compliant with the useful and reasonable parts of the J2EE standard). Likewise, WebWork, Spring, Hibernate (which is trying to become a standard) and so forth.

Tuesday, May 04, 2004

SDL, Testing, and the Scripting Debate

I'm really loving the effort I invested in SDL; it's very clean, very useful stuff. I'm beginning to productize the Tapestry test framework, and I'm targeting SDL, not XML, as the language for the scripts. Background: for Tapestry 3.0, I developed a "mock unit test" suite, more of an integration suite, where I simulate a servlet container around Tapestry pages and components. There's no HTTP involved, but none of the Tapestry objects know that ... they see the Servlet API and, for Tapestry's needs, it works just like the real thing.

Each script consists of a few definitions, and a series of sequential requests. Each request passes up query parameters, and makes assertions about the result of the request: mostly in terms of asserting text in the output (sometimes in the form of regular expressions).

However, the tests are pretty ugly; because a lot of the assertions are looking for HTML, there are lots of CDATA sections. The execution and parsing code is all twisted together and built on top of JDom. Line-precise error reporting came later, so it can be a challenge to find where, inside a test script, a failure occurred. In addition, all the code is inside the junit folder, not part of the Tapestry framework itself, so it can only be used by Tapestry itself.

I'm currently starting to rebuild this support for use by Tapestry and by end-user applications. I'm building a better parser, using SDL as the script language, and building tests for the testing framework itself as I go (Who watches the Watchmen? Who tests the Testers?). It's a lot of work up front, but it will easily pay for itself when we start adding more complicated tests for some of the 3.1 features ... I also expect it to run much faster.

Meanwhile, the debate about replacing XML and SDL in HiveMind with scripting rages on. I don't see the advantage of the scripting approaches ... they're all more verbose than the equivalent SDL. It's more code to do the same thing that you'd normally do by referencing builder factories. It won't document as HiveDoc, and it raises many issues about multithreading. It adds unwanted dependencies to the HiveMind core framework. No one has made a compelling argument ... certainly not compelling enough for me to spend any time on it when I have other priorities, and so far, nobody else is checking code into the HiveMind CVS. So ... we have a fairly active HiveMind community but I'm still the only developer ... do I like this, or not?

Monday, May 03, 2004

Goodbye, Digester!

Ah, the evolution of XML parsing in Tapestry. Tapestry is very much driven by validated XML files (for page and component specifications, application specifications, and library specifications). In the earliest days, Tapestry was tied directly to Xerces. Later, it switched over to JAXP. I had reams of code that would walk the DOM tree and construct the specification objects from the XML.

As a nod to efficiency, I switched over in 3.0 to using Digester, but that's caused a lot of grief in its own right. It seems like the version Tapestry uses was always in conflict with whatever version was in use by the servlet container, especially Tomcat.

Meanwhile, Digester drags along some of its own dependencies, jakarta-collections and jakarta-beanutils. More JAR hell, keeping all those JARs and versions straight.

No more; I replaced Digester with an ad-hoc parser derived from (and sharing code with) the HiveMind module deployment descriptor parser. It uses a stack to track the objects being constructed (that much is borrowed from Digester), but relies on a simple case statement and some coding discipline to recognize new elements and process them. I haven't done any timings, but comparing this code to the Digester code leads me to think it will have a substantial edge ... which will be even more important once Tapestry supports reloading of page templates and specifications. In addition, inside the monolithic SpecificationParser class, it's a lot clearer what's going on. The old code had to create some number (usually three, sometimes six) of Digester rule objects for each Digester pattern (patterns are matched against elements on Digester's stack to determine which rules fire). The new code is almost entirely just private methods:

  • beginState() methods that decide what state to enter based on the current element
  • enterState() methods that create a new specification object, push it onto the stack, and change to a new parser state
  • endState() methods, invoked when a close tag is found, that finalize the created object, pop it off the stack, and return the parser state to its earlier value
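
To make the shape of this concrete, here's a stripped-down sketch of the stack-plus-state idea. The names and states are purely hypothetical; the real SpecificationParser is considerably more involved:

import java.util.ArrayList;
import java.util.List;

public class SketchParser
{
    private static final int STATE_DOCUMENT = 0;
    private static final int STATE_PAGE_SPECIFICATION = 1;
    private static final int STATE_COMPONENT = 2;

    private int _state = STATE_DOCUMENT;
    private final List _stack = new ArrayList();

    // Invoked for each start tag; a simple switch on the current state decides
    // which elements are legal here and which enterXXX() method to call.
    public void beginElement(String elementName)
    {
        switch (_state)
        {
            case STATE_DOCUMENT:
                if ("page-specification".equals(elementName))
                    enterPageSpecification();
                break;

            case STATE_PAGE_SPECIFICATION:
                if ("component".equals(elementName))
                    enterComponent();
                break;

            default:
                // Unexpected element; a real parser reports a line-precise error here.
                break;
        }
    }

    private void enterPageSpecification()
    {
        _stack.add("a new page specification object would go here");
        _state = STATE_PAGE_SPECIFICATION;
    }

    private void enterComponent()
    {
        _stack.add("a new component specification object would go here");
        _state = STATE_COMPONENT;
    }

    // Invoked for each close tag: finalize and pop the top object. A real parser
    // would also remember the exact prior state rather than guessing from depth.
    public void endElement()
    {
        _stack.remove(_stack.size() - 1);
        _state = _stack.isEmpty() ? STATE_DOCUMENT : STATE_PAGE_SPECIFICATION;
    }
}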

Next up: the JDom-based parser for the Tapestry mock unit test suite. First, I want to use SDL, not XML, for these scripts. Second, I want them to run much, much faster, and I suspect that a lot of time is being spent in JDom. Third, I want thrown assertion exceptions to have line-precise error reporting.

And fourth ... part of Tapestry 3.1 will be to productize this approach to testing Tapestry applications.

I'm the Seven of Clubs

On the just-published Who's Who in Enterprise Java List (by The Middleware Company), I'm in the Clubs ("Pot Pourri") category, as the Seven of Clubs. If they do this again, I'd love to be recognized in the Hearts ("Contribution") category, along with Gavin King, Rod Johnson, Craig McClanahan and many others.

It's interesting just how many names on the list I don't recognize!

Someday, I'll have to find another picture of myself that I like ... this one was taken in 1999 atop Mt. Haleakala, Maui, Hawaii.

Sunday, May 02, 2004

Introduction to Jakarta Tapestry

An interesting link from Object Computing, Inc.: Introduction to Jakarta Tapestry.

The author, Rob Smith, is very positive on Tapestry ... he finds it fun! That should be the fifth goal of Tapestry (after simplicity, efficiency, consistency and feedback).

Thursday, April 29, 2004

Amazon.com reviews of Tapestry In Action

The on-line reviews of Tapestry in Action at Amazon.com are really quite nice; very positive but also fair. As I've stated before, everyone wants something different out of the book ... for example, should there be more of a comparison to other frameworks? I think not ... the book was already too big, and there are books on that exact subject already.

I gave away many copies of the book this weekend at the No Fluff Just Stuff conference ... I hope a few of those people (you listening, Bruce and David?) might have a chance to post a review as well.

Starting on Tapestry 3.1

I've been putting this off ... and I have many other things I could/should be doing. But before it goes any further, I need to move my refactorings of Tapestry out of the branch and into the head. Most of the refactorings concern classes that were moved from Tapestry into HiveMind. It's going to be a mess to put Humpty Dumpty back together again at this point.

Meanwhile, I'm very happy with how the Simple Data Language has turned out. I've been converting documentation and examples. HiveMind will support both formats indefinitely. The debate still rages, though, and it's boiled down to: declarative vs. procedural. I prefer the former, and I'm waiting for folks like Harish to come up with some form of design to support the latter. To me, the scripting/procedural approach looks harder to read. The SDL format is, I think, quite natural.

Finally, on the iTunes/iPod front I continue to be quite happy. The new "party mix" mode seems to work quite well, and I love the way iTunes merges songs together.

Monday, April 26, 2004

Let's boil the ocean: Simple Data Language

I'm having some fun today thinking about how to get the XML out of HiveMind.

I've long defended the use of XML inside Tapestry (and HiveMind), because it was a convenient, standard way to store hierarchical data. But it is hard to type, hard to read, hard to edit and has some challenges to parse. So I'm thinking about other options.

So ... I still want elements and attributes and nesting, though I don't actually care about nested character data. Nested character data is for documents, not hierarchical data files (despite the fact that Sun's DTDs never use attributes, just character data).

What I have so far (a couple of hours hacking around) is something small and simple. An element is a name. It may be followed by a list of attributes inside parens. An element terminates with a semicolon, or uses curly braces to denote nested elements. Attribute values don't have to be quoted if they look like identifiers, or are numeric literals. Looks something like:

module (id=foo.bar.baz version="1.0.0")
{
  service-point (id=Startup interface=java.lang.Runnable)
  {
    create-object (class=foo.bar.baz.StartupImpl);
  }
}

Of course, use of whitespace is totally at your discretion. For comparison, the equivalent XML document:

<?xml version="1.0"?>
<module id="foo.bar.baz" version="1.0.0">
  <service-point id="Startup" interface="java.lang.Runnable">
    <create-object class="foo.bar.baz.StartupImpl"/>
  </service-point>
</module>

This is a chance for me to look at JavaCC, a common tool for writing language recognizers and parsers. Good experience in and of itself.

On the other hand, it may be smarter to follow others and simply implement the similarly themed YAML -- YAML Ain't Markup Language. When I gave a HiveMind presentation a few months back at a local Java user's group, I was asked about YAML then, and had a vague understanding of it. YAML has a few more features, better mimicking most (if not all) of XML.

I think the YAML version would be:

--- #YAML:1.0
module:
  id: foo.bar.baz
  version: 1.0.0
  service-point:
    id: Startup
    interface: java.lang.Runnable
    create-object:
      class: foo.bar.baz.StartupImpl

For either of these two formats, it would be straightforward to create a filter that produces equivalent XML.

What a week!

So, on short notice, I traveled down to North Carolina to do two days of Tapestry training and mentoring for RoleModel Software. This was a really great experience for me ... RoleModel is a very dynamic group. When they like something, they really let you know. RoleModel is headed up by Ken Auer, an early, vocal proponent of XP (having written at least one book on the subject). RoleModel is all virtual: it's Ken and a bunch of sub-contractors ... but these guys are pretty darn sharp. In addition, a couple of the junior members are apprentices ... one is still in high school. I've been dubious about pair programming, but it was interesting to see it in action. Their layout is one computer per pair, but with duplicated monitors, mice and keyboards. Anyway, the whole arrangement really resonated with me when I later listened to Dave Thomas' session at NoFluffJustStuff, because it is wrong to treat people at radically different levels of experience identically. Dave identified five levels of experience. At RoleModel, you could see this in action, as the apprentices would work with more experienced folk a level or two above them on the ladder.

I even tried my hand, helping Ken work out how to use the Tapestry Tree component. He was doing a lot of typing, I was doing research and writing down diagrams to understand how the Tree's data model works. Even with two deeply experienced coders, it was working surprisingly well (but we only had an hour to try this). It's important to keep an open mind.

Back from North Carolina Thursday night, on Friday morning I picked Erik Hatcher up at Logan airport and we hooked up with a bunch of other folks for the Boston NoFluffJustStuff event. We were largely scheduled opposite each other so I didn't see him present at all. I did attend David Geary's sessions on Java Server Faces. Good, clear presentation, ambitious if awkward technology. I did learn that at least some folks on the JSF committee are aware of Tapestry ... and I promised him that I'd be raising the bar ever higher. I also gave him a copy of the book (he's going to send me a copy of Core JSF when it is published).

I also attended a session on Groovy. Groovy is a mix of Java, Python and Ruby. It's a scripting language ... but it compiles directly into Java bytecode. Types are optional. It has closures (like Ruby). It adds methods to existing JDK classes like String and Collection. You can completely mix and match Java code and Groovy code. Richard did a great job of describing the language (and even identifying its faults), though the consensus was that he should have fired up a window and just typed the examples live, rather than use slides. The fact that Groovy is in the Java Community Process is odd ... since it is so compatible with Java, there doesn't seem to be a need. Even so, once it stabilizes (and if it doesn't go into kitchen-sink hell), I can actually see using it for real code inside Tapestry and HiveMind. I did have a few ideas, and Richard encouraged me to join the mailing lists.

Those were the only technical sessions I attended; however, Bruce Tate's session on simpler, faster, better Java was fun. It fits in with my own philosophy (and I'd like to see Tapestry and/or HiveMind as another example, besides Spring, WebWork and Hibernate). This was a continuing (and occasionally divisive) theme all weekend ... J2EE as the enemy, not the solution. My philosophy is obvious on this blog:

  • Use only what you understand / cherry pick what's necessary
  • Use simpler solutions, don't buy into the whole stack
  • J2EE APIs are just starting points; don't build directly on the metal ... add a layer between

The other great, non-technical session I attended was Dave Thomas's session "Herding Racehorses and Racing Sheep". He's a great, entertaining, dynamic speaker but also well researched. The session was any number of things: what's wrong with our industry, how to cope with outsourcing, how to advance in your career properly. Some of it tied back to my earlier experience with Ken Auer, since his team is doing a lot of what Dave was preaching.

I did three sessions, 90 minutes each, all on Sunday. During the Tapestry intro ("Building Web Forms With Tapestry") Erik was there to help field questions. I also gave away a copy of the book, which went over quite well. It was pretty lively and, based on the session evaluations, quite well received. I do want to try more of Erik's approach, whereby I develop parts of the application live and rely less on pre-written slides. First, I have to figure out my problems with Jay's projection monitors (currently, I can get it to work by extending my desktop onto it ... if I'm facing the audience, I can't see what I'm typing). I neglected to count attendance, but the room was pretty full ... maybe thirty people total.

The HiveMind session was more sparsely attended but equally lively. I think everyone in the audience had a specific axe to grind but saw how HiveMind could help them. Again, maybe a little more live, or a more realistic example, would be good.

The final session of the day was the Creating Components session ... fewer people than earlier in the day (fifteen or so ... I have to remember to count them). Complicated stuff, but people were digging the concepts. Again, live would be better ... in fact, people were very impressed with Spindle, which I used in just the last few minutes. I kept finding new features in it, while I was demoing it!

I am concerned that my sessions ran too long ... right up to the ninety minute mark. That means I'll run over at The Server Side Symposium. I think I've been so concerned about running short that I haven't accounted sufficiently for questions from the audience.

Sunday, April 18, 2004

Tapestry 3.0 Final

The final release of Tapestry 3.0 is now available. Real announcements go out tomorrow (giving the Apache mirrors time to synchronize).

Got a lot of very, very tough work planned for 3.1. Time to roll up those sleeves!

Friday, April 16, 2004

Change Proposals on the HiveMind Wiki

HiveMind, for its entire history (nearly a year now), has been an ongoing experiment. It was a chance to start a project, fresh, in the Jakarta Commons (though that's not where it ended up). It was an experiment in using Maven as a build tool. It was an experiment in test driven development (give or take), with the goal to keep code coverage over 90% (it's at about 94% right now). I've also been pushing myself to keep the documentation up-to date with every change.

But the big experiment was to try to build a new community (overlapping Tapestry's community, of course) around it. That's been a bit shaky, largely because HiveMind was up, then down for a long time, then up, then frozen, and it's only in the last week that the home page and all the infrastructure are finally in place, fully operational, and ready to work.

What's exciting now is that we're using the Wiki to Design In Full View. You can see this on the HiveMind Wiki, on the ChangeProposals page.

What do I mean by "Design In Full View"? It means that we are designing new aspects of the framework on the Wiki. Not just me, but others. And it's not "Design By Committee", we are having real discussions about approaches, requirements, relative merits, costs and benefits and so forth. The end result will be a living document in the Wiki that can be referenced, or used as the basis for final documentation.

We'll see where this heads. I'm confident that we have enough smart people on the team now, and more waiting in the wings, that we'll be accomplishing some insanely great things with HiveMind.

Thursday, April 15, 2004

New England Software Symposium

So after missing a couple of speaking engagements, I'm back on track for the New England Software Symposium. Checking the agenda I can see I'm on a couple of expert panels and giving three sessions:

  • A souped-up version of my "Basic Components" presentation, now titled "Creating Powerful Web Forms With Tapestry".
  • Creating Tapestry Components
  • Introduction To HiveMind (since my Tapestry + Hibernate presentation isn't ready yet).

The best part, for me, is that I can drive to this event (it's less than an hour from my house) and sleep in my own bed.

Wednesday, April 14, 2004

Vote to release Tapestry 3.0

Just submitted a vote to release Tapestry 3.0 FINAL. Quite a few people have been demanding a release (yes, that means you, Erik) and I think Tapestry is in excellent shape to move forward. Not quite all my goals for 3.0 have been reached ... there are still some gaps in the documentation, and the test suite isn't quite at the level of code coverage I'd like (it stands at 81.3% overall, and should be in the 85-90% range). Regardless, Tapestry 3.0 is stable and more than ready for production.

The vote will run into Saturday, with an announcement following on Monday. There's a chance that one of the committers will decide that a bug is too important to wait, and will veto the release.

The team in general, and I in particular, have really big plans for release 3.1. Everyone's hot-button item is modularity ... in 3.0, all your page specs go in WEB-INF and all your page templates go in WEB-INF (for some people) or in the root folder (for me). That causes two problems. First, those directories can get crowded when you have dozens or hundreds of pages (especially when you add message files and other assets to the mix). Second, J2EE prefers that you split your application across folders, because it (effectively) applies declarative security at the folder level. Therefore, you want to be able to put all your administrative pages, with access restricted to an admin group, into an admin folder.

This will require a bit of a rethink on how Tapestry generates and interprets URLs, and likely, some significant changes to the IEngineService interface.

The other significant change relates to component parameters; I would like to deprecate directions "in", "form" and "custom" and always use (the equivalent of) direction "auto" ... but add in hints that determine how long values can be cached. That is, once you evaluate an OGNL expression, is the value valid just while the component renders, until the page finishes rendering, or until the end of the request? I have a rough mental outline of how this all will work, and it's going to be a lot of fun with Javassist. Components will have to know when they start and finish rendering, whether parameters are bound, whether they are bound to invariants or to expressions and so forth.
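
To pin down what I mean by those cache lifetimes, here's a quick, purely hypothetical sketch (these names don't exist anywhere in 3.0, and the real design will certainly differ):

// Hypothetical names only, to illustrate the lifetimes discussed above;
// this is not an API, just a typesafe-constant sketch.
public class BindingCacheScope
{
    // Re-evaluate the OGNL expression on every read.
    public static final BindingCacheScope NONE = new BindingCacheScope("NONE");

    // Value stays valid only while the component itself renders.
    public static final BindingCacheScope COMPONENT_RENDER = new BindingCacheScope("COMPONENT_RENDER");

    // Value stays valid until the page finishes rendering.
    public static final BindingCacheScope PAGE_RENDER = new BindingCacheScope("PAGE_RENDER");

    // Value stays valid for the remainder of the request.
    public static final BindingCacheScope REQUEST = new BindingCacheScope("REQUEST");

    private final String _name;

    private BindingCacheScope(String name)
    {
        _name = name;
    }

    public String toString()
    {
        return _name;
    }
}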

That, combined with Drew's upcoming improvements to OGNL should be very exciting. Mind Bridge has profiled Tapestry applications and they can spend around 50% of their time evaluating OGNL expressions. The majority of those OGNL expressions are simple property references. They're working on a plan to make use of Javassist to replace reflection with direct method invocations where appropriate, and Drew's seen about a 10x improvement.
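
For the curious, the flavor of that Javassist trick is roughly the following. To be clear, this is my own illustrative sketch with made-up names, not Drew's or Mind Bridge's actual code:

// Hypothetical sketch: generate a tiny class whose read() method invokes a
// (public) getter directly, instead of going through java.lang.reflect.
import javassist.ClassPool;
import javassist.CtClass;
import javassist.CtNewMethod;

public class PropertyReaderFactory
{
    // Generated classes implement this interface.
    public interface PropertyReader
    {
        Object read(Object target);
    }

    public PropertyReader createReader(Class targetClass, String propertyName) throws Exception
    {
        ClassPool pool = ClassPool.getDefault();

        // New class (with a default constructor) in the same package as this sketch.
        CtClass impl = pool.makeClass("GeneratedReader_" + propertyName);
        impl.addInterface(pool.get(PropertyReader.class.getName()));

        String getter = "get" + Character.toUpperCase(propertyName.charAt(0)) + propertyName.substring(1);

        // For example: "return ((com.example.User) target).getName();"
        // Assumes a public target class and an object-typed (non-primitive) property.
        String body = "public Object read(Object target) { return ((" + targetClass.getName()
                + ") target)." + getter + "(); }";

        impl.addMethod(CtNewMethod.make(body, impl));

        return (PropertyReader) impl.toClass().newInstance();
    }
}

The win is that, after the one-time cost of generating and loading the little class, reading the property is an ordinary method call rather than a trip through the reflection API.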

This leads me to the following prediction: in Tapestry 3.1, Tapestry applications will be significantly faster than servlet, JSP, Struts or JSF applications with equivalent functionality. Of course, it's pretty much impossible to find applications with "equivalent functionality", but nonetheless. By improving the efficiency of OGNL, and making use of the optimizations possible because of the Tapestry component object model, I believe the overall processing (especially with respect to Java reflection) will decrease noticeably.

Now if I only had my own personal performance lab ...

Tuesday, April 13, 2004

Two new toys

This is way off topic, but I couldn't resist.

Toy #1

Bought myself a 15 gig iPod. Basically, this is me throwing in the towel on even trying to do work on my laptop while I travel (and I have a lot of travel coming up in May). Since I find the tedium of air travel to be agonizing, this is my hi-tech pacifier. And I'm already in love with it. I've already moved about 1.5 gig of MP3s onto it (all legit, by the way) and it works like a champ. Eventually I'll compare MP3 against its native AAC format.

It came in a perfect black cube (something of a Steve Jobs trademark) and every aspect of it so far has been done right ... except I can't get it to charge off my laptop's firewire port. Another plus: the visualizer for iTunes (the desktop program) is a licensed version of G-Force. I guess it's time to uninstall MusicMatch.

Amazingly, I ordered this from Apple at the set price ($299) late in the afternoon on Sunday, and it arrived here this morning (Tuesday!), custom engraved. And it was shipped FedEx from Shanghai! Oh, if you find an iPod at some airport engraved with "Howard Lewis Ship howardlewisship.com" ... that's mine. Give me a call so I can get it back!

Apple knows how to do these things right. Easy to use, and I keep finding new features. Sound quality with my Bose noise-canceling headphones is terrific. I'm ready to travel anywhere.

Toy #2

This one's virtual ... it's the Google AdSense banner over on the right side of this page. I rather doubt I'll drive enough traffic to earn more than some spare change but to me it's fascinating how it scans the content of the page (presumably from Google's cache) and produces pretty on-target ads. I'm slightly annoyed by the Struts and JSF training ads I've seen on a few of the archive pages, but so be it.

Thursday, April 08, 2004

TheServerSide Java Symposium Schedule

I was just looking at the TSS Symposium schedule. I'm up twice on Thursday, May 6th: HiveMind from 11:30 to 12:45, and Creating Tapestry Components from 3:30 to 5:00.

That leaves the question of which other sessions to attend! It will be a chance to learn more about AOP, and I want to see Gavin King's presentation on Mobile Data. It will also be a chance to learn a bit more about Portlets OR JBoss Cache OR J2EE without EJB (all scheduled against each other, alas).

The dinner keynote by JBoss, on "Professional Open Source" will be interesting. Will they be able to come up with a useful definition of this term?

I'm also interested in the Macromedia presentation on rich clients. At some point, (two years? five?) HTML applications will be marginalized, and some form of rich client within the web browser will take over. You won't do so much HTML, but you'll still need HiveMind to implement the server side of your application.

Wednesday, April 07, 2004

Many pots coming to a boil

So HiveMind, after months and months of being left out in the cold, is finally turning into a real project again. It's a top-level Jakarta project, all of its infrastructure is in place, and the home page is now working (without all the broken links). I've also moved the model attribute out of the <service-point> descriptor element and into the <invoke-factory> and <create-instance> elements (which makes a lot more sense).

Tomorrow will be the next Tapestry release, 3.0-rc-3 (that's "release candidate #3"). I'm really expecting this to be the final release candidate ... but you can never tell.

Time to shift my attentions (and growing energies) back to important things: preparing for The Server Side Symposium and NoFluffJustStuff (big revisions necessary for my presentations --- to reduce the amount of fluff!) and otherwise preparing for upcoming public and private sessions about Tapestry.

Tuesday, April 06, 2004

HiveMind up and humming!

We now have an empty Wiki, the full set of mailing lists, the new home page, and the Jira project. Time to get back to work on HiveMind.

Still need all the developers voted in to the project to get their CLAs to Apache before CVS access rights can be extended. Knut? Prashant? You guys listening?

Monday, April 05, 2004

HiveMind Home Page

The HiveMind home page is now up, in its proper location: http://jakarta.apache.org/hivemind/.

This is about half the infrastructure for HiveMind. The Jakarta Jira project has been created, and the CVS repository is in place. All that's left is the wiki and the mailing lists.

Friday, April 02, 2004

TheServerSide.com - Review of Tapestry In Action

The Server Side has just posted a short Review of Tapestry In Action. It is interesting (and frustrating) that every reader wants something completely different from the book. This review is reasonably accurate (though I wish Kris, the author, had used a spell checker). He faults the book on the one hand for being verbose ... then praises it on the other for being detailed. His summation is to read the first two chapters to decide if you like Tapestry, then keep the rest handy for when you implement an application. Not bad advice.

Thursday, April 01, 2004

Berkeley DB Java Edition

My pal Greg works for Sleepycat Software, which just released Berkeley DB Java Edition. Berkeley DB is some cool technology; quite a few popular sites use the C version of Berkeley DB in their presentation layer (in fact, Berkeley DB is used in a huge number of places, including Subversion), as a cache against raw data stored in Oracle or a backend system. It's anything but a relational database ... it's a high-performance, transactional, embedded database. It's a way of storing large amounts of binary data, which just might happen to be serialized Java objects.

The Java Edition adds a Java-friendly API ... Berkeley DB databases end up looking like Maps and other Java collections. Or you can get more down and dirty and control exactly how objects are serialized into, and de-serialized out of, Berkeley DB entries.
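
For my own reference, the low-level put/get cycle looks roughly like this (cribbed from the JE documentation; I haven't written real code against it yet, so treat the details as approximate):

import java.io.File;

import com.sleepycat.je.Database;
import com.sleepycat.je.DatabaseConfig;
import com.sleepycat.je.DatabaseEntry;
import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentConfig;
import com.sleepycat.je.LockMode;
import com.sleepycat.je.OperationStatus;

public class BerkeleyScratch
{
    public static void main(String[] args) throws Exception
    {
        // The environment home directory must already exist.
        EnvironmentConfig envConfig = new EnvironmentConfig();
        envConfig.setAllowCreate(true);
        Environment env = new Environment(new File("je-scratch"), envConfig);

        DatabaseConfig dbConfig = new DatabaseConfig();
        dbConfig.setAllowCreate(true);
        Database db = env.openDatabase(null, "scratch", dbConfig);

        // Keys and values are just byte arrays; serialized Java objects fit right in.
        DatabaseEntry key = new DatabaseEntry("greeting".getBytes("UTF-8"));
        DatabaseEntry value = new DatabaseEntry("Hello from Berkeley DB JE".getBytes("UTF-8"));

        db.put(null, key, value);

        DatabaseEntry found = new DatabaseEntry();
        if (db.get(null, key, found, LockMode.DEFAULT) == OperationStatus.SUCCESS)
            System.out.println(new String(found.getData(), "UTF-8"));

        db.close();
        env.close();
    }
}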

There's a lot of talk about J2EE without the complexity of J2EE, thus things like Spring and HiveMind ... the sacred cows of EJBs (both session and entity) fall away when you find a different, better way to organize and componentize your code. Most of us still assume that the only way to store our data is in a relational database, and then get involved in an Entity EJB vs. Hibernate vs. JDO vs. TopLink battle ... when, in many cases, all we want is a simple way to store and retrieve Java objects. The relational database becomes an assumption, somehow evading questions of cost (of licensing, of administration, of maintenance, of hardware), use cases, requirements ... it's another sacred cow waiting to be slain. If you are creating a typical application, where the application itself is the only user of the database, then an embedded database may be the better, faster, more efficient, cheaper, zero-administration approach.

"Tapestry In Action" in bookstores

Just called over to the local Barnes & Noble and they claim they have Tapestry in Action in stock; time for a little field trip (and I'm bringing a camera). Times like this really make me feel the need for a digital camera.

I also understand that a review of the book will be appearing soon on TheServerSide.

Meanwhile, Tapestry 3.0-rc-2 is ready (a formal announcement will wait for 24 hours, for the many Apache distribution mirrors to pick up the new distro files). In addition, I'm waiting for the HiveMind infrastructure to be set up: for some reason, this is taking a while.

I've continued to play around with Hibernate; I really want to get some good patterns and best practices together, both for presentations at NoFluffJustStuff, and as part of my on-site presentations. I don't know that I'll have the Hibernate stuff together in time for the New England Software Symposium, however. I'm having trouble getting started and staying focused (not surprising for someone who had his kidney removed just 16 days ago).

Update: Barnes and Noble had a copy in stock (but misfiled with the HTML books, so I moved it to the Java section myself). Borders didn't have a copy yet, they were expecting it in the next day or so. Still, it was a fun rush to see the book in a book store! In addition, the HiveMind infrastructure tasks are finally in progress. Should have a new and better HiveMind in place today or tomorrow!

Sunday, March 21, 2004

Back and recovering

I'm back from the hospital and starting to recover. It'll be a couple of weeks before I'm anywhere near full strength. The operation executed perfectly; no complications, nor problems expected down the road.

Monday, March 15, 2004

Back to full RSS feed

Now that I'm hosting this blog at my domain, I've switched back to the full RSS feed. JavaBlogs.com no longer seems to have an issue ... something about the mix of JavaBlogs.com and BlogSpot was toxic. I suspect that somewhere between them was a mis-configured web server or firewall. Hosting myself will also give me better access to statistics on page viewing.

Moving off of javatapestry.blogspot.com

I've got my own domain, why don't I use it?

This blog is now housed at http://howardlewisship.com/blog/.

Sunday, March 14, 2004

Back to short site feed

JavaBlogs.com once more can't process this blog's RSS site feed. I'm pretty sure it's a problem on their side, and the fingers are a-pointing. Using the short site feed seems to fix things.

This, of course, causes another problem: the formatting of the site is geared towards a fairly large display using Internet Explorer. Smaller screens and other browsers (such as Safari) are having problems. Given everything else going on right now, I won't have a chance to address that for a bit.

Saturday, March 13, 2004

Tapestry In Action: on my kitchen table

It's strange how things can coincide. Right on top of the HiveMind announcement, I now have on my kitchen table twenty-four copies of Tapestry in Action. That's copies for me, for parents, for the Quincy library, for friends ... and a bunch held back for other purposes. Suzanne and I are just giddy!

My big concern was the readability of some of the larger screen captures from later in the book ... turns out, they look terrific. The book reads great, and even though a couple of juicy chapters are available online, to get the best stuff, you're going to have to pick up a copy! Should be in book stores and available online in about two weeks.

In other news ... I have a sudden, serious medical condition. I don't want to go into details or be melodramatic; it's entirely treatable and I have an excellent prognosis: we won't be testing how well Tapestry can survive without Howard at this time. It does mean I'll be largely off-line for at least a couple of weeks, starting next Wednesday. It also means I'll be missing my speaking engagements in New York and Wisconsin.

My other appearances, including Boston, Virginia, Colorado and Las Vegas, should not be affected.

Friday, March 12, 2004

HiveMind is in

From: Geir Magnusson Jr 
Subject: [VOTE RESULT] HiveMind as Jakarta sub-project
Date: Fri, 12 Mar 2004 15:03:44 -0500
To: Jakarta General List 

The vote has been running a week now (actually longer), the count has 
been unanimously supportive (there were two +0 votes, all the rest were 
+1), so HiveMind is now a Jakarta sub-project.

Congratulations to Howard and the rest of the HiveMind community.

-- 
Geir Magnusson Jr                                   203-247-1713(m)
geir@4quarters.com 

Pretty much says it all. We have that, we have the book (expected on my doorstep shortly), life is pretty good!

Wednesday, March 10, 2004

Sample Chapters: Tapestry in Action

Manning has published two sample chapters for Tapestry in Action.

Chapter 02 introduces the basics of Tapestry, and shows how to use DirectLink, Conditional, Foreach and Image components.

Chapter 05 skips ahead a bit and discusses Tapestry's input validation subsystem, which is very powerful. Alas, the screenshots were done a couple of months ago, and show the older, uglier version of the DatePicker component.

I think it was Geoff who suggested chapter 05, something along the lines of "let's blow the tops of their heads off!"

Tuesday, March 09, 2004

Rolling toward Tapestry 3.0

I've been spending the day working on Tapestry 3.0 ... fixing as many bugs as possible, updating documentation, and so forth. I've also been categorizing which fixes and enhancements will go into 3.1. A bit more of this, and I'll be ready to call a vote to release 3.0-rc-1. I'd really like to have 3.0 in at least RC (release candidate) stage before the book shows up in stores.

Along the way, I made a very useful enhancement to the DatePicker component. It now uses a clickable button to show and hide the pop-up calendar (and the button can be customized to your app's look and feel). This is a big improvement over the old look, which used a button labeled with a "V". That was a bit tacky; the new look is pretty much ideal.

Friday, March 05, 2004

Once more, the importance of error reporting

A major feature of Tapestry that has been ported into HiveMind is line-precise error reporting. When errors occur, Tapestry and HiveMind will isolate the problem down to a particular line of a particular file ... not just when parsing an HTML template or a specification file (or a HiveMind module descriptor), but later, at runtime. Line-precise error reporting is a major part of an overall attitude, a discipline, about reporting errors. It takes great effort to consistently catch and report errors, and to maintain that location information. It's very important, however, because without it, your users' frustration levels will go through the roof.
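
The mechanics are simple; the discipline is the hard part. As a sketch of the idea only (these are not HiveMind's actual classes), the trick is to carry a location object everywhere, and to include it in every exception:

// Sketch only: parsed objects and exceptions carry their source location
// around, so a failure deep at runtime can still point back to a file and line.
public class SketchLocation
{
    private final String _resource;
    private final int _line;
    private final int _column;

    public SketchLocation(String resource, int line, int column)
    {
        _resource = resource;
        _line = line;
        _column = column;
    }

    public String toString()
    {
        return _resource + ", line " + _line + ", column " + _column;
    }
}

class LocatedException extends RuntimeException
{
    private final SketchLocation _location;

    public LocatedException(String message, SketchLocation location)
    {
        // The location becomes part of the message the user actually sees.
        super(message + " [at " + location + "]");
        _location = location;
    }

    public SketchLocation getLocation()
    {
        return _location;
    }
}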

I know I'm going to alienate a couple of people here so I apologize in advance. I may be a whiz in my own world, but at the moment, I'm a trembling newbie in the Hibernate world. So I'm putting together a couple of simple mapping files, and making use of the Ant schemaexport task. Now, I've done something wrong, and at this time, I don't yet know what. But see what I have to go on:

bash-2.05b$ ant export
Buildfile: build.xml

compile:
     [copy] Copying 1 file to C:\workspace\ProjectRegistry\target\classes

export:

BUILD FAILED
file:c:/workspace/ProjectRegistry/build.xml:44: Schema text failed: net.sf.hibernate.MappingException: invalid mapping

Total time: 1 second

See what I mean? Not enough effort has been expended on a) describing the actual problem (which mapping? what's actually wrong with it? how do I fix it?) and b) telling me what file is broken (never mind the line!).

This is one important aspect of HiveMind: when using HiveMind's XML support, you get much checking and much reporting for free. A hypothetical HiveMind message for this might be:

Unable to process attribute cascade (of element class/set, at file://..../Project.xbm.xml, line 27, column 35): 'deleted' is not a recognized enumerated value.

Again, I don't want this to be considered a dig at Hibernate. The core competency of Hibernate and its crew is to marshall data between Java objects and databases, something they have years and years of experience at. The core competency of HiveMind is to organize services and parse XML into objects and report errors while doing it. It might be nice for Hibernate (as well as virtually every other framework, open-source or closed) to prioritize a little bit of effort into these issues. I also look forward to a day when more frameworks leverage HiveMind for these purposes, offloading this XML parsing and error reporting burden onto it.

Thursday, March 04, 2004

HiveMind downloads are available again

I've restored the pre-built distributions of HiveMind available at the temporary download location. However, those only go as far as alpha-3, and the current code (provisionally alpha-4, but likely to become beta-1) is only available via CVS and has many cool features not available in the earlier versions. Get Maven and build it yourself.

Back online at JavaBlogs

After a bit of experimentation, I was finally able to get this blog back up on Java Blogs. I changed my RSS preferences to "short" (just the first paragraph of each post, no markup) and that makes Java Blogs happy. Previously, the RSS included the full text of each posting, with markup.

Upcoming New York Trip

I'll be giving a talk about HiveMind at a one-day seminar in New York City on April 3rd: Developing Web Apps using Open Source Tools. In fact, I'll be presenting on HiveMind, not Tapestry, to sort of celebrate the fact that HiveMind is now back on track. It'll be a chance to mix it up with some folks like Rod Johnson, Ted Husted, and the like.

Starting in with Hibernate

At long last, I'm getting a chance to really dig into Hibernate. I took a cursory look at it about a year ago, but realized I didn't have enough time to design and build a new application (for the end of the Tapestry book) and stay on schedule. Now that I've promised Jay Zimmerman a session on using Tapestry and Hibernate, I need to actually learn the technology.

So far, I'm liking what I'm seeing. I'm also, in the back of my mind, thinking in terms of Hibernate/HiveMind integration (Hivernate?). I could peek at what the Spring folks have done, but that would be cheating. Likewise, I'm not looking at what others have done in terms of Tapestry / Hibernate integration either, at least until I take my first pass at it.

It's clear that Gavin and team have hit the same issues and requirements I've encountered over the last 10 (!) years. It's kind of fun ... I'm seeing the echoes of their design decisions over the last few years, the rough edges they've smoothed out by extending their DTDs and APIs. Someone on the outside of Tapestry looking in would see the same thing.

Monday, March 01, 2004

HiveMind Proposal Redux

Just posted to the Jakarta General mailing list, the new HiveMind proposal.