Tapestry Training -- From The Source

Let me help you get your team up to speed in Tapestry ... fast. Visit howardlewisship.com for details on training, mentoring and support!
Showing posts with label testing.

Thursday, February 23, 2012

Tests: Auto Launch Application, or Expect Running?

Does your test suite launch your application or expect it to be running already? This question came up while working on a client project; I launched the Selenium-based test suite and everything failed ... no requests got processed, and I spent some time tracking down why the test suite was failing to launch the application.

I chatted about this with my client ... and found out that the Selenium test suite expects the application to already be running. It starts up Selenium Server, sure, but the application-under-test should already be running.

And suddenly, I realized I had a Big Blind Spot, dating back to when I first started using Selenium. For years, my testing pattern was that the test suite would start up the application, run the Selenium tests, then shut it down ... and this makes perfect sense for testing the Tapestry framework, where the test suite starts and stops a number of different mini-applications (and has to run headless on a continuous integration server).

But an application is different, and there are a lot of advantages to testing against the running application ... including being able to quickly and easily reproduce any failures in the running application. Also, without the overhead of starting and stopping the application, the test suite runs a bit faster.

Certainly, the command line build needs to be able to start and stop the application for headless and hands-off testing. However, the test suite as run from inside your IDE ... well, maybe not.
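One compromise I've been mulling (just a hypothetical sketch, not code from either project): have the suite probe the application's base URL and launch it only when nothing answers. That way the command-line build stays self-sufficient, while a run from inside the IDE quietly reuses whatever is already up:

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.Callable;

// Hypothetical sketch: launch the application only when it isn't already
// answering requests. The launcher callback (say, something that starts
// embedded Jetty) returns a Runnable that knows how to shut it down again.
public class ApplicationGuard {

  private Runnable shutdownHook; // non-null only if we launched the app

  public void ensureRunning(String baseURL, Callable<Runnable> launcher)
      throws Exception {
    if (!isResponding(baseURL))
      shutdownHook = launcher.call();
  }

  public void shutdownIfLaunched() {
    if (shutdownHook != null) {
      shutdownHook.run();
      shutdownHook = null;
    }
  }

  private boolean isResponding(String baseURL) {
    try {
      HttpURLConnection connection = (HttpURLConnection) new URL(baseURL)
          .openConnection();
      connection.setConnectTimeout(1000);
      connection.connect();
      connection.disconnect();
      return true;
    } catch (IOException ex) {
      return false;
    }
  }
}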

So ... how do you handle this on your projects?

Thursday, August 19, 2010

Groovin' on the Testin'

I'm at the point now where I'm writing Groovy code for (virtually) all my unit and integration tests. Tapestry's testing code is pretty densely written ... courtesy of all those explicit types and all the boilerplate EasyMock code.

With Groovy, that code condenses down nicely, and the end result is more readable. For example, here's an integration test:

    @Test
    void basic_links() {
        clickThru "ActivationRequestParameter Annotation Demo"
        
        assertText "click-count", ""
        assertText "click-count-set", "false"
        assertText "message", ""
        
        clickAndWait "link=increment count"
        
        assertText "click-count", "1"
        assertText "click-count-set", "true"
        
        clickAndWait "link=set message"
        
        assertText "click-count", "1"
        assertText "click-count-set", "true"
        assertText "message", "Link clicked!"        
    }

That's pretty code; the various assert methods are simple enough that we can strip away the unnecessary parentheses.

What really hits home, though, is making use of closures. A lot of the unit and integration tests have a big setup phase where, often, several mock objects are being created and trained, followed by some method invocations on the subject, followed by some assertions.

With Groovy, I can easily encapsulate that as template methods, with a closure that gets executed to supply the meat of the test:

class JavaScriptSupportAutofocusTests extends InternalBaseTestCase
{
    private autofocus_template(expectedFieldId, cls) {
        def linker = mockDocumentLinker()
        def stackSource = newMock(JavaScriptStackSource.class)
        def stackPathConstructor = newMock(JavaScriptStackPathConstructor.class)
        def coreStack = newMock(JavaScriptStack.class)
        
        // Adding the autofocus will drag in the core stack
        
        expect(stackSource.getStack("core")).andReturn coreStack
        
        expect(stackPathConstructor.constructPathsForJavaScriptStack("core")).andReturn([])
        
        expect(coreStack.getStacks()).andReturn([])
        expect(coreStack.getStylesheets()).andReturn([])
        expect(coreStack.getInitialization()).andReturn(null)
        
        JSONObject expected = new JSONObject("{\"activate\":[\"$expectedFieldId\"]}")
        
        linker.setInitialization(InitializationPriority.NORMAL, expected)
        
        replay()
        
        def jss = new JavaScriptSupportImpl(linker, stackSource, stackPathConstructor)
        
        cls jss
        
        jss.commit()
        
        verify()
    }
    
    @Test
    void simple_autofocus() {
        
        autofocus_template "fred", { 
            it.autofocus FieldFocusPriority.OPTIONAL, "fred"
        }
    }
    
    @Test
    void first_focus_field_at_priority_wins() {
        autofocus_template "fred", {
            it.autofocus FieldFocusPriority.OPTIONAL, "fred"
            it.autofocus FieldFocusPriority.OPTIONAL, "barney"
        }
    }
    
    @Test
    void higher_priority_wins_focus() {
        autofocus_template "barney", {
            it.autofocus FieldFocusPriority.OPTIONAL, "fred"
            it.autofocus FieldFocusPriority.REQUIRED, "barney"
        }
    }
}

That's where it starts getting neat; with closures as a universal adapter interface, it's really easy to write readable test code, where you can see what's actually being tested.

I've been following some of the JDK 7 closure work and it may make me more interested in coding Java again. Having a syntax nearly as concise as Groovy (but still typesafe) is intriguing. Further, they have an eye towards efficiency as well ... in many cases, the closure is turned into a synthetic method of the containing class rather than an entire standalone class (the way inner classes are handled). This is good news for JDK 7 ... and I can't wait to see it tame the class explosion in languages like Clojure and Scala.

Thursday, December 03, 2009

TestNG and Selenium

I love working on client projects, because those help me really understand how Tapestry gets used, and the problems people are running into. On-site training is another good way to see where the theory meets (or misses) the reality.

In any case, I'm working for a couple of clients right now for whom testing is, rightfully, quite important. My normal approach is to write unit tests to test specific error cases (or other unusual cases), and then write integration tests to run through main use cases. I consider this a balanced approach, that recognizes that a lot of what Tapestry does is integration.

One of the reasons I like TestNG is that it seamlessly spans from unit tests to integration tests. All of Tapestry's internal tests (about 1500 individual tests) are written using TestNG, and Tapestry includes a base test case class for working with Selenium: AbstractIntegrationTestSuite. This class does some useful things:

  • Launches your application using Jetty
  • Launches a SeleniumServer (which drives a web browser that can exercise your application)
  • Creates an instance of the Selenium client
  • Implements all the methods of Selenium, redirecting each to the Selenium instance
  • Adds additional error reporting around any Selenium client calls that fail

These are all useful things, but the class has gotten a little long in the tooth ... it has a couple of critical shortcomings:

  • It runs your application using Jetty 5 (bundled with SeleniumServer)
  • It starts and stops the stack (Selenium, SeleniumServer, Jetty) around each class

For my current client, a couple of resources require JNDI, and so I'm using Jetty 7 to run the application (at least in development, and possibly in deployment as well). Fortunately, Jetty 5 uses the old org.mortbay.jetty packages, and Jetty 7 uses the new org.eclipse.jetty packages, so both versions of the server can co-exist within the same application.

The larger problem is that I didn't want a single titanic test case for my entire application; I wanted to break it up in other ways, by Tapestry page initially.

I could create additional subclasses of AbstractIntegrationTestSuite, but then the tests would spend a huge amount of time starting and stopping Firefox and friends. I really want that stuff to start just once.

What I've done is a bit of refactoring, by leveraging some features of TestNG that I hadn't previously used.

The part of AbstractIntegrationTestSuite responsible for starting and stopping the stack is broken out into its own class. This new class, SeleniumLauncher, is responsible for starting and stopping the stack around an entire TestNG test. In TestNG terminology, a suite contains multiple tests, and a test contains test cases (found in individual classes, within scanned packages). Each test case contains test and configuration methods.

Here's what I've come up with:

package com.myclient.itest;

import org.apache.tapestry5.test.ErrorReportingCommandProcessor;
import org.eclipse.jetty.server.Server;
import org.openqa.selenium.server.RemoteControlConfiguration;
import org.openqa.selenium.server.SeleniumServer;
import org.testng.ITestContext;
import org.testng.annotations.AfterTest;
import org.testng.annotations.BeforeTest;

import com.myclient.RunJetty;
import com.thoughtworks.selenium.CommandProcessor;
import com.thoughtworks.selenium.DefaultSelenium;
import com.thoughtworks.selenium.HttpCommandProcessor;
import com.thoughtworks.selenium.Selenium;

public class SeleniumLauncher {

  public static final String SELENIUM_KEY = "myclient.selenium";

  public static final String BASE_URL_KEY = "myclient.base-url";

  public static final int JETTY_PORT = 9999;

  public static final String BROWSER_COMMAND = "*firefox";

  private Selenium selenium;

  private Server jettyServer;

  private SeleniumServer seleniumServer;

  /** Starts the SeleniumServer, the application, and the Selenium instance. */
  @BeforeTest(alwaysRun = true)
  public void setup(ITestContext context) throws Exception {

    jettyServer = RunJetty.start(JETTY_PORT);

    seleniumServer = new SeleniumServer();

    seleniumServer.start();

    String baseURL = String.format("http://localhost:%d/", JETTY_PORT);

    CommandProcessor cp = new HttpCommandProcessor("localhost",
        RemoteControlConfiguration.DEFAULT_PORT, BROWSER_COMMAND,
        baseURL);

    selenium = new DefaultSelenium(new ErrorReportingCommandProcessor(cp));

    selenium.start();

    context.setAttribute(SELENIUM_KEY, selenium);
    context.setAttribute(BASE_URL_KEY, baseURL);
  }

  /** Shuts everything down. */
  @AfterTest(alwaysRun = true)
  public void cleanup() throws Exception {
    if (selenium != null) {
      selenium.stop();
      selenium = null;
    }

    if (seleniumServer != null) {
      seleniumServer.stop();
      seleniumServer = null;
    }

    if (jettyServer != null) {
      jettyServer.stop();
      jettyServer = null;
    }
  }
}

Notice that we're using the @BeforeTest and @AfterTest annotations; that means any number of test cases can execute using the same stack. The stack is only started once.
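For this to work, SeleniumLauncher and the test case classes need to be grouped into the same TestNG test. A hypothetical testng.xml for that (all names illustrative) might look like:

<suite name="myclient">
  <test name="integration">
    <packages>
      <package name="com.myclient.itest" />
    </packages>
  </test>
</suite>

Since SeleniumLauncher lives inside the scanned package, TestNG picks it up along with the test case classes, and its @BeforeTest method runs once before any of them.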

Also, notice how we're using the ITestContext to communicate information to the tests in the form of attributes. TestNG has a built-in form of dependency injection; any method that needs the ITestContext can get it just by declaring a parameter of that type.

AbstractIntegrationTestSuite2 is the new base class for writing integration tests:

package com.myclient.itest;

import java.lang.reflect.Method;

import org.testng.Assert;
import org.testng.ITestContext;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.BeforeMethod;

import com.thoughtworks.selenium.Selenium;

public abstract class AbstractIntegrationTestSuite2 extends Assert implements
    Selenium {

  public static final String BROWSERBOT = "selenium.browserbot.getCurrentWindow()";

  public static final String SUBMIT = "//input[@type='submit']";

  /**
   * 15 seconds
   */
  public static final String PAGE_LOAD_TIMEOUT = "15000";

  private Selenium selenium;

  private String baseURL;

  protected String getBaseURL() {
    return baseURL;
  }

  @BeforeClass
  public void setup(ITestContext context) {
    selenium = (Selenium) context
        .getAttribute(SeleniumLauncher.SELENIUM_KEY);
    baseURL = (String) context.getAttribute(SeleniumLauncher.BASE_URL_KEY);
  }

  @AfterClass
  public void cleanup() {
    selenium = null;
    baseURL = null;
  }

  @BeforeMethod
  public void indicateTestMethodName(Method testMethod) {
    selenium.setContext(String.format("Running %s: %s", testMethod
        .getDeclaringClass().getSimpleName(), testMethod.getName()
        .replace("_", " ")));
  }

  /* Start of delegate methods */
  public void addCustomRequestHeader(String key, String value) {
    selenium.addCustomRequestHeader(key, value);
  }

  ...
}

Inside the @BeforeClass-annotated method, we receive the test context and extract the selenium instance and base URL put in there by SeleniumLauncher.

The last piece of the puzzle is the code that launches Jetty. Normally, I test my web applications using the Eclipse run-jetty-run plugin, but RJR doesn't support the "Jetty Plus" functionality, including JNDI. Thus I've created an application to run Jetty embedded:

package com.myclient;

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.webapp.WebAppContext;

public class RunJetty {

  public static void main(String[] args) throws Exception {

    start().join();
  }

  public static Server start() throws Exception {
    return start(8080);
  }

  public static Server start(int port) throws Exception {
    Server server = new Server(port);

    WebAppContext webapp = new WebAppContext();
    webapp.setContextPath("/");
    webapp.setWar("src/main/webapp");

    // Note: Need jetty-plus and jetty-jndi on the classpath; otherwise
    // jetty-web.xml (where datasources are configured) will not be
    // read.

    server.setHandler(webapp);

    server.start();

    return server;
  }
}

This is all looking great. I expect to move this code into Tapestry 5.2 pretty soon. What I'm puzzling over is a couple of extra ideas:

  • Better flexibility on starting up Jetty so that you can hook your own custom Jetty server configuration in.
  • Ability to run multiple browser agents, so that a single test suite can execute against Internet Explorer, Firefox, Safari, etc. In many cases, the same test method might be invoked multiple times, to test against different browsers (a rough sketch follows).
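For that last idea, TestNG's @Parameters annotation seems like a natural fit. Here's a rough, hypothetical sketch (not the eventual Tapestry 5.2 API): testng.xml would declare one <test> per browser, each supplying a browser.command parameter in place of the hard-coded BROWSER_COMMAND constant:

import org.testng.annotations.BeforeTest;
import org.testng.annotations.Optional;
import org.testng.annotations.Parameters;

public class ParameterizedSeleniumLauncher {

  private String browserCommand;

  // "browser.command" is a hypothetical parameter name, supplied via
  // <parameter name="browser.command" value="*iexplore" /> inside each
  // <test> element of testng.xml.
  @BeforeTest(alwaysRun = true)
  @Parameters("browser.command")
  public void setup(@Optional("*firefox") String browserCommand)
      throws Exception {
    this.browserCommand = browserCommand;

    // ... then start Jetty, SeleniumServer and the Selenium client just as
    // in SeleniumLauncher above, passing browserCommand to
    // HttpCommandProcessor in place of the BROWSER_COMMAND constant.
  }
}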

Anyway, this is just one of a number of very cool ideas I expect to roll into Tapestry 5.2 in the near future.

Thursday, March 26, 2009

1819

Nope, that's not a date. 1819 is the current number of individual tests run every time a change is committed to Tapestry. Over 180 of those are Selenium-based integration tests. That's a lot of testing!

Wednesday, February 20, 2008

Unit Testing: Crisis of Faith

I'm hitting a bit of a quandary with respect to testing: I'm losing some faith in unit testing, or at least in busywork unit testing relative to integration testing.

At issue is the way Tapestry is constructed; there are a lot of small moving parts that work together (integrate together might give you some foreshadowing). Sometimes testing the small pieces won't accomplish much. Case in point:

public class SecureWorker implements ComponentClassTransformWorker
{
    public void transform(ClassTransformation transformation, MutableComponentModel model)
    {
        Secure secure = transformation.getAnnotation(Secure.class);

        if (secure != null)
            model.setMeta(TapestryConstants.SECURE_PAGE, "true");
    }
}

This piece of code looks for the @Secure annotation on a component class, and records a bit of meta data that will be used elsewhere.

In the past I've written a chunk of tests even for something as simple as this snippet; following that pattern, I'd write two tests: 1) what if the annotation is missing? 2) what if the annotation is present? I'd use mock objects to ensure that the correct methods of the correct dependencies are invoked in each scenario. However, to write those tests, even leveraging existing base classes to manage the mock objects I'd need, would require several times as much test code as the production code I want to test.

But what would that prove? This bit of code will be useless without a large number of other pieces yet to be written: code that uses that meta data and forces the use of secure HTTPS links and so forth. Code that integrates this worker into the chain of command for processing component classes. Code that verifies that requests for pages that are marked as secure are, in fact, secure.

In other words, to verify the behavior I need to validate that all of those pieces are working together, that they are all integrated.

That doesn't mean I'm calling for the abandonment of unit tests. What I'm saying is that pointless unit tests don't accomplish much. I've found that unit tests are invaluable for testing edge cases and error conditions. Even the most rabid test-driven pundits don't expect you to unit test your accessor methods ... I think tiny classes like this SecureWorker fall under the same general umbrella. I intend to focus my efforts on the integration tests that will encompass this code in concert with other code. And maybe write a few unit tests along the way, as long as they actually have value.

Update: It's not that I can't write the test; it's that doing so is not always worth the effort. Here's a similar test I wrote in the past:

public class SupportsInformalParametersWorkerTest extends InternalBaseTestCase
{

    @Test
    public void annotation_present()
    {
        ClassTransformation ct = mockClassTransformation();
        MutableComponentModel model = mockMutableComponentModel();
        SupportsInformalParameters annotation = newMock(SupportsInformalParameters.class);

        train_getAnnotation(ct, SupportsInformalParameters.class, annotation);
        model.enableSupportsInformalParameters();

        replay();

        new SupportsInformalParametersWorker().transform(ct, model);

        verify();
    }

    @Test
    public void annotation_missing()
    {
        ClassTransformation ct = mockClassTransformation();
        MutableComponentModel model = mockMutableComponentModel();

        train_getAnnotation(ct, SupportsInformalParameters.class, null);

        replay();

        new SupportsInformalParametersWorker().transform(ct, model);

        verify();
    }
}

Great: this detects whether @SupportsInformalParameters is on the class ... but I have other integration tests that actually stress the behavior when the annotation is present and when the annotation is absent. Ultimately, that tests both of these cases, the code that ensures this worker is even executed, and the desired behavior of the component (with and without the annotation). Failures are unlikely to occur in this code, but likely in other code. So what is the value of this test? Not even code coverage, since the integration tests hit both cases already.

Thursday, August 23, 2007

Named mocks in EasyMock 2.3

This is a great new feature of EasyMock 2.3: you can name your mocks as you create them.

This is wonderful news, and I can't wait to try it out. Why? Because once you get going with half a dozen different mocks that implement the same interface, it's a lot of work to figure out which one actually failed. This is going to make creating and maintaining some of my more complex tests much easier.
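Here's a quick, hypothetical example (MyService is a stand-in interface, not a real API); the name passed at creation time shows up in EasyMock's failure messages, so you can tell which of several same-interface mocks received the unexpected call:

import static org.easymock.EasyMock.createMock;

public class NamedMockExample {

  // Stand-in interface, purely for this sketch.
  interface MyService {
    void process();
  }

  public static void main(String[] args) {
    // The first argument names the mock; a failed expectation will now
    // report "primary" or "fallback" rather than an anonymous proxy.
    MyService primary = createMock("primary", MyService.class);
    MyService fallback = createMock("fallback", MyService.class);
  }
}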