Testing is one of the areas for applying aspects that I have been excited about the longest. On some of the first consulting projects where I used AOP, I used a very simple form of "virtual mock objects," for example:

public class TestAccount {
    ...
    static aspect FailureInjection {
        pointcut inFailureTest() : cflow(execution(* TestAccount.test*Failure()));

        before() throws SQLException : call(* Statement+.*(..) throws SQLException) && inFailureTest() {
            throw new SQLException("account access failed");
        }
    }
}
This was the same kind of testing that Nicholas Lesiecki wrote about in "Test flexibly with AspectJ and mock objects." Later, while working on the aTrack project, I created a small framework to simplify this kind of virtual mock testing. First I, and subsequently Nick, presented on this approach and framework several times (e.g., see the slides).
Around the same time I created the virtual mocking framework in the aTrack library, Chad Woolley created his own virtual mocking framework, and Monk and Hall wrote about a simple framework. The biggest difference between the aTrack approach and the other two was that aTrack used inter-type declarations to define mock behavior on the actual collaborating objects (and to monitor the behavior of tested objects), e.g., Customer.setThrowable(new RuntimeException()). This also meant that tools like today's AJDT can statically determine a fairly limited scope for virtual mocking (unless you use it to replace many classes in your system), whereas the other approaches tended to advise every join point in the system with a runtime check. In the future, I expect commercial AOP tools will be better able to statically determine where pointcuts can match.
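As a sketch of how the inter-type declaration style might look (the aspect name, field, and pointcut here are illustrative assumptions, not the actual aTrack framework):

```aspectj
// Illustrative sketch only: an aspect that uses inter-type declarations
// to add mock configuration state directly to the Customer class.
public aspect CustomerMocking {
    // Inter-type declarations: new state and a setter on Customer itself.
    private Throwable Customer.throwable;

    public void Customer.setThrowable(Throwable t) {
        this.throwable = t;
    }

    // Once a throwable is configured, throw it instead of running the
    // real method (excluding the configuration method itself).
    Object around(Customer customer) :
            execution(* Customer.*(..)) && this(customer)
            && !execution(void Customer.setThrowable(Throwable)) {
        if (customer.throwable != null) {
            throw new RuntimeException(customer.throwable);
        }
        return proceed(customer);
    }
}
```

Because the mock configuration lives on the real type, a test can simply write Customer.setThrowable(...) on the collaborator it wants to fail, and the scope of the mocking is statically visible in the aspect.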
Part of this story, though, is the parallel development of mock objects as a technology. Mocks are still fairly new, and most of the people who use them practice agile methodologies. Whenever I presented "Testing With Virtual Mock Objects," I found that the people who had actually tried mocks, and those who practiced TDD, were the most excited. Many others were excited by the underlying idea of mocks itself.
One important development in mock objects has been the jMock project. It uses dynamic proxies and features a concise and readable syntax for writing mock tests.
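The dynamic-proxy mechanism that jMock builds on can be sketched in plain Java (this illustrates the technique, not jMock's actual implementation; the Account interface is a made-up example):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

// Illustration of the dynamic-proxy technique jMock builds on: a proxy
// that stubs an interface and records invocations for later verification.
public class RecordingMock {
    interface Account {
        void debit(int amount);
    }

    // Returns the names of the methods invoked on a recording mock.
    public static List<String> recordedCalls() {
        List<String> calls = new ArrayList<>();
        InvocationHandler handler = (proxy, method, args) -> {
            calls.add(method.getName()); // record the call for verification
            return null;                 // stubbed behavior: do nothing
        };
        Account mock = (Account) Proxy.newProxyInstance(
                Account.class.getClassLoader(), new Class<?>[] { Account.class }, handler);
        mock.debit(100); // the code under test would make this call
        return calls;
    }

    public static void main(String[] args) {
        System.out.println(recordedCalls()); // prints [debit]
    }
}
```

jMock layers its expectation-and-verification API on top of exactly this kind of handler.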
As I thought about the existing implementations of virtual mocking, I realized that they all suffered from trying to reinvent a good, clean API for mocks. With jMock being a big improvement on past APIs, it seemed a lot better to extend jMock to support virtual mocks...
My thinking is that there are really five interesting types of mocking:
1. direct collaborators with public interfaces (classic mocks)
2. direct collaborators with non-OO interfaces (singletons, static methods, private methods, etc.)
3. indirect collaborators (unit tests of closely coupled classes and integration tests)
4. aspects
5. monitoring the object under test
I think the right unit of abstraction for a virtual mock is the pointcut, not a type. (Aside: this is related to my contention that it is better to define and configure loggers by pointcut rather than by type.) With this in mind, I wrote some proof-of-concept code that successfully extended jMock with the concept of a virtual mock in two different ways. I've been calling this ajMock...
- Dynamic:

class TestInitialization extends VirtualMockObjectTestCase {
    ...
    public void testInitialization() {
        VirtualMock mockStart = virtualDynamicMock(
            "call(* startUp()) && target(" + ApplicationLifecycleAware.class.getName() + ")");
        mockStart.expects(once()).will(doProceed());
        ...
    }
}

- Static:

class TestInitialization extends VirtualMockObjectTestCase {
    ...
    public void testInitialization() {
        VirtualMock mockStart = virtualMock(StartInitialization.aspectOf());
        mockStart.expects(once()).will(doProceed());
        ...
    }

    private static aspect StartInitialization extends VirtualMockAspect {
        public pointcut mockPoint() : call(* startUp()) && target(ApplicationLifecycleAware);
        ...
    }
}
It was relatively straightforward to integrate these extensions into jMock (albeit incompletely, and as more of a test than a complete framework). One really nice thing is that you can use the normal jMock APIs, and you should be able to mix and match proxy mocks (and even CGLIB proxy mocks) with virtual mocks. There are a few areas where jMock assumes that mocks have proxies and where virtual mocks must behave differently.
I've been using this code to test my performance monitoring code, and it has helped me (mostly with integration tests). I started off with the dynamic approach, extending the new AspectJ 5 reflection library. However, I found that tooling issues made this unpalatable. The small issue was the annoyance of runtime checking of pointcuts, with no tools information about them. I adopted the idiom of using Type.class.getName() when writing the mock strings, to get at least compile-time checking of the type names. The big issue that led me to change to the static approach was the impact on tools support everywhere else. The dynamic version had to advise almost every join point in the system with a dynamic test for whether to mock it. This made stepping through a debugger painful, added too much advice clutter to gutter tips, cross-reference views, etc. (since everything was advised), and even made stack traces harder to read. The static version seems like a pretty reasonable compromise.
I think dynamic virtual mocks are a great candidate for load-time weaving: you could apply them to a limited scope for a given test case and not impact tools support when editing or debugging outside that scope. The part I haven't yet designed is how to effectively integrate load-time weaving into Eclipse's JUnit launcher.
This type of virtual mock can be a great complement to, e.g., jMock:
- It makes it a snap to mock things that are traditionally hard to mock (static, final, or private methods). (case 2 above: non-OO interfaces)
- It also makes it a snap to track behavior of objects under test, without in any way mocking them. (case 5 above)
- Pointcuts are great for expressing systematic mock behavior where jMock's method name patterns aren't expressive enough (I see jMock's matchers as just a baby step toward the power of pointcuts). E.g., what if I want to specify a null return value for any method returning an Object or subclass, and false for any method returning a boolean? What if I want to throw an SQLException from any database call? (adding to the value that jMock provides in case 1)
- Pointcuts also make it really natural to integration-test an isolated subsystem. Often you want to plug in dummy behavior that gets invoked a few calls down, and with ajMock you can specify this easily. In integration tests, it is sometimes a big win to use an object's own behavior as a starting point, and just change it a bit to make your test case work (rather than dummying out all the interactions).
- I think this approach can be extended to achieve many of the goals of, e.g., aUnit, but in a way that lets you integrate with other mocks. I think you'd need a different kind of virtual mock that let you generate (stub) pointcuts, rather than just matching them in the normal flow. However, the jMock-style API seems to be a useful one to allow testing pointcuts meaningfully, through representative examples. This is a rich area for further exploration. (case 4 above)
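The "systematic defaults" idea above (null for reference-returning methods, false for boolean-returning ones) can be sketched with a plain dynamic proxy; this illustrates the behavior a single pointcut-based rule would express declaratively, and the Repository interface and withDefaults helper are made-up names, not ajMock:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Illustration of "systematic" stub behavior: null for any method
// returning an Object (or subclass), false for any returning boolean.
public class SystematicDefaults {
    interface Repository {
        Object find(String id);
        boolean exists(String id);
    }

    @SuppressWarnings("unchecked")
    public static <T> T withDefaults(Class<T> iface) {
        InvocationHandler handler = (proxy, method, args) -> {
            if (method.getReturnType() == boolean.class) {
                return false; // false for any boolean-returning method
            }
            return null;      // null for any reference-returning method
        };
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[] { iface }, handler);
    }

    public static void main(String[] args) {
        Repository repo = withDefaults(Repository.class);
        System.out.println(repo.find("42"));    // null
        System.out.println(repo.exists("42"));  // false
    }
}
```

With a pointcut, the same rule could cover every matching join point in the system, instead of one proxied interface at a time.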
I'd be interested in thoughts from others about this. For jMock users, where would you see this as a useful complement? Obviously I need to flesh out the design and explain more of it. I also think there's a lot of room to extend the simple virtual mocks with common idioms (e.g., plugging in a regular jMock of a class that is invoked indirectly, or mocking an assembly of objects).