An introduction to Maven and Flexmojos

27 07 2011

I presented an introduction to Maven and Flexmojos last night. The talk is a variant of the one I will be giving at FITC@MAX this year.

The talk starts off discussing Maven – the hierarchical structure of projects, POMs and the build lifecycle. We then discuss the Flexmojos plugin for building Flex applications. After that, we talk about repositories – both local and remote – and discuss how Nexus can perform the role of a remote repository within your organisation, proxying others on the web.

We work through six main examples. All code is on GitHub.

  1. The simplest application possible – a custom Hello World that uses the latest Flexmojos (4.0-RC1) and Flex SDK (4.5.1)
  2. Adding automated unit tests to the build
  3. Installing custom dependencies that aren’t hosted on the web
  4. Using the Flashbuilder goal to create a Flash Builder project from a build script
  5. Starting Flex applications from the supported archetypes (templates)
  6. A basic application that has custom dependencies and its own class library

Source files: https://github.com/justinjmoses/flexmojos-introduction





Book Review: The Big Short

19 07 2011

The Big Short (on Amazon.com)

If you’re looking for a different perspective on the Credit Crisis of 2008, then look no further. The Big Short follows four of the main players who saw the impending doom of the mortgage bond market and bet against it.

Outside finance, the concept of buying low and selling high – going long – is well understood. Of course, the industry is about more than just investing in growth. The converse of going long – selling short – involves selling high and buying lower. In essence, shorting is quite simple: a trader who senses a downward shift in the market may sell items at the current market price without actually owning them. They do this by selling borrowed securities obtained from a third party (paying fees for the privilege), then waiting for the market to fall far enough to buy low and replace what they borrowed. While this may seem great in theory, the downside is that potential losses are unlimited – the final market price you buy back at has no ceiling.

Unlike in the share market, shorting in the debt market isn’t always so straightforward. In order to bet against the market, a new type of Credit Default Swap (essentially insurance against something or someone going bust) was invented to quench the thirst of those desperate to short the subprime bubble. By paying a regular premium, buyers were assured they’d receive compensation if enough loans within the security defaulted. However, in the case of The Big Short, these weren’t traders hedging their books; this was pure speculation.

As the subprime mortgage market took off in the mid-2000s, few people thought it could end. Investment banks were printing their own money, selling off mortgage-backed securities chock-full of subprime (or suboptimal) loans. As the market ballooned, so did these loans, and as anyone living in the US at the time can attest, every man and his dog could get a mortgage for $500K plus with no deposit and no repayments for 2 years. It sounds ludicrous, but it really happened.

The Big Short follows the movements of four separate groups – intertwined and yet operating on their own – that shorted the subprime market. Along the way, they had to fight off endless criticism from their peers, and eventually made truckloads of money.

The first is Mike Burry, founder of the hedge fund Scion and a brilliant recluse who first cottoned on to the intrinsic flaws within the market back in 2005 – well before his peers. Next comes Steven Eisman, the cynical money manager who, independently of Burry, saw the signs on the horizon and moved into shorting the market. Then there’s Greg Lippmann, a Deutsche Bank trader who caught on fairly early in the piece and had to fight his employer to keep paying those CDS premiums against the movements of the entire market. Finally, there’s the Wall St success story of Cornwall Capital – originally a California-based outfit that had made its mark scouring the stock market for mispriced securities.

This book, by the author of Liar’s Poker, flows well and is, for the most part, chronological. It’s easy to follow even with only a rudimentary understanding of financial markets. The characterisation is thorough and detailed; indeed, the sense of empathy Lewis creates for his subjects is so compelling that the book is difficult to put down. The pace picks up around the meltdown itself, and both the climax and denouement are well handled – especially for a very small subset of a very large, and shockingly all-too-true, story.





What’s the deal with Signals?

7 07 2011

Signals. Heard of them? What’s the big deal you say?

Simple. The event system in AS3 is both limited and antiquated. True, native AS3 events offer a convenient way of messaging (bubbling) within UI hierarchies. Yet, at an abstract API level, they more often than not restrict the developer rather than aid them.

Chiefly, what Robert Penner has done with as3-signals is create a way to represent events as variables, rather than as magical strings firing off at the type (class) level. It sounds simple. It is. Yet the implications for your architecture are vast.

Consider the following interface of asynchronous methods:

public interface IServiceStream
{
  function open():void;
  function close():void;
}

Now, as the contract is asynchronous, we’ll need some events to notify us when methods have completed. Let’s say we have the following events:

  • OPENED
  • CLOSED
  • ERROR
  • TIMEOUT

In keeping with the native AS3 model, the best we can hope for is using the following metadata at the type level:

[Event(name="streamOpened",type="...")]
[Event(name="streamClosed",type="...")]
[Event(name="streamError",type="...")]
[Event(name="streamTimeout",type="...")]
public interface IServiceStream
{
 //...
}

There are four problems with this approach:

  1. Decorating via metadata does not enforce that implementors of the interface actually dispatch these events.
  2. For completeness, we should define the events somewhere as static constants. This means we can no longer simply write interfaces; we need to write event implementations and deploy them with our API.
  3. We’re using magic strings, and as there is no compile-time checking of the metadata, we’re opening ourselves up to elusive runtime errors if the wrong events are dispatched.
  4. There is nothing to specify which events fire when – which events belong to which method, and which belong to the class itself.

The first two are fairly straightforward, so let’s focus on the latter two.

Magic Strings and No Contract

We have no way of tying the event type to the constant in some Event class it will eventually correspond to. “streamOpened” may map to ServiceStreamEvent.OPENED, and yet we cannot know this at the metadata level (not for the interface, nor even the implementor). From #1, it is evident that although we can declare these requirements, we cannot enforce their usage.
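
To make this concrete, a hypothetical ServiceStreamEvent might look like the sketch below. Note how “streamOpened” has to be duplicated by hand – nothing checks that the metadata and the constant ever stay in sync:

import flash.events.Event;

public class ServiceStreamEvent extends Event
{
	//these strings must match the [Event] metadata exactly – and nothing enforces it
	public static const OPENED:String = "streamOpened";
	public static const CLOSED:String = "streamClosed";
	public static const ERROR:String = "streamError";
	public static const TIMEOUT:String = "streamTimeout";

	public function ServiceStreamEvent(type:String)
	{
		super(type); //clone() override omitted for brevity
	}
}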

Method vs Type-level Events 

Anyone listening to an implementor of our interface would listen at the type level for all events, and deal with them as they occurred.

For example:

var service:IServiceStream = new ServiceStreamImplementation(...);

service.addEventListener(ServiceStreamEvent.OPENED, function(evt:Event):void { ... } );
service.addEventListener(ServiceStreamEvent.CLOSED, function(evt:Event):void { ... } );
service.addEventListener(ServiceStreamEvent.ERROR, function(evt:Event):void { ... } );
service.addEventListener(ServiceStreamEvent.TIMEOUT, function(evt:Event):void { ... } );

//later when required
service.open();

We’ve been forced to declare all our handlers in one place, early enough to precede the calling of any event-dispatching methods. Anyone reading the code has no real knowledge of which event the implementing class dispatches at which point – hence why all the listeners need to be added up front. As the interface author, all we can do is say “this interface can dispatch any of these events” – we cannot even enforce that they are used. From #1 above, the metadata is not enforced; it’s just decoration.

Here is where Signals come in. Let’s rewrite the interface using simple signals.

public interface IServiceStream
{
  function open():ISignal;
  function close():ISignal;
}

Now let’s look at a partial implementation.

import mx.rpc.events.ResultEvent;
import mx.rpc.http.HTTPService;

import org.osflash.signals.ISignal;
import org.osflash.signals.Signal;

public class ServiceStream implements IServiceStream
{
	public function open():ISignal
	{
		var signal:Signal = new Signal(Object);

		//do something asynchronously...
		var httpService:HTTPService = new HTTPService();
		httpService.addEventListener(ResultEvent.RESULT,
			function(evt:ResultEvent):void
			{
				//use the closure to access your signal and dispatch it async
				signal.dispatch(evt.result);
			});

		httpService.send();

		return signal;
	}

	//function close();
}

Now, the usage of this implementation can become:

var service:IServiceStream = new ServiceStreamImplementation(...);

service.open().addOnce(function(result:Object):void
{
    //do something with your returned "result"
});

In one fell swoop we’ve fixed all four of the problems with events. We even get the convenience methods addOnce() and removeAll() from the ISignal interface: the former ensures your listener is removed after its first use; the latter is self-explanatory. Look even closer and you’ll see we just got a fluent interface for free.

Imagine this in your mediator pattern – your UIs by definition have no reference to their mediator. Now they have a prescribed way of notifying their mediators that something has occurred.
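
As a rough sketch (the view and mediator names here are hypothetical): the view exposes a signal as a property, and the mediator – which knows the view, while the view knows nothing of it – wires itself up:

import flash.display.Sprite;

import org.osflash.signals.ISignal;
import org.osflash.signals.Signal;

//the view knows nothing of its mediator; it just exposes a signal
public class LoginView extends Sprite
{
	public const submitted:ISignal = new Signal(String);

	//on button click: submitted.dispatch(username);
}

//the mediator knows the view, and wires itself to its signals
public class LoginMediator
{
	public function LoginMediator(view:LoginView)
	{
		view.submitted.add(function(username:String):void
		{
			//forward to a service, update a model, etc.
		});
	}
}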

Wait a second. What about those other events?

How do you return multiple items from a regular method call? The same way you always would – compose a type for your requirements.

You could write the following signal collection:

public class ServiceSignals
{
	public var open:ISignal = new Signal(Object);
	public var error:ISignal = new Signal(String);
	public var timeout:ISignal = new Signal();
}

and change your interface to:

public interface IServiceStream
{
  function open():ServiceSignals;
  //...
}

Better yet, you could keep your interface and simply conform your Signal collection into an ISignal with a default listener/dispatcher:

public class ServiceSignal extends Signal
{
	public var open:ISignal = new Signal(Object);
	public var error:ISignal = new Signal(String);
	public var timeout:ISignal = new Signal();

	override public function add(listener:Function):ISignalBinding
	{
		return open.add(listener);
	}

	override public function addOnce(listener:Function):ISignalBinding
	{
		return open.addOnce(listener);
	}

	override public function dispatch(...parameters):void
	{
		//forward any dispatched values on to the default (open) signal
		open.dispatch.apply(null, parameters);
	}

	override public function remove(listener:Function):ISignalBinding
	{
		return open.remove(listener);
	}

	override public function removeAll():void
	{
		open.removeAll();
	}
}

Then you could use it as such:

var service:IServiceStream = new ServiceStreamImplementation(...);

//the interface returns ISignal, so we cast back to our signal collection type
var signal:ServiceSignal = ServiceSignal(service.open());

signal.addOnce(function(result:Object):void
{
    //do something with your returned "result"
});

signal.error.addOnce(...);

signal.timeout.addOnce(...);

Perhaps you’re not a huge fan of this solution. You may feel that the error and timeout signals are really type-level events, and you don’t want to have to add handlers for both open() and close(). OK – so what about this implementation?

import flash.events.TimerEvent;
import flash.utils.Timer;

import mx.rpc.events.FaultEvent;
import mx.rpc.events.ResultEvent;
import mx.rpc.http.HTTPService;

import org.osflash.signals.ISignal;
import org.osflash.signals.Signal;

public class ServiceStream implements IServiceStream
{
	public var error:ISignal = new Signal(String);
	public var timeout:ISignal = new Signal();

	private var _time:int = 30000;
	private var timer:Timer;

	public function open():ISignal
	{
		var signal:Signal = new Signal(Object);

		//do something asynchronously...
		var httpService:HTTPService = new HTTPService();
		httpService.addEventListener(ResultEvent.RESULT,
			function(evt:ResultEvent):void
			{
				//use the closure to access your signal and dispatch it async
				signal.dispatch(evt.result);
			});

		httpService.addEventListener(FaultEvent.FAULT,
			function(evt:FaultEvent):void
			{
				error.dispatch(evt.fault.faultString);
			});

		timer = new Timer(_time,1);

		//mimic ISignal.addOnce(): remove the handler once it has fired
		//(a fuller implementation would also stop the timer when the call succeeds)
		var timerHandler:Function = function(evt:TimerEvent):void
			{
				timeout.dispatch();
				timer.removeEventListener(TimerEvent.TIMER_COMPLETE, timerHandler);
			};
		timer.addEventListener(TimerEvent.TIMER_COMPLETE, timerHandler);

		httpService.send();

		timer.start();

		return signal;
	}

	//function close();

}

Notice how we define the handler function as a variable so that we can remove it from within the listener itself – replicating the ISignal.addOnce() functionality. True, we could have used a weak event listener to allow for garbage collection; however, this way is closer to our approach with Signals, so we’ll keep it for consistency.
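
For reference, the weak-listener alternative is a one-argument change – the fifth parameter of addEventListener marks the reference as weak:

//5th argument (useWeakReference) lets the listener be garbage collected
timer.addEventListener(TimerEvent.TIMER_COMPLETE, timerHandler, false, 0, true);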

Your implementor could then be used like this:

var service:IServiceStream = new ServiceStreamImplementation(...);

service.open().addOnce(function(result:Object):void
{
    //do something with your returned "result"
});

service.error.addOnce(function(message:String):void
{
    //handle error
});

service.timeout.addOnce(function():void
{
    //handle timeout
});

service.close().addOnce(function():void
{
    //now closed
});

Whichever way you decide, Signals give you the flexibility to make the best decision for your API.





Why bother writing unit tests?

3 07 2011

It’s funny to me how much of a fait accompli unit tests, continuous integration and agile methodologies have become – and how little of their intrinsic value those outside the scene understand. The more I talk with developers on the outside, the more apparent it becomes just how wide the gap has grown.

You see, I was never a fan of unit tests. Never.

Like many during the dotcom boom, I cut my teeth as a web developer, surrounding myself with clients and designers. I thought I was the bee’s knees. Why would I bother writing tests that would have to be refactored along with my code?

Deep down, I knew there was value. I knew it. There was no way that so many talented and capable developers were advocating their use if there wasn’t some significant value-add. But writing those tests felt like going to the dentist – and damn it if I was going to go willingly.

Soon enough, I found myself cornered into writing them. I gnashed my teeth, wrote the code, and then pointed and laughed gleefully each time I was forced to refactor. “See!” “See!” I would shout emphatically, and then blithely announce that the rest of my day was now effectively gone.

Initially, I saw test writing as most newbies do – as a way to ensure my code still worked as prescribed in an ever-changing system. I had my doubts about the effectiveness of such a principle, but I kept them to myself.

Then I thought – OK, so the PM wants these tests so they can sell it to the client. Work done, tests done – acceptance tests passed. “Done done”, and all that jazz. Makes sense, right? If you’re going to go agile, you better be able to ensure you’re actually building something every sprint, rather than just tinkering away and demoing little POCs.

It soon became apparent, however, that I’d seriously underestimated the practice. Most of the discussions around software development in the post-web world – such as dependency injection, inversion of control and the problem with global state (i.e. statics) – started to have a lot more meaning. I could no longer use aggregation or static singletons as helpers – even if I wanted to. Everything had to be injected and mocked, and the dependencies themselves had their own unit tests. Loose coupling became less of a nice-to-have and more of a must-have.
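
As a rough sketch of what that shift looked like in AS3 (the classes and the FlexUnit-style test below are illustrative, not from any real project): the dependency arrives via the constructor, so a test can substitute a hand-rolled mock instead of reaching through a static singleton:

import org.flexunit.asserts.assertEquals;

//the collaborator is an interface, not a static helper
public interface IRateService
{
	function fetchRate(symbol:String):Number;
}

public class PriceCalculator
{
	private var rates:IRateService;

	//the dependency is injected, so tests can supply their own
	public function PriceCalculator(rates:IRateService)
	{
		this.rates = rates;
	}

	public function priceInUSD(symbol:String, amount:Number):Number
	{
		return amount * rates.fetchRate(symbol);
	}
}

//a hand-rolled mock – no framework required
public class FixedRateService implements IRateService
{
	private var rate:Number;

	public function FixedRateService(rate:Number) { this.rate = rate; }

	public function fetchRate(symbol:String):Number { return rate; }
}

//a FlexUnit 4 style test
public class PriceCalculatorTest
{
	[Test]
	public function multipliesAmountByInjectedRate():void
	{
		var calculator:PriceCalculator = new PriceCalculator(new FixedRateService(2));
		assertEquals(10, calculator.priceInUSD("AUD", 5));
	}
}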

It all suddenly made a lot of sense. My unit tests were enforcing best practices on me and were actually helping me design my classes. Encapsulation, separation of concerns – each class had its purpose, and my tests helped ensure it stayed that way.

Of course, I still find myself refactoring. I wouldn’t be much of a coder if I didn’t. You know what though – I’m finding it a lot less painful than it used to be. Everything is a lot more compartmentalised, and the code is self-describing. It’s no wonder there are scores of developers evangelising test-driven development (and, consequently, a counter-culture warning against over-dependence on test coverage).

If you don’t write unit tests, do yourself a favour. Start.