Tuesday, October 11, 2011

Yet Another Micro Architecture for Flex RIA and RMA

The Holistic micro-architecture framework for building RIAs and RMAs (Rich Mobile Applications) "borrows" from Robotlegs, PureMVC, Cairngorm, Swiz, AS3Signals and the Spring Framework. Each has one or two things that I really like, but what I wanted was to bring all these things into one simple, very lightweight framework that enables me to quickly and, more importantly, methodically build mobile, web and desktop applications. The Holistic API is designed to work on top of either the Flex web or mobile framework and takes advantage of the Flex compiler, environment and lifecycle. Several "design patterns" are utilized in the API, such as Model-View-Controller, Loose Coupling, Locator, Usage of Interfaces, Delegation, Dependency Injection, Separation of Concerns and Inversion of Control - not sure if some of the latter are pure design patterns per GoF, but bear with me :-) One thing that is heavily relied on is programming to convention rather than to configuration.

MVC

In this MVC arrangement, the state of an application resides in the Model. The Model is a set of non-visual properties, where some properties are annotated with the [Bindable] metadata so that a change event is dispatched whenever that property is mutated. The [Bindable] metadata is an indication that the property is represented by a view.
public class Model {
    [Bindable]
    public var text:String;
}
Views are subclasses of UIComponent and are bound using curly braces (I call them 'magic' braces) to the Model to represent the state of the application based on the view's capabilities.
<s:Label text="{model.text}"/>
In the above example, the Label instance represents the model's 'text' property, and any changes to the 'text' value will auto-magically show up at the label's location on the screen. For a more complex example, a property that is a list of features can be bound to a map view and the features will be drawn as points on the map. At the same time, that same list can be bound to a data grid where each feature is represented as a row in that grid. A model can have multiple view representations in an application.
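For instance, assuming the model carries a hypothetical [Bindable] 'features' list (an ArrayCollection, say), two views can be bound to it at the same time:
<s:DataGrid dataProvider="{model.features}"/>
<s:List dataProvider="{model.features}"/>
Mutate the list in one place and both representations refresh on their own.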
Now, if a view wants to modify the model, it does so using a controller. A view is not, I repeat, not allowed to mutate the model. Only a controller is allowed to mutate the model. This is very important as a convention. All the logic to mutate the model should reside in a controller, even if it means that the logic is a single-line implementation. Trust me on this: at the beginning of development it might be a single line, but along the way it will grow more elaborate as the application requirements evolve. You will be tempted by the programming devils to "touch" the model from the view, and you will come to regret it later on. So be resilient and do the right thing!
Enough preaching. So how does a view tell a controller to mutate the model? Simple: using signals. A signal is a glorified event with integrated event dispatching, enabling loose coupling between the view and the controller.
The following is the signature of the static 'send' function in the Signal class:
public static function send(type:String, ...args):void
The first required string argument is the signal type. This string is very important in our convention over configuration design as we will see later on. A signal can optionally carry additional information such as in the following example:
<s:TextInput id="ti"/>
<s:Button click="Signal.send('submit', ti.text)" label="{model.text}"/>
When the user clicks on the Button instance, a signal of type 'submit' is sent along with the text that was entered in the TextInput instance.
Now, I need 'something' to receive that signal and act on it. This something is a controller. Again relying on convention over configuration, I will create a controller named 'SubmitController'. Note the prefix 'Submit': it is the same as the signal type. Again, this is convention over configuration working in my favor by producing pseudo-self-documenting code: I can look at my list of controllers in my IDE and tell immediately from the names which signal is handled by which class. Yes, I will have a lot of controllers, but this divide-and-conquer approach lets each controller do one thing and do it very well, and keeps my concerns separated.
In the controller class implementation, to handle the 'submit' signal, I must have a function named 'submit' that accepts one argument of type String, like the following:
[Signal]
public function submit(text:String):void
{
  ...
}
Note the [Signal] metadata on the function declaration. See, as a Flex developer, you are already familiar with and using built-in annotations such as [Bindable]. But Flex enables a developer to create his/her own metadata that will be attached to the class in question for introspection - cool, eh? Back to signals, one more example to solidify the association of signals to controllers - if you send a signal of the form:
Signal.send('foo', 123, 'text', new Date());
To handle that signal, you should have the following controller declaration:
public class FooController {
    [Signal]
    public function foo(nume:Number, text:String, now:Date):void {
      ...
    }
}
Note that the order of the handler function arguments should match the order and type of the signal arguments: 123 -> nume, 'text' -> text, new Date() -> now. What makes this pretty neat is that the handler is independent of any hardwired signal dispatching mechanism - it is just a function that can be unit tested, more on that later.
Applications need to communicate with the outside world, say for example when you want to locate an address using an in-the-cloud locator service. Controllers do not communicate with the outside world; they delegate that external communication to a service. That service will use the correct protocol and payload format to talk to the external service, be it SOAP, REST or RemoteService, in XML, JSON, AMF or whatever. To enable different implementations of these protocols, an interface is declared and is injected into the controller for usage as follows:
public class LocateController {

    [Inject]
    public var locateService:ILocateService;

    [Signal]
    public function locate(address:String):void
    {
        locateService.locate(address,
            new AsyncResponder(resultHandler, faultHandler));
    }
}
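The snippet assumes an ILocateService contract; it is not shown in the post, but a minimal sketch of what it could look like is below (the names are illustrative, not the actual API):
import mx.rpc.IResponder;

// A hypothetical contract: any implementation (REST, SOAP, a mock for testing)
// can be registered in the Registry and injected into the controller above.
public interface ILocateService
{
    function locate(address:String, responder:IResponder):void;
}
Since AsyncResponder implements IResponder, the controller's call lines up with this signature.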
The locateService variable is assigned at runtime using inversion of control, and when the 'locate' signal is sent, it is handled by the 'locate' function, which delegates it to the ILocateService implementation.
The [Inject] metadata is for more than injecting service implementations. Here is another usage to overcome an AS3 language constraint and make your code more testable. Say you start a project and signal A is sent; you go and write Controller A to handle the signal. Now you have to write another Controller B to handle signal B (remember SoC :-) but you find that Controllers A and B will share some code. Since you are a good OO developer, you create a super class S that has the common code and make Controller A and Controller B subclass S. You're feeling pretty good, so on to Controller C to handle signal C. But wait a minute, some code from Controller B can be shared with Controller C. Ok, you create a super class D and subclass it. But wait a minute... AS3 is a single inheritance model, and that means Controller B cannot subclass super classes S and D at the same time. This is where composition is better than inheritance: now I can move the common code to class S and class D and inject those classes into Controllers A, B and C.
public class AController {
    [Inject]
    public var refS:ClassS;

    [Signal]
    public function doA(val:*):void {
        refS.doS(val);
    }
}

public class BController {
    [Inject]
    public var refD:ClassD;
    
    [Inject]
    public var refS:ClassS;

    [Signal]
    public function doB(val:*):void {
        refS.doS(val);
        refD.doD(val);
    }
}

public class CController {
    [Inject]
    public var refD:ClassD;

    [Signal]
    public function doC(val:*):void {
        refD.doD(val);
    }
}
Cool? Onward: something _has_ to wire all these pieces together, and that something is a Registry instance that is declared in the main application MXML as follows:
<fx:Declarations>
    <h:Registry id="registry">
        ...
    </h:Registry>
</fx:Declarations>
The children of the Registry are all the application controllers and all injectable delegates and services. So using the above example:
<h:Registry id="registry">
    <m:Model/>
    <c:ClassS/>
    <c:ClassD/>
    <s:AController/>
    <s:BController/>
    <s:CController/>
</h:Registry>
Taking advantage of the declarative nature of Flex, I declare the registry children, which get translated into ActionScript instantiations. Upon creation completion, the registry introspects each child for [Inject] metadata and invokes the setter with the appropriate type instances. Next, the [Signal] metadata entries are located and a proxy object is created wrapping each annotated function as an event listener for the signals (remember, signals are nothing more than glorified events). All this introspection by the Registry is performed using the as3-commons-reflect library (url). Going back to programming to interfaces and having multiple implementations of an interface in the Registry, how is the injection resolved? Well, by default the first implementation is injected. But what if I want a specific implementation? Here is the solution:

<h:Registry>
    <c:RestService/>
    <c:SoapService id="soapService"/>
    <c:FooController/>
    <c:BarController/>
</h:Registry>

[Register(name="restService")]
public class RestService implements IService {
  ...
}

public class FooController {
  [Inject]
  public var restService:IService;
  ...
}

public class BarController {
  [Inject(name="soapService")]
  public var service:IService;
  ...
}

There is a lot packed into this example and there are a lot of conventions, so stay with me. The registry is declared with a couple of services and controllers. Note that the SoapService is declared with the "soapService" id. This enables the BarController to be injected with that specific implementation of the IService interface via the name attribute in its [Inject] metadata. Next, the RestService is registered with the Registry under the name "restService", as declared in its class [Register] metadata. Now (magic time), the FooController is injected with the RestService instance despite the absence of the name attribute in its [Inject] metadata, because the _variable_ name is the same as the registered name. Pretty powerful, I know, mind blowing!
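For the curious, here is a rough, hypothetical sketch of how such an [Inject] resolution could be implemented with as3-commons-reflect. This is not the actual Holistic code, and registeredNameOf() is a made-up helper that would return the [Register] name, the declared id, or a default:
import org.as3commons.reflect.Field;
import org.as3commons.reflect.Metadata;
import org.as3commons.reflect.Type;

// Resolve one [Inject] field on a registry child (sketch only).
private function resolveInjection(child:Object, field:Field, children:Array):void
{
    const metadata:Metadata = field.getMetadata("Inject")[0];
    // Explicit [Inject(name="...")] wins, otherwise fall back to the variable name convention.
    const wanted:String = metadata.hasArgumentWithKey("name")
        ? metadata.getArgument("name").value
        : field.name;
    var candidate:Object = null;
    for each (var other:Object in children)
    {
        if (registeredNameOf(other) === wanted) // hypothetical helper, not part of the library
        {
            candidate = other; // exact name match wins
            break;
        }
        if (candidate === null && other is field.type.clazz)
        {
            candidate = other; // otherwise, the first assignable instance is the default
        }
    }
    child[field.name] = candidate;
}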

Ok, last but not least, unit testing. Actually, if you do TDD, that should come first. The Holistic framework relies on simple interfaces, classes and functions, and with the built-in unit testing capabilities and the code coverage add-on for Flash Builder, there is no excuse not to test your code. Whole books and articles have been written about Flex unit testing, so google them.
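For example, a FlexUnit 4 test for the earlier 'submit' handler needs no Registry and no signal dispatching at all. This sketch assumes (hypothetically) that SubmitController has an injected model variable and that submit() copies its argument into model.text:
import org.flexunit.asserts.assertEquals;

public class SubmitControllerTest
{
    [Test]
    public function submit_copiesTextIntoModel():void
    {
        const model:Model = new Model();
        const controller:SubmitController = new SubmitController();
        controller.model = model; // manual injection - no Registry needed in a test
        controller.submit("hello");
        assertEquals("hello", model.text);
    }
}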

As usual, all the source code is available here. I drink my own champagne: what you will find is the Flex unit test project that includes the Holistic library.

Have fun.

Update: I created a very simple project that demonstrates the usage of the Holistic framework. It is a simple application that displays a data grid bound to a list property in the model. Below the grid is a form that enables you to enter a first name and last name. When you click the submit button, a signal is sent with the entered info. A handler receives the info and delegates it to a service that uppercases the values and adds them to the list.

Map Tiles For Offline Usage Using ArcGIS API for Flex

So… Google introduced an offline feature to their mobile mapping application, enabling you to view map tiles when you are disconnected from the network. This is pretty neat and very useful now that local storage is so “abundant” on mobile devices. In this post, I would like to show you how to use the mobile device local storage for offline tile retrieval using the ArcGIS API for Flex. When we built the API, we always had the vision of extensibility to enable people to do things that we did not think about. One of them was to enable control of the URL from which the tiles are retrieved. A while back, I did such an implementation using Amazon S3. So, I rehashed that code using Adobe AIR File capabilities. The demo application that I am featuring here operates in two modes: an online mode and an offline mode. In the online mode, I keep a set of all downloaded tiles for a particular viewing session. Before I go offline, I download the map server metadata and all the visited tiles to my device local storage. The AIR runtime can notify an application when the network connectivity changes. This enables me to put the application in offline mode, and when I start panning and zooming, rather than retrieving the tiles from the cloud, I retrieve the tiles from my local storage. Pretty neat, eh? So here is the code:
public class OfflineTiledMapServiceLayer extends ArcGISTiledMapServiceLayer
{
  override protected function getTileURL(
     level:Number,
     row:Number,
     col:Number
  ):URLRequest
  {
    var urlRequest:URLRequest;

    if (Model.instance.isOffline)
    {
      urlRequest = new URLRequest(
        "app-storage:/l" + level + "r" + row + "c" + col);
    }
    else
    {
      urlRequest = super.getTileURL(level, row, col);
      if (urlRequest.url in Model.instance.cacheItemDict === false)
      {
        const item:CacheItem = new CacheItem();
        item.urlRequest = urlRequest;
        item.level = level;
        item.row = row;
        item.col = col;
        Model.instance.cacheItemDict[urlRequest.url] = item;
      }
    }

    return urlRequest;
  }
}
The OfflineTiledMapServiceLayer extends the ArcGISTiledMapServiceLayer class and overrides the getTileURL function. This function is invoked to get the tile URL for a particular map level, row and column. If the application mode is offline, then the "app-storage" URL scheme is used and the path is in the form of "l" + level + "r" + row + "c" + col. If the application mode is online, then super.getTileURL is invoked and we keep a set of visited URLs. Using the application settings view, the application has the option to download the map server metadata, iterate over the visited tiles and save the bitmap images to the local storage as defined by File.applicationStorageDirectory. The AIR runtime has the capability to notify the application of a network change. When this occurs, I ping a URL (www.google.com) using the HTTPService to determine if this is a connect or a disconnect change, thus putting the application in an online or offline state.
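To make the save-before-going-offline step concrete, here is a hedged sketch (not necessarily the demo's exact code) of downloading one visited tile into File.applicationStorageDirectory under the same l{level}r{row}c{col} name that getTileURL builds, plus the kind of connectivity ping described above (it assumes Model.instance.isOffline is writable):
import flash.events.Event;
import flash.filesystem.File;
import flash.filesystem.FileMode;
import flash.filesystem.FileStream;
import flash.net.URLLoader;
import flash.net.URLLoaderDataFormat;
import flash.utils.ByteArray;
import mx.rpc.AsyncResponder;
import mx.rpc.http.HTTPService;

// Download one visited tile and store it so the "app-storage:/" URL resolves offline.
private function saveTile(item:CacheItem):void
{
    const loader:URLLoader = new URLLoader();
    loader.dataFormat = URLLoaderDataFormat.BINARY;
    loader.addEventListener(Event.COMPLETE, function(event:Event):void
    {
        const file:File = File.applicationStorageDirectory.resolvePath(
            "l" + item.level + "r" + item.row + "c" + item.col);
        const stream:FileStream = new FileStream();
        stream.open(file, FileMode.WRITE);
        stream.writeBytes(loader.data as ByteArray);
        stream.close();
    });
    loader.load(item.urlRequest);
}

// On Event.NETWORK_CHANGE, ping a well-known URL to decide whether we are online or offline.
private function checkConnectivity():void
{
    const http:HTTPService = new HTTPService();
    http.url = "http://www.google.com";
    http.send().addResponder(new AsyncResponder(
        function(result:Object, token:Object = null):void { Model.instance.isOffline = false; },
        function(fault:Object, token:Object = null):void { Model.instance.isOffline = true; }));
}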
The application can be written in such a way that any visited tile is automatically saved to the local storage; I leave that as an exercise for the reader :-)
As usual, all the source code is available here.
NOTE: This sample application is for demonstration purposes ONLY and is intended to be used with your own legally cacheable tiles - I am not a lawyer, but I am pretty sure that it is not legal to save locally the ArcGIS.com accessible tiles.

Monday, September 12, 2011

Introspective Event Handling For Flex SkinnableComponents

So the Flex Spark architecture promotes the separation of a component model/controller from its view, or in Flex lingo, its skin. Here is the PITA process that I go through when creating a skinnable component manually:
  1. I create an ActionScript class (the host component) that subclasses SkinnableComponent.
  2. I define and annotate the skin parts. I try to define the skin part types to be as high as possible in the class hierarchy. What I mean by that is instead of defining a part to be of type Button, I make it of type ButtonBase.
  3. I override the partAdded function and add all the event listeners for each part, as event handling should be done in the host component not in the skin.
  4. I override the partRemoved function and remove all the added event listeners, as I want to be a “good citizen”.
  5. I create a subclass of Skin and associate it with the host component.
  6. I add the skin parts and any graphic elements to make it “pretty”.
  7. I “ClassReference” the skin to its host component as the default skin in the main application stylesheet.
  8. I implement the content of the event listeners.
  9. Done, to the next skinnable component.
Told you it was a PITA! So here is what a very simple skinnable component looks like:
package com.esri.views {
import flash.events.MouseEvent;
import mx.controls.Alert;
import spark.components.supportClasses.ButtonBase;
import spark.components.supportClasses.SkinnableComponent;

public class MySkinnableComponent extends SkinnableComponent{
    [SkinPart]
    public var myPart:ButtonBase;

    public function MySkinnableComponent(){
    }

    override protected function partAdded(partName:String, instance:Object):void {
      super.partAdded(partName, instance);
      if( instance === myPart) {
        myPart.addEventListener(MouseEvent.CLICK,myPart_clickHandler);
      }
    }

    override protected function partRemoved(partName:String, instance:Object):void {
      super.partRemoved(partName, instance);
      if( instance === myPart) {
        myPart.removeEventListener(MouseEvent.CLICK,myPart_clickHandler);
      }
    }

    public function myPart_clickHandler(event:MouseEvent):void {
      Alert.show('myPart_clickHandler');
    }
}
}
Pretty, eh? When programming, I do believe in DRY, and if something is "boilerplate", then it should be "templated". In the above, what is really the PITA is the monkey-coding of adding and removing event listeners for each added and removed part. Talk about repeating yourself! What if we could automate that process with convention and very minimal configuration? That would let me focus on the fun part, which is the skinning and styling, and on the money-making part, which is the logic. Now, please note my event handlers:
	public function myPart_clickHandler(event:MouseEvent):void
This naming convention says a lot: this is an event handler for a part named "myPart" and it handles the "click" event whenever it is dispatched. Cool, eh? See, using this convention, a colleague can look at this "self-documented" function and figure out what is going on at that line of code. So how do we make functions that follow this convention automagically "hooked up" and discovered by the running application? Enter metadata! So with minimal configuration, I can now have:
	[SkinPartEventHandler]
	public function myPart_clickHandler(event:MouseEvent):void
The discovery and handling of these functions can now be done for any skinnable component in a templated way by overriding the partAdded function using the amazing as3-commons-reflect reflection library:
override protected function partAdded(
  partName:String,
  instance:Object
  ):void
{
  super.partAdded(partName, instance);
  for each (var method:Method in m_type.methods){
    const metadataArray:Array = method.getMetadata("SkinPartEventHandler");
    if (metadataArray && metadataArray.length){
      const metadata:Metadata = metadataArray[0];
      const tokens:Array = method.name.split("_");
      const localName:String = tokens[0];
      if (localName === partName){
        const eventHandler:String = tokens[1];
        const eventType:String = eventHandler.substr(0,
             eventHandler.indexOf("Handler"));
        instance.addEventListener(eventType, this[method.name]);
      }
    }
  }
}
So what is happening here? As each skin part is added, we look for all the methods in this class that are annotated with SkinPartEventHandler. Based on the agreed convention, the name of each matching method can be split into two tokens using the underscore character as a separator. If the first token matches the added part name, then we can get the event type from the second token, which is the string preceding the 'Handler' suffix. So now, we can add the matching method as a listener on the added instance for that event type. Cool? I think so too! Come to think of it, all event handling in Flash/Flex should be done with metadata and convention. Oh well! Here is a FlashBuilder project that you can download to see how this is implemented and for you to DRY.
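Two details the snippet glosses over are where m_type comes from and the matching cleanup in partRemoved. Here is a hedged sketch of both, assuming as3-commons-reflect's Type/Method API (not necessarily the downloadable project's exact code):
import org.as3commons.reflect.Method;
import org.as3commons.reflect.Type;

private var m_type:Type;

public function MySkinnableComponent()
{
    // Cache the reflection info once instead of introspecting on every partAdded call.
    m_type = Type.forInstance(this);
}

// Mirror image of partAdded: remove the listeners that were added by convention.
override protected function partRemoved(partName:String, instance:Object):void
{
    super.partRemoved(partName, instance);
    for each (var method:Method in m_type.methods)
    {
        if (method.hasMetadata("SkinPartEventHandler"))
        {
            const tokens:Array = method.name.split("_");
            if (tokens[0] === partName)
            {
                const eventHandler:String = tokens[1];
                const eventType:String = eventHandler.substr(0, eventHandler.indexOf("Handler"));
                instance.removeEventListener(eventType, this[method.name]);
            }
        }
    }
}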

I am leaving the MOST important part for last, make sure to add "-keep-as3-metadata+=SkinPartEventHandler" to your "Additional compiler arguments" in the "Flex Compiler" under your project properties, or else this special metadata will not be compiled into the class definition by default.

Friday, August 19, 2011

ArcGIS Lite API for Flex

The ArcGIS API for Flex is pretty amazing and very powerful, but it is very GIS centric. Flex programmers have to know, for example, the difference between Mercator and geographic coordinate systems, and have to understand the concepts of map layers, etc.
Programmers just want to put dots on maps at a specific latitude and longitude. This is very easy to do using, say, the Google Maps API, and folks have been asking for something like that for a while. So, I am pleased to tell you about an open source project that we have launched on github that is exactly that: a simple mapping API, based on the core API, that enables Flex developers to build mapping applications. The idea behind open sourcing the project is to let you see how some high level functions are implemented using the low level API. Here is a quick sample:
<?xml version="1.0" encoding="utf-8"?>
<s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
               xmlns:s="library://ns.adobe.com/flex/spark"
               xmlns:mx="library://ns.adobe.com/flex/mx"
               xmlns:views="com.esri.views.*">
    <fx:Script>
        <![CDATA[
            import com.esri.ags.events.MapEvent;

            private function map_loadHandler(event:MapEvent):void
            {
                map.setCenter([ 40.736072, -73.992062 ]);
            }
        ]]>
    </fx:Script>
    <views:ESRIMap id="map" load="map_loadHandler(event)"/>
</s:Application>

In addition, the API is mobile friendly. You can build Android and iOS mapping applications using the Flex API. Here is a mobile sample:

<?xml version="1.0" encoding="utf-8"?>
<s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
               xmlns:s="library://ns.adobe.com/flex/spark"
               xmlns:views="com.esri.views.*">
    <fx:Style source="stylesheet.css"/>
    <fx:Script>
        <![CDATA[
            import com.esri.ags.events.MapEvent;
            import com.esri.events.GeolocationUpdateEvent;

            private function map_geolocationUpdateHandler(event:GeolocationUpdateEvent):void
            {
                map.locationToAddress(event.mapPoint, 50.0);
            }

            private function map_loadHandler(event:MapEvent):void
            {
                map.whereAmI();
            }
        ]]>
    </fx:Script>
    <views:ESRIMobileMap id="map"
                         geolocationUpdate="map_geolocationUpdateHandler(event)"
                         load="map_loadHandler(event)"/>
</s:Application>

As mentioned earlier, the project is on github. So you can clone it, compile it with the core API swc and learn how geocoding or routing is implemented.

The project could use more documentation... well, I do write self-documenting code! LOL - sorry, that was not funny, since this is supposed to be a stepping stone to the core API, which _is_ very well documented. So, I will have to spend more time on this and add more high level functions like “driveTimePolygon”.

Anyway, “git clone” the project and tell me what you think.

I want to give credit where credit is due - Andy Gup started this initiative with his great Starter Project Template.

Just to clarify an important point. This API is for very simple mapping purposes and is NOT maintained by the core team.

Tuesday, March 29, 2011

Hacking the Kinect with Flash in a Mapping Application

At this year's DevSummit, I did a couple of demo theater presentations. One of them was about hacking the Microsoft Kinect using the Flash platform in a mapping application. Here is a video.
Kinect is a very successful and important product for Microsoft. And if you ever played with it using an XBox, you will understand why it is a very neat piece of technology.
Now, have you seen the movie "Minority Report"? Remember in the beginning of the movie when Tom Cruise steps up to a console and starts waving his hands to manipulate images (video)? One of the gestures that most fascinated me was the one where he twisted his wrist while his fingers were pretending to grab a baseball-sized object. That twisting gesture moved a sequence of images through time. Twist to the right and the sequence moves forward in time. Twist to the left and the sequence moves back in time. In the movie, he went back and forth to detect a pattern in that time sequence. Now, wouldn't it be cool if we could do the same with Flash and a Kinect?
A couple of months back, I was working on a project that visualized levels of flash flood data from hurricane Hermine over Austin, TX. The data is temporal and is localized to nodes of a virtual grid on top of Austin with a cell size of about one and a half kilometers. Using a Flex interface to the Kinect, I want to detect rotation gestures performed by my hands to animate the flash flood levels back and forth over a time period.
My flood data was stored in a DBF file. Using the most excellent DBF library from Edwin van Rijkom, I was able to parse and load the data into memory. This data is about 71 megabytes, as it spans an area of about 25 by 25 kilometers for about 4 days' worth of hourly information. All that data is loaded into memory and bucketed by space and time, such that for each hour I can know instantly what the flood level value is at any location.
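A hedged sketch of the kind of space/time bucketing described (the grid-key scheme and names are made up for illustration, not the actual project code):
import flash.utils.Dictionary;

// One Dictionary per hour, keyed by "row_col", holding the flood level at that grid node.
private const m_hours:Vector.<Dictionary> = new Vector.<Dictionary>();

private function bucket(hourIndex:int, row:int, col:int, level:Number):void
{
    while (m_hours.length <= hourIndex)
    {
        m_hours.push(new Dictionary());
    }
    m_hours[hourIndex][row + "_" + col] = level;
}

private function levelAt(hourIndex:int, row:int, col:int):Number
{
    const value:* = m_hours[hourIndex][row + "_" + col];
    return value === undefined ? NaN : Number(value);
}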
Next, to make this spatial data morph itself over time visually and efficiently using the ArcGIS API for Flex, I created my own custom temporal layer. This custom layer relies heavily on the bitmap capabilities of the Flash Player. At application startup time, I create a base bitmap that is proportional to the map pixel width and height. Then, at each frame refresh during the life cycle of the application, I advance or retard an hour index value. For an hour index value, I can look up all the nodes and their values and convert them to small rectangular bitmaps whose pixel width and height are adjusted proportionally to the map scale, based on the roughly 1.5 square km cell area. Each bitmap is filled with a color picked from a color ramp proportional to the range of the loaded flood level values: blue on the lower end, red on the upper end. Each bitmap is bit-blit'ed onto the base bitmap, which is then bit-blit'ed onto the Flash Player display. By repeating this process over time, each location varies its color, giving the illusion of motion as a color traverses from one node to another. Cool, eh?
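Continuing the bucketing sketch above, the per-frame redraw could look roughly like this; colorForLevel() stands in for the blue-to-red color ramp and is a hypothetical helper, not shown:
import flash.display.BitmapData;
import flash.geom.Point;

// Redraw one hour of flood levels onto the base bitmap (rough sketch, not the actual layer code).
private function renderHour(base:BitmapData, hourIndex:int, cellPixels:int):void
{
    base.fillRect(base.rect, 0x00000000); // clear the previous frame (assumes a transparent base bitmap)
    const cell:BitmapData = new BitmapData(cellPixels, cellPixels, false, 0);
    for (var key:String in m_hours[hourIndex])
    {
        const rowCol:Array = key.split("_");
        cell.fillRect(cell.rect, colorForLevel(m_hours[hourIndex][key])); // hypothetical color ramp helper
        base.copyPixels(cell, cell.rect,
            new Point(int(rowCol[1]) * cellPixels, int(rowCol[0]) * cellPixels));
    }
    cell.dispose();
}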
In my MVC implementation of the application, my hour index was a bindable property in my model, and my temporal layer was bound to this property in such a way that whenever that property changed, the layer reflected that change by bit-blitting the node data stored in the model.
Now, that was the easy part of the implementation. The difficult part is how to hook the Kinect to my Mac and consume Kinect depth frames using Flex to detect gestures that will be translated into positive or negative changes to my model hour index property (remember, I modify that value and the layer reflects that change). Googling around, I came across the OpenKinect project. In the Wrappers section, I was delighted to find that somebody had already developed an ActionScript implementation, and from the looks of it, I thought I was almost there. The AS3Kinect client implementation opens a persistent socket connection from Flash to a daemon process written in C that is linked at compile time with the libfreenect library. This daemon process reads the Kinect depth frames as a byte stream from the USB port and forwards that stream of bytes through the open socket to the Flash application. Once on the client side, and again taking advantage of the bitmap capabilities, blobs can be detected. A blob is a small region on the bitmap with the same color. See, the Kinect sends depth information as bitmap frames. If you extend your hands in front of you and make your palms face the Kinect, your palms and your body are at different depths relative to the Kinect device. By creating a virtual front and back plane, depth data can be filtered and converted to either white or black color-encoded bytes on a bitmap: white for the front bytes and black for the back bytes. A continuous patch of white bytes (your palm) can be converted to a blob. A single blob movement (one palm) can be translated to gestures like a swipe up, down, left and right. And multiple blob movements (your two palms moving) can be translated to, for example, a rotation when the blobs are swirling around a point, or a scale up or down when the blobs are separated from each other diagonally or brought together diagonally. I wrote a simple program to test the transfer of the Kinect bytes through that daemon proxy to my Flex application, and it was sluggish and non-responsive, despite the fact that the supplied test program rgbdemo (written in pure C and utilizing OpenGL) worked flawlessly. Now, the AS3Kinect forum said that all should be fine when they tested it on their PCs. It was that last word that prompted me to test it on my Windows machine, and on that machine, it worked!!! I did a little bit of investigation and that led me to write this blog post. Summarizing the post: the problem is that there exists a 64K chunking limit on sockets in the Flash Player on Mac OS, and I needed to process 2 MByte chunks (the size of a Kinect depth frame). That 64K limit throttled way back the data stream, resulting in a slow-to-respond application :-( BTW, in the post, I did not want to give out too much of what I was working on, as I was preparing a surprise demo for the GISWORX'11 plenary session.
I tried to circumvent the chunking problem by using Alchemy and LocalConnection, but that too had its chunking limits, and I came to the realization that blob detection and gesture recognition have to occur down in the proxy and be passed along as small packets of information to the client. The AS3Kinect project had yet another implementation based on the OpenNI specification that did exactly what I needed, but that implementation was based on the Windows API and I have a Mac. I was very disappointed and started to panic with GISWORX'11 looming so closely. More googling around, and I came across another natural interface specification, TUIO. The TUIO home page mentioned a Kinect implementation that runs on Mac OS. In addition, somebody had implemented an AS3 interface to the TUIO protocol. The process is the same as with the AS3Kinect, where a TUIO server (TUIOKinect.app) is started. It reads the USB data stream and converts the chunks into frames, where each frame is analyzed to detect blobs. All that is happening on the server. The detected blobs and their trajectories are converted into gestures. The gestures are encoded into byte arrays per the TUIO specification and broadcast as UDP packets. The AS3 TUIO protocol implementation reads a UDP packet, decodes it and dispatches it as an AS3 event. To test the implementation, a simple sample application was provided to rotate and translate any display object whose mouseEnabled property is set to true. The sample application worked beautifully. Now, it was time to hook the TUIO AS3 implementation into my mapping application.
UDP packets can only be consumed by an AIR application, so first, I had to convert my web application to an AIR application. I used the AIR Launchpad application to generate the base template, and because my web application was based on MVC, the port of the model, the controllers and some of the views was simple. The main application had to change from an Application subclass to a WindowedApplication subclass. At the application creation complete event handling, a TUIOClient is instantiated with a UDP connector, as is a gesture manager with a stage reference and a rotation gesture listener. The gesture manager, having a reference to the stage, watches for any added display object with the mouse enabled property. These display objects can become gesture listeners. The custom layer, being mouse enabled and a child of the stage, is a candidate listener for gesture rotate events. Now, the Kinect is very responsive and TUIOKinect will blast the application with rotation events. To smooth this fast sequence of rotation values, which could have spikes in the stream, I implemented a digital low pass filter giving me smooth rotation values as my extended palms perform rotation gestures. A rotation to the right gives me a positive angle, which translates to incrementing my hour index value in my model, which through binding automagically refreshes the layer to show the flood level values at each node at that time instance. A rotation to the left does the opposite. Works like a charm, and this was a huge success at the GISWORX'11 plenary. Modest, ain't I?
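The digital low pass filter mentioned above can be as simple as an exponential moving average; here is a hedged sketch (the actual filter in the demo may differ):
private var m_smoothed:Number = 0.0;

// Simple exponential smoothing: alpha near 0 means heavy smoothing, near 1 means very responsive.
private function lowPass(rawRotation:Number, alpha:Number = 0.2):Number
{
    m_smoothed += alpha * (rawRotation - m_smoothed);
    return m_smoothed;
}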
If you have a Kinect and want to try it on your Mac, download the AIR application from here. As usual, the source is available for you to check out what I have done. Just right-click on the date label of the running application and it will give you the option to view and download the source code.
Happy Kinecting.