Tuesday, November 18, 2008

RESTful Query URLs (Cont.)

As a follow-up to my previous RESTful Query URLs post, I'm going to look at implementing RESTful querying for the JSON repository I developed. The repository stores JSON documents and is searchable using a 'query-by-example' approach, i.e. you provide a document with the key/value combinations you're interested in and the repository returns all documents that match.

Since a query involves a template document, the simplest approach would be to provide a /search URI and let clients POST their queries to it. My main hang-up with this approach is the lack of linkability: I can't email/IM/twitter a URI to the results of a query.

Doug talks about this in his recent REST via URI's and Body Representation blog post. In it, he suggests an approach where the client would POST the query/payload and receive a 201 and a GETable URI with the results. This has some interesting implications (as Doug points out). How long does the /resource/request/[id] stay around? Presumably it could stick around indefinitely, or until whatever is being queried changes. Do two clients POSTing the same body payload get the same results id? If you're going to support this, then you'll either have to query to see if the request has already been assigned an id, or you'll have to assign ids based on the contents of the request, perhaps a SHA hash of the body payload. In either case, you're going to have to store the original request along with the id you've assigned to it.
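
If you went the content-derived route, computing the id is straightforward. As a minimal sketch (my own illustration, not anything from Doug's post), you could hash the body payload so identical queries always map to the same /resource/request/[id]:

import java.math.BigInteger;
import java.security.MessageDigest;

public class RequestIds {
    // Hash the POSTed body so identical payloads always yield the same id.
    public static String idFor(byte[] body) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-1").digest(body);
        return new BigInteger(1, digest).toString(16); // hex-encode the digest
    }
}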

I think this is why URI-encoding appeals to me: I don't have to keep any extra state around because I can re-create the request from the URI. This falls down when you need more expressive request capabilities than URI-encoding allows. I can also see an advantage in Doug's approach if the majority of your interactions are going to be workflow-based rather than single shot queries.

For the interface to the JSON repository, I chose to represent queries via the URI. I even went so far as to avoid using the query string, opting rather to put everything into the URI structure. The choice to do this was mainly one of exploration. I wanted to see if it offered any advantages (readability, cachability, simplicity) over using the query string.

The URIs take the form of: /collection[/term/value(s)]+ where term is either a direct property/key in the desired JSON document or a derived property (such as 'fulltext', which looks at the full text index of the document). The value section can be either a single value or a comma-separated list of values. Some annotated examples include:
  • /and2 - return all documents in the and2 collection
  • /and2/1 - return the document with id = 1 in the and2 collection (special case)
  • /and2/type/image.SplitCore - return all documents with a type property of 'image.SplitCore'
  • /and2/fulltext/calcite,calcareous,carbonate - return any documents that contain 'calcite' OR 'calcareous' OR 'carbonate'
  • /and2/depth/100,200 - return any documents between depths 100 and 200. This changes the semantics of the comma operator: it no longer means OR as it did with the fulltext term. If you pass in only one depth, it returns only documents at exactly that depth. If you pass in more than two depths, the additional depths are ignored.
Multiple terms can be chained together:
  • /and2/type/image.SplitCore/depth/100,200 - return any documents of type 'image.SplitCore' AND between depth 100 and 200.
  • /and2/fulltext/calcite/fulltext/carbonate - return any documents containing 'calcite' AND 'carbonate'
And I've added some special query operators:
  • /and2/type/!psicat.* - return any documents not of any psicat type, e.g. this would exclude documents with type properties of 'psicat.Interval', 'psicat.Unit', and 'psicat.Occurrence'.
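
Resolving one of these URIs boils down to splitting the path into a collection segment followed by term/value pairs. Here's a rough sketch of that decomposition (a hypothetical helper for illustration, not the repository's actual code; the single-id special case like /and2/1 would be handled separately):

import java.util.*;

public class QueryUri {
    /** Split "/collection/term/value(s)/..." into [term, values] criteria. */
    public static List<String[]> parse(String path) {
        String[] parts = path.split("/"); // parts[0] is empty, parts[1] is the collection
        if (parts.length % 2 != 0) {
            throw new IllegalArgumentException("Terms and values must occur in pairs: " + path);
        }
        List<String[]> criteria = new ArrayList<String[]>();
        for (int i = 2; i < parts.length; i += 2) {
            criteria.add(new String[] { parts[i], parts[i + 1] }); // values stay comma-separated
        }
        return criteria; // criteria are ANDed; commas within a value mean OR (or a range)
    }
}
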
What I like about this approach is that I can build a URI to just about any subset of documents in the collection (though it does require some prior knowledge of the structure). There are a few warts, though. For one, it requires that URI components occur in pairs, so you can't peel back the URI like an onion: /and2/type/image.SplitCore is valid but /and2/type doesn't make sense. There is also an issue of canonicality: /and2/type/image.SplitCore/depth/100,200 will always return the same results as /and2/depth/100,200/type/image.SplitCore, but they appear as separate resources to the caching layer. And there's aesthetics: I don't yet know how I feel about commas in the URL; they look weird to me. The scheme also doesn't pluralize terms when multiple values are sent, e.g. you can't write /and2/types/image.SplitCore,image.WholeCore.

I'd love to hear any feedback on what you think of this approach.

Encounters at the End of the World

I just got my copy of Encounters at the End of the World. If you're interested in Antarctica and what it's like to live and work there, then I highly recommend this movie. It was actually filmed while I was down there the first time, but I was far too busy to put in a cameo. :) It will probably come off as a little out there, but it's a good representation of the people and life down there.

Monday, November 17, 2008

ImageMagick DSL 2

This is a quick post that shows another way to work with the ImageMagick DSL (and other Java DSLs). It comes from a trick I saw in a DSL talk given by Neal Ford. Basically, you can use instance initializer blocks to construct objects in a less verbose way:

new ImageMagick() {{
option("-rotate", "90");
option("-resize", width + "x");
}}.run(in, out);


This still requires that external variables, such as width, be declared final. So to wrap up, here are three different ways to invoke the DSL:

Standard:

ImageMagick convert = new ImageMagick();
convert.option("-rotate", "90");
convert.option("-resize", width + "x");
convert.run(in, out);


Method Chaining:

new ImageMagick().option("-rotate", "90").option("-resize", width + "x").run(in, out);


Initializer Block:

new ImageMagick() {{
option("-rotate", "90");
option("-resize", width + "x");
}}.run(in, out);

Friday, November 07, 2008

ImageMagick DSL

I've been fighting with JAI/Java2D over the last day or two to manipulate (resize, crop, composite) some large images. I have working code that produces decent quality images, but I really have to crank up the heap space to avoid OutOfMemoryErrors. If I try to process more than one or two images concurrently, OutOfMemoryErrors are inevitable. Since this code is going to be called from a servlet, I'm expecting to handle multiple concurrent requests.

This is not a new problem and people have been tackling it in various ways. Since I was working in a server environment and have control over what applications are installed, I decided to use ImageMagick for the image manipulation. ImageMagick is great; I've used it quite often in various shell scripts.

There are basically two ways to work with ImageMagick from Java. You can use JMagick, a JNI layer over the ImageMagick interface, or you can use Runtime.exec() to call the ImageMagick command line application. I opted for the latter as it seemed simpler when I pushed the code from my Mac to my Linux server.

Since finding and invoking ImageMagick's convert command can be somewhat problematic, I decided to write a simple fluent API in Java to hide the details. The result allows you to invoke convert using method chaining:


ImageMagick convert = new ImageMagick(Arrays.asList("/opt/local/bin"));
convert.in(new File("in.jpeg"))
.option("-fuzz", "15%")
.option("-trim")
.out(new File("out.jpeg")).run();
convert.option("-resize", "250x").run(new File("in.jpeg"), new File("out.jpeg"));


You create a new ImageMagick object. As a convenience, you can pass in a list of additional paths to check for the convert command in the event that it isn't on the default path. If the convert command can't be found, the constructor throws an IllegalArgumentException.

Once you have an ImageMagick object, you can execute convert by chaining various method calls, ending in a run(). run() returns true if the command succeeds, false otherwise.
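
The wrapper itself isn't shown here, but as a rough idea, here's a stripped-down sketch of what such a fluent class might look like (the real class also searches the extra paths passed to the constructor when locating convert):

import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class ImageMagick {
    private final List<String> args = new ArrayList<String>();
    private File in, out;

    public ImageMagick in(File file) { this.in = file; return this; }
    public ImageMagick out(File file) { this.out = file; return this; }

    public ImageMagick option(String... option) {
        for (String o : option) { args.add(o); }
        return this;
    }

    // Builds the command line and returns true if convert exits cleanly.
    public boolean run() {
        List<String> command = new ArrayList<String>();
        command.add("convert"); // this sketch assumes convert is on the PATH
        command.add(in.getAbsolutePath());
        command.addAll(args);
        command.add(out.getAbsolutePath());
        try {
            return new ProcessBuilder(command).start().waitFor() == 0;
        } catch (Exception e) {
            return false;
        }
    }

    public boolean run(File in, File out) {
        return in(in).out(out).run();
    }
}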

In less than 200 lines of Java code, I had a much nicer way to interact with ImageMagick. A fun experiment would be to take the code and implement an even nicer DSL in Groovy. methodMissing() would allow fluent method chaining on steroids:

convert.fuzz("15%").trim().run(in, out)

As Guillaume Laforge tweeted, using metaprogramming, categories, and syntactic sugar like named parameters you could end up with a full blown DSL that looks like this:

convert fuzz:15.pct, trim: true, in: file, out:file

Friday, October 31, 2008

RESTful Query URLs

The last couple of days I've been working on writing a RESTful JSON document database. While a number of these already exist (CouchDB, FeatherDB, DovetailDB, Persevere, JSONStore, etc.), I decided to write my own because I wanted a bit more control over the URL scheme used by the REST interface, and I needed the ability to tweak the search functionality to achieve decent performance on some common but complicated queries. All in all it was an interesting diversion. The actual server clocked in at about 1000 SLOC, with much of that boilerplate because I wrote it in Java/JDBC rather than Groovy/GroovySQL.

The most interesting problem came in designing the query scheme for the REST interface. There seem to be a couple of different ways to implement it, with no real consensus as to which is the "right" way. As with most things, I suspect it depends on how you've implemented other pieces of the architecture, and even personal preference. Below I describe three approaches I considered. The nice thing with REST is there's nothing stopping you from implementing all of these approaches in your interface.

NB: I'm no REST expert, so the information below is my observations rather than any best practices. I'd love for anyone who knows better to chime in.

POST query parameters/document
In this approach, you provide a search endpoint, say something unoriginal like '/search', and queries are POSTed to that URI. The query is either a set of form encoded key-value pairs or a search document using a schema shared between the client and server.

This approach seems closer to RPC than REST to me, but may be the best approach if your search functionality requires a more complex exchange of information than simple key-value pairs allow. The obvious downside to this approach is that there is no way to bookmark a query or email/IM a query to someone else. This approach also can't take advantage of the caching built into the HTTP spec.

GET query string
Similar to above, you expose a URI endpoint, possibly something like /search, and queries are sent to that endpoint with the parameters encoded in the query string of the URL, e.g. http://www.google.com/search?q=REST+query+string

This approach improves on the bookmarkability of searches, since all of the parameters are in the URL. However, the use of the query string may interfere with caching as described in Section 13.9 of the HTTP spec. Overall, I think there is nothing inherently un-RESTful about this approach, especially if you provide more resource-oriented URIs than /search, e.g. /documents?author=Reed. In my head, I interpret the latter as "give me all of the document resources but filter on the author Reed." Removing the query string will still give you a resource (or collection of resources in this case).

Where this approach falls down is when you start trying to represent hierarchical or taxonomic queries with the query string, e.g. http://lifeforms.org?k=kingdom&p=phylum&c=class&o=order&f=family&g=genus&s=species as described on the RestWiki.

Encoding query parameters into the URI structure
In this approach the query parameters are encoded directly into the URI structure, e.g. /documents/authors/Reed, rather than using the query string. Another example is described at Stack Overflow.

This approach solves both the bookmarkability and the caching issues of the previous approaches, but can introduce some ambiguity, especially if your resources aren't strictly hierarchical in nature. The biggest stumbling block for me was this: looking at the URI /documents/authors/Reed, it's not immediately clear what will be returned. For example, if I sent you the URI /documents you might infer that you would get a list or the contents of some documents. From the URI /documents?author=Reed, you might infer that the resource(s) returned would be documents authored by Reed. So what might you expect to get from the URI /documents/authors/Reed? Information about the author Reed or all documents authored by Reed?

How important is this? I guess it's really up to you. A machine likely infers about as much from /documents/authors/Reed as it does from /documents?author=Reed.

Thursday, October 09, 2008

Core Gallery

It seems like every couple of months I end up with a project that involves a fair amount of Javascript. Back in March it was working with Simile Timeline to visualize depth-based data. This time around, I wanted to create a lightweight way to visualize our drill core imagery. We already have full-featured visualization tools that scientists use, so I was looking to create something simple that would engage non-geologists.

The result is the Core Gallery. It shows an animated whole core image next to a split core image. Since the images are too large to display on the screen, there's a slider that lets you see different parts of the core. The page also displays some additional information about the core.

I'm really happy with how it turned out. The page is 100% HTML, Javascript, and CSS. No Flash and no Java. For the Javascript, I'm using JQuery for no reason other than I wanted to see how it stacked up to other JS libraries I've used. It was perfect for this project and a treat to work with. Below I'm going to sketch out how various parts of the page are built.

Core Slider
The core slider is the most complicated part of the page. It uses the JQuery UI/Slider component. I used this screencast to help me acquaint myself with the slider. To achieve the highlighted core effect as the slider handle moves, I used two thumbnails of the core. One thumbnail is regular and one is washed out. I set the washed out thumbnail as a CSS background image on the slider element. I set the regular thumbnail as a CSS background image on the slider handle. The handle has a fixed size based on the height of the thumbnail vs. the height of the real core images, so only part of the thumbnail is shown. From the slider's slide() callback, I simply update the CSS background-position property on the handle to ensure the handle's image is showing the same portion of the core as the underlying slider. I use this same technique to move the rotating whole core and split core images, taking the difference in image height between the thumbnail and the other core images into account.

Animated Whole Core Image
The slider was the most complicated, but the animated whole core image was the most challenging. I wanted to show the image animated in faux 3D. I initially started with a Java applet using JOGL. The applet worked on my Mac but not on Windows or Linux, so I abandoned it. I then got the idea to employ the CSS Sprites technique: I used a tool to render the 3D whole core image 90 times, each rotated by 4 degrees, and montaged them together. Once I had this, it was simply a matter of setting up a Javascript timer interval to fire every 50ms and move the image right by a fixed amount each time. This simulates animation fairly effectively. I keep track of the current rotation and vertical offset in global variables so the core keeps rotating when you move the slider.

Split Core Image
I use the same technique as on the slider handle to make the image track the slider's position.

Core Links
In the text description, it is possible to link to different parts of the core. This is a somewhat neat trick. To accomplish it, I wrap portions of the description text in span tags. Each span tag has an id attribute in the form of a ratio between 0.0 and 1.0. Using JQuery, I find these special span tags and add an onClick handler that updates the slider position based on the span's id attribute. So if the span had an id of 0.8, clicking on it would move the slider to the 80% position of the core. 0.0 takes you to the top and 1.0 takes you to the bottom.

Conclusion
Overall the Core Gallery turned out surprisingly well for being 100% browser-based. It took much less work than I originally envisioned thanks to JQuery. I'd definitely consider JQuery for future projects.

Wednesday, September 17, 2008

My First Griffon App

Sorry it's been so long since I posted here. Work keeps me busy.

Recently I was given several GB of raw data from our two most recent scientific drilling expeditions in Antarctica. This data needs a fair amount of quality control processing to turn it into usable datasets for the scientists. To do this, I needed to write a tool that lets the drillers interactively plot and explore the data to determine regions of interest. Given the recent buzz about Griffon, I thought I'd give it a try.

I started by downloading and installing Griffon. Once I had everything set up, I created an app:

griffon create-app DrillingAnalytics


If you've done any Grails development, this will be a familiar idiom to you. The result of this command is a straightforward directory structure, focused around the MVC pattern. You'll recognize directories for models, views, and controllers.

My next step was to flesh out my model. When you create an app, Griffon automatically creates a model class called ${app.name}Model (DrillingAnalyticsModel for me) in the griffon-app/models directory. The main purpose of my app is to plot time series data, so I defined two fields, startDate and endDate, in my model:

import groovy.beans.Bindable

class DrillingAnalyticsModel {
    @Bindable String startDate = "2006-11-07 00:00"
    @Bindable String endDate = "2006-11-08 00:00"
}



You'll notice the @Bindable annotations on these fields. These fields will be tied to components in the UI, and the @Bindable annotation automatically takes care of keeping the UI in sync with the model via PropertyChangeEvents.

The model class is also where you can put other fields to maintain application state:
 
def plot = new CombinedDomainXYPlot(new DateAxis())
def subplots = []
def chart = new JFreeChart(null, JFreeChart.DEFAULT_TITLE_FONT, plot, false)


With the model sorted out, I moved on to developing the view. As with the model, Griffon creates a ${app.name}View class for you in the griffon-app/views directory. Griffon puts the full power of SwingBuilder, SwingXBuilder, and GraphicsBuilder (with more on the way) at your fingertips for developing the UI.

I spent the majority of my time on the UI. It was a seemingly endless cycle of tweaking the code and testing with griffon run-app to get it to look the way I wanted. This is no knock on Griffon; writing Java UIs, especially by hand, just plain sucks.

After far too long trying to get the standard Java layout managers to do what I wanted, I did myself a favor and downloaded MigLayout. Despite not being built into SwingBuilder, MigLayout integrates nicely with SwingBuilder:

application(title:'Drilling Analytics', pack:true, locationByPlatform:true) {
    panel(layout: new MigLayout('fill')) {
        // chart panel
        widget(chartPanel, constraints:'span, grow')

        // our runs and time
        panel(layout: new MigLayout('fill'), border: titledBorder('Time'), constraints: 'grow 100 1') {
            scrollPane(constraints:'span 3 2, growx, h 75px') {
                runs = list(listData: model.mis.keySet().toArray())
            }
            label('Start:', constraints: 'right, gapbefore 50px')
            textField(id:"startDate", text: bind { model.startDate }, action: plotAction, constraints:'wrap, right, growx')
            label('End:', constraints: 'right, top')
            textField(id:"endDate", text: bind { model.endDate }, action: plotAction, constraints:'wrap, right, top, growx')
            label("+/-", constraints: 'right')
            textField(id:"padding", text: "30", constraints: 'growx')
            label("min")
            button(action: plotAction, constraints:'span 2, bottom, right')
        }

        // our plots panel
        panel(layout: new MigLayout(), border: titledBorder('Plots'), constraints: 'grow 100 1') {
            model.data.each { id, map ->
                checkBox(id: id, selected: false, action: plotAction, text: map.title, constraints:'wrap')
            }
        }
    }
}


SwingBuilder gets rid of all the boilerplate code, and MigLayout makes it possible to code decent Java UIs by hand.


We've covered the Model and the View; now it's time to focus on the Controller. The controller mediates between the model and view. It contains all of the logic for handling events from the UI and manipulating the model.

One common pattern in the existing Griffon examples is the use of Swing Action objects to trigger actions from the UI. My UI was pretty simple so I could reuse a single action on all of the components to refresh the plot:

actions {
    action(id: 'plotAction',
           name: 'Update',
           closure: controller.plot)
}


I put this code in my DrillingAnalyticsView class, but it could just as easily be defined in its own file and imported into the view via the build() method. You'll notice that I give the action an id--plotAction--which I use to reference it from the components:

button(action: plotAction, constraints:'span 2, bottom, right')


You can also see that the action just delegates to the controller.plot closure. This is convenient because it keeps all of the logic in one place, and the controller has access to both the model and view. The actual code of controller.plot is unremarkable. The big consideration is to properly manage your threading. Don't do long-running actions in the EDT, as that will freeze the UI, and don't update the UI from outside the EDT, as Swing is not thread safe. Andres Almiray has a good description of how Griffon makes this easy.

Since my app is fairly niche (I doubt there are many of you visualizing drilling data), I'm not going to post the whole source code here. However, I want to point out that the source code consists of just 327 lines, and that's including blank lines and comments! The bulk of that code is the logic to query the database and update the JFreeChart plots. This truly demonstrates how simple and easy it is to build an app with Griffon.

If you're looking for more Griffon examples, check out the samples included in the samples directory of the Griffon distribution, and keep an eye out for Griffon posts on groovy.dzone.com.

Wednesday, August 06, 2008

OSGi Command Line Applications

I'm a big fan of OSGi. One thing I always wanted to do, but never got around to implementing until just recently, was to be able to call services in an OSGi application from the command line. I've often wanted to be able to script PSICAT instead of having to fire it up and interact with the GUI. Turns out it's not all that difficult; you just need to sit down and do it. The only snag I ran into was that I couldn't find an implementation-agnostic way of accomplishing this, so the code I'm going to show is for the Equinox OSGi implementation, though the same could easily be accomplished in Felix or likely other implementations with minor changes.

As with most things, there are multiple ways to skin a cat. The route I chose was to embed Equinox in a Java app and mediate command line access through this class. Fortunately, most of the work is already done for us via the EclipseStarter class (if you're on Felix, check out this). Assuming Equinox is on your classpath, simply calling EclipseStarter#startup() will fire up the Equinox runtime. More importantly, it will give you a BundleContext which you can use to interact with the OSGi framework. Once we have a BundleContext, we can do interesting things like install and start additional bundles:

public static void main(final String[] args) throws Exception {
    // start the framework
    context = EclipseStarter.startup(new String[0], null);

    // install all bundles
    installAllPlugins();

    // start our platform bundles
    startPlugin("org.eclipse.core.runtime");

    // start plugins
    for (Bundle b : context.getBundles()) {
        startPlugin(b.getSymbolicName());
    }
    ...


The final piece is to do the command line interaction. For this, I created an interface that bundles can publish services under to make them available to the command line:

public interface ICommand {
    /**
     * Execute this command.
     *
     * @param args
     *            the args.
     * @return the return value.
     */
    Object execute(String[] args) throws Exception;

    /**
     * Gets the help text that explains this command.
     *
     * @return the help text.
     */
    String getHelp();
}

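On the publishing side, a bundle registers its commands as ordinary OSGi services with a 'name' property. As a rough sketch (a hypothetical activator, not code from my app):

import java.util.Hashtable;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

public class ExampleActivator implements BundleActivator {
    public void start(BundleContext context) throws Exception {
        // the 'name' property is what the command line layer filters on
        Hashtable<String, String> props = new Hashtable<String, String>();
        props.put("name", "hello");
        context.registerService(ICommand.class.getName(), new ICommand() {
            public Object execute(String[] args) {
                return "Hello from OSGi!";
            }

            public String getHelp() {
                return "hello - prints a greeting";
            }
        }, props);
    }

    public void stop(BundleContext context) {
        // services registered by this bundle are unregistered automatically
    }
}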

Unfortunately, since there is a lot of classloader magic going on, we can't just get these ICommand classes from the service registry and invoke them directly (like we would do from inside OSGi). The OSGi classes are on a different classloader than the one we started things on. At first this may seem annoying, but it's actually a good thing--it means fools can't crash the OSGi implementation. So we can either specify some classloader chicanery (osgi.parentClassloader=app) or we can invoke the commands via reflection. I opted for the latter because I was always taught not to mess with things you don't understand, and the ClassLoader hierarchy under OSGi is definitely something I don't understand. Here are the two applicable methods:

private static Object invokeCommand(final String name, final String[] args)
        throws Exception {
    String filter = "(&(" + Constants.OBJECTCLASS + "="
            + ICommand.class.getName() + ")(name=" + name + "))";
    ServiceReference[] services = context.getAllServiceReferences(
            ICommand.class.getName(), filter);
    if ((services != null) && (services.length != 0)) {
        Object c = context.getService(services[0]);
        if (c != null) {
            Method m = c.getClass().getMethod("execute", String[].class);
            return m.invoke(c, (Object) args);
        }
    }
    return "Command not found: " + name;
}



private static Map getAllCommands() {
    Map commands = new LinkedHashMap();
    try {
        ServiceReference[] services = context.getAllServiceReferences(
                ICommand.class.getName(), null);
        if (services != null) {
            for (ServiceReference r : services) {
                Object c = context.getService(r);
                if (c != null) {
                    try {
                        Method m = c.getClass().getMethod("getHelp");
                        commands.put((String) r.getProperty("name"),
                                (String) m.invoke(c));
                    } catch (SecurityException e) {
                        // ignore
                    } catch (IllegalArgumentException e) {
                        // ignore
                    } catch (NoSuchMethodException e) {
                        // ignore
                    } catch (IllegalAccessException e) {
                        // ignore
                    } catch (InvocationTargetException e) {
                        // ignore
                    }
                }
            }
        }
    } catch (InvalidSyntaxException e) {
        // should never happen
    }
    return commands;
}


Not my finest hour, throwing Exception, but it should get you on your way. It works like a charm in my app.

Cheers,
Josh

Saturday, August 02, 2008

AT&T Update

Well, since I bitched about AT&T last time, I suppose I should post something with some technical merit. It'll be in the next post, so folks that want to read it don't have to read through this post. For those of you interested, things aren't fully resolved with AT&T but Elizabeth's mom got on the phone with AT&T and put them in their place. She took it to the AT&T National level and has direct lines to folks there that can actually get stuff done. Supposedly everything is almost sorted, I just need to bring my iPhone in and get it re-programmed to my new number. I say 'supposedly' because until the deal is actually done and it's been a month or two, I have absolutely no faith in AT&T. It was a bit comical, though, because Elizabeth's mom got things sorted in like 20 minutes. Both Elizabeth and I are dumbfounded after the numerous interactions with AT&T, both on the phone and in person, as to how she could be so persuasive.

Thursday, July 31, 2008

AT&T == Lying, Deceitful, and Fraudulent

So it's been a long time since I blogged and I really hate to be so negative, but I had an absolute nightmare of a day dealing with AT&T today. My birthday is coming up and Elizabeth thought it would be nice to get me an iPhone because I had been asking about them. So begins the saga. We previously had cell phones on Elizabeth's parents' AT&T Family Talk plan, which costs about $20/month for 2 lines. Elizabeth called up AT&T to figure out what we had to do to get me an iPhone. They informed us that all we needed to do was sign up for a new account and transfer our existing numbers to this account. However, despite mentioning numerous times that we were doing this to purchase an iPhone and asking explicitly about the costs, we were told "pay the transfer fee of $18/line and sign up for a Family Talk plan @ $69.99 and then go into the Apple Store and they'll set up the iPhone data plan" and you'll be good to go. (And I make it sound easy, but this really entailed 2 hours on the phone and talking to several different people at AT&T). Looking online we had a ballpark figure of around $150/month for the voice and data. This is far more than the $20/month we had been paying, but we were willing to shell out the money for our own account and for the iPhone plan.

Fast forward to an hour later when we were in an Apple store trying to activate one of the few remaining iPhones. Activation failed! Apparently the new account got created as a business account instead of a domestic account and Apple couldn't activate the phone. WTF? Well, we called up AT&T while at the mall and after another 30 minutes on the phone, got the new account switched to a domestic rather than business account. We went back a second time to activate the phone and were told we weren't eligible. What everyone from AT&T neglected to mention was that, despite signing up for a new account at a significant additional cost, by transferring our numbers we were still bound under the original contract. Never mind the fact that we 1) weren't switching companies and 2) were actually bringing MORE money in for AT&T, since we were going from paying $20/month for the next year to paying $150/month for the next 2 years.

That's all fine and dandy but what I don't understand is how they can sign us up for a new contract under different terms but hold us to the original contract. They were all too happy to charge us $36 to transfer our numbers and commit us to paying $150/month for the next 2 years but when we want to get the iPhone at the discount all of a sudden the story is that we're still under the other contract and are not eligible for the phone at the reduced price. Now, I can understand the transfer logic and I can understand the new account logic. What I can't fathom is how they think they can enforce two contracts, with conflicting terms, at the same time? Either the new account comes under the old contract terms, and I pay $20/month through September 2009 and no iPhone upgrade (not really a new account then) or the new account is treated like a new account at the new rate for the new time period and I'm eligible for the iPhone upgrade. One or the other, but you can't have both!

But it doesn't end there. I went ahead and signed up a new single account under my name to purchase an iPhone. This meant having to get a new number. After the Apple store, we walked down the hall in the mall and went into the AT&T corporate store thinking it might be a refreshing change from spending hours on the phone. We explained the situation to the customer service rep there and he was all too happy to try and rectify the situation. He said "sure, we can just transfer those numbers back to the original account and close the new account". We were like, that's fine even though my old number/phone would basically go unused now that I had a new number. It would save Elizabeth from having to change her number. The AT&T rep couldn't do the transfer from his system because of that stupid business vs. domestic error so he called the corporate office to get things fixed. He almost got it but then needed permission from Elizabeth's mom to add us back onto the Family Talk plan. This makes sense, so I don't fault them for that. Unfortunately we couldn't get Elizabeth's mom on the phone so we couldn't continue.

It was at that point that the rep informed us that it would cost an additional $18/line to transfer the numbers back! So we had to pay to transfer the lines to a new account, despite the fact that no one informed us that we wouldn't be able to purchase the iPhone at the reduced rate, and now they wanted to charge us an additional $18/line to transfer back. All that after spending the whole afternoon, from noon to 5PM, either on the phone or in stores dealing with AT&T! The rep, Andrew "Drew," at the Southdale AT&T store then proceeded to get in our faces about the charges and be quite rude. "Well it's not like we just went in and changed it without permission." No, but you also were deceitful when you said that all we had to do was sign up for a new account and transfer our numbers and then we'd be good to go.

But the best is yet to come. So we leave the store and Elizabeth immediately gets on the phone again with AT&T. Once she actually gets to a live person, she explains the situation for the umpteenth time and then gets flak when she asks for a manager after the person on the other end won't help her. After explaining the situation yet again, the manager seems sympathetic and is willing to waive the transfer fees. She begins the process of transferring back and then magically says "we can't transfer back because lines in a new account can't be transferred for 60 days". So the only thing you can do is transfer the 3 lines on her parents' account to our account for 2 months and then transfer them to another account after that. And guess what, that's $18/line for each line and then another $18/line to transfer off our account. All because transferring back wasn't possible. Gee, well our buddy Drew in the store seemed to think it was possible. So yet again, AT&T comes up with these convenient rules.

So let's recap. When you sign up a new account with AT&T and try to transfer your lines, beware that despite them taking significantly more than what you were paying before and binding you to an additional 2 years, they can and will choose to enforce the previous contract when it suits them. So basically you're bound to 2 contracts and they've got you over a barrel by using whichever suits them at the time. You should also not expect to be informed of absolutely anything, especially not contract terms, when you sign up for your new old account. Furthermore, what you can and can't do changes from person to person and from minute to minute. Our buddy Drew was going to transfer us back and the manager on the phone was going to transfer us back, but then randomly came up with this no-transfer-for-60-days rule which conveniently nets AT&T an additional $108 in transfer charges. The best part is, and what no one at AT&T seemed to grasp, was that it was in their interest to just give us a new account and let us sign up for the iPhone, because it meant we went from paying $20/month to paying $150/month AND they had us for 2 full years! I hope some AT&T investors stumble across this and realize how poorly managed the company is, that they are throwing away money and souring customers.

So at this point, there's not much we can do. We're going to let Elizabeth's mom talk to them and see if she can make any headway. Tomorrow I'm going to file a complaint with the MN Attorney General and the Better Business Bureau for deceitful and fraudulent practices. If Elizabeth's mom doesn't make any headway, I think we'll be contesting the charges with our credit card company and we'll have to see if it is worth filing in small claims court. After that, I'm out of ideas. The only advice I have is: steer clear of AT&T if you can.

Through it all, the Apple Store employees were helpful and pleasant to work with, even going so far as to try and cover for AT&T. It was all too obvious, though, that AT&T was in the wrong. They were truly apologetic that we had to go through such a mess, and didn't want this experience to sour our opinion of Apple and the iPhone. No worries, though, as we got nothing but top notch service from Apple.

Time for bed, it's been a long day.

Thursday, July 10, 2008

St. Petersburg, Russia

Sorry for the lack of recent updates. It seems like I've been on a tour for work: Lincoln, Potsdam, and now St. Petersburg, Russia, all in the last month or so.

St. Petersburg is like no place I've ever been. The diversity and contrast between buildings is amazing. You'll be walking down the street and see buildings with huge golden domes and intricate architecture next to a no-nonsense, utilitarian building that looks like it has been abandoned.

Overall, I've had good luck with the people. Most have been friendly and helpful. The rest have been largely indifferent to my butchering the pronunciation of the few Russian words I've picked up via osmosis.

The biggest adjustment for me is the lack of smoking bans in public areas. I was shocked when we arrived and I saw someone lighting a cigarette in the hotel lobby. It's completely different from the US and is something I don't think I'd want to get used to.

I've had a hard time adjusting to the timezone. It's 9 hours different from home, but for whatever reason I haven't been sleeping very much, and not on a regular schedule. Part of the problem may be that there's very little darkness at night during the summer. It usually gets dark around midnight and stays dark for 2 hours or so. It's almost like my first weeks on the ice in Antarctica.

I'm looking forward to returning home on Saturday. My flight is early Saturday morning and I arrive back in Minneapolis around 3:30PM if all goes to plan (though I'm not holding my breath with the state of air travel these days). I have to quickly rush home from the airport, change, and go to a wedding reception. I doubt I'll make much of a party guest, but I should put in an appearance. After that, I think I'll take a few days to settle back in and get on a normal schedule. I think I'm home for all of a week before I have to pop over to DC for a quick meeting. Then I think I'm going to do everything in my power to spend a full month at home in my new house. Though we'll see what comes up.

Dasvidania.

Thursday, June 26, 2008

Mercurial Push from IntelliJ

I've been using IntelliJ on a recent project because of its Groovy and Mercurial support. The Mercurial support worked quite well except for pushing changes to a remote repository over ssh. A quick look at the Version Control Console revealed that things were getting hung up on: remote: ssh_askpass: exec(/usr/libexec/ssh-askpass): No such file or directory. After confirming that there was no ssh-askpass on my Mac OS X Leopard system, I turned to Google. After a few misses, I stumbled across Joe Mocker's blog post about VNC tunneled through SSH on OS X. Embedded in that post is this shell/AppleScript:

#! /bin/sh

#
# An SSH_ASKPASS command for MacOS X
#
# Author: Joseph Mocker, Sun Microsystems

#
# To use this script:
# setenv SSH_ASKPASS "macos-askpass"
# setenv DISPLAY ":0"
#

TITLE=${MACOS_ASKPASS_TITLE:-"SSH"}

DIALOG="display dialog \"$@\" default answer \"\" with title \"$TITLE\""
DIALOG="$DIALOG with icon caution with hidden answer"

result=`osascript -e 'tell application "Finder"' -e "activate" \
-e "$DIALOG" -e 'end tell'`

if [ "$result" = "" ]; then
exit 1
else
echo "$result" | sed -e 's/^text returned://' -e 's/, button returned:.*$//'
exit 0
fi


I dropped this code into a script at /usr/libexec/ssh-askpass, and now when I push from IntelliJ, I get a dialog prompting for my password.

Ugly, but it works. Now I just wish that the IntelliJ Mercurial plugin would consult the .hg/hgrc file for the remote repository, or at least remember the value I typed in the last time I pushed, so I don't have to type in some long ssh://user@host.org/path/to/the/repo every time.

Thursday, June 12, 2008

PSICAT News

Last week I was in Potsdam, Germany for a meeting with folks from ICDP, ESO, and CoreWall. We were discussing integrating our various tools (PSICAT, Corelyzer, and the Drilling Information System) to create a turnkey technology platform for future ICDP and ESO drilling expeditions. Each of the tools has been used successfully on multiple expeditions, but until now they haven't interoperated with each other. The meeting went well and we adjourned with an integration plan.

So what does this mean for PSICAT? First off, if you're an existing user, you won't have to do anything different; PSICAT will continue to work exactly as it does today. It will just have some new, optional features for integrating with the DIS and Corelyzer. One positive side effect of this is that I will have some "official" time devoted to working on PSICAT. Things have been so busy in the last couple of months that PSICAT development has been on the back burner. PSICAT won't be my only project but it should get more attention than it is getting now. It also means that PSICAT will be getting used all over the world on new drilling projects, which (I hope) leads to a larger user community and a better overall product.

Exciting times! Keep an eye on the PSICAT site for more updates.

Tuesday, June 10, 2008

GraphicsBuilder Update

Just a quick update to my previous post about GraphicsBuilder. All of the issues I had in Step #4 have been fixed, so you should no longer have to modify the pom.xml file or manually install Batik 1.7 jars. The only remaining issue is the requirement on Java 1.6, but Andres recently posted that the next version of GraphicsBuilder will not require Java 1.6.

Thursday, June 05, 2008

GraphicsBuilder Experimentation

This afternoon I was experimenting with GraphicsBuilder on Groovy, and I'm blown away. Andres has done an amazing job with GraphicsBuilder.

It took a bit of work to get set up because I wanted to play around with the SVG rendering support (which isn't included in the 0.5.1 package that is currently available). Once I built from the trunk, it worked like a charm. Below are the steps I went through to set things up, in case they're useful to anyone else.

Pre-requisites:
1) Java 6
Make sure you have Java 6 installed for your platform. I'm on a Mac which doesn't have Java 6 support (grr...) so I went the SoyLatte route.

2) Groovy 1.5
Make sure you have Groovy installed. I ran into troubles when I tried to use the 1.6 snapshot (ClassCastExceptions) so stick with 1.5 for now.

3) Maven 2
We'll be building GraphicsBuilder from the Subversion trunk, so we need to install Maven 2.

4) Subversion
Hopefully you already have Subversion, but if not, grab and install it for your platform.

Instructions:
1) Set up some environment variables at the command line

export GROOVY_HOME=~/Source/groovy
export JAVA_HOME=/usr/local/java1.6
export PATH=$JAVA_HOME/bin:$GROOVY_HOME/bin:~/Source/maven/bin:$PATH


Obviously your paths will differ. Once this is done, we should be able to run our java, groovy, and mvn commands without errors.

2) Download GraphicsBuilder from Subversion

svn co http://svn.codehaus.org/groovy-contrib/graphicsbuilder/trunk graphicsbuilder


3) Download the Batik 1.7 distribution
Download a copy of the Batik 1.7 distribution and unzip it into our graphicsbuilder directory.

4) Use Maven to build GraphicsBuilder

cd graphicsbuilder
mvn


If everything works for you, then proceed to the next step. For me, I ran into two problems. The first was that the top-level pom.xml referenced groovy-all-minimal as a dependency, while the other pom.xml files referenced groovy-all. This caused Maven to complain about a missing version and fail. I fixed this by changing the top-level pom.xml file to reference groovy-all:

--- pom.xml (revision 356)
+++ pom.xml (working copy)
@@ -77,7 +77,7 @@
 
     <dependency>
       <groupId>org.codehaus.groovy</groupId>
-      <artifactId>groovy-all-minimal</artifactId>
+      <artifactId>groovy-all</artifactId>
       <version>${groovy-version}</version>
     </dependency>
 

This seemed to clear up Maven's problems and the build actually proceeded.

The other problem I ran into was that the Batik 1.7 jars weren't available in the Maven repositories so the build complained of missing dependencies. Fortunately Maven will allow us to install the required jars locally:

mvn install:install-file -DgroupId=batik -DartifactId=batik-awt-util -Dversion=1.7 -Dpackaging=jar -Dfile=batik-1.7/lib/batik-awt-util.jar
mvn install:install-file -DgroupId=batik -DartifactId=batik-util -Dversion=1.7 -Dpackaging=jar -Dfile=batik-1.7/lib/batik-util.jar
mvn install:install-file -DgroupId=batik -DartifactId=batik-gui-util -Dversion=1.7 -Dpackaging=jar -Dfile=batik-1.7/lib/batik-gui-util.jar
mvn install:install-file -DgroupId=batik -DartifactId=batik-ext -Dversion=1.7 -Dpackaging=jar -Dfile=batik-1.7/lib/batik-ext.jar
mvn install:install-file -DgroupId=batik -DartifactId=batik-svggen -Dversion=1.7 -Dpackaging=jar -Dfile=batik-1.7/lib/batik-svggen.jar
mvn install:install-file -DgroupId=batik -DartifactId=batik-dom -Dversion=1.7 -Dpackaging=jar -Dfile=batik-1.7/lib/batik-dom.jar
mvn install:install-file -DgroupId=batik -DartifactId=batik-svg-dom -Dversion=1.7 -Dpackaging=jar -Dfile=batik-1.7/lib/batik-svg-dom.jar
mvn install:install-file -DgroupId=batik -DartifactId=batik-parser -Dversion=1.7 -Dpackaging=jar -Dfile=batik-1.7/lib/batik-parser.jar
mvn install:install-file -DgroupId=batik -DartifactId=batik-xml -Dversion=1.7 -Dpackaging=jar -Dfile=batik-1.7/lib/batik-xml.jar
mvn install:install-file -DgroupId=batik -DartifactId=batik-gvt -Dversion=1.7 -Dpackaging=jar -Dfile=batik-1.7/lib/batik-gvt.jar


After that, I was able to kick off mvn and build everything.

5) Install GraphicsBuilder into GROOVY_HOME
If you want to play around with GraphicsBuilder from the commandline, the easiest thing to do is to install it in your GROOVY_HOME:

cp */lib/*.jar $GROOVY_HOME/lib/
cp */target/*.jar $GROOVY_HOME/lib/
cp */src/lib/*.jar $GROOVY_HOME/lib/
cp */src/bin/* $GROOVY_HOME/bin/
chmod +x $GROOVY_HOME/bin/graphicsPad
chmod +x $GROOVY_HOME/bin/svg2groovy


You may also want to grab the build of GraphicsBuilder 0.5.1, because I think it may have included another jar or two that weren't covered above (MultipleGradientPaint.jar, TimingFramework-1.0-groovy.jar, swing-worker.jar, and swingx-0.9.2.jar).

6) Play with it
You can test it either by running the graphicsPad application or by writing a script that calls one of the renderers:

import groovy.swing.j2d.*
import groovy.swing.j2d.svg.*

def foo = {
    antialias('on')
    circle(cx:0, cy:0, radius:300, borderColor:'black', borderWidth:4) {
        multiPaint {
            colorPaint('orange')
            texturePaint(x:0, y:0, file:'/Users/jareed/Desktop/602.png')
        }
        transformations {
            translate(x:500, y: 500)
        }
    }
    circle(cx:300, cy:300, radius:150, borderColor:'black', borderWidth:4) {
        multiPaint {
            colorPaint('red')
            texturePaint(x:0, y:0, file:'/Users/jareed/Desktop/602.png')
        }
    }
}

def gr = new GraphicsRenderer()
def sr = new SVGRenderer()

gr.renderToFile("/Users/jareed/Desktop/test.png", 1000, 1000, foo)
sr.renderToFile("/Users/jareed/Desktop/test.svg", 1000, 1000, foo)


which renders the scene to both a PNG and an SVG file.

Nothing earth shattering, but it will potentially be interesting once I flesh out the project I want to use it for. The best part, though, is that the SVG rendering works exactly as advertised. I had tried to do SVG rendering in the past using Batik's Graphics2D implementation (from an SWT app) and I couldn't get the background images to show up. Sweet!

Tuesday, June 03, 2008

Lincoln and Potsdam

Sorry for the lack of updates. I'm in Lincoln this week and then Potsdam, Germany next week for work. If anyone happens to be reading from either of those places and wants to grab a beer, shoot me an email. I'm hoping to get some work done on the flights, so hopefully I'll have stuff to blog about.

On a personal note, Elizabeth and I are in the process of buying a house. We found one we liked and offered on it last week. The offer was accepted and now we're just waiting for the sellers to do a few things and for our mortgage loan to come through. The plan is to close on the 20th or sooner, so when I get back from Potsdam, it looks like I'll be getting my stuff packed up to move.

The house is in Golden Valley (about 2 miles north of where we currently live). It's about 2000 sq. ft., with 4 bedrooms and 2 bathrooms. It's a one-story with a finished walkout basement. The lot is nice and large (for the city) and it's in what appears to be a nice, established neighborhood. I'm super pumped about the move as it means I can start doing things around the house (when I'm home, that is). I'll get some photos up when we start moving.

Thursday, May 22, 2008

Units DSL in Groovy

A while back I saw Guillaume Laforge's article about building a Groovy DSL for unit manipulations. I recently needed to implement something similar in a project, so I decided to take Guillaume's code and update it a bit. I wanted a nice way to package it up so I could quickly enable unit manipulation support on a particular class. I also added a pair of methods to make things more flexible. Here's my UnitDSL.groovy:

package org.psicat.model

import org.jscience.physics.amount.*
import javax.measure.unit.*

/**
 * A helper class for setting up the Units DSL.
 */
class UnitDSL {
    private static boolean isEnabled = false;
    private UnitDSL() { /* singleton */ }

    /**
     * Initialize the Units DSL.
     */
    static enable() {
        // only initialize once
        if (isEnabled) return

        // mark ourselves as initialized
        isEnabled = true

        // enable inheritance on EMC
        ExpandoMetaClass.enableGlobally()

        // transform number properties into an amount of the unit represented by the property
        Number.metaClass.getProperty = { String symbol -> Amount.valueOf(delegate, Unit.valueOf(symbol)) }

        // define operator overloading, as JScience doesn't use the same operation names as Groovy
        Amount.metaClass.static.valueOf = { Number number, String unit -> Amount.valueOf(number, Unit.valueOf(unit)) }
        Amount.metaClass.multiply = { Number factor -> delegate.times(factor) }
        Number.metaClass.multiply = { Amount amount -> amount.times(delegate) }
        Number.metaClass.div = { Amount amount -> amount.inverse().times(delegate) }
        Amount.metaClass.div = { Number factor -> delegate.divide(factor) }
        Amount.metaClass.div = { Amount factor -> delegate.divide(factor) }
        Amount.metaClass.power = { Number factor -> delegate.pow(factor) }
        Amount.metaClass.negative = { -> delegate.opposite() }

        // for unit conversions
        Amount.metaClass.to = { Amount amount -> delegate.to(amount.unit) }
        Amount.metaClass.to = { String unit -> delegate.to(Unit.valueOf(unit)) }
    }

    /**
     * Add Units support to the specified class.
     */
    static addUnitSupport(clazz) {
        clazz.metaClass.setProperty = { String name, value ->
            def metaProperty = clazz.metaClass.getMetaProperty(name)
            if (metaProperty) {
                if (metaProperty.type == Amount.class && value instanceof String) {
                    metaProperty.setProperty(delegate, Amount.valueOf(value))
                } else {
                    metaProperty.setProperty(delegate, value)
                }
            }
        }
    }

    /**
     * Remove Units support from the specified class.
     */
    static removeUnitSupport(clazz) {
        GroovySystem.metaClassRegistry.removeMetaClass(clazz)
    }
}


To enable the DSL, you have to call UnitDSL.enable(). This adds a few methods to the metaclasses on Number and Amount. The majority of the code in enable() is a straight cut and paste job from Guillaume's article.

I did add two methods. The first:

Amount.metaClass.static.valueOf = { Number number, String unit -> Amount.valueOf(number, Unit.valueOf(unit)) }


allows you to create an Amount using a Number and a String, e.g. Amount.valueOf(1.5, "cm").

The other new method:

Amount.metaClass.to = { String unit -> delegate.to(Unit.valueOf(unit)) }

allows conversions with the unit specified as a String, e.g. 1.5.m.to("cm")

The final enhancement I added was to create an addUnitSupport(clazz) method. This method overrides the setProperty() method of the passed class to support setting Amount properties as strings. All assignments in the following scenario are valid:

class Foo {
    Amount bar
}

// test
UnitDSL.enable()
UnitDSL.addUnitSupport(Foo)

def foo = new Foo()
foo.bar = Amount.valueOf(3, SI.METER)
foo.bar = Amount.valueOf(3, "m")
foo.bar = Amount.valueOf("3m")
foo.bar = 3.m
foo.bar = "3m"


To use this code, you'll have to grab the latest JScience release.

Wednesday, May 21, 2008

Visualizer, Part 3: Poor Man's PDE Build

This is the third in my series (Part 1, Part 2) of posts about Visualizer. In this post I'll be talking about how to create a simplified PDE build.

As I mentioned in the previous post, Visualizer is built on OSGi. My preferred development environment for doing any Java development, but especially OSGi development, is Eclipse because of its wonderful JDT and PDE tooling. The PDE team has created an awesome environment for developing and managing OSGi bundles. However, one of the requirements that I had for Visualizer was that anyone could download the source code and build it, regardless of their IDE or environment preferences. PDE includes the ability to perform a headless build, but I didn't really want to expect the user to download Eclipse or to include a stripped down version of Eclipse in the Visualizer distribution just so the user could build it from the commandline. So I set out to create a "Poor Man's PDE Build" using just Ant.

Actually building the plugins with Ant is relatively simple. This short Ant file will build the plugin:

<?xml version="1.0" encoding="UTF-8"?>
<project default="build-plugin">
<property name="src.dir" value="src"/>
<property name="classes.dir" value="bin"/>
<property file="META-INF/MANIFEST.MF"/>
<property file="build.properties"/>

<path id="classpath">
<fileset dir="${dist.dir}">
<include name="*.jar"/>
</fileset>
<fileset file="${osgi.framework}"/>
<pathelement path="${java.class.path}"/>
</path>

<target name="init">
<mkdir dir="${classes.dir}"/>
</target>

<target name="compile" depends="init">
<echo message="Compiling the ${Bundle-SymbolicName} plugin"/>
<javac srcdir="${src.dir}" destdir="${classes.dir}" classpathref="classpath" debug="true"/>
</target>

<target name="copy-resources">
<echo message="Copying resources"/>
<copy todir="${classes.dir}">
<fileset dir="." includes="${bin.includes}"/>
</copy>
</target>

<target name="build-plugin" depends="compile, copy-resources">
<jar jarfile="${dist.dir}/${Bundle-SymbolicName}_${Bundle-Version}.jar" basedir="${classes.dir}" manifest="META-INF/MANIFEST.MF"/>
</target>

<target name="clean">
<delete includeemptydirs="true">
<fileset dir="${classes.dir}" includes="**/*"/>
</delete>
</target>
</project>


As you can see here, there really isn't much to the actual build. The best part is we can use the META-INF/MANIFEST.MF and build.properties created when we're working in PDE to control the build.

For a single bundle with no dependencies, this effectively duplicates the PDE build process. The difficulty comes in when you start having dependencies. If you use the headless PDE build, it will sort out all the dependencies for you and build your bundles in the proper order.

Implementing proper dependency resolution seemed awfully complicated, especially since PDE build already implements it. Fortunately, Visualizer doesn't require complicated dependency resolution because I've structured the bundles in a logical order. There are three levels of bundles: "core" which implement the main functionality, "ui" which implement the user interface to the core bundles, and "application" bundles that build on both the core and ui bundles.

Armed with this knowledge, we can structure a three stage build process where we first build all of the core bundles then all of the ui bundles and then all of the application bundles. To accomplish this, we have a master build.xml that calls out to the template build-plugin.xml file listed above using a subant task.


<target name="build-framework" depends="init">
<!-- build org.andrill.visualizer, org.andrill.visualizer.services* -->
<subant target="build-plugin" genericantfile="build-plugin.xml" failonerror="false">
<property name="dist.dir" value="../${build.dir}"/>
<property name="osgi.framework" value="../framework.jar"/>
<dirset dir=".">
<include name="org.andrill.visualizer"/>
<include name="org.andrill.visualizer.services*"/>
</dirset>
</subant>
</target>


Here you can see we build first the org.andrill.visualizer bundle and then all of the org.andrill.visualizer.services bundles. As we build each bundle, we copy the bundled JAR file to our dist.dir. Each time a bundle is built, it creates its classpath from all of the JARs in dist.dir. So even though there are dependencies among bundles, we are progressively fulfilling those dependencies by collecting the built bundles in dist.dir.

Once all of the "core" bundles are built, we can kick off the build of the ui bundles:


<target name="build-ui" depends="build-framework">
<!-- build org.andrill.visualizer.ui* -->
<subant target="build-plugin" genericantfile="build-plugin.xml" failonerror="false">
<property name="dist.dir" value="../${build.dir}"/>
<property name="osgi.framework" value="../framework.jar"/>
<dirset dir=".">
<include name="org.andrill.visualizer.ui*"/>
</dirset>
</subant>
</target>


Finally we can build all of the "application" bundles by excluding everything we've already built:

<target name="build-apps" depends="build-framework, build-ui">
<subant target="build-plugin" genericantfile="build-plugin.xml" failonerror="false">
<property name="dist.dir" value="../${build.dir}"/>
<property name="osgi.framework" value="../framework.jar"/>
<dirset dir=".">
<include name="*.*"/>
<exclude name="org.andrill.visualizer"/>
<exclude name="org.andrill.visualizer.services*"/>
<exclude name="org.andrill.visualizer.ui*"/>
<exclude name="${build.dir}"/>
<exclude name="${dist.dir}"/>
</dirset>
</subant>
</target>
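
Tying it all together, the master build.xml just needs a default target that chains the three stages in order; a minimal sketch (property and target names assumed from the snippets above):

<project default="build-all">
  <property name="build.dir" value="build"/>

  <target name="init">
    <mkdir dir="${build.dir}"/>
  </target>

  <!-- build-framework, build-ui, and build-apps targets as shown above -->

  <target name="build-all" depends="build-framework, build-ui, build-apps"/>
</project>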


You can check out the full build files: build.xml and build-plugin.xml.

It's not nearly as neat as kicking off a PDE build and letting it do all of the hard work of figuring out the dependencies for you. However, I'm rather fond of my approach because it keeps me honest: if I create a new bundle and it starts breaking the build, I know I need to go back and make sure I've thought through the dependencies and am not trying to mix "core" code with "ui" code and such. And there's no need to bundle Eclipse with the source just to build the thing.

Monday, May 12, 2008

Open Participation in SimpleIssue

When I started writing the simple issue tracker it was with the goal of learning more about Grails and coming away with a generally useful project. I decided to blog it because I was hoping my insights might be useful to the community. The response has been awesome, with lots of encouragement and lots of great ideas.

It strikes me that I can probably learn more from the community than just working away on my own. So if you want to fix something I screwed up or you have ideas for new features or you just want to experiment, drop me a line and I'll add you to the SimpleIssue project out at Google Code. Mike Hugo has already pointed out my embarrassing lack of tests and has offered to lend a hand. This is great news for me because testing has never been my strong suit. It will give me an opportunity to study how it's done.

SimpleIssue is a nights and weekends project for me, so I don't expect any particular level of participation. If you've got ideas and want to contribute, then all the better for us.

Sunday, May 11, 2008

Writing a Simple Issue Tracker in Grails, Part 2

This is the long-overdue follow-up to my Writing a Simple Issue Tracker in Grails post. In this post I'll be detailing how to add security with the JSecurity plugin.

So let's dive right in and work on securing our application. I opted to use the JSecurity plugin for no other reason than I've used the Acegi plugin in the past and wanted to see how JSecurity compares. You could use the Acegi plugin with a similar process and results.

Following the JSecurity Quick Start guide, we'll begin by installing the plugin and running the quick start script:

grails install-plugin jsecurity
grails quick-start


The quick start script created a few domain classes as well as a controller for logging in and out. Now we need to set up an Administrator user and role to test things with. We'll just cut and paste the Administrator role setup right from the Quick Start guide into our grails-app/conf/BootStrap.groovy file:

import org.jsecurity.crypto.hash.Sha1Hash

class BootStrap {

    def init = { servletContext ->
        // Administrator user and role.
        def adminRole = new JsecRole(name: "Administrator").save()
        def adminUser = new JsecUser(username: "admin", passwordHash: new Sha1Hash("admin").toHex()).save()
        new JsecUserRoleRel(user: adminUser, role: adminRole).save()
    }

    def destroy = {
    }
}


With the administrator user in place, we can start securing our controllers. JSecurity makes this drop dead simple by letting you specify rules in the conf/SecurityFilters.groovy file.

Before we dive in and start writing our rules, let's think about what we want users to be able to do. I'm going to keep things simple and have two classes of users: anonymous and administrators. However, you could easily create a third class, authenticated users, with more permissions than anonymous users but fewer than administrators.

Administrators should be able to create, edit, and delete Projects, Components, and Issues. Additionally, Administrators should be able to access the admin controller.

Depending on how private you want your issue tracker to be, anonymous users might or might not be able to list and view Projects, Components, and Issues, and they might or might not be able to create new issues. To handle this, I'm going to create two new configuration parameters that control how locked down the application is by adding the following lines to Config.groovy:

issue.secure.create = true
issue.secure.view = false


These configuration options let us control whether anonymous users must log in to create issues (issue.secure.create) and whether they must log in to view issues (issue.secure.view). In the example above, creating a new issue is secured (requires the user to be logged in) but viewing is not.

Now that we have an idea of our security rules, let's codify them in the conf/SecurityFilters.groovy file:

import org.codehaus.groovy.grails.commons.ApplicationHolder

class SecurityFilters {
    def filters = {

        // secure the project controller
        projectCreationAndEditing(controller: "project", action: "(create|edit|save|update|delete)") {
            before = {
                accessControl {
                    role("Administrator")
                }
            }
        }

        // secure the component controller
        componentCreationAndEditing(controller: "component", action: "(create|edit|save|update|delete)") {
            before = {
                accessControl {
                    role("Administrator")
                }
            }
        }

        // secure issue editing
        issueEditing(controller: "issue", action: "(edit|update|delete)") {
            before = {
                accessControl {
                    role("Administrator")
                }
            }
        }

        // secure admin controller
        admin(controller: "admin", action: "*") {
            before = {
                accessControl {
                    role("Administrator")
                }
            }
        }

        // secure creating issues if Config#issue.secure.create = true
        if (ApplicationHolder.application.config?.issue?.secure?.create) {
            issueCreation(controller: "issue", action: "(create|save)") {
                before = {
                    accessControl {
                        role("Administrator")
                    }
                }
            }
        }

        // secure viewing issues if Config#issue.secure.view = true
        if (ApplicationHolder.application.config?.issue?.secure?.view) {
            issueBrowsing(controller: "issue", action: "(show|list)") {
                before = {
                    accessControl {
                        role("Administrator")
                    }
                }
            }

            componentBrowsing(controller: "component", action: "(show|list)") {
                before = {
                    accessControl {
                        role("Administrator")
                    }
                }
            }

            projectBrowsing(controller: "project", action: "(show|list)") {
                before = {
                    accessControl {
                        role("Administrator")
                    }
                }
            }
        }
    }
}


So with that, we should have a secured Grails app. Obviously there are plenty of things we can improve. First off, there is no user management code; we'll probably want to generate a controller to allow new users to register as well as to allow Administrators to add new users (a sketch of the registration piece is below). The views could be cleaned up quite a bit, and we could add a custom logo. Finally, there are numerous other options folks have suggested, including saved searches, internationalization, etc.
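
To give a flavor of that user management piece, here's a minimal, hypothetical registration action that reuses the JSecurity domain classes generated by the quick-start script; the username and password parameters are assumed form fields, and the controller and view names are illustrative:

import org.jsecurity.crypto.hash.Sha1Hash

class UserController {
    def register = {
        if (request.method == "POST") {
            // hash the password the same way the BootStrap code does
            def user = new JsecUser(username: params.username,
                    passwordHash: new Sha1Hash(params.password).toHex())
            if (user.save()) {
                redirect(controller: "issue", action: "list")
            } else {
                render(view: "register", model: [user: user])
            }
        }
        // on a GET, Grails falls through and renders the register view
    }
}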

I've gone ahead and created a new Google Code project with the code from this article. If you're interested in hacking on this code, let me know. In the next installment, which I promise won't take a month, I'll be adding search/filtering support via the Searchable plugin and creating an API so I can create new issues from external applications.

Cheers,
Josh

Tuesday, May 06, 2008

Visualizer, Part 2: OSGi & Native Libraries

This is the second in my series (Part 1) of posts about Visualizer. In this post I'll be talking about packaging an OSGi bundle that includes native libraries.

Visualizer uses the OpenGL bindings provided by the JOGL project to display images and data. JOGL offers a series of platform-specific downloads, each of which includes a standard JAR file of Java classes and the native libraries for that platform. This complicates the deployment of Visualizer because we need to install the appropriate version of JOGL for the user's platform. One option is to mimic JOGL and provide platform-specific builds of Visualizer that include the appropriate version of JOGL. This isn't ideal because it adds extra steps to the build process and can confuse users trying to figure out which version of Visualizer to download.

Fortunately, there's another option: OSGi. In a nutshell, OSGi is a component framework specification that allows you to assemble and manage applications as a collection of components (bundles). I'm not really doing OSGi justice so if you don't know what it is, you owe it to yourself to check it out. And odds are you've probably already used something built on OSGi because it seems to be everywhere these days.

Anyhow, OSGi elegantly solves our Visualizer deployment problem by allowing us to provide a single download. We simply combine the Java classes and all of the platform-specific native libraries into a single bundle, and OSGi will detect and extract the appropriate set of native libraries for the user's platform.

The first step was to download the JOGL packages for all of the platforms I wanted to support. I have users on Linux (32 & 64 bit), Mac OS X, and Windows (32 & 64 bit), so I downloaded all of these. From these downloads, I kept one copy of the JOGL JAR files and collected all of the native libraries.

The next step was to use Eclipse's excellent PDE tooling to create a new "Plugin Project from existing JAR files". I called it 'jogl' and pointed it at the JOGL JAR files. It sucked in all of the class files and spat out an OSGi bundle. If there were no native libraries, we'd be done.

Since we have native libraries, I copied them into the jogl bundle directory using a straightforward directory structure:

native/
  macosx/
  linux/
    x86/
    x86-64/
  windows/
    x86/
    x86-64/

The 'native' directory structure is not required; you can use whatever makes sense to you. As you can see, I've got a set of libraries for 5 platform/processor combinations.

The final step is to make the OSGi framework aware of the libraries so it can extract the appropriate libraries when it starts up. This requires using the Bundle-NativeCode header in your bundle manifest:

Bundle-NativeCode: native/macosx/libgluegen-rt.jnilib;
 native/macosx/libjogl_cg.jnilib;
 native/macosx/libjogl_awt.jnilib;
 native/macosx/libjogl.jnilib;
 osname="mac os x";
 processor=x86;
 processor=ppc,
 native/linux/x86/libgluegen-rt.so;
 native/linux/x86/libjogl_cg.so;
 native/linux/x86/libjogl_awt.so;
 native/linux/x86/libjogl.so;
 osname=linux;
 processor=x86,
 native/linux/x86-64/libgluegen-rt.so;
 native/linux/x86-64/libjogl_cg.so;
 native/linux/x86-64/libjogl_awt.so;
 native/linux/x86-64/libjogl.so;
 osname=linux;
 processor=x86-64,
 native/windows/x86/gluegen-rt.dll;
 native/windows/x86/jogl_cg.dll;
 native/windows/x86/jogl_awt.dll;
 native/windows/x86/jogl.dll;
 osname=win32;
 processor=x86,
 native/windows/x86-64/gluegen-rt.dll;
 native/windows/x86-64/jogl_cg.dll;
 native/windows/x86-64/jogl_awt.dll;
 native/windows/x86-64/jogl.dll;
 osname=win32;
 processor=x86-64


One quick note: watch the whitespace when editing the bundle manifest. The OSGi specification is explicit about where whitespace is allowed and where it is required.

With this final piece, we can JAR up our class files, native libraries, and bundle manifest, and the result should work in any OSGi implementation. When the framework loads our bundle, it selects the set of native libraries matching the user's osname and processor properties and makes sure those libraries are found when the bundle's code loads them (e.g. via System.loadLibrary).

I've tested this JOGL bundle on the Equinox implementation of OSGi across Mac, Windows, and Linux and it works great. If anyone is interested, I can make the pre-built bundle of JOGL available for download.

Busy Last Couple of Weeks

In the last 2.5 weeks, I was in the Bahamas for my wedding, in Tallahassee for work, and in Iowa for a friend's wedding. I've spent a total of maybe 36 hours at home during that time. And to top it all off, my laptop went kaput while I was in the Bahamas, and I'm waiting for a replacement. The blogging should pick back up now that I'm home for a bit.

Friday, April 25, 2008

Bahamas

Sorry for the radio silence, I'm currently in the Bahamas for my wedding and honeymoon. I should be back to blogging next week.

Cheers,
Josh

Sunday, April 13, 2008

Writing a Simple Issue Tracker in Grails, Part 1

My project for the weekend was to write a simple issue-tracking webapp in Grails. I could have used something like Trac, but that's overkill for my needs. I just wanted something simple where my users could report issues and request new features. I also wanted to add a few personal touches, which I'll show you along the way.

I'm going to assume that you're not absolutely new to Grails and that you've already got it installed and have played around with it. If that's not the case, I suggest checking out Scott Davis's Mastering Grails series of articles. He goes into far more detail than I do, so check them out.

Let's start by creating our project:
grails create-app simpleissue

First up, we define our domain models. We're going to keep it simple with just three: Project, Component, and Issue. A Project has one or more Components, such as ui, documentation, etc. A Component is associated with a single Project and has zero or more Issues associated with it. The Issue object is associated with a Component and captures a bunch of information.

So let's lay down the code:
Project.groovy

class Project {
    // relationships
    static hasMany = [components: Component]

    // fields
    String name

    String toString() {
        return name
    }

    // constraints
    static constraints = {
        name()
        components()
    }
}

Component.groovy

class Component {
    // relationships
    static belongsTo = Project
    static hasMany = [issues: Issue]

    // fields
    Project project
    String name

    // override for nice display
    String toString() {
        return "${project} - ${name}"
    }

    // constraints
    static def constraints = {
        name()
        project()
        issues()
    }
}

Issue.groovy

class Issue {
    // relationships
    static belongsTo = Component

    // fields
    Component component
    String type
    String submitter
    String description
    String status = "New"
    Integer bounty
    Date dateCreated
    Date lastUpdated

    // constraints
    static constraints = {
        component()
        type(inList: ["Defect", "Feature"])
        submitter()
        description(size: 0..5000)
        status(inList: ["New", "Accepted", "Closed", "Won't Fix"])
        bounty(range: 0..12)
    }
}


Most of the code is a pretty straightforward translation of our written description of the domain. You may, however, notice a few peculiar constraints. I've used a fair number of 'empty' constraints such as:

static def constraints = {
    name()
    project()
    issues()
}

By default, Grails treats all fields in the domain class as required. I didn't want to change that, but I did want to control the order in which the fields show up in the generated web forms. By specifying a constraint for each field, even an empty one, the fields appear in that order in our forms. Of course, we could also have customized the field order by hand directly in the view GSP code.

I also make use of the inList constraint to limit the fields to a specific set of values. Our views will be generated with an HTML select drop down containing the list of values we've specified.

Finally, we specify our issue description as being size:0..5000. This will ensure that there is plenty of space in the database for the description text. If we hadn't specified this, the description would have been generated as a varchar(255).

With our domain classes in place, we can create our controllers and views to test things out:

grails generate-all Project
grails generate-all Component
grails generate-all Issue
grails run-app

Fire up your browser and test things out by visiting http://localhost:8080/simpleissue, then take a look at the generated list pages and the issue creation form. It looks pretty decent for 5 minutes of work. Poke around and test creating a project, a component, and a few issues.

Customizing the Look

Now let's clean things up a bit and add some polish. The first thing I want to do is have the index page show the list of issues. We could copy and paste the code from the Issue List view or we can simply add a redirect to the top of our web-app/index.gsp file:

<% response.sendRedirect('issue/list') %>
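
If you'd rather avoid the scriptlet, the same effect can likely be had by mapping the root URL in grails-app/conf/UrlMappings.groovy; a minimal sketch:

class UrlMappings {
    static mappings = {
        // send the root URL straight to the issue list
        "/"(controller: "issue", action: "list")
    }
}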


The next thing I want to do is clean up the issue creation form. A few of the values, such as status, dateCreated, and lastUpdated, don't need to be specified in the form. We can go into grails-app/views/issue/create.gsp and remove those fields.

You may have noticed an odd field in the Issue domain class: bounty. You might have expected to see a field for priority on the issue. Instead, I chose to add a "beer bounty" field where the issue submitter can pledge a certain number of beers that I can redeem upon completing the issue. This is, in my opinion, far superior to simply assigning low, medium, or high priorities to issues.

As a final customization, I want to convert the number of beers into little beer mug icons to make it easy to spot the important issues to fix. We'll do this by first copying the repeat example tag from the Dynamic Tag Libraries page of the documentation:
grails create-tag-lib Misc
This will create a grails-app/taglib/MiscTagLib.groovy file, to which we can add the repeat tag:

class MiscTagLib {
    def repeat = { attrs, body ->
        def i = Integer.valueOf(attrs["times"])
        def current = 0
        i.times {
            out << body(++current)
        }
    }
}


And we'll call it in our grails-app/views/issue/list.gsp:

<g:repeat times="${issue.bounty}">
    <img src="${createLinkTo(dir:'images', file:'beer.gif')}" alt="${issue.bounty} beers"/>
</g:repeat>


With that, each issue's beer bounty shows up as a row of beer mug icons in the issue list. In part 2, we're going to add some security to prevent arbitrary users from editing and deleting issues. We'll also add searching/filtering support with the Searchable plugin.

Cheers.

Saturday, April 12, 2008

Searchable: Me Too!

I've got to echo what seems to be the community consensus: use the Searchable plugin for Grails if you need to do any sort of searching or filtering. It just works and it works damn well. I'm using it to search/filter some lists of domain objects in my app with nice paging support. I added the plugin to an existing, deployed application in less than an hour this afternoon. The documentation was straightforward and easy to understand.
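
For anyone following along, the basic setup amounts to installing the plugin:

grails install-plugin searchable

and then marking each domain class you want indexed with static searchable = true, as I do below.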

The only real trick I did was using the 'component' option for searching related domain classes. Take my domain class below:

class SampleRequest {
    static searchable = true
    static hasMany = [samples: Sample]
    static mappedBy = [samples: "request"]
    static belongsTo = User

    // fields
    User investigator
    Hole hole
    Double top
    Double bottom
    String sampleType
    Integer samplesRequested = 1
    Double sampleSpacing = 0.0
    SampleGroup sampleGroup
    String notes = ""
    Date created = new Date()
    String status = STATE_NEW
    Integer priority = 1
}


By default, related domain classes such as User, Hole, and SampleGroup above are treated as references. When I searched for something like "micropaleo", which happens to be the name of a SampleGroup object, Searchable would return the actual SampleGroup but not the SampleRequest objects in that group. Since I was mainly interested in the sample requests in that group, I simply changed my searchable definition to:

static searchable = {
    hole component: true
    investigator component: true
    sampleGroup component: true
}

and added static def searchable = true to my User, Hole, and SampleGroup domain classes. Now when I search for something like "micropaleo" or "olney", the sample requests in that group or by that user are returned.
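
For completeness, querying from a controller is then a one-liner; something like this, using the plugin's standard paging options (the q request parameter is just an assumed form field):

def searchResult = SampleRequest.search(params.q, [max: 10])
// searchResult.results holds the matching SampleRequest instances;
// searchResult.total is the overall hit count, handy for paging links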

The best part is, my users think I'm some sort of programming deity because they asked for search and I added it that same day. Hopefully none of them read this blog and see how little work it was for me.

Cheers.