Posted by & filed under Coding.

So let’s take this pretty standard JS + jQuery:

this.names = [];
var self = this;
$(['jack','lori','megan']).each( function() {
    self.names.push( this );
} );
console.log( this.names );

We see that type of thing a lot. So much so that the function closure in there almost seems like syntactic sugar. But it’s not. It’s a function. And calling it on each item is expensive.

It would be better to just use a ‘for’ loop. But that gives you this nastiness:

this.names = [];
var a = ['jack','lori','megan'];
for( var i in a ) {
    this.names.push( a[i] );
}
console.log( this.names );

Because natively JS ‘for…in’ loops give you the index, not the value. But it is faster, about 10X faster according to my timings, since you aren’t calling a closure on each item.

But in Coffeescript it’s even cleaner. Like so:

@names = []
for name in ['jack','lori','megan']
     @names.push( name )
console.log( @names )

Which compiles down to a native for loop and does the index work for you.
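To make that concrete, here is roughly the JavaScript that loop compiles to: a plain indexed for loop with the bookkeeping generated for you. (This is a simplified sketch using a local `names` variable; the real compiler output writes `this.names` and hoists its declarations.)

```javascript
// Roughly what the CoffeeScript compiler emits for the loop above.
var names = [];
var ref = ['jack', 'lori', 'megan'];
for (var i = 0, len = ref.length; i < len; i++) {
  var name = ref[i];
  names.push(name);
}
console.log(names);
```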

It can be even better because you can put the statement before the for:

@names = []
@names.push( name ) for name in ['jack','lori','megan']
console.log( @names )

And it can be even cleaner because the ‘for’ can be used as an expression, like so:

@names = ( name for name in ['jack','lori','megan'] )
console.log( @names )

And you can even do some filtering, like just taking every other name, like so:

@names = ( name for name, i in ['jack','lori','megan'] when i % 2 is 0 )
console.log( @names )

And for IE compatibility we can guard the call to ‘console’ with the existential operator, since &^*(%^&%! IE doesn’t support the console.

@names = ( name for name, i in ['jack','lori','megan'] when i % 2 is 0 )
console?.log( @names )
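The existential operator compiles down to a guard. Roughly (using a local `names` holding the filtered result from above, rather than the compiler’s `this.names`):

```javascript
// Roughly what `console?.log @names` compiles to: only call log when a
// console object actually exists, so IE doesn't blow up.
var names = ['jack', 'megan']; // the result of the filtered comprehension
if (typeof console !== 'undefined' && console !== null) {
  console.log(names);
}
```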

Coffeescript. It’s good.

Posted by & filed under Coding.

I’ve spent the last couple of days switching to Sublime Text from TextMate. Which is a solid move. Thank you very much for the heads up on that one Winsha. And in addition to that I’m teaching myself CoffeeScript.

I looked at CoffeeScript when it first came standard in Rails and I wasn’t impressed. It felt to me like it was a language for people who hated Javascript and wanted a different syntax. But I gave it short shrift and I realize that now. James Polanco convinced me to have a second look and I’m really glad I did. It’s clear to me now that it was written for folks who love JS, are fluent in JS, understand JS best practices and who want to make those best practices easier to… well… put into practice.

As an example, this:

$divs = $('div'); for( var o in $divs ) { $divs[o]... }

Is going to be faster than:

$('div').each( function() { this... } );

Because you aren’t invoking a function on each iteration, and having to create a new context and all of that. In my little ad-hoc test the for loop was about ten times faster. Another nice advantage to the ‘for’ loop is that the this reference remains the same so you don’t have to do little tricks like ‘bind’ or setting a ‘self’ variable to reference that context in closure. I like the ‘.each’ style because it’s clean and looks like Ruby. But with CoffeeScript I get the ‘for’ back because it handles all of the grunt work for me.
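Here is an ad-hoc timing sketch of that claim, along the lines of my test. Numbers vary a lot by engine, and `Array.prototype.forEach` stands in for jQuery’s `.each` here:

```javascript
// Build a biggish array, then sum it two ways and time each.
var items = [];
for (var i = 0; i < 100000; i++) items.push(i);

var t0 = Date.now();
var sum1 = 0;
items.forEach(function (n) { sum1 += n; }); // a closure call per item
var tEach = Date.now() - t0;

var t1 = Date.now();
var sum2 = 0;
for (var j = 0; j < items.length; j++) sum2 += items[j]; // plain for loop
var tFor = Date.now() - t1;

console.log('forEach:', tEach, 'ms, for:', tFor, 'ms');
```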

Honestly, it’s not clear to me that CoffeeScript would even be around if the value of ‘o’ in the for loop example were the object and not the index. It’s those little decisions that make all the difference.

It’s not just iteration though. There is a lot more to like. The explicit class stuff is great. The visual de-cluttering allows you to really see what the code is doing. The significant white space is definitely growing on me. The ‘fat rocket’ syntax for maintaining the context in the function is very clean. And the fact that almost everything that can be an expression is. That in particular I like a lot. For example:
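On the ‘fat rocket’ (`=>`): it compiles down to the same var-self trick from the first example, done for you. A sketch in plain JS (`Roster` is just a made-up example type):

```javascript
// What => does under the hood: capture the outer `this` in a hidden
// variable so the callback keeps its context even when called detached.
function Roster() {
  var _this = this; // the compiler generates this capture for =>
  this.names = [];
  this.add = function (name) {
    _this.names.push(name); // _this, not this
  };
}
var roster = new Roster();
['jack', 'lori'].forEach(roster.add); // passed detached; still works
console.log(roster.names);
```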

chords_i_like = []
chords_i_like.push( c ) for c in chords when c.good()

versus the expression form:

chords_i_like = ( c for c in chords when c.good() )

The ‘for’ loop becomes a ‘select’ operation in Ruby/Rails parlance. Which is very, very cool.

Plus there’s a nice fixup around hash key naming where you can just put { text, title, comments } into a hash and that translates to { text: text, title: title, comments: comments }. The short version looks cleaner and encourages engineers to use better variable naming.
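In compiled-JS terms (sample values made up for illustration):

```javascript
// CoffeeScript `post = { text, title, comments }` compiles to the longhand:
var text = 'Hello';
var title = 'First post';
var comments = [];
var post = { text: text, title: title, comments: comments };
console.log(post.title);
```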

I’m glad to say I was wrong about CoffeeScript. It’s an excellent addition to the Javascript language. Or whatever it is. Honestly, I’m hesitant to say that I “learned a new language” since it’s really just shorthand for writing good JS.

Also, a big shout out to the Peepcode intro to CoffeeScript. It was a bit labored and the guy’s vocal style was grating after a little while. But the content was excellent.

Posted by & filed under Observations.

A question from a friend that I got this morning. I wrote back so much that we figured I should blog it.

seen this?

interested to hear your views!!

I’m doing Test Driven Development for my own work so I have some experience with it. I like where this guy is going; it’s pragmatic: “do as much as you can”. I think what’s missing here is a proportional approach to testing. You test some stuff a lot more, and some stuff almost not at all. Generally speaking, the more mission critical the code is, the more I test it. Saying “if I can’t test 100% I shouldn’t test at all” is total horse shit.

For example, take WaveMetrics. They probably unit tested the absolute hell out of the ‘wave’ datatype. Every possible attribute and aspect of a wave had multiple tests so that even the slightest change would show an error and they would jump all over that. Whereas the About Box would have little or no testing.

Both extremes are bad. Too many tests and the code feels too ‘rigid’ and it’s hard to get anything done. Too few tests and it’s the wild west. The right blend is to have the outskirts of town a little ‘wild westy’, but when you go into town (into the core) you need to put on your ‘sunday go to meeting clothes’ even if they feel a little rigid.

Just safe and sane. Check in your code. Back it up. Have ‘releases’ even if it’s just to you. Test the critical code. Run the tests before check-ins. If you can’t check in the test data, at least check in the stuff you used to build the test data. I liken writing “exploratory software” to being on an island and seeing a little piece of dry land just in the distance. You cobble together whatever you can to get there. And then if you like it you backfill to permanently annex this new land to your old island.

If the island is the base of your software, and the little piece of land is that feature that you think you might want but you’re not sure, then if you find that you like it, you need to ‘formalize’ it by doing the work to make sure that it’s a genuine solid feature.

There is an actual term for this process of cobbling stuff together: it’s called ‘spiking’. And it’s where you take one guy off the team and have him cobble together a feature on top of the current software using whatever tools he can, quickly. The idea is to determine two things: is it feasible? Is it worthwhile? If it’s both, you throw it away and put the real team on it to do it properly.

Way, way, way too much of what I did in our lab was that kind of spiker code. Not to mention it being out of source control, and yadda yadda yadda. (Though as an aside I’m an amazing spiker and it’s because of the work I did in the lab.)

That being said, I was onto something with ‘Acquisition Objects’ as I recall. Where I could build out an acquisition protocol in an OO abstraction then have it rendered out in ‘real time’. That type of structure, Acquisition Objects, is exactly the kind of thing it’s worth putting a lot of tests around, and formalizing. And that would take more time. But it’s worth doing because it gives a lot of confidence in the underlying correctness of the acquisition. It would also have been good because it would give a shared language between the Product Manager, the Customer and the development team: “Let’s build a new type of acquisition object that takes the input in real time, phase shifts it, then sends it back out the output…” Or something like that. Not everyone needs to know the details, but they do need to understand the abstractions.

A related anecdote for you. An acquaintance of mine is working on a data recovery project for a lab that had something in the range of 200TB. By mistake they RAIDed it for speed, not redundancy, and then some drives blew out. Basic stuff, done wrong, too risky. The result was an expensive mistake.

There is a lot that science software could take away from the way consumer software is written. A lot of stuff is just ‘known’ and ‘standard’ now. Wanna interoperate? It’s REST and JSON. Wanna store transactional or highly structured data? SQL. Unstructured mutable data? NoSQL. Etc. Having lots of stuff ‘known’ and ‘standard’ is good because it means that people can come up to speed quickly, that customers and users have expectations that start at a higher level, and particularly that you can concentrate the bulk of your efforts on building your “unique value”. Ideally a project should be spending 90-100% of its time writing code (and tests) that are the unique value proposition of your project. The rest is a waste.

Last anecdote, an old boss of mine paid a consultant for three months to build a different scrollbar implementation for “Random Mac Science Product” on the Mac because he didn’t like the look and feel of the standard Mac scrollbar. That makes me want to punch either or both of them. Oh, and we didn’t end up using it because it was too buggy.

Posted by & filed under Observations.

Woo hoo! I took the course and passed the test and now I’m a certified scrum master. What I took away from the course was a real emphasis on building teams and not assigning features entirely to individuals. In my experience that has always resulted in weak software and weak teams. The software becomes little pillars and the team can’t support it when one developer moves on.

Posted by & filed under Observations.

I was diagnosed with dyslexia when I was ten. At the time I was told that I had built coping mechanisms and there was nothing to do. Which is fine. But long duration reading is still painful for me.

Recently a new font, Open Dyslexic, was released, and for me it actually helps. So I wrote a tiny little Google Chrome extension that adds a CSS definition to the current tab to turn the ‘default’ font for the page to Open Dyslexic.
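A minimal sketch of the idea, not the actual extension code: a content script builds a CSS rule that overrides the page’s default font and drops it into the page (the function name and rule here are my own illustration).

```javascript
// Build the CSS override a content script would inject.
function dyslexicCss(fontFamily) {
  return "* { font-family: '" + fontFamily + "', sans-serif !important; }";
}
var css = dyslexicCss('OpenDyslexic');
console.log(css);
// In the extension, the content script appends this inside a <style> tag
// on the current tab.
```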

Posted by & filed under Observations.

I’m taking a scrum master training course over these two days. And I’ll be going for certification after that. It’s been a useful course so far. Learning the basics and how it all ideally fits together is important, before it goes through its inevitable changes as we apply it.

Observing my classmates and how they approach it is almost as interesting as the course itself. A few are hard and fast in the waterfall world and I’ll be watching today to see when, or if, it clicks that there isn’t the same spec-driven, approval-centric process with Scrum as there is with waterfall.

Posted by & filed under Coding.

I had an interesting clipping problem for the HTML5 Canvas tag. I wanted a donut clipping. For example, I had a circle like this:


And I wanted the center to show through like so:


My first pass looked something like this:


function testclip() {
	var cvs = document.getElementById('cvs');
	var ctx = cvs.getContext('2d');
	ctx.beginPath();
	ctx.arc( 50, 50, 40, 0, Math.PI * 2, false );  // outer circle
	ctx.arc( 50, 50, 20, 0, Math.PI * 2, false );  // inner circle, same direction
	ctx.clip();
	ctx.fillStyle = 'white';
	ctx.fillRect( 0, 0, 100, 100 );
}

Turns out that the inner circle is a no-op because of the direction of the clip assembly. If you draw the inner circle in the other direction, like so:

	ctx.arc( 50, 50, 20, 0, Math.PI * 2, true );  // inner circle, reversed

Note the true at the end instead of the false. In that case you get the nice donut shaped clipping region.
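The reason the direction matters is that canvas clipping uses the nonzero winding rule. Here is a tiny DOM-free model of the rule for circles (coordinates and the `winding` helper are my own illustration, not canvas API): each circle contributes +1 or -1 to the winding number of points inside it depending on its drawing direction, and a point is in the clip region only when the total is nonzero.

```javascript
// Sum the winding contributions of each circle containing the point.
function winding(px, py, circles) {
  var total = 0;
  circles.forEach(function (c) {
    var dx = px - c.x, dy = py - c.y;
    if (dx * dx + dy * dy < c.r * c.r) total += c.dir; // +1 or -1
  });
  return total;
}
var outer = { x: 50, y: 50, r: 40, dir: 1 };
// Center point, both circles drawn the same direction: winding 2, no hole.
var sameDir = winding(50, 50, [outer, { x: 50, y: 50, r: 20, dir: 1 }]);
// Inner circle reversed: winding 0, so the center is outside the clip. Donut!
var opposite = winding(50, 50, [outer, { x: 50, y: 50, r: 20, dir: -1 }]);
console.log(sameDir, opposite); // 2 0
```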

Posted by & filed under Observations.

I've decided to start fresh and move my blog over to Typo. I figure with a Rails back-end I can use it as a way to experiment as well as getting more experience running production Rails apps. So, from time to time I will post something here that hopefully you can use and I can refer back to when I run into technical issues and search the great Google machine for an answer.

Update! And then I didn’t like Typo very much. So I ported it all to WordPress because it seems to have largely won the blog wars.