June 2011 Archives


June 25, 2011

I recently started biking again and, in the interest of being able to more accurately measure my workouts, I moved my SigmaSport BC1100 bike computer from my race bike onto my training bike. Like basically all bicycle computers from the pre-GPS era, the BC1100 is of the wheel magnet/sensor loop variety: you mount a magnet to one of the spokes and a sensor to the fork. Every time the magnet passes by, it induces a current which is transmitted to the computer.1 Of course, this mechanism just measures rotational velocity (rotations per second). In order to measure road speed you need to know the circumference of the wheel, and, as the battery had run out, whatever calibration I used to have was long gone.

If you read the manual for a typical bike computer you'll discover not just one but many calibration techniques, arranged in a hierarchy of both accuracy and inconvenience that goes something like this:

  • Look up your wheel size in a table.
  • Measure the diameter and multiply by 3.14.
  • Roll the bike one wheel rotation and measure the distance traveled.
  • Roll the bike one wheel rotation while sitting on it (to compress the front tire the way it would be if you were riding it) and measure the distance.
  • Roll the bike N rotations (while sitting on it, etc.), measure the distance, and divide by N.

Regardless of the technique, the basic principle is the same: perform the procedure, get the circumference, and enter it into the computer. In the specific case of the SigmaSport, you want the circumference in millimeters, so for a typical 700C-sized road wheel, you want something around 2100mm. Anyway, I dutifully performed the procedure as specified (see instructions here) and entered the resulting number (2037) into the computer. So far so good, except that once I actually got on the bike, it reported that I was going about 25 miles an hour on average and 30 mph on the flat. Seeing as typical time trial pace for amateur athletes is around 25 mph and I wasn't even breathing hard, either I was ready to sign up for the Tour de France or something was screwed up with the calibration. The second of these seemed more likely.

A little searching around the InterWebs quickly revealed the problem: this model doesn't have an internal adjustment for English versus Metric, so you need to divide by 1.6ish to convert to miles/hour. I guess it was cheaper to just have the units setting change the labels than to actually include a circuit that divided by 1.6. Turns out that this actually is on the SigmaSport web site, though not in the owner's manual. Unfortunately, it's labelled "Attention BASELINE 400, BASELINE 700, BASELINE 1200 & BASELINE 1200+ owners!", which doesn't really help, since I have a BC1100. Outstanding!
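For the record, the arithmetic the computer is doing is simple enough to sketch. The function names below are mine, not anything in the SigmaSport firmware:

```javascript
var KM_PER_MILE = 1.609344;

// Speed in km/h from wheel circumference (in mm) and wheel rotations per second.
function speedKmh(circumferenceMm, rotationsPerSec) {
    var metersPerSec = (circumferenceMm / 1000) * rotationsPerSec;
    return metersPerSec * 3.6;
}

// The division a BC1100 set to "miles" apparently does NOT do for you.
function kmhToMph(kmh) {
    return kmh / KM_PER_MILE;
}
```

With my 2037 mm circumference, a leisurely 4 rotations/second comes out to about 29 km/h, i.e., about 18 mph. A computer that skips the division just relabels the 29 as miles per hour, which is roughly the inflation I was seeing.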

1. The really cool Jobst Brandt-designed Avocet cyclometers instead used a ring of alternating polarity magnets mounted around the hub, allegedly for better precision. They don't seem to be available any more.


June 23, 2011

Today's XKCD quite correctly points out that if you're unfamiliar with some aesthetic experience (his example is wine), you're willing to tolerate any cheap crap, but once you have some experience, you tend to develop some taste. Inevitably, you find yourself preferring some varieties of that experience more than others.

Arguably, developing taste is a good thing, since, as Munroe has his character suggest, it opens up whole new vistas to you—albeit at the risk of turning you into an annoying snob.

There's another downside too, though: it tends to be expensive. This isn't inevitable, of course: after sampling a whole bunch of whiskeys you might find that you prefer Jack Daniels ($16.99/750 ml) to Macallan 25 ($649.99), but assuming your neural architecture isn't too different from the rest of humanity's—and perhaps you take your cues from your peers—it seems likely you're going to find that your tastes line up with others'. And as things which are in demand naturally tend to be more expensive, you're suddenly going to be expending a lot more money on the same general class of experience. [I don't think the market's natural response to produce more of a desirable product helps out here, since you can almost always invest more and more input into some product (use the best grapes, age it longer, etc.), in the interest of creating an ever more exclusive and allegedly better version.]

Of course, the mere fact that you're shelling out more money doesn't necessarily mean you're worse off, since the counter-argument would go that you're getting more hedonic value out of the better product. I'm not sure that's true, though, since you habituate so fast. When I first started eating sushi, I was happy to eat the cheap stuff, but now that I've had reasonably good sushi, I'm not prepared to go back. Seems like a good reason to stay away from Masa.


June 5, 2011

I've been writing some JavaScript lately and figured unit testing might be a good idea. I'm using jQuery, so QUnit seemed appropriate. By and large it seems pretty solid, but I recently discovered something annoying. I was working on a test suite in which I first created and stored an object and then retrieved it. Everything was going along swimmingly and then I messed something up in the retrieve code. No problem, that's the kind of thing unit testing is supposed to catch, so I fixed the bug, but the retrieve still didn't work.

A little debugging revealed the proximal cause. The store returns a new object identifier (actually a sequence number), which I was using to retrieve the object. But when I went to get the object, the identifier was 0; it had never been set. For a while I thought I was just missing something important about JavaScript variable scoping, but after a bunch of debugging I uncovered the real problem: when you have a set of QUnit tests and one of them fails, QUnit remembers. The next time you run the test suite, it helpfully runs the tests that failed first. So, consider the following code:

       asyncTest("Test 1", function(){
           var d = new Date();
           console.log("Test 1: " + d);
           ok(true);   // report success
           start();    // resume the runner (required for asyncTest)
       });
       asyncTest("Test 2", function(){
           var d = new Date();
           console.log("Test 2: " + d);
           ok(false);  // report failure
           start();
       });

The first time you run the test suite, Test 1 runs first (and succeeds) and then Test 2 runs (and then fails). However, the next time, Test 2 runs first, then Test 1. In a real scenario where Test 2 depends on Test 1, Test 2 will fail again, which means it will run first again, ad infinitum.
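The behavior is easy to model. Here's a sketch of failure-first reordering—my reconstruction of the idea, not QUnit's actual code (which, as far as I can tell, stashes results in sessionStorage between runs):

```javascript
// Failure-first test reordering, sketched: remember each test's last result
// and move previously failing tests to the front of the queue.
var lastRun = {};   // test name -> true if it failed on the previous run
var queue = [];     // tests in declaration order

function addTest(name, fn) {
    queue.push({ name: name, fn: fn });
}

function runAll() {
    // Previously failing tests sort ahead of everything else;
    // ties keep declaration order (sort is stable).
    queue.sort(function (a, b) {
        return (lastRun[a.name] ? 0 : 1) - (lastRun[b.name] ? 0 : 1);
    });
    queue.forEach(function (t) {
        lastRun[t.name] = !t.fn();   // record failure for the next run
    });
}
```

Run this twice with a failing "Test 2" and you get exactly the behavior above: declaration order the first time, Test 2 first the second time.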

This seems like a good idea from some perspective, I guess: why should you have to wade through all the tests that work in order to retest the one that failed? Unfortunately, if the tests need to be run in a specific order then everything goes to hell.

I don't see this feature in the documentation; I found it by source code inspection. I suppose it's probably in there somewhere, though. Anyway, there's a way to force the tests to run in order. You just do: QUnit.config.reorder = false;


P.S. Mrs. G tells me that unit tests are supposed to be order independent and so I should make any operations that need to run in sequence a single test. That's one way to do things, I guess, but I don't really want my software silently forcing it on me.
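For what it's worth, her version would look something like this. The store/retrieve pair is a hypothetical stand-in for my actual code, and the asyncTest/ok/start stubs just let the sketch run outside a browser—in real use they come from QUnit:

```javascript
// Minimal stand-ins so the sketch runs standalone; QUnit provides the real ones.
function asyncTest(name, fn) { fn(); }
function ok(cond) { if (!cond) throw new Error("assertion failed"); }
function start() {}

// Hypothetical store/retrieve standing in for the code under test.
var objects = [];
function store(obj) { objects.push(obj); return objects.length; }  // sequence number
function retrieve(id) { return objects[id - 1]; }

// The whole store-then-retrieve sequence as one test, so ordering can't break it.
asyncTest("store and retrieve", function () {
    var id = store({ name: "widget" });
    ok(retrieve(id).name === "widget");
    start();
});
```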