Overthinking: January 2009 Archives

 

January 30, 2009

While listening to KQED's latest pledge drive, I noticed something funny about their thank-you gift schedule. This time, they offered the option of forgoing your gift and instead donating its value to the SF Food Bank. The schedule looks like this:

Donation ($)    Meals
40              2
60              5
144             33
360             180

This seems strangely non-linear, which suggests something interesting: the fraction of your pledge that KQED uses to pay for thank-you gifts (as opposed to funding their operations) grows as your pledge does. There are way too few points here to do a proper fit, but I can't help myself. Playing around with curves a bit, a quadratic seems to fit pretty well, with parameters: Meals = .0014 * Donation^2 + 1.2. It's not just the $360 data point that throws it out of whack, either; there's apparent nonlinearity even in the first three points. (Again, don't get on me about overfitting: with only four points there's only so much you can do.)
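For the curious, here's a quick sketch of how that fit can be reproduced: an ordinary least-squares fit of Meals = a * Donation^2 + c (no linear term, matching the curve eyeballed above), solved by hand via the 2x2 normal equations so nothing beyond the standard library is needed.

```python
# Donation ($) -> meals, from KQED's published gift schedule.
data = [(40, 2), (60, 5), (144, 33), (360, 180)]

# Least-squares fit of Meals = a * Donation^2 + c via the normal equations
# for the two unknowns a and c.
s4 = sum(d**4 for d, _ in data)        # sum of Donation^4
s2 = sum(d**2 for d, _ in data)        # sum of Donation^2
s2y = sum(d**2 * m for d, m in data)   # sum of Donation^2 * Meals
sy = sum(m for _, m in data)           # sum of Meals
n = len(data)

det = s4 * n - s2 * s2
a = (s2y * n - s2 * sy) / det
c = (s4 * sy - s2 * s2y) / det

print(f"Meals ~ {a:.4f} * Donation^2 + {c:.1f}")
```

Running this gives a around .0014 and c around 1.2, consistent with the parameters quoted above.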

I'm not sure what this suggests about their business model. Naively, I would have expected the fraction of your donation that goes to gifts to go down as your donation went up. Indeed, you might have thought that they would take a small loss on the smallest pledges just to get people involved and then move to the upsell at some later date. Thinking about it some more, I guess the natural model is that KQED is trying to extract money from you up to the point where the marginal dollar they extract costs them a marginal dollar in gifts (or, in this case, food bank donations), at which point they stop. So, as people's marginal utility of having given something, anything, to KQED declines, KQED needs to keep jacking up gift quality faster than the size of the donation to keep extracting your cash. Other theories are of course welcome.

 

January 19, 2009

Mrs. G. and I were up in San Francisco last weekend and while on our way to Fog City News we ran into someone we knew. This was sort of surprising, so I got to thinking about how probable it was (or wasn't). Grossly oversimplifying, my reasoning goes something like this:

The population of San Francisco is about 800,000. Let's call it 10^6. I know perhaps 100 people in the city at any given time. There are maybe 20-50 people on any given stretch of city block. Say I walk for an hour at 3 mph and that the average block is 100m long, so I walk about 50 blocks in that time and pass on the order of 10^3 people.

If we assume people are randomly distributed (this is probably pessimistic, since I know that I spend most of my time in SF in a few places and I assume my friends tend to be somewhat similar), then there's a .9999 chance that I don't know any given person I pass. If we further assume that these are independent events, then I have a .9999^1000 chance of not knowing any of those people. [Technical note: this is really (999900/1000000) * (999899/999999) * ..., but these numbers are large enough, and we've made enough other approximations, that we can ignore the difference.] .9999^1000 = .90, so if I walk around the city for an hour, I have about a 1/10 chance of meeting someone I know. That doesn't sound too far out of line.
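The arithmetic above is easy to check directly. Here's a short sketch using the same round numbers (10^6 people, 100 acquaintances, 1000 passersby), including the exact without-replacement product from the technical note:

```python
import math

population = 10**6   # SF population, rounded up from ~800,000
known = 100          # people I know in the city at any given time
passed = 1000        # strangers passed in an hour's walk

# Chance any one passerby is a stranger, then chance all 1000 are,
# assuming independence and a uniformly mixed population.
p_stranger = 1 - known / population          # .9999
p_no_one = p_stranger ** passed              # .9999^1000
print(f"P(meet no one I know) = {p_no_one:.2f}")
print(f"P(meet someone I know) = {1 - p_no_one:.2f}")

# The exact without-replacement version from the technical note:
# (999900/1000000) * (999899/999999) * ...
p_exact = math.prod((population - known - i) / (population - i)
                    for i in range(passed))
print(f"exact: {p_exact:.4f}")  # agrees to several decimal places
```

As claimed, the independence approximation gives about .90, and the exact product barely differs.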