HW1: A “better” model?

That black box problem causes everyone involved no end of headaches. I still like it, but even after assigning some variant of it for several years and tweaking it each time, I still haven’t managed to make it really clear.

In part 1, I give what I call a “non-optimal” model of the box. But actually, that wasn’t really a good thing to call it. It isn’t just “non-optimal”; it’s actually a bad model. “Non-optimal” suggests that it does what it needs to do and just could be more elegant. In fact, it doesn’t do what it needs to do, quite apart from (or perhaps even because of) the fact that it could be more elegant. The reason it is a bad model of the black box’s behavior is that it predicts that the box should do things it doesn’t in fact do.

In part 2, then, I have you contemplate a “better” model. And what I mean by “better” here is something more like “adequate”: a model that doesn’t wrongly predict that the box should do things that observation reveals it doesn’t do. So, by “better model” I mean something like “not a complete failure of a model.”

The basic idea is not particularly complicated, once you see what I’m after. And I do think the Car Talk puzzler at the beginning is kind of relevant to the thought process. To save you some searching, here are the original puzzler and the answer.