Ian Bicking: the old part of his blog

Re: Training for Failure

There are a couple of levels at play in Joel's essay. He asserts that when students program exclusively in Java, it is extremely hard for a hiring manager to quickly assess a potential hire's skills. This problem is not exclusive to external assessment; it is just as hard for the nascent programmer to assess himself. Schools can capitalize on this by keeping poorer programmers in their programs, and those poorer programmers cannot tell that they are poor at it. This is unfair to the student during the process, and unfair to the hiring manager later. Is this because the failures are minimized?

To take a cooking analogy: with only limited ingredients and extremely forgiving tools (imagine a pan that won't burn anything), how can you identify the great cooks, the good cooks, and the mediocre cooks? In school, what matters more is whether I can improve my skills. For the hiring manager, it's often more about which level of skill I possess, or at least where I max out. But unless skill can be usefully measured, how will any of us know?

I agree that Joel's essay oversimplifies in implying that algorithms and pointers are necessarily the hard part, and that competence in the hard parts predicts overall competence. I think they're strong markers, and much more easily measured than some notion of code quality, but they're still not ideal or fully representative markers.

Comment on Training for Failure
by Michael Urman