Today, the New York Times carried a story about the recent search-ranking delisting of books, which can be found here. The post and resulting commentary about the debacle can be found on Slashdot here.
The comment by calmofthestorm on Slashdot’s post raises the same question I have: how can Amazon claim that a technical glitch had such a surgical effect, hitting particular books rather than every book in the categories Amazon claims were affected?
I don’t know, one time I was writing a Huffman compressor for an applied information theory class and I couldn’t find this weird bug where it would email racist statements to everyone in your address book every time you tried to compress a file larger than 50kb. Took me several hours to fix, and my solution was under 100 lines of Python.
I can fully sympathize with companies who have to deal with overly sensitive people who think that bugs like this, which emerge quite frequently in sufficiently complex systems, are the result of bad calls or poor intent, rather than the simple technical glitches that they are.
Even in sufficiently complex systems, these kinds of things don’t “just happen.” Was it something Amazon.com did on its own, or was it perpetrated by a third-party hacker? I don’t know, but as my friend Seth said earlier today, “It doesn’t pass the smell test.”