That's a really good point. Google are in the process of developing this type of filtration using the input they receive from the toolbar, and this may be partly how they are honing and analysing the results.
Your last sentence says it all... I wonder, though, whether, if Google evolves into an "Algorithm plus Neural Network" (which, I think, is just another algorithm, one that adapts itself to optimize an outcome), we might see the weight of a DMOZ listing diminish and the Sandbox abolished; ...dream on, tradefor, I suppose!
Part of my job is translation between English and Japanese, and I am fully aware of how poorly machines understand meaning just from looking at the output of machine translation programs. So in my opinion, a human-trained Bayesian filter / neural network (or whatever you want to call it) as part of the search algorithm can only be a good thing. I would rather they did something like that than use the sledgehammer of multiple filters and the sandbox to crack the spam nut. After all, trained humans can spot spam and a non-value-adding page far more easily than a machine can, and it would be fairer for all and much harder to fool.
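Just to make the idea concrete, here is a minimal sketch (in Python) of the kind of human-trained "Bayesian filter" I have in mind: a naive Bayes text classifier that learns from pages humans have labelled as spam or not, then scores new pages. The training snippets, labels and tokenisation are purely made up for illustration; nobody outside Google knows what they actually run.

```python
# Minimal naive Bayes spam filter trained on human-labelled pages.
# Everything here (example pages, labels, tokeniser) is hypothetical.
from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

class NaiveBayesFilter:
    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.doc_counts = {"spam": 0, "ham": 0}

    def train(self, text, label):
        # A human rater supplies the label ("spam" or "ham").
        self.doc_counts[label] += 1
        self.word_counts[label].update(tokenize(text))

    def spam_probability(self, text):
        # Log-space naive Bayes with Laplace smoothing to avoid underflow.
        vocab = set(self.word_counts["spam"]) | set(self.word_counts["ham"])
        total_docs = sum(self.doc_counts.values())
        scores = {}
        for label in ("spam", "ham"):
            prior = self.doc_counts[label] / total_docs
            total_words = sum(self.word_counts[label].values())
            score = math.log(prior)
            for word in tokenize(text):
                count = self.word_counts[label][word]
                score += math.log((count + 1) / (total_words + len(vocab)))
            scores[label] = score
        # Convert log scores back to a normalised probability of "spam".
        m = max(scores.values())
        exp = {k: math.exp(v - m) for k, v in scores.items()}
        return exp["spam"] / (exp["spam"] + exp["ham"])

# Hypothetical human-labelled training data.
f = NaiveBayesFilter()
f.train("buy cheap pills best price online pharmacy", "spam")
f.train("doorway page keywords keywords keywords click here", "spam")
f.train("detailed review of the new camera with sample photos", "ham")
f.train("tutorial on translating technical documents accurately", "ham")

print(f.spam_probability("cheap pills click here"))       # high
print(f.spam_probability("camera review sample photos"))  # low
```

The point is simply that the human judgement goes in at training time, so the machine ends up applying criteria a person would recognise, rather than a blanket filter like the sandbox.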
I'd be willing to bet the farm that this is exactly what they are doing, and that it is a neural net that's learning. In fact, the neural net may have already learned, and the people may actually be playing against it without even knowing. Then the PhDs go to work to see who performed better, the people or the net. The machine wins... hands down.