As you may have noticed, I correctly predicted the outcome of the 2012 Presidential Election with spookily precise accuracy. I expect calls from The Washington Post any minute now.

Seriously, though, you may have noticed that my prediction was also the modal prediction of a certain Mr. Silver of East Lansing, who has a notable blog about these things. I admit that my prediction, for the most part, was based on reading his projections, as well as those of the PEC and Drew Linzer and even PollTracker, and picking what was the most likely outcome. I basically benchmarked their projections against my own assumptions (fundamentally, that the 2008 Obama coalition was here to stay and would prove resilient even in the face of Dukakis levels of white support), and picked the scenario that was most Obama-friendly within reason. Basically there were only like 3-5 “correct” answers to this election, meaning outcomes at least within 1-2 standard deviations of the mean, and I picked the most Obama-friendly of them (Obama wins all swing states except NC). It involved a little intuition, but only a little. And I couldn’t have done it without the geeks and the aggregators.

So the question is – what did they do? But the better question is – what didn’t they do? And the answer to that is “mostly everything.” If you look at what Nate Silver did, he basically took all the polls, weighted them by house effects, baselined them against a few fundamentals, and let the model spit out the answer. And it worked! Mostly because the central limit theorem works. But doing this is surprisingly hard because it is overwhelmingly tempting to try to devise a secret sauce or to tease out magical answers or to unskew the polls until they produce something bold and counter-intuitive. But as long as you have steely-eyed confidence that the polls are largely right, you can keep calmly raising and raising in the face of sloppy bettors on tilt and collect big.
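To make the “weighted by house effects” idea concrete, here is a toy sketch of that adjustment. This is emphatically not Silver’s actual model – the pollster names and margins below are invented, and the house effect here is just each pollster’s average lean relative to the field of pollsters, subtracted back out before averaging:

```python
# Toy poll aggregation with a house-effect adjustment.
# All pollsters and margins are invented for illustration.

# Each poll: (pollster, Obama margin in points)
polls = [
    ("Pollster A", 2.0),                                   # one poll, leans Obama
    ("Pollster B", -2.0), ("Pollster B", -1.0), ("Pollster B", 0.0),  # prolific, leans Romney
    ("Pollster C", 1.0), ("Pollster C", 2.0),
]

# Group margins by pollster.
by_house = {}
for name, margin in polls:
    by_house.setdefault(name, []).append(margin)

# Baseline: the average of the pollster averages, so a prolific
# pollster can't drag the baseline toward its own lean.
house_means = {name: sum(ms) / len(ms) for name, ms in by_house.items()}
baseline = sum(house_means.values()) / len(house_means)

# House effect: how far each pollster's average sits from that baseline.
house_effect = {name: mean - baseline for name, mean in house_means.items()}

# Subtract each house's lean from its polls, then average everything.
adjusted = [m - house_effect[name] for name, m in polls]
estimate = sum(adjusted) / len(adjusted)

raw = sum(m for _, m in polls) / len(polls)
print(round(raw, 2), round(estimate, 2))  # → 0.33 0.83
```

The point of the toy: the raw average (+0.33) is dragged down because the Romney-leaning pollster released three polls, while the house-adjusted average (+0.83) neutralizes that lean. Real aggregators layer on recency weights, sample-size weights, and fundamentals, but the discipline is the same – trust the corrected average rather than your favorite outlier.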