If anyone needs to take the time to read that article, it's you.
First, he was arguing about the use of statistics in predictive modeling, not about whether polls are accurate. In particular, he was attacking likely-voter screens (although not by name). None of that is present in this sort of poll. None. Public polls are estimates, not concrete predictions; that's what the margin of error is all about. Needless to say, with the polling on background checks the results are light years outside the margin of error. I sincerely doubt the author would have any issue with them whatsoever.
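To make the margin-of-error point concrete, here's a minimal sketch using the standard formula for a 95% margin of error on a poll proportion. The numbers are purely illustrative (my own, not from any specific poll):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p measured from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical numbers: ~90% support measured from 1,000 respondents.
p, n = 0.90, 1000
moe = margin_of_error(p, n)
print(round(moe * 100, 1))  # roughly +/- 1.9 percentage points
```

With a margin that small, a result near 90% can't plausibly be explained away as sampling noise around a 50/50 split; that's what "light years outside the margin of error" means in practice.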
Furthermore, and this is really the best part, you just linked to a guy who bet against the predictive power of scientific polling and lost. He didn't lose by just a little, either; he lost as badly as it was possible to lose. His bet was that if Nate Silver predicted fewer than 48 of the 50 states correctly, he would win. He said he was so confident that he even gave his opponent a free extra state. He even mentioned that if they were doing a REAL bet on Silver's methodology, Silver would have to get all 50 right to win.
Not only did Silver predict 48 correctly, he was right on all 50 (and DC), a success rate of 100%. If you're trying to make a statement against stats, you probably don't want to link to a guy who just got his ass handed to him.
This is so wrong I don't know where to start. Sports polling asks what people guess will happen, not what is. What those polls show is that people are bad at sports betting, not that polls are inaccurate.
He said Nate made the prediction and got it right. That doesn't mean the method behind his poll or prediction was any more valid than ANY OTHER poll or prediction that projected a different outcome. There were other predictions, and those lost. He also states that Nate's prediction being right once doesn't mean it will be right in the future. It certainly could be, but it's basically no better and no worse than watching when a groundhog pops out of the ground to tell whether winter will come early or late this year.
While something like a groundhog popping out of the ground doesn't let people control variables, that doesn't make other polling- or prediction-based methods any more valid. In real science they are just one more tool. They are never the only tool. Most scientists will tell you it's not even that great a tool.
OPINION polls are even worse than scientific statistics, where variables can be controlled for. It's worse still when the pollster uses a weighting system and does not disclose it, as in the polls listed in this thread. For example, what if the weighting system in that poll excluded any answers from respondents who were phoned at rural zip codes?
We do not know whether that was done. Not without full disclosure. A weighting system like that would certainly skew the data the poll presents.
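A toy illustration of the point, with entirely hypothetical numbers: silently dropping one group of respondents can move the headline number substantially even when every individual answer is recorded honestly.

```python
# Hypothetical respondent pool: (zip_type, supports_measure) pairs.
# Urban respondents mostly support; rural respondents mostly oppose.
responses = [("urban", True)] * 60 + [("urban", False)] * 10 \
          + [("rural", True)] * 10 + [("rural", False)] * 20

def support_rate(rows):
    """Fraction of respondents answering 'support'."""
    return sum(1 for _, supports in rows if supports) / len(rows)

full = support_rate(responses)  # all 100 respondents counted
filtered = support_rate([r for r in responses if r[0] != "rural"])  # rural dropped

print(round(full, 2), round(filtered, 2))  # 0.7 vs 0.86
```

Same raw data, two very different "results", which is exactly why undisclosed weighting or filtering rules matter.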
Nor are polls of people's opinions EVER considered good on their own. Language matters in such polls. MASSIVELY matters. In many cases it takes no more than a single word change in a question to drastically change the outcome of a poll.
http://cstl-cla.semo.edu/renka/Renka_papers/polls.htm
Polls CAN be made better than others. They are a useful tool as a guideline and a starting point; not once have I said otherwise. What I said is that it's bad decision making to place all your bets on the outcome of a poll. Many have in the past and been massively burned by it. I used JCPenney as an example in my last post. There are plenty of others.
The point I'm making is that even the best polls are still just "educated guesses" for a specific outcome. Rallying around the outcome of a poll as your sole basis for decision making is fucking stupid.
I also pointed out how the poll in question, as used, is a BAD one. It uses too small a sample size. There was no release of how the randomization of the sample selection was done, and no mention of how weighting for the various questions was handled. The language was not vetted, nor was any vetting process disclosed. The language was hardly neutral, either: media buzzwords were plentiful in the poll, such as the phrase GUN SHOW loophole, which connotes a derogatory outcome in anyone's mind. Nor was there any release on how the poll questions were asked. Was it an automated computer-voice poll, or was there a live person? If live, how were they directed to ask the questions? What was the specific order of the questions?
Those are all relevant to determining how useful any data gained from a poll is. In the case of the polls done by both Shoen and the GKP group, NONE of that information is released to the public. As such, they are horrible polls.