Some marketing researchers know that using the margin of error (MOE) with convenience samples, non-probability samples, and online research panels is inappropriate. Yet many continue to report MOE because there does not seem to be a simple alternative, or any alternative at all. We set up a debate among four respected researchers and statisticians to duke it out: John Bremer, Nancy Brigham, Steve Mossop, and Trent Buskirk. May I simply say that the discussion was fabulous: intelligent, decisive, and passionate. I’m sure you’ll feel the same after viewing the video below.
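For context, the MOE being debated is the textbook one, which assumes a simple random sample. A minimal sketch of that formula (the half-width of a 95% confidence interval for a proportion), with the sample sizes chosen purely for illustration:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Textbook 95% margin of error for a proportion under simple
    random sampling: z * sqrt(p * (1 - p) / n). This simple-random-
    sampling assumption is exactly what breaks down for convenience
    samples and opt-in online panels."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5) for a sample of 1,000: roughly +/- 3.1 points.
print(round(margin_of_error(0.5, 1000) * 100, 1))
```

Quoting "plus or minus 3.1 points" for an opt-in panel of 1,000 applies this formula to a sample that was never drawn at random, which is the panelists' core objection.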
Unfortunately, two panelists had to cancel when the webinar was rescheduled. Fortunately, they shared some of their thoughts ahead of time.
Andrew Gelman, Professor, Department of Statistics, Columbia University: Andrew Gelman has received the Outstanding Statistical Application award from the American Statistical Association, the award for best article published in the American Political Science Review, and the Council of Presidents of Statistical Societies award for outstanding contributions by a person under the age of 40. Selected books include Bayesian Data Analysis, Teaching Statistics: A Bag of Tricks, and Data Analysis Using Regression and Multilevel/Hierarchical Models. @StatModeling
Andrew’s thoughts: “…I pointed out that if you’re concerned about non-probability samples and if you don’t trust the margin of error for non-probability samples, then you shouldn’t trust the margin of error for any real sample from a human population, given the well-known problems of nonavailability and nonresponse. When the nonresponse rate is 91%, any sample is a convenience sample….” Read the rest of Andrew’s thoughts here: http://andrewgelman.com/2015/01/23/whats-point-margin-error/
Jane Tang, SVP, Advanced Analytics at Vision Critical: Jane is a leading statistician who specializes in advanced analytics across a variety of industries, applied with a unique balance of academic and business acumen. A veteran of almost 20 years of experience in statistics and marketing sciences, Jane provides consulting and statistical analyses for market research and product development at Vision Critical. She has a Master of Science degree in statistics from the University of Manitoba.
Jane’s thoughts: We often informally repeat the same study over time to assess the variation between samples and get at a reliability (test/re-test) measure. Why don’t we do this more formally, empirically establishing the sampling variability of the various sampling methods? This is where professional organizations can take a leadership role. While the task would fall on the sampling suppliers, organizations should be the gatekeepers, establishing standards for how the empirical evidence is collected, analyzed, and distributed. Researchers should demand this information from their sampling supplier when they review sampling plans. While this doesn’t solve the problem of convenience samples and our inability to infer to the population, we would at least have some idea of the reproducibility of the results.
Documents mentioned during the webinar:
- AAPOR’s statement on credibility intervals, co-authored by Robert Santos, Trent Buskirk, and Andrew Gelman
- AAPOR’s comment on use of margin of error
- Ipsos Public Affairs statement on credibility intervals as referenced by Nancy Brigham.