Common Statistical FAQ about RIWI
Yes. This enables RIWI-specific tests of reliability and falsifiability. RIWI has published a 21-month, all-country study in a major peer-reviewed journal, comparing answer options over time on social values that would not be expected to change significantly year over year or month over month. The data have remained highly stable and predictable across time, within regions. In international data science, in client work, and in scholarly work, we seek to prove this out further and further. We are happy to furnish clients with other third-party reviews. We are unaware of any other data-collection technology that has been more subject to peer review and third-party review than ours since our commercialization in 2012.
Statistically, based on third-party reports, everyone reading this has made this error at least once (though they may not have answered a RIWI survey). These errors will occur more frequently given the Web's continuing evolution and expanding architecture, the growth of smartphone usage, IPv6, the rise of the Internet as a medium of commerce around the world, and new domain name extensions. People who make these errors constitute a random, highly representative sample of a given online population. They also tend to be non-habitual survey takers, as we have shown globally in two successive third-party published reports (GreenBook GRIT 2013 and GRIT 2014). Our data on why RIWI respondents answer our questions vary: they may find it interesting, they may be curious, or they may find the surveys clear, simple, and fast. These reasons vary considerably by culture and geography.
Traditional polling and market research form just one vertical focus of our business, but a very important one, especially since this sophisticated data sector has recognized and preached the importance of randomness in data collection since its inception. Without randomness, there is no representation of what the person on the street – the undecided voter, the average person, the silent majority – really thinks. This is true for descriptive and predictive data, and in transparent or opaque markets across the globe.
It is a demonstrable fact that non-frequent survey takers are psychographically different from those who take surveys frequently. By accessing these infrequent survey takers, RIWI is able to collect data from a much larger spectrum of an online population. Panel respondents and dynamically sourced (previously, “river sample”) respondents spend hours a week, on average, doing surveys. Globally, 84% of RDIT-sourced survey takers have not done a survey in the past week. If you believe a person who randomly stumbles across a RIWI survey is different from a person who answers multiple long, tedious surveys every day for panel companies, then we can help you. Many serious thought leaders and methodologists tell us, in public or in private, that they agree with these views and that our data prove RIWI respondents do not display the tendencies, and resulting axiomatic errors, of ‘recency’ and ‘frequency’ established by the early-20th-century behaviorist John B. Watson. By contrast, RIWI respondents globally are not bound to a chain of well-entrenched reflexes.
Less and less. Today, the predominance of English-language content does bias the Web against minority cultures and non-English-language Web users, but this is changing. We mitigate this challenge through programmatic integration with special internationalized RDIT supply. Question design can also mitigate it, in particular with visuals and extensive triple-checked translation work. And keep in mind: even in America, at least 45% of homes are reachable only by cell phone, and possible respondents in those homes are thereby subject to strict FCC rules regarding auto-dialing. If legacy technologies cannot reliably reach people in wealthy countries, they will encounter even greater challenges in developing countries. There are also numerous privacy concerns, especially in Europe, where the capture of personally identifiable information and related rules are regulated at the EU level.
Yes and yes. We are passionate about geo-location globally; it is often much harder than many data vendors imply, and we are constantly improving how we do it. When location is an essential measurement, we always suggest verifying it in-survey. We have the capacity to target specific regions within most developed countries. For example, we can target a large municipality such as New York City either by using designated market area (‘DMA’) definitions or by a radius measurement from a defined latitude/longitude, derived from our patent-pending algorithms.
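RIWI's own targeting algorithms are patent-pending and not shown here; purely as an illustration of the radius approach just described, the sketch below uses the standard haversine great-circle formula (all coordinates and the 40 km radius are hypothetical):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two latitude/longitude points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical check: is a respondent's estimated location within 40 km
# of a defined centre point (here, midtown Manhattan)?
center = (40.754, -73.984)
respondent = (40.650, -73.950)
within = haversine_km(*center, *respondent) <= 40.0
```

A production system would combine a distance check like this with IP- and device-based location estimates, which is where most of the real difficulty lies.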
Yes. We have been asked to sit on ESOMAR data committees, written up in niche and highly respected data journals, featured in global media, and presented at ESOMAR and AAPOR, and we have won several data awards as judged by methodologists at peer data companies. We have been peer-reviewed, and are in ongoing review, by some of the most data-sophisticated organizations in the world. Our clients themselves will vouch for our methodology as powerful and distinct.
Like many other companies and methodologists in data capture, we understand that margin of error is a theoretical construct: the error produced by interviewing a random sample rather than the entire population whose opinion you care about. We report margin of error because clients expect and understand it, but our data reflect online usage, and, as disclosed in our IP and third-party reviews, our entire intent is to reduce coverage bias; our coverage-bias mitigation is proprietary. If our population parameter is the online parameter only, then margin of error is much less relevant to RIWI. We still report it, subject to the caveat that it should not generally be relied on for online data. Here is why: based on the sample size (and some other factors), the pollster can calculate the margin of sampling error, which describes how close the sample's results likely come, in theory, to the results that would have been obtained by interviewing everyone in the population, within plus or minus a few percentage points. We remain actively committed to exploring these issues at events with which we are associated, such as AAPOR and ESOMAR, in order to review our positions on this complex matter with multiple constituencies. Our goal is clarity and transparency in methodology, supporting our customers to make the best decisions possible.
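For reference, the textbook calculation behind the margin of sampling error for a proportion is straightforward; this minimal sketch assumes a simple random sample and a 95% confidence level (the coverage-bias caveats above still apply):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of sampling error for a proportion p at sample size n.

    z = 1.96 corresponds to a 95% confidence level; p = 0.5 gives the
    widest (most conservative) interval.
    """
    return z * math.sqrt(p * (1 - p) / n)

# For a simple random sample of 1,000 respondents:
moe = margin_of_error(1000)  # about 0.031, i.e. roughly ±3 percentage points
```

Note how the formula depends only on sample size and the estimated proportion; it says nothing about who could never have entered the sample in the first place, which is exactly the coverage question discussed above.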
No. We take audit, risk, and legal controls very seriously.
The people to whom we present surveys are a near-perfect proxy for Web activity in any given country, and third-party Web analytics testing has proven this. RDIT reduces many of the biases associated with online panels by intercepting random Web users and asking clear, simple surveys. We reach younger age groups, ethnic populations, and rural areas, all of which have been long-standing challenges for traditional panels. We can ramp up our US data capture within hours to meet client needs.
Millions. The number matters vastly less than quality and other elements of statistical relevance. Our scale to a certain extent is incalculable. We have hundreds of millions of potential survey respondents that we can intercept in a given month. Big Data? Not our concern. At RIWI, it’s all about smart data.
We believe, as other major sample providers have admitted, that ‘river sample’ as we knew it no longer exists. Today we are left with programmatic purchasing of dynamically sourced respondents by suppliers in the ‘river’ industry. What does that mean? Routers and marketplaces exchange survey takers in an increasingly efficient way. The result is that a small part of the population spends hours a week being shuttled between survey platforms. Industry papers reveal that these survey takers do an average of 40 surveys a week across 8 different panels. By contrast, 84% of RIWI survey takers have not done a survey in the past week. We are surveying the true Web population, not niche groups of high-intent survey takers.
Randomness is the holy grail of representative data collection. Our IP and tech platform are built on this premise, and third parties frequently identify us as a ‘potential or near-probability’ sample. Our position is that we are not a convenience sample like a panel, nor a social media stream, nor a gaming modality of data capture, nor a so-called ‘river sample’, nor akin to an open-access Twitter™ feed or blog pulse measure. We strive to be recognized, more and more, as a global, random, replicable probability sample, achieved without human manipulation of our data. Our raw, unweighted data from our technology platform undergo ongoing expert evaluation, which we share with clients and third-party reviewers on an ongoing basis.
Not usually. When you collect data from a much more diverse section of a given population, there will always be differences from data gathered by panels. When we recruit to panel, we get about 1 in 2,000 people to sign up. Compare that 0.05% willing to join a panel to our 10% completion rates globally on a 10-question, close-ended survey: in a single-invite scenario, 200 times more people are willing to participate in a survey than to join a panel. We analyze, share, and publish these differences openly and confidently.
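The arithmetic behind that comparison can be made explicit, using the figures as stated above:

```python
# Figures from the text: roughly 1 in 2,000 people agree to join a panel,
# while about 10% complete a 10-question RIWI survey in a single-invite scenario.
panel_signup_rate = 1 / 2000          # 0.05% agree to join a panel
riwi_completion_rate = 0.10           # 10% complete the survey

# Ratio of willingness to participate: 10% / 0.05% = 200x
ratio = riwi_completion_rate / panel_signup_rate
```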
Yes. Our survey platform, although device-agnostic, was built specifically for smartphone and tablet data collection. Every survey we build is mobile-optimized with exceptional speed to maximize reliability and response rate. Our user experience is cross-browser and cross-device friendly and has been in continuous development since 2009, when it was first incubated in an academic research unit.
No, as understood by EU or like regulations.
Occasionally. We are firm believers in the short, simple, respectful survey, and based on our exhaustive research we recommend no more than 15 linear questions for the highest data quality. That said, we have proven through third parties that long, 150-question surveys, modularized into discrete constructs, can be completed with very powerful results. This process relies on individual respondents answering different numbers of questions, based on their willingness to participate, and on the data being stitched together post-survey.
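RIWI's actual stitching process is not described here; as a purely hypothetical sketch of the modular idea, assume a long questionnaire split into a short common core plus discrete modules, with each respondent's partial answers merged post-survey by respondent ID (all names and question counts are illustrative):

```python
import random

# Hypothetical layout: a 150-question instrument split into a 3-question
# common core plus three 50-question modules. Each respondent sees the core
# plus one module, so nobody faces all 150 questions.
CORE = ["age", "gender", "region"]
MODULES = {
    "A": [f"q{i}" for i in range(1, 51)],
    "B": [f"q{i}" for i in range(51, 101)],
    "C": [f"q{i}" for i in range(101, 151)],
}

def assign(respondent_id):
    """Pick the question list for one respondent (random here; a real
    system might assign modules by hashing the respondent ID)."""
    module = random.choice(list(MODULES))
    return CORE + MODULES[module]

def stitch(records):
    """Merge per-module answer dicts into one dataset keyed by respondent ID."""
    combined = {}
    for rec in records:
        combined.setdefault(rec["id"], {}).update(rec["answers"])
    return combined
```

The analytical work then happens on the stitched dataset, where each construct is estimated from the subset of respondents who saw its module.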
No. We have patents on all of our processes. In all parties' interests, please ask us about licensing our platform or working with us directly for maximal efficiency. We are looking for proven partners who want the very highest-quality global or multi-country data – not just “good enough” data – on an ongoing basis. We will be notified independently if our patents are violated and will vigorously defend our IP.
Please contact our Sales and Marketing Manager, Shashank Tiwari, at firstname.lastname@example.org.
Yes, but please contact us first so we can get a sense of your needs and most applicable information to send you. Please contact us at: email@example.com.
As transparently as possible. Those who complete the final answer in a linear survey are counted in the response rate; others are not. The data collected from those who do not finish can be included in the final data delivery for analysis. In research publications, we occasionally distinguish response rates to suit journal rules, with clear caveats, based on the percentage of respondents who opt in to answer the first question and thereafter answer all the remaining ones. Note that RIWI response rates vary considerably across the globe and are generally better than all other modalities in hard-to-reach nations of the world. In terms of true response rates, we compare very favorably to telephone or panel response rates once one digs into how those rates emerge: against the raw number of people telephoned, or proactively asked to join a panel to win rewards, only a small share thereafter takes every step needed to answer the whole survey. Providing incentives would, we know, raise our response rates, but would compromise data quality.
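Denominators are what make such comparisons honest. This sketch, with entirely hypothetical figures, contrasts a rate computed over those who answer the first question of a linear survey with a rate computed over the raw number of people telephoned:

```python
# Hypothetical figures only, to show how the two rates are constructed.

# Linear online survey: of those who opt in to the first question,
# what share completes every remaining question?
first_q_optins = 5000
completed_all = 500
online_rate = completed_all / first_q_optins   # 0.10, i.e. 10%

# Telephone: the denominator is the raw number of people dialled.
phone_dialled = 20000
phone_completed = 1200
phone_rate = phone_completed / phone_dialled   # 0.06, i.e. 6%
```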
In the US, our average project is split approximately 50%-50% between males and females who have the option to respond, with variation in bespoke studies. This 50%-50% average changes in countries where men disproportionately use the Web. Since we are intercepting a given population randomly, we skew towards people who use the Internet more often, which means high numbers of young people, especially in developed countries. Our comparative upward weights on the age skews (which we weight to population census) are therefore understandably different from the unweighted data collected by panel companies.
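The census weighting mentioned above is, in its simplest form, post-stratification: each group's weight is its census share divided by its unweighted sample share. A minimal sketch with hypothetical figures:

```python
# Hypothetical shares: a youth-skewed unweighted online sample versus
# census age-group proportions for the same population.
census = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
sample = {"18-34": 0.55, "35-54": 0.30, "55+": 0.15}  # Web users skew young

# Post-stratification weight per group = census share / sample share.
weights = {group: census[group] / sample[group] for group in census}
# Younger respondents are weighted down (< 1); older respondents up (> 1).
```

Applying these weights makes the weighted sample's age distribution match the census exactly, which is why the weighted figures differ from a panel's unweighted ones.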
Yes, they are Internet-only. When comparing the Internet population to the general population in any given country, statistics show that Web users tend to be younger, more educated, and higher-income. We do know that within the Internet population, we reach much more diverse populations, whether ethnic minorities, rural populations, or different races. We also mash up RIWI data with other valuable data, such as feedback from affected populations in parts of Africa where we work with many global NGOs. In some very poor countries, rural use of Web-enabled devices is very high. In some developing countries, we see a much larger socio-economic spread of response than in others, where the bias is toward wealthier populations. Our vast respondent data have given us market intelligence on how countries around the world answer surveys differently, and why.
We price per completed survey, and price varies by geography. We are very cost-competitive with legacy data-capture models, such as panels, especially in multi-country work. Online population prevalence and length of survey are the most significant factors affecting price. We offer NGO and university discounts. Please contact us at: firstname.lastname@example.org.