What are examples of RIWI’s long-term contracting vehicles for global data collection?
United States Department of State − Long-Term Agreement
Following technical review of past RIWI communications research on behalf of the United States Department of State (DoS), the DoS awarded RIWI a long-term agreement, signed on September 25, 2017. The agreement has an initial base period of one year, plus four additional one-year option periods. Under this long-term agreement, RIWI will deploy its patented technology to collect scientifically reliable population sentiment and to measure the effectiveness of digital communications in various regions around the world.
United Nations-World Food Programme − Long-Term Agreement
Following competitive tender and technical review in Fall 2016, RIWI now maintains a Long-Term Agreement (LTA) with the United Nations World Food Programme (UN-WFP), which establishes RIWI as the preferred bidder for Web-based surveys in 72 countries. The LTA allows WFP staff and country groups to rapidly utilize RIWI for Web surveys in the designated countries through pre-approved pricing. This is an indefinite quantity contract.
Omidyar Network − Master Service Agreement
Effective January 2017, RIWI signed a Master Service Agreement with the Omidyar Network, the worldwide philanthropic investment firm. This contracting mechanism enables investment managers and project leaders across the Omidyar Network to draw down on existing funds to commission custom research projects and new data feeds from RIWI. To date, RIWI has completed projects with the Omidyar Network in more than 60 countries.
United States Agency for International Development − Human Rights Support Mechanism
The United States Agency for International Development (USAID) Human Rights Support Mechanism (HRSM) is a five-year contract that enables long-term and rapid-response human rights programming by a consortium of selected USAID partner organizations. RIWI provides rapid-response proposals to Freedom House, the lead implementer, in order to service this multi-year contract, awarded for the period of October 1, 2016 to September 30, 2021. Following competitive tender and technical review, the HRSM was awarded to the consortium by the US government in October 2016. The HRSM contracting vehicle now enables USAID Missions to issue contracts to the members of the consortium quickly, avoiding the need to design and tender a new solicitation for each initiative.
ILGA-RIWI Global LGBTI Perceptions Index
In July 2016, RIWI entered into a five-year agreement with the Geneva-based International Lesbian, Gay, Bisexual, Transgender, and Intersex Association (ILGA) to conduct yearly studies of changing attitudes towards social issues concerning LGBTI people. The first study ran in 65 countries, incorporating the responses of more than 95,000 respondents. Each subsequent year of the survey will have an expanded scope. Partner and funder organizations from the international corporate and civil society sectors increasingly contribute to this study. More information on the survey and how it is being used by civil society organizations around the world can be found at: http://ilga.org/.
RIWI does not:
- Collect personally identifiable information that puts respondents at risk or that violates EU or other privacy regulations.
- Track or “cookie” potential respondents to trace their browsing history.
- Capture data that violates the Tri-Council Policy Statement on Ethical Conduct for Research Involving Humans.
- Field surveys that are longer than 5 minutes.
- Perform lead generation.
- Perform data capture projects in exchange for marketing.
RIWI stands for Real-time Interactive Worldwide Intelligence.
All RIWI samples are independent, replicable, and falsifiable in different countries or regions and over time, enabling RIWI-specific correlation tests of reliability and falsifiability.
RIWI has published a 21-month, all-country study in a major peer-reviewed journal, comparing answer options over time on social values that would not be expected to change significantly year over year or month over month. The data have remained highly stable and predictable across time, within regions. In many areas of international data science, client work, and scholarly work, we seek to prove this out further. Other third-party reviews are available upon request.
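To illustrate the kind of wave-over-wave stability check described above, here is a minimal sketch, not RIWI’s actual analysis pipeline, that correlates the answer-option shares of one question across two survey waves; the wave numbers are invented for illustration:

```python
# Sketch of a wave-over-wave stability check, assuming each wave's results are
# the share of respondents choosing each answer option (toy numbers, not RIWI data).

def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Answer-option shares for one social-values question in two monthly waves.
wave_1 = [0.42, 0.31, 0.17, 0.10]
wave_2 = [0.44, 0.29, 0.18, 0.09]
print(round(pearson_r(wave_1, wave_2), 3))  # ~0.99: highly stable across waves
```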
Statistically, based on third-party reports, everyone reading this has made this error at least once (but may not have answered a RIWI survey). These errors will occur more frequently given the enormous evolution and explosive growth of the Web, the growth of smartphone usage, IPv6, the rise of the Internet as a medium of commerce around the world, and new domain name extensions. People who make these errors constitute a random and highly representative sample of a given online population. They also tend to be non-habitual survey takers, as we have shown globally in two successive third-party published reports (GreenBook GRIT 2013 and GRIT 2014). Our data on why RIWI respondents answer our questions vary: they may find it interesting, they may be curious, or they may find the surveys clear, simple, and fast. These reasons vary considerably by culture and geography.
Without randomness, there is no representation of what the person on the street – the undecided voter, the average person, the silent majority – really thinks. This is true in transparent and opaque markets across the globe, for both descriptive and predictive data. Traditional polling and market research is just one vertical focus of our business; this sophisticated data sector has preached and recognized the importance of randomness in data collection since its inception.
Studies have shown that non-frequent survey takers are psychographically different from those who take surveys frequently. By accessing infrequent survey takers, RIWI is able to collect data from a much larger spectrum of the online population. Panel respondents and dynamically sourced respondents spend multiple hours a week, on average, completing surveys, while 84% of RDIT-sourced survey takers have not completed a survey in the past week. Numerous thought leaders and methodologists have expressed support for these views and for how our data show that RIWI respondents do not display the same tendencies as frequent survey takers. These tendencies produce errors of ‘recency’ and ‘frequency’, first described by the psychologist-researcher John B. Watson in his studies of behaviorism. By contrast, RIWI respondents globally are not bound to a chain of well-entrenched reflexes, leading to more genuine opinions.
Less and less. Today, the predominance of English-language content does bias the Web against minority cultures and non-English-language Web users, but this is changing. We mitigate this challenge through programmatic integration with special RDIT supply that is internationalized. Question design can also mitigate it, in particular with visuals and extensive triple-checked translation work. And keep in mind: even in America, at least 45% of homes are reachable only by cell phone, and possible respondents in those homes are thereby protected by strict FCC rules on auto-dialing. If legacy technologies cannot reliably reach people in wealthy countries, they will encounter even greater challenges in developing countries. There are also numerous privacy concerns, especially in Europe, where the capture of personally identifiable information and related rules are regulated at the EU level.
RIWI surveys are able to target specific regions within most developed countries. RIWI is passionate about geolocation globally and is constantly improving how our surveys can be geo-targeted within hard-to-reach countries. When location is an essential measurement, we always suggest verifying it within the survey. For example, we are able to target large municipalities such as New York City either by using designated market area (‘DMA’) definitions or by a radius measurement from a defined latitude/longitude, derived from our patent-pending code algorithms.
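As an illustration of radius-based geo-targeting, here is a minimal sketch, not RIWI’s patent-pending algorithm, that tests whether a respondent’s coordinates fall within a given radius of a defined latitude/longitude using the standard haversine formula; the function names and coordinates are hypothetical:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def in_target_radius(resp_lat, resp_lon, center_lat, center_lon, radius_km):
    """True if a respondent's geo-located coordinates fall within the target radius."""
    return haversine_km(resp_lat, resp_lon, center_lat, center_lon) <= radius_km

# Example: is a respondent geo-located in Brooklyn within 40 km of midtown Manhattan?
print(in_target_radius(40.6782, -73.9442, 40.7549, -73.9840, 40))  # True
```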
Yes. We have been asked to serve on ESOMAR data committees, been written up in niche and highly respected data journals, been featured in global media, presented at ESOMAR and AAPOR, and won several data awards judged by methodologists at peer data companies. We have been peer-reviewed, and remain under ongoing review, by some of the most data-sophisticated organizations in the world. Our clients themselves will vouch for our methodology as powerful and distinct.
Like many other companies and methodologists in data capture, we understand that margin of error is a theoretical construct: it is the error produced by interviewing a random sample rather than the entire population whose opinion you care about. RIWI’s coverage-bias mitigation here is proprietary: although we use margin of error because clients expect and understand it, our data are reflective of online usage, and, as disclosed in our IP and third-party reviews, our entire intent is to reduce coverage bias. So if our population parameter is the online parameter only, then margin of error is much less relevant to RIWI. We still report it, subject to the caveat that it should not generally be relied on for online data. Here is why: based on the sample size (and some other factors), the pollster can calculate the margin of sampling error, which describes how close the sample’s results likely come, in theory, to the results that would have been obtained by interviewing everyone in the population, within plus or minus a few percentage points. We remain actively committed to exploring these issues at events with which we are associated, such as AAPOR and ESOMAR, in order to review our positions on this complex matter with multiple constituencies. Our goal is clarity and transparency in methodology, and to support our customers in making the best decisions possible.
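For reference, here is the standard textbook calculation of the margin of sampling error for a proportion, which is what clients typically expect; this is the generic formula, not RIWI’s proprietary coverage-bias mitigation:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of sampling error for a proportion p at ~95% confidence (z = 1.96).

    Assumes simple random sampling; p = 0.5 gives the conservative (widest) bound.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A sample of 1,000 respondents gives roughly +/- 3.1 percentage points.
print(f"{margin_of_error(1000):.3f}")  # ~0.031
```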
No. We take audit, risk, and legal controls very seriously.
RDIT reduces many of the biases associated with online panels by intercepting random Web users and asking clear, simple surveys. Respondents presented with RIWI surveys constitute a near-perfect proxy of Web activity in any given country, as third-party Web analytics testing has shown. Unlike traditional panels, RIWI reaches younger age groups, ethnic populations, and rural areas with ease. Depending on client needs, data capture within a given territory can be ramped up.
Millions. The number matters vastly less than quality and other elements of statistical relevance. Our scale is, to a certain extent, incalculable: we can intercept hundreds of millions of potential survey respondents in a given month. Big Data? Not our concern. At RIWI, it’s all about smart data.
With RDIT, RIWI is able to survey the true Web population. Industry papers reveal that river sample survey takers will do an average of 40 surveys a week across 8 different panels; 84% of RIWI survey takers have not done a survey in the past week.
The data collected by RIWI are frequently considered a ‘potential or near-probability’ sample. RIWI provides a global, random, replicable probability sample, with no human manipulation of the data, to clients across multiple sectors. In a recent academic paper in Statistical Science, published by the Institute of Mathematical Statistics (Statistical Science 2017, Vol. 32, No. 2, 279–292), Matthias Schonlau and Mick P. Couper describe RIWI’s Random Domain Intercept Technology as a non-probability sample akin to “stopping passersby on the street for an interview on the spot.” While researchers may disagree on the degree to which RIWI is capable of yielding a true probability sample, they agree that the RIWI approach is statistically compelling and that post-stratification techniques can be used to re-weight observations in the RIWI sample to match the known population distribution on variables of interest. We expect the debate and scholarship over RIWI’s ‘probability sample status’ to continue as RIWI gains increasing attention in the academic community worldwide. RIWI researchers work with external parties to better understand these evolving, complex, and important issues (e.g., potential selection bias in all modes of data collection), as well as how best to articulate them.
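A minimal sketch of the post-stratification re-weighting the authors describe, assuming a single stratifying variable (e.g., age band) with a known census distribution; the labels and shares below are invented for illustration:

```python
from collections import Counter

def poststratify_weights(sample_strata, population_shares):
    """Compute a post-stratification weight per respondent.

    sample_strata: list of stratum labels, one per respondent (e.g. age bands).
    population_shares: known population distribution over the same labels.
    Each respondent's weight = population share / observed sample share.
    """
    n = len(sample_strata)
    sample_shares = {k: v / n for k, v in Counter(sample_strata).items()}
    return [population_shares[s] / sample_shares[s] for s in sample_strata]

# Toy example: the sample over-represents 18-34s relative to a known census split.
sample = ["18-34"] * 60 + ["35-54"] * 25 + ["55+"] * 15
census = {"18-34": 0.35, "35-54": 0.35, "55+": 0.30}
weights = poststratify_weights(sample, census)
print(round(weights[0], 3))   # 18-34 weight ~0.583 (down-weighted)
print(round(weights[-1], 3))  # 55+ weight 2.0 (up-weighted)
```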
Not usually. When you collect data from a much more diverse section of a given population, there will always be differences from data gathered by panels. When we recruit to panel, we get about 1 in 2,000 people to sign up. Compare that 0.05% willing to join a panel with our 10% completion rates globally on a 10-question close-ended survey: in a single-invite scenario, 200 times more people are willing to participate in a RIWI survey than to join a panel. We analyze, share, and publish these differences openly and confidently.
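A quick check of the arithmetic above, using only the figures quoted in this answer:

```python
panel_signup_rate = 1 / 2000        # ~0.05% agree to join a panel
riwi_completion_rate = 0.10         # ~10% complete a 10-question RIWI survey

print(f"{panel_signup_rate:.2%}")                # 0.05%
print(riwi_completion_rate / panel_signup_rate)  # 200.0x more willing participants
```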
Do you know anyone who answers telephone surveys or panel surveys honestly? Seriously? RIWI technology was developed in response to the reality that there is a need for reliable, global, predictive, representative, and anonymous privacy-compliant surveys and message tests that are simply not possible or desirable with traditional Web opt-in panel surveys or telephone surveys. Opt-in panel surveys are delivered by email links or text messages. These delivery methods are not secure and, depending on the recipient’s email privacy settings, almost always appear to the potential respondent, who is often completing the survey for some kind of incentive, as spear-phishing links or suspected malware. And if you are familiar with the Equifax or Sony hacks, you should know that once you are on a panel, you put yourself at severe risk of a privacy breach: a hacker can potentially mine all your responses, tie them to you, and release them to the world with all your personal information. And that is just the privacy concern. Data quality is an ever-rising concern with panel-based survey data, especially in fragile or conflict states and in non-permissive, monitored environments, where RIWI strongly recommends that no client ever consider panel-based data, in the interests of mitigating potential risk to respondents and to the client.
By contrast, RIWI surveys appear on real, registered, non-trademarked rotating websites, as described elsewhere in our FAQs. There are no ad units or concerning cookies that give rise to malware warnings or ad blocking on the global RIWI platform, which collects consent-based data continuously and globally. The patented RIWI random domain intercept platform is the gentlest, most respectful way of asking a potential online respondent a question. If respondents are not interested, they exit; they are not “hijacked,” as can happen on some panel surveys when respondents are motivated to win rewards by completing the survey. Potential RIWI respondents were already navigating the Web when they stumbled randomly into a RIWI survey or message product test, so we do not waste their time. More than 1 billion people in the world have been exposed to RIWI questions, many posed by the most elite academic groups and the most data-sophisticated government agencies and finance clients in the world now working with RIWI. We do not receive complaints about contacting people inappropriately during the dinner hour or bombarding someone’s inbox with survey requests that look eerily like spam or phishing attacks.
It’s not just us saying that traditional phone surveys have severe methodological problems that go far beyond bothering people during the dinner hour and collecting information from non-representative people. It is 2017. Please don’t let social science methodologists who cling to old-world models of data collection fool you about what constitutes “best practice”; what qualified as best practice in 1982 may no longer. As Keiding and Louis point out in the Journal of the Royal Statistical Society (Keiding N, Louis TA. “Perils and potentials of self-selected entry to epidemiological studies and surveys.” J Roy Stat Soc A Sta. 2016;179:319–376), the “gold standard” household survey is more like “tarnished gold” when response rates, social desirability bias, coverage bias, motivation bias, and volunteer bias pollute the integrity of the data being collected. (See also: Kreuter F. “Facing the nonresponse challenge.” Ann Am Acad Polit SS. 2013;645:23–35; Barratt MJ et al. “Moving on From Representativeness: Testing the Utility of the Global Drug Survey.” J Subst Abuse. 2017 Jun 30;11:1178221817716391. doi: 10.1177/1178221817716391. eCollection 2017.)
Yes. Our survey platform, although device-agnostic, was built specifically for smartphone and tablet data collection. Every survey we build is mobile-optimized with exceptional speed to maximize reliability and response rate. Our user experience is cross-browser and cross-device friendly and has been in continuous development since 2009, when it was first incubated in an academic research unit.
No, as defined by EU and similar regulations.
RIWI specializes in conducting short, simple, respectful surveys; based on our exhaustive research, no more than 15 linear questions are recommended for the highest data quality. Long surveys of up to 150 questions can be modularized into discrete constructs, with individual respondents answering different numbers of questions based on their willingness to participate and the data stitched together post-survey, as sketched below.
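A minimal sketch of such modularization, assuming random module assignment; this is illustrative only, not RIWI’s actual assignment or stitching logic:

```python
import random

def build_modules(questions, module_size=15):
    """Split a long question list into discrete modules of a fixed size."""
    return [questions[i:i + module_size] for i in range(0, len(questions), module_size)]

def assign_module(respondent_id, modules):
    """Deterministically assign one module per respondent, seeded by respondent ID,
    so each respondent sees only a short survey; answers are stitched together
    across respondents after fielding."""
    return random.Random(respondent_id).choice(modules)

# 150 questions become ten 15-question modules; each respondent answers one.
questions = [f"Q{i}" for i in range(1, 151)]
modules = build_modules(questions)
print(len(modules))                           # 10
print(assign_module("resp-001", modules)[0])  # first question of that respondent's module
```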
RIWI holds patents on all of our processes and technology. In all parties’ protective interests, please ask us how you may license our platform or work with us directly for maximal efficiency. RIWI works with proven partners seeking the very highest-quality global, multi-country data on an ongoing basis. We will be notified independently if our patents are violated and will vigorously defend our IP.
Please contact our Global Engagement Lead, Shashank Tiwari, at firstname.lastname@example.org.
As transparently as possible. Those who complete the final answer in a linear survey are counted in the response rate; others are not. The data collected from those who do not finish can be included in the final data delivery for analysis. In research publications, we occasionally report response rates differently to suit journal rules, with clear caveats, based on the percentage of respondents who opt in by answering the first question and thereafter answer all the remaining ones. Note that RIWI response rates vary considerably across the globe and are generally better than all other modalities in hard-to-reach nations of the world. We also compare very favorably with telephone or panel response rates once one digs into how those rates emerge in those contexts: based on the raw numbers of people telephoned, or proactively asked to join a panel to win rewards, who thereafter go on to answer the whole survey. Providing incentives would, we know, raise our response rates, but would compromise data quality.
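The two response-rate definitions mentioned above can be made concrete with a small sketch; the counts below are hypothetical:

```python
def response_rates(exposed, answered_first, completed_all):
    """Two common response-rate definitions for a linear survey.

    exposed:        potential respondents shown the first question
    answered_first: respondents who opted in by answering question 1
    completed_all:  respondents who answered every question
    """
    completion_rate = completed_all / exposed          # completers over everyone exposed
    conditional_rate = completed_all / answered_first  # completers over first-question opt-ins
    return completion_rate, conditional_rate

print(response_rates(exposed=100_000, answered_first=12_000, completed_all=7_500))
# (0.075, 0.625)
```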
In the US, our average project is approximately 50%-50% in terms of males and females who have the option to respond, with some variation in bespoke studies. This 50%-50% split changes in countries where men disproportionately use the Web. Since we intercept a given population randomly, we skew towards people who use the Internet more often, which means high numbers of young people, especially in developed countries. Our comparative upward weights on the age skews (which we weight to population census) are therefore understandably different from the unweighted data collected by panel companies.
RIWI survey respondents are Internet-only. When comparing the Internet population to the total population in any given country, statistics show that Web users tend to be younger, more educated, and higher-income. Within the Internet population, we reach much more diverse groups, such as ethnic minorities, rural populations, and different races. We mash up RIWI data with other valuable data, such as feedback from affected populations in parts of Africa where we work with many non-governmental organizations. In some very poor countries, rural use of Web-enabled devices is very high. In some developing countries, we see a much larger socio-economic spread of response than in others, where the bias is toward wealthier populations. Our vast respondent data have given us market intelligence on how countries around the world answer surveys differently, and why.
Examples of academic work using RIWI technology can be found in the publications below. Peer-reviewed publications by independent scholars and by RIWI researchers are continually in various stages of preparation.
Seeman, N., and B. Seeman. 2017. “Monitoring Receptivity to Online Health Messages by Tracking Daily Web Traffic Engagement Patterns: A Review of More than 13 Million US Web Exposures over 1,235 Days.” Healthcare Quarterly 20(3), October.
Peixoto, T., and M.L. Sifry, eds. 2017. Civic Tech in the Global South: Assessing Technology for the Public Good. Washington, DC: World Bank.
Seeman, N., D.K. Reilly, and S. Fogler. 2017. “Suicide Risk Factors in U.S. College Students: Perceptions Differ in Men and Women.” Suicidology Online 8:24-30.
Seeman, N., S. Tang, A.D. Brown, and A. Ing. 2016. “World Survey of Mental Illness Stigma.” Journal of Affective Disorders 190:115-21. doi: 10.1016/j.jad.2015.10.011.
Seeman, N., S.G. Fogler, and M.V. Seeman. 2016. “Mental Health Promotion through Collection of Global Opinion Data.” Journal of Preventive Medicine and Care 1(1):23-36.
Seeman, N. 2015. “Use Data to Challenge Mental-Health Stigma.” Nature 528(7582):309. doi: 10.1038/528309a.
Raftree, L., and M. Bamberger. 2014. Emerging Opportunities: Monitoring and Evaluation in a Tech-Enabled World. New York: Rockefeller Foundation.
Macer, T. 2013. “Disruptive Change.” Research World 2013:30-35. doi: 10.1002/rwm3.20019.
Seeman, N., and A. Ing. 2013. “2013 GRIT Consumer Participation in Research Report: A Study of Survey-Takers in 200+ Countries and Regions around the World.” New York: AMA Communication Services and GreenBook. https://riwi.com/wp-content/uploads/2015/06/GRITCPR2013_SeemanIng.pdf
Seeman, N., and M.V. Seeman. 2010. “Autism and the Measles, Mumps, and Rubella Vaccine: Need to Communicate a Health Study Retraction to Patients.” Journal of Participatory Medicine 2: http://www.jopm.org/evidence/research/2010/12/17/autism-and-the-measles-mumps-and-rubella-vaccine-need-to-communicate-a-health-study-retraction-to-patients
Seeman, N., A. Ing, and C. Rizo. 2010. “Assessing and Responding in Real Time to Online Anti-Vaccine Sentiment during a Flu Pandemic.” Healthcare Quarterly 13(Spec No):8-15.
Have independent University-based scholars used the RIWI platform? Has RIWI passed or been formally exempted from University-based Institutional Research Board (IRB) approval? Does RIWI technology satisfy ethics review for research involving humans?
RIWI works with scholars across a wide range of disciplines at many of the most eminent universities and research-based institutions around the world. By contrast, social media analytics tools mining open-source data sets are increasingly frowned upon by university researchers as non-compliant from an ethics and privacy perspective (Gibney E. “Ethics of Internet research trigger scrutiny.” Nature 550, 16–17 (05 October 2017)). RIWI data collection is not considered human subjects research for the purposes of university IRB approval. Unlike other methodologies that collect personally identifiable information in order to provide incentives or track individuals’ online habits, RIWI recruits only random, anonymous potential respondents who provide no personally identifiable information; all US and EU privacy rules are thus satisfied. Several leading Faculties of Medicine have determined that RIWI-collected data do not constitute human subjects research as defined by the US Department of Health and Human Services and by FDA regulations.