Our four-stage approach
We’re committed to delivering accurate, high-quality data. To achieve this, we’ve put a rigorous four-stage quality control process in place:
Before surveys:
We work with trusted panel providers, ensuring participants have been on the panel for at least two weeks and removing higher-risk sources.
Partnering with The Research Agency (TRA) helps us maintain strict sample requirements for the highest-quality respondents.
During surveys:
Our panel partner employs advanced measures such as geo and browser location checks, proprietary honesty detectors, digital fingerprinting, double opt-in validation, and machine learning to detect and prevent fraudulent responses.
We also implement internal quality scores and remove the bottom 20% of respondents to maintain robust data integrity.
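For illustration, here is a minimal sketch of what a bottom-20% cut on an internal quality score can look like. The score here is a hypothetical placeholder; in practice it combines multiple signals (speeding, straight-lining, attention checks and more):

```python
import numpy as np
import pandas as pd

# Hypothetical data: 1,000 respondents with a composite quality score in [0, 1].
rng = np.random.default_rng(0)
responses = pd.DataFrame({
    "respondent_id": range(1000),
    "quality_score": rng.uniform(0, 1, size=1000),
})

# Drop the bottom 20% of respondents by quality score.
threshold = responses["quality_score"].quantile(0.20)
kept = responses[responses["quality_score"] > threshold]  # roughly 800 rows remain
```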
Initial post-survey processing:
We carefully review open-ended responses to ensure they meet our quality standards and check them against internal brand and market norms.
We validate base sizes, apply weights where necessary, and confirm data completeness before producing reports.
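As a rough illustration of the weighting step, here is a simple cell-weighting sketch against census proportions. The age-band targets below are invented for the example; real targets come from census data for each market:

```python
import pandas as pd

# Hypothetical census targets for a single weighting variable (age band).
census_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

def cell_weights(df: pd.DataFrame, var: str, targets: dict) -> pd.Series:
    """Weight each respondent so the achieved sample matches the target shares."""
    achieved = df[var].value_counts(normalize=True)
    return df[var].map(lambda cell: targets[cell] / achieved[cell])

# Example sample that over-represents younger respondents (shares 0.50/0.30/0.20).
sample = pd.DataFrame({"age_band": ["18-34"] * 200 + ["35-54"] * 120 + ["55+"] * 80})
sample["weight"] = cell_weights(sample, "age_band", census_share)
# Weighted shares now match the census targets, and the mean weight stays at 1.
```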
Final post-survey processing:
We use generative AI to assess the relevance and appropriateness of open-text answers. Any responses deemed invalid are removed from further analysis.
We oversample by 10% to account for these quality assurance checks and inform Dynata of any flagged respondents at this stage.
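To make the oversampling arithmetic concrete, here is a small sketch. The `looks_relevant` check is a hypothetical stand-in for the generative AI screen, not its actual implementation:

```python
import math

def fielded_target(reporting_n: int, oversample: float = 0.10) -> int:
    """Field enough completes that the reporting base survives QA removals."""
    return math.ceil(reporting_n * (1 + oversample))

def looks_relevant(answer: str) -> bool:
    """Hypothetical placeholder: production uses generative AI, not a length rule."""
    return len(answer.strip()) >= 5

print(fielded_target(400))     # 440 completes fielded for an n=400 report
print(looks_relevant("asdf"))  # False -> respondent flagged for removal
```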
Nationally representative samples
To ensure representativeness and reliability, we align our sampling with census data for each market:
Australia & New Zealand: Minimum sample of n=400 nationally representative respondents.
United States: Minimum sample of n=800 nationally representative respondents, allowing finer analysis of subgroups, such as by state.
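As a simplified example of aligning a sample with census data, the sketch below allocates quota targets in census proportions. The gender shares are illustrative placeholders, not actual census figures:

```python
# Hypothetical census shares for one quota variable.
census_share = {"male": 0.49, "female": 0.51}

def quota_targets(n: int, shares: dict) -> dict:
    """Allocate a total sample of n across cells in census proportions."""
    return {cell: round(n * share) for cell, share in shares.items()}

print(quota_targets(400, census_share))  # {'male': 196, 'female': 204}
print(quota_targets(800, census_share))  # {'male': 392, 'female': 408}
```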
Why sample size matters, not population size
Population size doesn’t affect statistical accuracy; sample size does. For instance, our standard n=400 sample yields a margin of error of about ±4.9% at a 95% confidence level. Doubling the sample to n=800 narrows this only to about ±3.5%, so larger samples do improve precision, but with diminishing returns.
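These figures follow from the standard margin-of-error formula at a 95% confidence level, assuming maximum variance (p = 0.5). A quick check, which also covers the n=250 profile samples discussed below:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Half-width of a 95% confidence interval for a proportion p at sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (250, 400, 800):
    print(n, f"±{100 * margin_of_error(n):.1f}%")  # 250 -> ±6.2%, 400 -> ±4.9%, 800 -> ±3.5%
```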
Smaller profile-specific samples
When targeting a more specific demographic or behavioural profile, you may have fewer respondents. While n=400 is our benchmark for general population accuracy, n=250 respondents in a specific profile group still yields reliable insights, though the margin of error increases to around ±6.2% compared to ±4.9% at n=400.
This trade-off in precision is common in subgroup analysis. Even with a smaller base, the results remain reliable enough to inform decisions, provided the wider margin of error is kept in mind.
By implementing these quality controls and thoughtfully selecting our sample sizes, we ensure that the data you receive is both accurate and actionable, helping you make informed, evidence-based decisions.