
Our Latest Reporting Improvements

We’ve made several enhancements across the platform to give you greater control, flexibility, and depth in your reporting.


Statistical Significance


To determine whether differences between your stimulus and benchmarks are statistically meaningful, we’ve implemented tailored significance testing methods:

  • Paired t-testing: Applied in sequential monadic test designs to account for respondents seeing multiple stimuli.

  • Unpaired t-testing: Used in monadic test designs, where different groups respond to different stimuli.

These methods ensure your results reflect genuine shifts in perception rather than random variation.
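For illustration only, here is a minimal Python sketch of how these two tests differ in practice, using scipy; the sample scores and the 0.05 threshold are placeholder assumptions rather than platform values.

```python
# A minimal sketch of the two test types described above, using scipy.
# The scores below are illustrative; they are not platform data.
from scipy import stats

# Sequential monadic design: the same respondents rated both stimuli,
# so the two samples are paired.
stimulus_scores = [7, 6, 8, 5, 7, 6, 8, 7]
benchmark_scores = [6, 5, 7, 5, 6, 6, 7, 6]
t_paired, p_paired = stats.ttest_rel(stimulus_scores, benchmark_scores)

# Monadic design: different respondent groups rated each stimulus,
# so the samples are independent (unpaired).
group_a = [7, 6, 8, 5, 7, 6]
group_b = [6, 5, 7, 5, 6, 6, 7]
t_unpaired, p_unpaired = stats.ttest_ind(group_a, group_b)

print(f"Paired:   t={t_paired:.2f}, p={p_paired:.3f}")
print(f"Unpaired: t={t_unpaired:.2f}, p={p_unpaired:.3f}")
# A p-value below a chosen threshold (commonly 0.05) suggests the
# difference is unlikely to be random variation.
```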

Top Box Reporting


You can now fine-tune the Graph view to match your internal reporting preferences. Choose how to display response data from a range of visualisation options:

Answer scale options:

  • Top 2 Box (combined)

  • Top 2 Box (split)

  • Full answer scale

Benchmark comparison options:

  • Top 2 Box

  • Top Box

These options give you better control over how data is framed and interpreted for stakeholders.
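As a rough sketch of what these options mean for the numbers themselves, the example below derives Top Box and Top 2 Box figures from an assumed 5-point answer scale; the labels and counts are illustrative only.

```python
# Illustrative only: deriving Top Box and Top 2 Box figures from a
# hypothetical 5-point answer scale. Labels and counts are assumptions.
responses = {
    "Strongly agree": 120,   # Top Box
    "Agree": 180,            # second box
    "Neutral": 90,
    "Disagree": 70,
    "Strongly disagree": 40,
}
total = sum(responses.values())

top_box = responses["Strongly agree"] / total
top_2_combined = (responses["Strongly agree"] + responses["Agree"]) / total
top_2_split = (responses["Strongly agree"] / total, responses["Agree"] / total)

print(f"Top Box:              {top_box:.0%}")
print(f"Top 2 Box (combined): {top_2_combined:.0%}")
print(f"Top 2 Box (split):    {top_2_split[0]:.0%} + {top_2_split[1]:.0%}")
```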

Split by Audience Group


You can now break down results by demographic or custom segments directly in:

  • Graph view

  • Follow-up questions

Using the Split by feature, select an Audience Group to reveal how different segments within that group responded to each stimulus.

You can also flip the axis to see how each stimulus performs across audience groups — offering a versatile way to uncover patterns and segment-specific insights.
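Conceptually, splitting by an Audience Group is a pivot of the results table, and flipping the axis is its transpose. The sketch below shows that idea with pandas; the column names and scores are assumptions for illustration, not platform fields.

```python
# A hypothetical sketch of what "Split by" does with the underlying data,
# using pandas. Column names and values are illustrative assumptions.
import pandas as pd

df = pd.DataFrame({
    "stimulus": ["Concept A", "Concept A", "Concept B", "Concept B"],
    "audience_group": ["18-34", "35-54", "18-34", "35-54"],
    "desire_score": [7.2, 6.1, 6.8, 6.9],
})

# Split by audience group: one row per stimulus, one column per segment.
by_segment = df.pivot(index="stimulus", columns="audience_group",
                      values="desire_score")
print(by_segment)

# Flipping the axis: one row per segment, one column per stimulus.
print(by_segment.T)
```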

Swapping Regression Metrics


We’ve added more flexibility to the Drivers tab by allowing you to switch out the default metrics:

  • Click “Chart options” in the top-right corner.

  • Swap Desire or Distinctiveness for alternative metrics like Purchase Intent or Consideration.

This makes it easier to align drivers analysis with your business objectives and key performance indicators.
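To make the idea of swapping the outcome metric concrete, here is a simplified, hypothetical sketch of a drivers-style regression: the same driver attributes are regressed against one outcome metric, then against a swapped-in alternative. The column names, data, and model choice are assumptions and do not reflect the platform's actual model.

```python
# Simplified, hypothetical drivers analysis: regress an outcome metric on
# driver attributes, then swap the outcome. Data and columns are assumptions.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.DataFrame({
    "quality":         [7, 6, 8, 5, 7, 6, 8, 4],
    "value_for_money": [6, 5, 7, 4, 6, 7, 8, 3],
    "desire":          [7, 5, 8, 4, 7, 6, 8, 3],
    "purchase_intent": [6, 5, 8, 4, 6, 6, 7, 3],
})
drivers = df[["quality", "value_for_money"]]

def driver_weights(outcome):
    # Fit a linear model and return each driver's coefficient (its "weight").
    model = LinearRegression().fit(drivers, df[outcome])
    return dict(zip(drivers.columns, model.coef_.round(2)))

# Default outcome vs. a swapped-in alternative metric.
print("Desire:         ", driver_weights("desire"))
print("Purchase Intent:", driver_weights("purchase_intent"))
```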
