If you want to know the nuts-and-bolts behind Valuegraphics you've come to the right place.
A total of ten surveys were launched, beginning in June 2016. Each of these surveys had a specific theme, for example: finance, education, or recreation. We also did a couple of follow-up surveys to explore some of the findings more deeply, but those first ten surveys are what matter most.
Across these surveys, a total of 340 questions were asked about all aspects of what it means to be a human being alive today, and what people value, want, need, and expect. The questions were derived from proven social-science tools: the World Values Survey, the World Happiness Index, the Bhutan Gross Domestic Happiness Index, and various other established studies.
Every valid respondent answered the values-based questions, and we used the resulting data to identify the ten Valuegraphics Archetypes who agree on a statistically impressive number of values, wants, needs, and expectations. In other words, we found ten huge groups of people who agree on pretty much everything.
We continuously access the data pool to create Valuegraphics Profiles that help organizations understand how to more effectively motivate the people they want to impact. Each time we extract data, more insights are uncovered, and our understanding of how to motivate people improves. At the time of writing, the database contains 75,000 surveys and continues to grow.
Since roughly 1,850 surveys would be a more-than-sufficient sample size to statistically model the combined population of Canada and the USA, our pool of 75,000 surveys far exceeds the threshold required for statistical validity.
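The arithmetic behind that sample-size claim can be sketched with the standard margin-of-error formula for a proportion, assuming a 95% confidence level and worst-case variance (p = 0.5); for a population in the hundreds of millions, the finite-population correction is negligible:

```python
import math

def margin_of_error(n, z=1.96, p=0.5):
    """Margin of error for a proportion at sample size n.
    z = 1.96 corresponds to a 95% confidence level;
    p = 0.5 is the worst case (maximum variance)."""
    return z * math.sqrt(p * (1 - p) / n)

# A sample of roughly 1,850 yields about a +/-2.3% margin of error,
# comfortably tighter than the common 5% research standard.
print(round(margin_of_error(1850), 3))  # 0.023
```

Under these assumptions, any sample much beyond a couple of thousand responses tightens the margin only marginally, which is why 75,000 surveys is far more than statistically necessary.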
In order to ascertain if age impacted responses to other questions, each respondent was asked for their year of birth.
Respondents were asked to identify their gender.
Surveys were distributed throughout the United States and Canada. Quotas were applied to ensure a representative sample was obtained throughout both countries, with the results as follows:
The majority of respondents (92 percent) were attracted through social media using a series of precisely worded advertisements, and participants were offered a chance to enter a prize draw for an Amazon gift card. These recruitment messages have been refined over almost a decade of trial-and-error testing.
The remaining respondents were recruited through non-social-media channels, predominantly to gather data for questions about social media. We did this to minimize potential skewing of the data, which could occur if questions about social media were posed only to respondents sourced through social media channels. We also observed that the majority of questions were answered in a similar way by all respondents regardless of source, so any concern about respondents being recruited primarily through social media is moot.
The result was a stratified random sample, statistically representative of the populations of Canada and the USA, answering the questions we needed to provide the benchmarking data, also known as the Valuegraphics Database.
To ensure data hygiene, a series of twelve data-cleaning and validity checks was implemented. We removed any surveys that seemed suspicious, to avoid selection bias and data skewing. Following are just a few of the checks we implemented.
· Responses completed more than 25 percent faster than the average completion time.
· Consistent straightlining (e.g., where respondents give the same rating for each question on a matrix of questions).
· Consistent gibberish and/or one-word answers for open-ended questions.
· Consistent selection of only one option for multiple-selection questions.
· Consistent selection of all options for multiple-selection questions.
· Duplicate responses from the same respondent/IP address.
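A few of these checks can be sketched in code. This is a minimal illustration rather than the actual pipeline, and the record fields (`seconds`, `matrix`, `ip`) are hypothetical:

```python
import statistics

def flag_suspicious(responses):
    """Flag responses matching a few of the checks listed above.
    Each response is a dict with hypothetical fields:
    'seconds' (completion time), 'matrix' (ratings given on a
    matrix of questions), and 'ip' (respondent IP address)."""
    avg_time = statistics.mean(r["seconds"] for r in responses)
    seen_ips = set()
    flagged = {}
    for i, r in enumerate(responses):
        reasons = []
        # Completed more than 25 percent faster than the average.
        if r["seconds"] < 0.75 * avg_time:
            reasons.append("speeding")
        # Straightlining: the same rating for every matrix question.
        if len(set(r["matrix"])) == 1:
            reasons.append("straightlining")
        # Duplicate response from the same IP address.
        if r["ip"] in seen_ips:
            reasons.append("duplicate")
        seen_ips.add(r["ip"])
        if reasons:
            flagged[i] = reasons
    return flagged
```

In practice each flag would trigger review or removal of the survey, and the full set of twelve checks covers the remaining cases listed above.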
Data was analyzed using a combination of Microsoft Excel, SPSS, and NVivo, and was explored both as one large data set and in relevant segments, including the Valuegraphics Archetypes. Initial analysis included a technique called sample assessment, which ensures the collected data is representative of the target population. Further, we employed exploratory analysis, a real-time assessment of incoming data to gauge perceptions of the developing data set in advance of analysis of each question.
All open-ended questions were coded, and a thematic analysis technique was used to identify key themes within the data. These themes revealed commonalities and contrasts across and within segments and cohorts. Put another way, this coding of the thematic analysis quantified the qualitative data.
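The quantification step can be illustrated with a toy example: once each open-ended answer has been tagged with theme codes (the segment names and themes below are invented), counting those tags per segment turns qualitative text into numbers that can be compared across cohorts.

```python
from collections import Counter

# Hypothetical theme codes assigned to open-ended answers during
# coding; the segments and themes are invented for illustration.
coded_answers = {
    "segment_a": [["security", "family"], ["security"], ["belonging"]],
    "segment_b": [["belonging"], ["belonging", "family"]],
}

# Count how often each theme appears within each segment.
theme_counts = {
    segment: Counter(code for answer in answers for code in answer)
    for segment, answers in coded_answers.items()
}
print(theme_counts["segment_a"]["security"])  # 2
```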
Analysis of each question was performed using methods relevant to the format of the question, including, but not limited to, mean ratings, spreads, distributions, and percentages. All eleven-point scale (zero to ten) questions were both averaged and categorized as follows:
· Ratings of nine or ten: Extremely, Strongly agree, etc.
· Ratings of seven or eight: Somewhat agree
· Ratings of four, five, or six: Neutral, Average, etc.
· Ratings of three or less: Unlikely, Strongly disagree, etc.
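The bucketing scheme above is simple enough to express directly; a sketch:

```python
def categorize(rating):
    """Map an eleven-point (0-10) rating to the bands above."""
    if not 0 <= rating <= 10:
        raise ValueError("rating must be between 0 and 10")
    if rating >= 9:
        return "Strongly agree"
    if rating >= 7:
        return "Somewhat agree"
    if rating >= 4:
        return "Neutral"
    return "Strongly disagree"
```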
Responses to each question were compared to the full Valuegraphics Database of responses to identify those that were most similar.
Similarity was measured by averaging the differences in responses among respondents who considered their primary values extremely important, giving them a rating of nine or ten. By searching for the ten key variables that produced the closest alignment of other responses, the ten Valuegraphics Archetypes were revealed.
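One plausible reading of that similarity measure is a mean absolute difference between a respondent's ratings and a group's average ratings. This sketch, with invented data shapes, finds the closest-matching archetype for a single respondent under that assumption:

```python
def mean_abs_diff(ratings, benchmark):
    """Average absolute difference between two rating vectors."""
    return sum(abs(a - b) for a, b in zip(ratings, benchmark)) / len(ratings)

def closest_archetype(ratings, archetype_means):
    """Return the archetype whose mean ratings are closest.
    `archetype_means` maps archetype name -> mean rating vector;
    both names and vectors here are hypothetical."""
    return min(
        archetype_means,
        key=lambda name: mean_abs_diff(ratings, archetype_means[name]),
    )
```

The actual clustering behind the ten Archetypes is not specified beyond the description above; this is only an illustration of the closest-alignment idea.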
The responses of the ten primary Valuegraphics Archetypes to all questions, on all topics, were similar to each other with as little as 2 percent disagreement. The respondents within each Valuegraphics Archetype resemble each other on all things to a remarkable degree, irrespective of age and other traditional demographic categories. The discovery of these ten radically similar archetypes was our light bulb moment: we had proven that profiling target audiences based on what people value is far more powerful than the old-fashioned demographic methods used in boardrooms everywhere today.
Before we do anything, we need to understand what the product, service, or brand in question is all about. We have a questionnaire to get things rolling, but the intake conversation will yield all sorts of information that is specific to each situation.
Next, we create what we call the Unlocking Survey to meet the objectives of the research by doing two things.
First, it collects the data we need to profile the demographics and psychographics of your target audience. The responses to this part of the survey give us what we need to construct a miniature model, otherwise known as a statistical representation, with what statisticians refer to as a 95% level of confidence and a margin of error around 3.5%. This means we are 95% confident that the findings are accurate to +/- 3.5%. Research standards allow a margin of error as high as 5% and still be considered valid, so our 3.5% margin is particularly robust.
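The relationship between margin of error and required sample size can be sketched with the same standard formula, assuming a simple random sample, 95% confidence, worst-case p = 0.5, and no finite-population correction:

```python
def required_sample_size(margin, z=1.96, p=0.5):
    """Approximate respondents needed for a given margin of error
    at confidence level z (round up in practice)."""
    return z * z * p * (1 - p) / margin ** 2

# A 3.5% margin at 95% confidence needs about 784 respondents;
# relaxing to a 5% margin needs only about 385.
print(round(required_sample_size(0.035)))  # 784
```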
Second, the Unlocking Survey collects the information we need to unlock the Valuegraphics Database and point to the Valuegraphics Profile of your target audience. This is the information you need to motivate your audience to take action.
Respondents who match your target audience description are attracted to the Unlocking Survey through a variety of advertisements placed on carefully chosen social media channels. For each survey, we use a minimum of ten different advertisements to attract respondents. With more than ten years of trial-and-error testing behind us, we’ve figured out exactly how to motivate people to click through to the questions.
Once we have the required number of responses, plus an extra 10% just for good measure, we clean and scrub the data until it shines. In total, 13 different validity checks are done, and any surveys that seem suspicious are removed.
The first step in analyzing the data is to create a benchmark. So we look at all the responses together and see what we’ve got our hands on, regardless of how interested the respondents were in the product or service the survey was about. This provides a series of metrics to examine, and from which to extract the most promising respondents: otherwise known as the target audience.
If, for example, we saw that 25% of all survey respondents said they were likely to use the product or service, that’s a good place to start. If, however, some sub-segment of the respondents (let’s say women aged 25-34) said they were not just likely but extremely likely to use the product or service, those individuals might be the target audience we are looking for.
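That winnowing step can be sketched as follows; the field names, segments, and the "rating of nine or ten means extremely likely" cutoff are all assumptions made for illustration:

```python
from collections import defaultdict

def most_promising_segment(respondents):
    """Find the sub-segment with the highest share of 'extremely
    likely' responses (taken here as ratings of nine or ten).
    Each respondent is a dict with hypothetical fields 'gender',
    'age_band', and 'likelihood' (0-10)."""
    groups = defaultdict(list)
    for r in respondents:
        groups[(r["gender"], r["age_band"])].append(r["likelihood"])

    def extreme_share(ratings):
        return sum(x >= 9 for x in ratings) / len(ratings)

    return max(groups, key=lambda seg: extreme_share(groups[seg]))
```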
Once we’ve identified the key variables that define the target audience, we can build a profile of the members of that group. The Unlocking Survey asks basic demographic questions about age, income, and gender. It also poses a few key psychographic questions about consumer behaviour, like “How often do you use this product?” and “What would it take to get you to switch to another brand?”
In addition to demographic and psychographic questions, the Unlocking Survey asks a few questions (seven, to be exact) which allow us to unlock the Valuegraphics Database.
Remember, the Valuegraphics Database contains deep insights from 75,000 surveys on what people across Canada and the USA value most: what they want, need, and expect. It is an enormous profiling tool that identifies the core values that motivate people to make decisions about all things, large and small.
We use what we’ve learned from the Unlocking Survey to extract the relevant information for your target audience from the database and create a custom Valuegraphics Profile.
On Wall Street or Main Street, it’s so much better to face a client with strategies based on real audience data, instead of demographic stereotypes that just don’t hold water anymore.