The Best Overall paper, Going from Qual to Quant with Visual Content, presented by Chris Davies and Adhil Patel, talks about image analytics. Visuals are the new language of consumers, and as pictures become more prevalent - both on social media and through research mechanics such as mobile diaries - how do we begin to analyse large volumes of images in a quantitative fashion, moving away from qualitatively cherry-picking a few of them to illustrate a point? The paper outlines an approach that integrates machine learning and crowd-coding to do this at scale.
Chris then went on to talk about Making Social Media Analytics Scalable. His paper discussed creating new hybrid methodologies that integrate and fuse traditional research (shorter, smarter surveys) with passive Big Data - in this case, specifically social media. He explored this through a case study measuring loyalty and satisfaction amongst banking customers in South Africa.
A brief overview of the papers:

Going from Qual to Quant with Visual Content
Winner of Best Overall Paper, the TNS Innovation Award and the People’s Choice Award
Presented by Chris Davies, Innovation Partner, TNS South Africa and Adhil Patel, Head of Thought Leadership, TNS Global Brand Equity Centre
Visuals are the new language of consumers, and we find ourselves increasingly living in a world filled with images enabled by smart mobile devices that are always on. We communicate more and more by sharing pictures and videos, with entire social media platforms emerging that are dedicated to this practice - Instagram was the fastest growing social media service in South Africa in 2015. Better cameras, more storage space and bigger screens with higher definition on our mobile devices all point to and encourage the move from a text-heavy environment to one dominated by visuals.
Yet the market research industry is not prepared for the deluge of visual content that is on the way. As an industry, we tend to treat pictures in a purely qualitative way; we collect a few pictures through our immersions or we might ask for some images to accompany consumption diaries. But we tend not to have approaches for dealing with them adequately, and at best, we insert a few pictures to add colour to our reports. At worst, we ignore them completely.
This paper outlines an approach for dealing with images in a quantitative fashion.
Turning a dataset consisting purely of thousands of pictures into the kind of data that we can filter and graph and analyse with our traditional skillset requires that we tag each and every image with relevant attributes, accurately. Chris and Adhil recommend a hybrid approach that leverages computer vision and machine learning, but also keeps humans in the loop to ensure correct and appropriate tagging.
Computer vision refers to the ability of machines to identify objects and concepts in visuals, and more importantly, to interpret them. This capability shows a lot of promise and will be the lynchpin in the process, but there is still room for improvement. As these computer vision algorithms get better, through iterative machine learning, the workflow can lean harder on automatic coding - but until then, humans will have to pick up the slack in the shape of crowd-coding and specialist research teams.
Crowd-coding refers to the use of large groups of people connected by specific platforms (like Amazon Mechanical Turk or CrowdFlower) in order to complete small tasks at scale in a short timeframe, within a small budget. The ability to leverage a crowd as required, in a quality control role, and not have to cover large ongoing overheads, makes this perfect as an interim solution.
The last piece of the puzzle is the researcher layer, which brings together the two parts of the process just discussed, and shapes the analysis to better meet the business need. The researcher continues to be an important part of the workflow, most certainly in the short term.
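The machine-plus-crowd workflow described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the tag data is a hard-coded stand-in for a real computer vision API response, and the 0.8 confidence threshold is an assumed value chosen for the example.

```python
# Sketch of the hybrid tagging workflow: computer vision tags every image,
# and low-confidence tags are routed to human crowd-coders for verification.
# Scores below are simulated stand-ins for a real vision API response.

CONFIDENCE_THRESHOLD = 0.8  # assumed cut-off; below this, a human verifies

def route_tags(machine_tags):
    """Split machine-generated tags into auto-accepted ones and
    those needing crowd verification."""
    accepted, needs_review = [], []
    for image_id, label, confidence in machine_tags:
        if confidence >= CONFIDENCE_THRESHOLD:
            accepted.append((image_id, label))
        else:
            needs_review.append((image_id, label))
    return accepted, needs_review

# Simulated output from a computer vision service
machine_tags = [
    ("img_001", "crowd", 0.95),
    ("img_002", "stage lights", 0.62),  # ambiguous -> sent to crowd-coding
    ("img_003", "beverage", 0.88),
]

accepted, needs_review = route_tags(machine_tags)
print(f"Auto-tagged: {len(accepted)}, sent to crowd: {len(needs_review)}")
```

As the vision algorithms improve, the threshold can be lowered and the crowd-coding queue shrinks - which is exactly the shift from human-heavy to machine-heavy coding the paper anticipates.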
The paper uses a dataset collected from the social media platforms Instagram and Twitter, leveraging posts relating to the music festival Ultra, and tests a number of computer vision services to establish what is truly possible in this space at the moment.

Making Social Media Analytics Scalable
Winner of the Best First-time Speaker Award
Chris Davies, Innovation Partner, TNS South Africa
Over the last several years, significant advancements in the way we (as an industry) conduct research have been driven by two imperatives: firstly, to shorten questionnaires across the board, and secondly, to get closer to those crucial consumer moments as they happen in order to really understand the decision-making process. While both are absolutely necessary for the successful evolution of research, they also have their own challenges: shorter, mobile surveys mean less design flexibility, but even more importantly, we cannot truly understand consumer decision-making through explicit questioning alone. Thankfully, the proliferation of passive Big Data sources means that we now have a whole new way to understand people, all underpinned by actual behaviour. Social media data is a great example – characterised as it is by in-the-moment, first-hand accounts, but also by easy access when it comes to collecting the data.
In many ways social media represents a spontaneous consumer feedback loop that is vaster and more sophisticated than anything we’ve had access to before – if we know how to harness it. And therein lies the challenge: as with any Big Data source, it is chaotic, unstructured, and contains a lot of irrelevant noise. Separating out this noise by knowing where to focus is tricky to say the least. And this has, in no small part, contributed to the continuing sense that social media analytics has underwhelmed in recent years. This is not to say that brands are unaware of the medium’s importance – many have successfully cultivated an online relationship with their customers where questions or grievances can be aired. But in many instances, this is where brands stop – a customer relationship strategy is enough for them to think they are “doing” social media analytics.
This is where we need to help our clients go further. Stan Sthanunathan (Unilever Senior VP, Consumer & Market Insights), at the recent Market Research Society (MRS) Conference, spoke about the 10 commandments that agencies and brands need to take to heart – one of them being the need to mine the information gleaned from social media. In this sense, mining means going beyond customer relationship management, as well as the standard surface-level metrics of followers, likes and reach. What it really means is finding ways to leverage social media data as a robust research input in itself, something that contributes to other spheres of research to further our understanding of how to improve brand performance and positioning.
That being said, we should not shy away from the fact that social media is not a holistic data source: it cannot be all things to all research problems, nor should we try to position it as such. On top of this, there is still the issue of being able to efficiently identify the conversations that are going to be truly relevant, rather than just blindly jumping in (as has largely been the case up until now). For this, we should be looking to more traditional research mechanisms to provide the lens. A short, smartly designed survey can provide the initial insight into the key attributes, with subsequent social media deep-dives vividly bringing those attributes to life in a way that is structured and actionable. What is being suggested here is the creation of a new hybrid methodology, combining the best of research expertise and story-telling with the insight now made available by these incredibly rich and powerful behavioural data sources.
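The survey-as-lens idea above can be illustrated with a small sketch. Everything here is a toy stand-in: the survey counts and posts are invented, and in practice the posts would come from a social listening API rather than a hard-coded list; only the two-step shape (survey surfaces attributes, attributes filter the social stream) reflects the approach described.

```python
# Illustrative two-step hybrid: a short survey identifies the attributes
# that matter most, which then focus the noisy social media stream.
from collections import Counter

# Step 1: hypothetical survey results - how often respondents raised
# each attribute when discussing their bank
survey_mentions = Counter({
    "service speed": 48,
    "app reliability": 41,
    "fees": 12,
    "branch decor": 3,
})
# Keep only the top attributes as the lens for the deep-dive
key_attributes = [attr for attr, _ in survey_mentions.most_common(2)]

# Step 2: filter the (toy) social stream down to relevant conversations
posts = [
    "Queued 40 minutes today - service speed is a joke",
    "Love the new branch decor!",
    "App crashed again during a payment. App reliability matters.",
]
relevant = [p for p in posts
            if any(attr in p.lower() for attr in key_attributes)]
print(f"{len(relevant)} of {len(posts)} posts match the survey-derived lens")
```

The point of the sketch is the direction of flow: the structured survey decides where to look, and the unstructured social data then supplies the vivid, behavioural detail.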