The standout AI capability spreading across industries – advertising, technology, creative and publishing alike – is generative AI: the likes of ChatGPT, DALL-E, Claude and Synthesia. And the AI creations emerging from these tools – deliberately or not – are carving out a niche for themselves in the medium to long term. Adobe has even recently announced a generative AI subscription that will remove watermarks from generated content, as well as defend subscribers in court and pay damages if they are sued.1 These seemingly small changes stand to affect the entire industry, whether directly or consequentially.
As always, while the rise of new technological capabilities – and their potential and growth trajectory – is exciting, it is not without challenges and dissenting voices. The main risk of generative AI's rapid expansion across the digital advertising ecosystem emerges when we look seriously at brand safety, and particularly at brand suitability, an issue that affects publishers as well.
One of the advantages of generative AI is the ability to produce more content. Want an article on any topic? In theory, it could be just a ChatGPT request away, opening up the ability to generate hundreds of articles in the time it would take a person to write one from start to finish. But this "advantage" also poses one of the biggest risks for brands and publishers: the greater the use of AI-generated content, especially if it is not properly reviewed and vetted, the greater the risk to brand suitability and likeability. While the promise of higher volumes of content seems attractive from an advertising-space perspective, brands may find it harder to ensure their creative is displayed alongside the right content. That puts not just brand image at risk, but also publishers, who in turn need to protect their own image and their relationships with advertisers in order to secure revenue.
So, what's to be done? In the podcast "Diary of a CEO", Mo Gawdat mentions the inevitability of AI becoming smarter than humans over time.2 While it may sound like science fiction, an article published in Scientific American in March of this year estimated, based on a combination of five subtests, that ChatGPT has an IQ of 150.3 For context, Albert Einstein is estimated to have had an IQ of 160. It may not be the highest score humans have achieved, but the fact that AI is still largely in its infancy and getting smarter fast suggests it won't be long before those scores look a little different – and that the time may have come to fight AI with AI. At Invibes, for example, we've harnessed the power of AI to create technological solutions that protect and ensure brand suitability. One of these is our Sentiment solution.
Sentiment is a tool that increases brand suitability and appropriateness by ensuring that ads are displayed within positive content: it measures the sentiment of each article and quantifies it as a "Page Sentiment Score".
How does it work?
Each page of content is processed through our NLP (Natural Language Processing) model, which accounts for factors such as the words used, their order, the context in which they appear and their synonyms. Page Sentiment Scores are then calculated by combining natural language processing with machine learning. The calculation considers a number of elements, such as the wording of the article, the sentence structure and the emotions expressed through the content. Once all these elements have been combined and analysed, the article is assigned its Page Sentiment Score, which ranges from 0 to 100 and reflects the overall tone of the article, from very negative to very positive.
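To make the idea of a 0-to-100 page score concrete, here is a deliberately simplified sketch. Invibes' actual model is proprietary and far more sophisticated; this toy version uses a small hand-made sentiment lexicon and basic negation handling (both invented for illustration) purely to show how word-level signals can be aggregated and rescaled into a single score.

```python
# Toy illustration only: a tiny hypothetical lexicon mapping words to
# sentiment weights in [-2, 2]. A production model would instead learn
# these signals from data and account for word order, context and synonyms.
LEXICON = {"great": 2, "win": 1, "growth": 1, "crisis": -2, "fraud": -2, "loss": -1}
NEGATORS = {"not", "no", "never"}


def page_sentiment_score(text: str) -> float:
    """Map an article's text to an illustrative 0-100 sentiment score."""
    tokens = text.lower().split()
    raw, scored = 0.0, 0
    for i, tok in enumerate(tokens):
        word = tok.strip(".,!?;:")
        if word in LEXICON:
            weight = LEXICON[word]
            # Flip polarity if the previous token negates it ("not great").
            if i > 0 and tokens[i - 1].strip(".,!?;:") in NEGATORS:
                weight = -weight
            raw += weight
            scored += 1
    if scored == 0:
        return 50.0  # no signal found: treat the page as neutral
    avg = raw / scored                       # average weight in [-2, 2]
    return round((avg + 2) / 4 * 100, 1)     # rescale to the 0-100 range
```

For instance, a page full of positive vocabulary lands high on the scale, while one dominated by words like "fraud" or "crisis" lands near the bottom, mirroring the very-negative-to-very-positive range described above.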
Based on this ranking, low-scoring articles can be identified and excluded from campaign display – even if they contain no blacklisted words – when a campaign is particularly sensitive, ensuring the campaign appears only within appropriate and relevant content. With this AI innovation, brands can be protected against a potentially less regulated landscape of content that does not meet brand suitability standards. This is not to say that, with the rise of AI-generated content, more tools will not be developed at the editorial level to ensure greater article integrity, especially in today's age of misinformation. But until then, Invibes' proactive AI solutions are key to maintaining brand suitability and likeability.
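The exclusion logic described above can be sketched as a simple two-stage filter. This is an assumption-laden illustration, not Invibes' implementation: the `Page` record, the `min_score` threshold of 40 and the blocklist are all hypothetical, but the sketch shows the key point – a page can be excluded on score alone, even when it contains no blacklisted term.

```python
from dataclasses import dataclass


@dataclass
class Page:
    """Hypothetical record for a candidate page in a campaign."""
    url: str
    text: str
    sentiment_score: float  # 0-100 Page Sentiment Score


def eligible_pages(pages, blocklist, min_score=40.0):
    """Return pages suitable for ad display under this campaign's rules."""
    blocked = {w.lower() for w in blocklist}
    result = []
    for page in pages:
        words = {w.strip(".,!?;:") for w in page.text.lower().split()}
        if words & blocked:
            continue  # stage 1: contains a blacklisted term
        if page.sentiment_score < min_score:
            continue  # stage 2: tone too negative, even with no blocked words
        result.append(page)
    return result


pages = [
    Page("a.example/markets", "markets show strong growth", 80.0),
    Page("b.example/scandal", "fraud scandal erupts", 75.0),   # blocked word
    Page("c.example/outlook", "gloomy outlook ahead", 20.0),   # score too low
]
suitable = eligible_pages(pages, blocklist=["fraud"])
```

Here only the first page survives: the second is caught by the keyword blocklist, and the third is excluded purely on its low sentiment score, which is exactly what makes the approach stricter than keyword filtering alone for sensitive campaigns.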