#GartnerSYM: AI (Artificial Ignorance) mistakes to avoid
Gartner's senior director analyst Alys Woodward began with the ‘chihuahua or muffin’ meme.
She admitted that humans can usually tell the difference easily enough, but when a data scientist ran the images through free-to-access image recognition apps, her resulting article showed the results were far less clear-cut.
Today, @JakeSearcy is teaching our @uodatasci and @BGMP_UO students about convolutional neural networks for image identification. Can YOU tell the difference between a blueberry muffin and a chihuahua? #MachineLearning #NeuralNetworks pic.twitter.com/0ETaPFVzfq
— BGMP (@BGMP_UO) September 25, 2019
She first spoke of these examples at a workshop held in March but admits that they’re already starting to look old-fashioned to her – time moves fast.
Facial recognition gone wrong
Next, she shared the example of Dong Mingzhu’s face being identified as that of a jaywalker through video analytics. The image was shared on a public display screen, which brings in the concept of privacy and use of personal data, but the bigger issue is that her face was printed on the side of a moving bus at the time.
Gartner analyst @alyswoodward takes the stage at #GartnerSYM #Capetown to share learnings from the biggest #AI failures. pic.twitter.com/PXlbkCP73t
— Gartner IT Symposium/Xpo™ (@Gartner_SYM) September 18, 2019
The technology had to be corrected as a result.
The best "benefit" I read was Chinese public toilets identifying you and rationing the toilet paper, because some people used "excessive amounts". (2nd place was business woman Dong Mingzhu's ticket for jay-walking, because ads with her face appeared on buses).
— Stuart Neilson (@StuartDNeilson) March 24, 2019
Virtual voice assistance void
Woodward also spoke of the rise in voice search but mentioned AI can still go awry in this regard, especially in the case of smart devices taking input from voices on the TV or even children.
My new hero! Replacing the young girl who had Alexa get her a doll house & cookies: https://t.co/PnnUXz0LLj https://t.co/fFeV33MdX8
— Wayne Sadin (@waynesadin) August 14, 2019
Ha! News Anchor says, "Alexa order me a doll house" on air. Viewers Alexa's begin ordering doll houses cc @itschappy https://t.co/th7nrZ8k7f
— Saunder Schroeder (@SaunderSchroed) January 7, 2017
The robots are coming… but they’re not quite ready for us
A common 4IR fear is that robots will take our jobs. While this may be true of those roles that are more easily automated, there’s much to be said for what humans bring to the world of work.
Woodward said that a batch of robots had already been hired and swiftly fired as assistants at Japan’s Henn na Hotel, as they were both annoying and incompetent.
The velociraptor-type robots had difficulty making photocopies when helping at the concierge desk, while the in-room voice assistants would ask “how can I help?” on hearing snoring in hotel rooms.

And of course, one of the main differences between human and artificial decision-making is that we factor in aspects of the context that may have been left out of the AI programming.
Diversity and bias play a role in AI
For example, Woodward said that a robotic passport checker rejected an Asian man’s photo as it thought his eyes were closed.
This implies that there was a lack of diverse data fed into the program, making it less applicable to a diverse population.
There was similar diversity bias in Amazon’s AI recruitment tool, which processed the CVs or resumes of the people the company had already hired and was asked to ‘find similar’.
Unfortunately, the program was found to be rejecting female applicants’ resumes – bias had crept into the process, presumably unintentionally. So while it was a good idea in concept, many lost trust in the process and the program was scrapped.
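The failure mode here is worth making concrete. The sketch below is a deliberately simplified toy, not Amazon’s actual system: a ‘find similar’ scorer trained only on the CVs of past hires will reward whatever vocabulary those hires happened to share, so a historical skew in hiring becomes a skew in the scores. All names and CV text are hypothetical.

```python
from collections import Counter

def train_scorer(hired_cvs):
    """Learn word weights from CVs of past hires.

    Toy illustration of 'find similar': if historical hires skew one
    way, the learned weights inherit that skew.
    """
    counts = Counter(word for cv in hired_cvs for word in cv.lower().split())
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

def score(weights, cv):
    """Score a new CV by how much it resembles past hires."""
    return sum(weights.get(word, 0.0) for word in cv.lower().split())

# Hypothetical historical data skewed toward one group's vocabulary.
past_hires = [
    "captain of the men's rugby team, python developer",
    "men's chess club president, java developer",
]
weights = train_scorer(past_hires)

# Two equally qualified candidates: the one whose CV echoes the
# historical majority scores higher purely because of the skew.
a = score(weights, "men's chess club member, python developer")
b = score(weights, "women's chess club member, python developer")
assert a > b  # the model has learned the historical bias, not merit
```

Nothing in the code mentions gender explicitly; the bias enters entirely through the training data, which is why it can go unnoticed until someone audits the outputs.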
There have also been cases of facial personality analytics startups claiming to predict personality attributes and even IQ, and another AI program detecting sexual orientation – both just from photos of faces.
These examples are both quite extreme on the intrusiveness scale and not even accurate, says Woodward.
The role of ethics in AI
To learn from these mistakes in developing the AI future, Woodward said to set AI ethics principles and document your algorithms – this is part of transparency, as well as protecting your organisation.
You’ll need to keep customer data safe and secure and respect customers’ privacy, so the stakes are really high. With analytics, if you only collect the data you already know what to do with, you could potentially miss out.
Often, AI mistakes arise from ignorance rather than intention, so ensure you’re putting as much diverse human intelligence into the thinking process as possible. Woodward also said to avoid groupthink and ensure there’s diversity in your existing team, so that you don’t miss something potentially offensive down the line.
Ensure your business is lawful and respectful from the start – this translates to making sure your opt-out process is as simple as possible.
In conclusion, Woodward said that principles lead but rules follow in AI. We need to focus on augmenting humans rather than replacing them and raise awareness of deficiencies and biases in the system.
Watch the #GartnerSYM hashtag for further coverage of the Gartner Symposium.