Undoing the business of bias

As we delegate more and more of our business decisions to machines, algorithms and artificial intelligence, we need to aim to programme empathetic, ethical robots that enhance, rather than threaten, humanity.
Source: pixabay.com

In particular, we need to guard against the reckless use of “weapons of math destruction” - or WMDs.

Weapons of Math Destruction

Weapons of Math Destruction is a term coined by former hedge-fund data scientist and author Cathy O’Neil. In her best-selling book of the same title, she explains that big data-driven algorithms become “Weapons of Math Destruction” when they combine mass scale and the power to disrupt lives with opacity.

When algorithms that make decisions impacting human lives are opaque (algorithms used in recruitment or loan applications, for example, or to determine which individuals end up on government terrorist watch lists), there is a potential for serious bias to creep into their decision-making processes, without the victims of such biases even being aware of what they are being penalised for, let alone why.

For example, MasterCard has developed algorithmic technology that can tell how fat you are based on your card purchase history. If this data were shared with airlines, they could offer personalised, preferential ticket pricing to people the algorithm flags as “thin”. This means a person flagged as “fat” by the system would only be shown higher-priced options, and might never find out that they were a victim of discrimination.

Algorithms are only as good (or unbiased) as the people who programmed them.

Likewise, machine learning systems are only as good as the data sets that are fed into them. If a human recruiter consciously or unconsciously discriminates against a particular ethnic group, and a machine-learning algorithm is trained to imitate her decision-making process, that algorithm will perpetuate her bias on a massive scale.
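
As a concrete illustration, here is a minimal Python sketch (entirely hypothetical: the data, thresholds and “learning” rule are invented, and this is not any real recruitment system) of how a model that imitates biased historical decisions reproduces that bias automatically:

```python
# A hypothetical sketch: a "model" that learns hiring thresholds from
# biased historical decisions and then applies the bias at scale.
# All names, numbers and the recruiter rule are invented for illustration.
import random

random.seed(42)

def biased_recruiter(skill, group):
    # The historical human recruiter holds group "B" to a higher bar.
    threshold = 60 if group == "A" else 75
    return skill >= threshold

# Build a synthetic history of past hiring decisions.
history = []
for _ in range(10_000):
    skill = random.uniform(0, 100)
    group = random.choice(["A", "B"])
    history.append((skill, group, biased_recruiter(skill, group)))

# A naive "machine learning" step: per group, estimate the lowest skill
# score that was ever hired. The model has no notion of fairness; it
# simply imitates the labels it was given.
learned_threshold = {
    g: min(skill for skill, group, hired in history if group == g and hired)
    for g in ("A", "B")
}

print(learned_threshold)
# Prints thresholds near {"A": 60, "B": 75}: the recruiter's bias is now
# baked in, and will be applied to every future applicant automatically.
```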

And when that built-in bias is inevitably found out, a public backlash is sure to follow. Amazon learned this lesson the hard way when it was forced to scrap one of its AI recruitment tools after the algorithm was found to be biased against female candidates.

How to build unbiased bots

Companies and recruiters need to work consciously to prevent bias from creeping into automated recruitment, staff-management and customer-processing systems. After all, McKinsey research shows that diverse company cultures are correlated with better company financial results.

One potential approach to rooting out built-in bias comes from the Princeton Web Transparency and Accountability Project (Web TAP). Web TAP developed software that creates digital profiles based on a wide variety of ethnicities, demographics and backgrounds, runs all these diverse profiles through decision-making and machine-learning algorithms, and monitors the results for consistency, picking up on the intentional and unintentional biases hidden within.
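
The auditing idea can be sketched in a few lines. The Python snippet below is a generic illustration, not Web TAP’s actual software; the loan model and its hidden penalty are invented. It varies only a protected attribute between otherwise identical profiles and checks whether a black-box decision function treats them consistently:

```python
# A generic sketch of the auditing idea (not Web TAP's actual software):
# vary only a protected attribute between otherwise identical profiles
# and check whether a black-box decision function treats them consistently.
# The loan model and its hidden penalty are invented for illustration.

def black_box_loan_model(profile):
    # The opaque system under audit. In a real audit its internals
    # would be unknown; here it secretly penalises one group.
    score = profile["income"] / 1_000 + profile["years_employed"] * 2
    if profile["ethnicity"] == "group_b":
        score -= 15  # hidden bias
    return score >= 50

# Hold every legitimate attribute fixed; vary only the protected one.
base_profile = {"income": 45_000, "years_employed": 4}

results = {}
for ethnicity in ("group_a", "group_b"):
    profile = dict(base_profile, ethnicity=ethnicity)
    results[ethnicity] = black_box_loan_model(profile)

if len(set(results.values())) > 1:
    print("Inconsistent outcomes across protected groups:", results)
else:
    print("Consistent outcomes:", results)
```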

Transparency is the key to reducing unintentional bias in automated systems. Once you understand why an algorithm makes the decisions it does, you can work to fix it.
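
As a small illustration of the point (using invented weights, not a real credit or hiring model), a simple, interpretable scoring model lets you read off exactly which feature drove a decision:

```python
# A hypothetical, fully transparent scoring model: because it is a simple
# weighted sum, each feature's contribution to a decision can be read off
# directly. Weights and features are invented for illustration.
weights = {"income_thousands": 1.0, "years_employed": 2.0, "is_group_b": -15.0}

def explain(profile):
    # Return each feature's contribution to the final score,
    # exposing exactly why the decision came out the way it did.
    return {f: w * profile.get(f, 0) for f, w in weights.items()}

applicant = {"income_thousands": 45, "years_employed": 4, "is_group_b": 1}
for feature, contribution in explain(applicant).items():
    print(f"{feature:>18}: {contribution:+.1f}")
# The -15.0 contribution attached to the protected attribute is now
# plainly visible, so it can be challenged, removed or corrected.
```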

At the end of the day, it is the creators and owners of machines - us - not the machines themselves, that we should be afraid of. Machines can do whatever we programme them to do, for good or ill.

About Bronwyn Williams

Futurist, economist and trend analyst. Partner at Flux Trends.