How Africa can participate in the global conversation on AI ethics

Advances in artificial intelligence (AI) are fast shaping the world. A recent PwC study estimates that by 2030, AI could add 14% to global GDP, a potential $15.7tn contribution to the global economy, worth more than the commercial real estate, oil and gas, insurance and automotive industries combined.
Source: 123RF

In addition, the use cases of AI are constantly evolving, so this may be just the beginning. A study by Google found 2,602 use cases of AI for social good, spanning agriculture, healthcare, security, education, manufacturing, transportation and the management of business processes. With the multifarious uses of this technology comes the burden of guarding against malicious use, and this is where the ethics of AI comes into play.

Why is the ethics of AI important?

If you have ever watched a sci-fi movie about artificial intelligence, chances are that you have nursed a suspicion of a possible AI uprising. Fears like this have given rise to a field of inquiry known as the "ethics of artificial intelligence", which investigates and suggests guidelines for AI use.

Although dystopian depictions of AI, like the ones we see in most sci-fi movies, might be far-fetched, they raise serious questions about the ethics and use of artificial intelligence. Addressing these questions is important because with the increasing use of AI comes the potential for abuse and exploitation, and a need for governance.

In this article, we discuss how Africa can participate in the conversation around the ethics of artificial intelligence, examining policies, regulations and impediments to engagement with this technology.

Additionally, as AI systems are built and deployed across various industries, some form of ethics needs to be embedded in these systems.

It is here that the ethics of AI becomes most practical, as developers and AI ethicists seek ways to build ethically aligned systems.

Generally, the ethics of AI focuses on the socio-economic and legal impact of AI, and on the moral and ethical issues surrounding the use of these systems. We consider these areas of focus together below.

Why Africa needs to get in on the conversation

At the current pace, there is still a lot of work to be done before Africa can gain a competitive advantage in technological advancement and use. Many countries on the continent face a large infrastructural deficit, from power to internet access, as well as a gap in science and technology literacy, leaving them lagging behind other countries.

However, it is not all gloomy. Africa is currently ranked as the world’s fastest-growing continent for software developers, with a good number focused on AI. Without sufficient skin in the game, Africa may end up being technologically colonised by big tech. Moreover, it could mean that emerging technologies are not built with contextual relevance for Africans.

Let us take a look at some socio-ethical issues as they are likely to affect Africa.

AI’s impact on jobs: The impact of AI on jobs, especially the mundane and repetitive types, is likely to be colossal. Blue-collar workers are at particular risk of being replaced by AI-powered systems.

As companies seek to increase their profit margins and cut overhead costs, many people without high-tech skills are likely to lose their jobs as the digital skills gap widens.

From a socio-ethical perspective, policymakers and corporations alike need to put in place socially responsible policies that protect jobs while creating opportunities for upskilling.

Economic inequalities: An adverse consequence of the increasing adoption of AI technologies is the widening of economic gaps. Take the ongoing pandemic as an example: during the hard lockdowns imposed by many countries, jobs that required the physical presence of workers suffered tremendously, leading to retrenchments as companies could not meet their financial obligations.

The reverse seems to have been the case for people who could work remotely; they simply moved their work to the virtual space. In addition, considering that a large number of people on the continent cannot access the basic technological infrastructure required to participate in the digital economy, the social and economic gap continues to widen.

Responsible socioeconomic policies would require viable approaches to bridging the economic gap, ensuring that everyone has equitable access to the digital economy by plugging technology into the informal sectors.

Privacy and surveillance: There is a need for African researchers, developers, engineers and policymakers to lead conversations around AI use on the continent, providing contextually sound policies and use cases for the technology rather than depending on the global north to create blueprints for us to copy.

Consider, for instance, the EU's General Data Protection Regulation (GDPR); there is no comparable continent-wide data protection policy in Africa, making it easy for data to be appropriated maliciously.

Without a concrete data protection policy from regional governments in Africa, big tech companies like Facebook, Apple, Amazon and Google will not be obliged to implement safeguards and ensure the fair and ethical use of personal data obtained from users on the continent.

Malicious use of AI: Beyond safeguarding privacy, regional bodies like the African Union need to put together guidelines for the use and deployment of AI technology by member states.

Considering the controversy around lethal autonomous weapons systems and the rise in terrorism in the Sahel region, there is an urgent need to put regulations in place to check the malevolent use of AI technology in the creation of lethal weapons.

Algorithmic biases: This is perhaps one of the major reasons to advocate for Africa’s participation in conversations on the ethics of AI. Algorithmic bias refers to systematic and repeatable biases or errors in AI systems that lead to unfair outcomes. Notable examples include biases in recruiting tools and applicant tracking systems (ATS), in loan approvals, in housing allocation and the like, which have often been shown to be skewed against people of colour.

Some reasons for this type of bias include historical human bias, which leads to distorted training data for AI systems. Others include incomplete or unrepresentative data, in most cases due to the under-representation of people of colour in the data.

Recall the recent Google debacle around its Vision AI, which produced racist results. In other cases, AI-engineered hand dryers in public washrooms have been found to be unable to detect the hands of Black people.

The ethics of AI requires that we create systems and algorithms that are fair and equitable; moreover, it demands that developers, roboticists and engineers check their cognitive and implicit biases while building these systems for wider adoption. Without the participation of African engineers and developers, addressing such biases will remain a challenge.
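To make the idea of a fairness check concrete, below is a minimal sketch in Python of one common audit: comparing a system's approval (selection) rates across demographic groups. The decision data, group labels and the "parity gap" shown here are hypothetical and purely illustrative, not a reference to any specific deployed system.

from collections import defaultdict

def selection_rates(decisions):
    # decisions: iterable of (group, was_approved) pairs from an automated system
    totals, approved = defaultdict(int), defaultdict(int)
    for group, was_approved in decisions:
        totals[group] += 1
        approved[group] += int(was_approved)
    # approval rate per demographic group
    return {g: approved[g] / totals[g] for g in totals}

# Hypothetical loan decisions produced by an automated screening tool
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", True), ("group_b", False), ("group_b", False),
]

rates = selection_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates)                      # {'group_a': 0.75, 'group_b': 0.25}
print(f"parity gap: {gap:.2f}")   # a large gap (here 0.50) flags a pattern worth auditing

A check like this does not prove discrimination on its own, but a large gap between groups is exactly the kind of signal that African developers and regulators would want to investigate before such a system is deployed at scale.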

Care robots: A care robot is an autonomous or semi-autonomous machine that takes care of a person in a hospital or a hospice, often providing physical and emotional support such as setting reminders or dispensing medication at designated times. Examples include Paro and NAO/Zora.

There is still debate as to whether it is moral to have care robots at all and, if so, how much access they should have to patient data and how much authority over a patient's autonomy.

Care robots may sound like a good idea in other climes, but in the socio-cultural milieu of Africa, a continent where most cultures expect children to take care of their parents in old age or of the sick when they are hospitalised, deploying care robots for medical use might come across as problematic to many.

AI policy and strategy on the continent?

Two countries appear to be leading the way in developing blueprints, strategies and policies for the use of AI on the continent: South Africa and Nigeria. However, more needs to be done to ensure that Africans are protected and not exploited as AI technology is deployed on the continent.

In 2020, the Chinese tech company Huawei was accused of spying on the African Union for five years. It is believed that the Chinese government was behind the spying and the theft of official documents from the AU’s servers. Even after this shocking revelation, the regional body proceeded to renew Huawei’s contract to provide it with surveillance equipment and internet infrastructure.

Corporate, national and regional bodies on the continent need to give more attention to crafting digital strategies and policies that are both ethical and progressive.

It has been said that with great power comes great responsibility, and AI will inevitably give us great power. The real question is how we participate meaningfully while exercising the greatest of responsibility.

About Musa Kalenga & Samuel Segun

Samuel Segun is a strategy consultant and AI researcher at SBM Intelligence, and Musa Kalenga is the CEO of Bridge Labs.