ChatGPT and education: a cheating tool or an opportunity to improve learning?

The world has been sent into a frenzy over OpenAI's latest iteration of its artificial intelligence chatbot ChatGPT. As we recently marked the International Day of Education, it's poignant to acknowledge the fear that artificial intelligence platforms such as ChatGPT evoke in educators.
Image by Alexandra_Koch from Pixabay

Some professors have already caught their students cheating by using ChatGPT to create their coursework submissions. As educators, one of our biggest fears is that we grade students as competent in subjects where they have not actually grasped the concepts. Students being able to pass off ChatGPT’s work as their own significantly amplifies this fear. AI is making it easier for someone who doesn’t know what they are doing to produce a unique piece of work that strongly implies that they do. The dangers of this send chills up my spine.

Of course, as with all things, there’s the other side of the coin: in this case, great excitement at the prospect of artificial intelligence doing the heavy lifting and the mundane tasks, freeing us to be the creative beings we were born to be. It is this angle that compels the more open-minded among us to ask whether young people leveraging powerful technology tools to aid their work should be considered cheating at all, or whether it is actually a skill and resource that should be embraced.

A new paradigm shift

This perspective would signal a shift to a new paradigm altogether: one in which man and machine work together to produce a superior outcome to what was previously possible. It is the kind of shift that previous Industrial Revolutions have ushered in.

However, it's the third side of the coin, the perimeter that runs around the edge and connects the two obvious sides, that is perhaps the most intriguing. AI is not some new, objective source of intelligence. Simply put, it is a mirror of us - it is a software programme that consumes datasets created by humans and learns from those datasets how to mimic human intelligence.

This means that the very fuel behind its “brain” is a collection of things that have previously come from human brains. When we view it through the lens of the digital divide, we must acknowledge that it is laced with all our societal biases and blind spots. Far from being precise and objective purveyors of information, software programmes like ChatGPT are not just biased, but are actively trained to represent the realities of the digital “haves” while minimising or completely excluding those of the digital “have-nots”.
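To make that mirroring concrete, here is a minimal, hypothetical sketch in Python, with invented phrases and numbers, of how a system that simply learns statistical patterns from skewed training data will reproduce that skew in what it treats as “normal”:

```python
from collections import Counter

# Hypothetical training data: 100 text samples, overwhelmingly drawn from
# one community of writers (the digital "haves").
training_data = (
    ["phrase common among group A"] * 95 +
    ["phrase common among group B"] * 5
)

# A naive "model": it memorises how often it has seen each phrase.
phrase_counts = Counter(training_data)

def familiarity(phrase: str) -> float:
    """Score a phrase by how often it appeared in training - a toy stand-in
    for the statistical patterns a real language model learns."""
    return phrase_counts[phrase] / len(training_data)

# The model is far more "confident" about group A's way of writing,
# not because it is more correct, but because the data over-represents it.
print(familiarity("phrase common among group A"))  # 0.95
print(familiarity("phrase common among group B"))  # 0.05
```

Real systems like ChatGPT are vastly more sophisticated, but the underlying dynamic is the same: whatever is over-represented in the data becomes the default view of the world.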

Nyari Samushonga, CEO of WeThinkCode_

We have all these layers of complexity around masses of data that are inherently flawed. When we apply algorithms to them, they produce an output which runs the risk of being perceived as “true” or “objectively correct”. This is where the consequences of digital inequity can make AI quite dangerous. We’ve seen this with the criminal justice system relying on facial recognition tools that are poorly trained at identifying people of colour. In medical diagnostics, clinical data from industrialised nations have been incorrectly presumed to be a representative sample of the broader world, resulting in compromised care being administered to patients in developing nations.

So how do we deal with the coin as a whole? Can we prevent Africa and other developing regions from being left even further behind as AI rapidly entrenches the perspective of the digital “haves” as the universal truth? How should we approach this topic from the perspective of education, educators and society at large?

Approach technology developments in the right way

It is not all doom and gloom. We have known for a while that we need to end digital inequity, and there is no question that all new technology development must be approached ethically. If we wish to make use of technology that represents a holistic rather than an exclusionary shade of reality, it is paramount that Africans are part of building the datasets and the tools that consume them. This will reduce the biases and enrich the quality of insights we can gain from AI.

At the same time, we must train people ethically, and ensure that this superpower is in the hands of people who can leverage its strengths while minimising the damage it can create. As we raise up the next generation of technologists, we need to impress upon them the need to think before they code!

It is evident, then, that technology like ChatGPT asks far bigger questions of us as educators than simply how we are going to prevent cheating. This technology is here to stay and will likely become more sophisticated. Rather than trying to police it, we as educators can be instrumental in ensuring that as technology rapidly evolves, more Africans have a seat at the table and are able to be part of these pivotal technological developments.

About Nyari Samushonga

Nyari Samushonga, CEO of WeThinkCode_