
How to ensure due diligence when AI is making the decisions

In a recent article, we discussed the feasibility of companies in South Africa amending their constitutional documents to provide for compulsory reliance on artificial intelligence (AI) in corporate decision-making.
Image source: Olena Yakobchuk – 123RF.com

AI has several notable advantages over the human mind, especially when it comes to the ability to retain and recall vast quantities of information and to make decisions and give advice unaffected by biases and emotions. An analysis of the relevant sections of the Companies Act led us to conclude in the previous article that, as the law stands, there is nothing to prevent a company from providing, in its memorandum of incorporation or a shareholders’ agreement, that an AI system must be consulted before the board makes a final decision.

However, the directors cannot abdicate their duty to become fully informed about corporate affairs, to apply their own minds and to exercise their discretion in the best interests of the company. Given the nature of AI, this issue must also be addressed where a company requires its directors to rely on AI in making decisions.

Due diligence and liability

In terms of section 77(2) of the Companies Act 71 of 2008, a director of a company may be held liable in accordance with the principles of the common law relating to delict for any loss, damages or costs sustained by the company as a consequence of any breach by the director of the duty imposed by section 76(3)(c), which requires a director to carry out his or her duties with the degree of care, skill and diligence that may reasonably be expected of a person –

  • carrying out the same functions in relation to the company as those carried out by that director; and
  • having the general knowledge, skill and experience of that director.

Since well before the advent of the latest generation of AI, company boards have been relying on information collected, stored and processed electronically when making decisions. With advances in technology of all kinds, it might well be said that company directors who did not take advantage of the latest tools to assist them in coming to accurate decisions quickly were not in fact carrying out their duties with the degree of care, skill and diligence expected of them.

Legal implications

But as AI has been developed to replicate the way in which human minds think and learn, a new issue has arisen that affects the legal implications of using AI in corporate decision-making: just as it is not possible to “read” the thought processes by which a human being reaches a decision unless they explain them, we do not always understand the process by which AI reaches its decisions. This may make it difficult for a company’s board to show that they have exercised the degree of care, skill and diligence required by section 76(3)(c) of the Companies Act.

The answer to this problem, it is suggested, is for a company to make sure that proper governance rules are in place, regulating the company’s use of AI and ensuring that there is a chain of record keeping and accountability by which processes are documented and explained (a minimal sketch of such a decision record appears after the list below). The EU’s Ethics Guidelines for Trustworthy Artificial Intelligence, published in April 2019, provide some useful guidance as to the considerations that should be catered for. The Guidelines propose that, in order for AI used in companies to be trustworthy, it must be “lawful, ethical and robust”. This in turn requires that the following considerations be taken into account:

  • Transparency – There should be visibility into all elements of the AI system and it should be possible to decipher and explain the decision-making process.

  • Data privacy and security – AI systems should maintain the privacy rights of any person whose data they process, as well as the privacy of the processing models and supporting systems. Any processing of personal information by the system must be carried out in accordance with relevant data protection laws (in South Africa, the Protection of Personal Information Act and the Promotion of Access to Information Act). “Processing” includes the collection, receipt, storage, updating or modification, use, dissemination, erasure or destruction of information. If an AI system is capable of initiating data processing of its own accord, it must also include safeguards to ensure the relevant laws are complied with.

  • Focus on human agency and oversight – The AI system must not undermine human autonomy or cause other adverse effects. The less oversight a human can exercise over an AI system, the more extensive testing and stricter governance the system requires. Oversight mechanisms can be required in varying degrees to support other safety and control measures, depending on the AI system’s application area and potential risk. The system may allow for human intervention and oversight –

    • at every stage where a decision is required (which in many cases is neither possible nor desirable);
    • during the design cycle of the system and monitoring of the system’s operation; and
    • in the overall activity of the AI system (including its broader economic, societal, legal and ethical impact) and the ability to decide when and how to use the system, or not to use it, in any particular situation.

  • Technical robustness and safety – Technical robustness requires that AI systems be developed with a preventative approach to risk and in a manner that ensures they behave reliably as intended, minimising unintentional and unexpected harm and preventing unacceptable harm. This includes incorporating a safe failover mechanism – in other words, a backup operational mode that automatically switches to a standby database, server or network if the primary system fails or is shut down for servicing (a simple sketch of such a gating-and-failover pattern also follows this list).

  • Accountability – Mechanisms must be put in place to ensure responsibility and accountability for AI systems and their outcomes, both before and after their development, deployment and use. Even though the system may be capable of working autonomously, it should be under the supervision of a human being. There must be an established path of responsibility and accountability for the behaviour and operation of the system.
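
To make the chain of record keeping concrete, the following is a minimal sketch (in Python) of what a single auditable entry for an AI-assisted board decision might look like. It is illustrative only: the AIDecisionRecord class, its field names and the example values are assumptions made for this sketch, not a prescribed or legally endorsed format.

    from dataclasses import dataclass, field, asdict
    from datetime import datetime, timezone
    import json

    @dataclass
    class AIDecisionRecord:
        """One auditable entry in the chain of record keeping for an
        AI-assisted board decision (illustrative structure only)."""
        matter: str             # the question put to the AI system
        model_version: str      # which system and version produced the advice
        inputs_summary: str     # what data the system was given
        ai_recommendation: str  # what the system advised
        explanation: str        # the system's stated reasoning, as far as it can be deciphered
        reviewed_by: list       # directors who applied their own minds to the output
        final_decision: str     # the board's actual resolution
        decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

        def to_json(self) -> str:
            record = asdict(self)
            record["decided_at"] = self.decided_at.isoformat()
            return json.dumps(record, indent=2)

    # Example entry: the record captures both the AI's advice and the human review step.
    entry = AIDecisionRecord(
        matter="Approve acquisition of XYZ (Pty) Ltd",
        model_version="advisory-model-2024-06",  # hypothetical identifier
        inputs_summary="Audited financials FY2021-FY2023; due diligence report",
        ai_recommendation="Proceed, subject to a working-capital adjustment",
        explanation="Valuation within peer range; customer-concentration risk flagged",
        reviewed_by=["A. Director", "B. Chairperson"],
        final_decision="Approved with conditions, per board minute 14/2024",
    )
    print(entry.to_json())

A record of this kind speaks to the transparency and accountability considerations at once: it documents what the system was asked, what it answered and, crucially, which humans took the final decision.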
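
The human-oversight and failover points above can likewise be expressed as a simple control-flow pattern. The sketch below assumes hypothetical consult_ai and human_review callables; it shows only the gating logic, not any real AI interface.

    from typing import Callable, Optional, Tuple

    def decide_with_oversight(
        question: str,
        consult_ai: Callable[[str], Optional[Tuple[str, float]]],
        human_review: Callable[[str, str], str],
        confidence_floor: float = 0.8,
    ) -> str:
        """Route every AI recommendation through a human reviewer, and fail
        over to a purely human process when the system is down or unsure."""
        result = consult_ai(question)
        if result is None:
            # Safe failover: the primary system is unavailable, so the humans decide unaided.
            return human_review(question, "AI unavailable: no recommendation")
        recommendation, confidence = result
        if confidence < confidence_floor:
            # Low confidence triggers stricter human scrutiny, not automatic adoption.
            return human_review(question, f"Low-confidence advice: {recommendation}")
        # Even a confident recommendation is only advice; a human makes the final call.
        return human_review(question, recommendation)

    # Usage with stand-in callables (illustrative only):
    decision = decide_with_oversight(
        question="Approve FY2025 capital budget?",
        consult_ai=lambda q: ("Approve; within 5% of prior year", 0.92),
        human_review=lambda q, advice: f"Resolution on '{q}', having considered: {advice}",
    )
    print(decision)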

In terms of the Companies Act, the directors can never abdicate their responsibility and must be able to show that they have exercised the requisite degree of skill and care in carrying out their duties. As we are navigating as yet uncharted waters, the courts have not had occasion to pronounce on what would constitute sufficient compliance by directors using AI to aid corporate decision-making. But we suggest that if directors can show that, in introducing and using AI, they implemented these guidelines, it ought to go a long way towards establishing that they have complied with their obligations.

About Ian Jacobsberg

Ian Jacobsberg, Director, Fluxmans Attorneys
Let's do Biz