Revolutionary as the idea of a “robot” company director was in 2014, advances in the field of artificial intelligence (AI) have ensured that it has continued to gather momentum in a number of countries. In recent months, the possibility of computers supplementing or even replacing human decision making has gained renewed impetus with the advent of ChatGPT, a chatbot whose interactive capacity and ability to address complex and nuanced questions exceed anything previously available in the public domain.
The advantages of using intelligent algorithms to assist corporate decision making are obvious - their ability to assimilate, retain and recall vast quantities of information in fractions of a second, and to provide answers to questions unaffected by the problems that beset so many top-level human decision makers - bias, emotion and stress.
For centuries, through the transition from a predominantly agrarian society to a predominantly urban one, and through successive industrial revolutions, lawyers have had to grapple with how existing laws would answer questions posed by new technologies and changing social and economic conditions. The advent of AI, and its increased role in business decision making is no exception. In this article, we examine some of the issues that would arise in South African law if a company wanted to involve an AI system in its corporate decision making.
There is no express statement in the Companies Act, 71 of 2008, that a company director must be a “person” (an individual or entity that the law recognises as being able to exercise rights and incur obligations). However, the Act confers rights and imposes obligations on directors. By implication, therefore, the Act assumes that directors must be “persons”.
Section 69(7)(a) of the Act provides that a juristic person (a company, trust or other created association of persons recognised by law as having rights and obligations) is ineligible to be appointed as a director. So, it would seem that only natural persons (human beings) can be company directors in South Africa. While suggestions have been made in some jurisdictions that AI could be accorded legal personality, based on its capacity for autonomous thought and even displaying behaviour akin to human emotions, it would appear South African law would currently not allow an AI “robot” to be registered as a director.
Given that registration as a director would not be permissible, the question arises whether an AI “robot” can nevertheless validly be given the authority to make, or participate in making, board decisions.
There would seem to be no obstacle to a company effectively granting an AI system such powers by a simple amendment to its memorandum of incorporation, its shareholders’ agreement, or both.
The relevant clause could provide that the AI must be consulted before any decision - or any decision in regard to specified subjects - is made, or that the AI must be consulted in the event of a deadlock between the members of the board.
The latter arrangement is similar to the common provision that the chairman has a casting vote in the event of a deadlock, or that the company’s auditors may be called upon to resolve deadlocks on certain topics - save that in this case the decision maker may arguably be better “informed” and less likely to be swayed by personal interests. The question is: are the directors allowed to relinquish their decision-making authority to a non-human resource, no matter how sophisticated?
Section 76 of the Companies Act is headed “Standards of directors conduct”. Section 76(3) states that a director must perform his or her functions in good faith and for a proper purpose, in the best interests of the company, and with the degree of care, skill and diligence that may reasonably be expected of a person carrying out the same functions and having the general knowledge, skill and experience of that director.
Section 76(4)(a) of the Act states that a director will have satisfied the duties to act in the best interests of the company and with the required care, skill and diligence if the director has taken reasonably diligent steps to become informed about the matter, has no material personal financial interest in it (or has disclosed any such interest as the Act requires), and had a rational basis for believing, and did believe, that the decision was in the best interests of the company.
Section 76(4)(b), read with section 76(5), allows a director, when making or supporting a decision, to rely on the performance, information, opinions, reports or statements of employees of the company whom the director reasonably believes to be reliable and competent, of legal counsel, accountants or other professional persons retained by the company on matters within their expertise, and of committees of the board of which the director is not a member.
Looking at the duties imposed on directors in terms of section 76, it is a notable requirement that, for a director to have complied with his or her obligations, he or she must have taken “reasonably diligent steps to become informed about the matter” and must have had a rational basis for believing “…that a decision that he or she made or supported was in the best interests of the company”. A director can never be exempt from applying his or her own mind to the issue at hand and forming an opinion as to the correctness of a course of action.
As mentioned, the Act does allow a director to rely on the advice of professional advisers, competent employees of the company and board committees. However, when allowing for these exceptions, the Act expressly refers to the providers of the advice as “persons”, in other words, people or companies capable of being held to account in law if they are negligent.
Systems like Vital and ChatGPT demonstrate just how sophisticated an AI system may be, and the huge advantages it can have over the human mind, especially when it comes to retaining and recalling vast quantities of information and making decisions and giving advice unaffected by bias and emotion. However, as our law currently stands, directors cannot avoid consciously seeking to become informed about company affairs, applying their own minds and exercising their discretion in the best interests of the company. Before that could change, amendments to the Companies Act would be required, which would in turn raise the question of who would be liable if the system “made a mistake”. That is a separate and equally complex discussion.