This instalment sets out the ethical and legal guardrails – HPCSA accountability, evidentiary reliability, and data protection – that draw a firm line between acceptable assistance and impermissible substitution.
Ethical and professional safeguards: The non-negotiables of psycho-legal reporting
The psycho-legal report is a professional product governed by ethical, legal, and evidentiary standards, and cannot be outsourced to non-human tools. The Health Professions Council of South Africa (HPCSA) explicitly requires that all registered practitioners remain personally accountable for the professional content they author and submit, including in medico-legal contexts.
According to the HPCSA’s Ethical Rules of Conduct (Booklet 2, Rule 13(a)):
“A practitioner shall be personally responsible for his or her professional decisions and actions...”
In practice, this means that an industrial psychologist (IOP) must be able to explain and defend every opinion rendered in a report, whether under expert testimony or peer review.
No amount of software automation can shift this responsibility. Over-reliance on AI for career and earnings forecasting or trajectory analysis – particularly in the absence of human oversight – exposes the practitioner to ethical breach, professional sanction, and legal challenge.
While this article is written from within the scope of industrial and organisational psychology, the legal dimensions of expert reporting are no less critical. As Anrich van Stryp (director / attorney, Intellectual Property & Commercial Law, Brits Law) observes, the application of generative AI in psycho-legal reporting raises material risks around admissibility, professional liability, intellectual property, and data privacy.
South African law requires that expert opinion come from a qualified professional exercising independent judgment. Unverifiable machine outputs cannot satisfy the reliability threshold of the Law of Evidence Amendment Act 45 of 1988 or the independence requirement of Rule 36(9) of the Uniform Rules of Court.
Van Stryp adds that in Holtzhauzen v Roodt 1997 (4) SA 766 (W), the court affirmed that admissibility requires a transparent methodology that is defensible under cross-examination – criteria absent from opaque, “black box” AI methods.
International frameworks reflect similar requirements: the UK’s CPR Part 35 demands independence and methodological transparency from experts, while the EU’s AI Act (2024) categorises forensic and medical AI applications as “high risk”, requiring human oversight and auditability. Psycho-legal evidence generated by AI would not meet these evidentiary safeguards.
Generative AI carries a recognised risk of hallucinated content, described as follows: “Hallucinated text gives the impression of being fluent … despite being unfaithful and nonsensical. It appears to be grounded in the real context provided, although it is actually hard to specify or verify the existence of such contexts.” (Ji et al., 2023, Survey of Hallucination in Natural Language Generation, ACM Computing Surveys)
This is not a theoretical concern. Generative AI systems are known to misrepresent legal citations, invent authors, fabricate references, or extrapolate from incomplete inputs in ways that are not transparent or verifiable.
In the context of expert evidence, particularly when testifying in court or undergoing cross-examination, the presence of even a single unverifiable claim could be grounds to discredit the report and undermine the credibility of the practitioner.
In terms of data privacy, the HPCSA’s Booklet 5 (Confidentiality Guidelines) and the Protection of Personal Information Act (PoPIA) impose strict duties on health professionals concerning the storage, processing, and dissemination of personal and medical information. Submitting claimant records to an unsecured platform for purposes of “AI drafting” contravenes these standards and places sensitive data at real risk.
Van Stryp further highlights that where psycho-legal reports include third-party employer records, educational records, or psychometric test materials, uploading such proprietary information to AI platforms without prior consent constitutes copyright infringement and breach of contractual confidentiality obligations.
This is not only a legal risk but may also violate ethical standards of responsible data handling and professional trust.
As it stands, no publicly available AI system meets the encryption, consent, or professional auditability standards required, and no AI system can sign a report, testify in court, or assume accountability for its content.
Caution, not codification: AI cannot form psycho-legal opinions
In high-volume litigation, automation may seem efficient. But psycho-legal opinion is a high-stakes act of professional reasoning, underpinned by scientific, ethical, and legal admissibility requirements.
Critically, no current AI tool meets the professional, evidentiary, or legal thresholds established by:
- HPCSA Ethical Guidelines (Booklet 2, s6.2): Practitioners must justify all professional decisions and personally verify report content.
- HPCSA Confidentiality Guidelines (Booklet 5, s4.1.1): Disclosure of sensitive information to third parties requires explicit consent and robust security safeguards.
The process of forming a psycho-legal opinion involves more than aligning facts with formulas. It requires the IOP to interpret complex interdependencies between the claimant’s background, established limitations, and most likely occupational trajectory within the realities of the open labour market.
This is not a process that can be templated or semi-automated without risking breach of professional and legal standards.
Generative AI tools remain inherently unreliable for use in regulated psycho-legal practice. As Ji et al. (2023) demonstrate, these systems often generate plausible but unsupported responses, particularly when prompts contain ambiguity or require domain-specific validation.
The illusion of accuracy can be professionally dangerous.
Psycho-legal reporting is not data presentation. It is applied judgment under conditions of uncertainty and consequence. It demands case-specific reasoning, evidence-based discipline, and ethical restraint, none of which can be replicated by automation.
Concerns regarding liability are no less marked. Van Stryp notes that liability for negligent misstatement under South African law extends to experts whose reports are relied upon in litigation.
An IOP who integrates AI-generated material that proves inaccurate, fabricated, or misleading risks both delictual liability and HPCSA sanction. Significantly, AI providers exclude liability for professional use of their outputs, leaving responsibility solely with the practitioner.
International frameworks echo this position: UK courts have held that professionals remain liable when relying on flawed AI-produced advice, and the EU AI Act requires human accountability for the operation of high-risk systems.
Data protection and privacy frameworks compound these risks. Van Stryp highlights that section 14 of the Constitution protects the right to privacy, and that the Protection of Personal Information Act 4 of 2013 (PoPIA), which gives effect to that right, imposes strict security controls and prohibits cross-border transfers of personal information without adequate safeguards.
Uploading claimant records to offshore AI servers without explicit consent constitutes an unlawful export of personal information, exposing practitioners to civil liability and regulatory sanction. These provisions mirror the GDPR and the UK Data Protection Act 2018, which restrict the processing or export of sensitive personal, occupational and health data without explicit consent, adequate protection, and human oversight.
The way forward: Anchored innovation, not AI-led substitution
The integration of technology into psycho-legal practice is not the problem. Indeed, responsible innovation – when guided by those who understand the ethical, evidentiary, and professional demands of the work – can support timely, accurate, and defensible outputs.
However, AI must remain a tool of augmentation, not assumption. It may assist with, for example, sourcing legitimate references for the practitioner to verify; but it must never be used to form or justify core professional opinion – especially when that opinion will be used as expert evidence in litigation.
The risks are not hypothetical. Improper use of generative AI in psycho-legal reporting can compromise the admissibility of evidence, breach ethical and confidentiality obligations, and undermine evidentiary accuracy – exposing the IOP to disciplinary proceedings or legal sanction.
The IOP’s role – at the intersection of psychological functioning, occupational capacity, and labour market dynamics – cannot be delegated to an algorithm.
Rather than treating AI as a replacement, the psycho-legal community must develop regulated standards for ethical AI support, grounded in collaboration, validation, and legal-ethical scrutiny – not market enthusiasm.
Final note
This article is written from within the profession. Its aim is not to dismiss technology, but to stress that its use in psycho-legal practice must begin with clarity, legal fidelity, and ethical restraint.