    Can academics use AI to write journal papers? What the guidelines say

    Artificial intelligence (AI) refers to “intelligent machines and algorithms that can reason and adapt based on sets of rules and environments which mimic human intelligence”. This field is evolving rapidly and the education sector, for one, is abuzz with discussion on AI use for writing.
    Image source: bpawesome – 123RF.com

    This matters not just for academics, but for anyone relying on trustworthy information, from journalists and policymakers to educators and the public. Ensuring transparency in how AI is used protects the credibility of all published knowledge.

    In education and research, AI can generate text, improve writing style, and even analyse data. It saves time and resources by allowing quick summarising of work, language editing and reference checking. It also holds potential for enhancing scholarly work and even inspiring new ideas.

    Equally, AI is able to generate entire pieces of work. Sometimes it is difficult to distinguish between original work written by an individual and work generated by AI.

    This is a serious concern in the academic world – for universities, researchers, lecturers and students. Some uses of AI are seen as acceptable and others are not (or not yet).

    As editor and editorial board member of several journals, and in my capacity as a researcher and professor of psychology, I have grappled with what counts as acceptable use of AI in academic writing. I looked to various published guidelines, including those from the Committee on Publication Ethics and the publisher Sage.

    The guidelines are unanimous that AI tools cannot be listed as co-authors or take responsibility for the content. Authors remain fully responsible for verifying the accuracy, ethical use and integrity of all AI-influenced content. Routine assistance does not need citation, but any substantive AI-generated content must be clearly referenced.

    Let’s unpack this a bit more.

    Assisted versus generated content

    In understanding AI use in academic writing, it’s important to distinguish between AI-assisted content and AI-generated content.

    AI-assisted content refers to work that is predominantly written by an individual but has been improved with the aid of AI tools. For example, an author might use AI to assist with grammar checks, enhance sentence clarity, or provide style suggestions. The author remains in control, and the AI merely acts as a tool to polish the final product.

    This kind of assistance is generally accepted by most publishers, as well as the Committee on Publication Ethics, without the need for formal disclosure, as long as the work remains original and the integrity of the research is upheld.

    AI-generated content is produced by the AI itself. This could mean that the AI tool generates significant portions of text, or even entire sections, based on detailed instructions (prompts) provided by the author.

    This raises ethical concerns, especially regarding originality, accuracy and authorship. Generative AI draws its content from a variety of sources – scraped web content, public datasets, code repositories and user-generated content – basically anything it is able to access. You can never be sure about the authenticity of the work. AI “hallucinations” are common. Generative AI might be plagiarising someone else’s work or infringing on copyright, and you won’t know.

    Thus, for AI-generated content, authors are required to make clear and explicit disclosures. In many cases, this type of content may face restrictions. Publishers may even reject it outright, as outlined in the Committee on Publication Ethics guidelines.

    What’s allowed and what’s not

    Based on my readings of the guidelines, I offer some practical tips for using AI in academic writing. These are fairly simple and could be applicable across disciplines.

    • The guidelines all say AI tools can be used for routine tasks like improving grammar, revising sentence structure, or assisting with literature searches. These applications do not require specific acknowledgement.
    • Across the guidelines reviewed, AI-generated content is not allowed unless there are clear reasons why it was necessary for the research and the content is clearly marked and referenced as such. Thus, depending on how AI is used, it must be referenced in the manuscript. This could be in the literature review, or in the methods or results section.
    • Sage and the Committee on Publication Ethics emphasise that authors must disclose any use of AI-generated content by citing it appropriately. There are different conventions for citing AI use, but all seem to agree that the name of the generative tool used, the date accessed and the prompt used should be cited. This level of transparency is necessary to uphold the credibility of academic work.
    • Other aspects of AI assistance, like correcting code, generating tables or figures, reducing word count or checking analyses, cannot be referenced directly in the body of the manuscript. In line with current best practice recommendations, this should be indicated at the end of the manuscript.
    • Authors are responsible for checking the accuracy of any AI content, whether AI assisted or AI generated, ensuring it’s free from bias, plagiarism, and potential copyright infringements.

    The final word (for now)

    AI tools can undoubtedly enhance the academic writing process, but their use must be approached with transparency, caution, and respect for ethical standards.

    Authors must remain vigilant in maintaining academic integrity, particularly when AI is involved. Authors should verify the accuracy and appropriateness of AI-generated content, ensuring that it doesn’t compromise the originality or validity of their work.

    There have been excellent suggestions as to when the declaration of AI should be mandatory, optional and unnecessary. If unsure, the best advice would be to include the use of any form of AI (assisted or generated) in the acknowledgement.

    It is very likely that these recommendations will be revised in due course as AI continues to evolve. But it is equally important that we start somewhere. AI tools are here to stay. Let’s deal with them constructively and collaboratively.

    This article is republished from The Conversation under a Creative Commons license. Read the original article.

    Source: The Conversation Africa

    The Conversation Africa is an independent source of news and views from the academic and research community. Its aim is to promote better understanding of current affairs and complex issues, and allow for a better quality of public discourse and conversation.

    Go to: https://theconversation.com/africa

    About Sumaya Laher

    Sumaya Laher, Professor, University of the Witwatersrand