OPINION: AI in the media: why an ethics-by-design approach is essential
[Image: An AI-generated image depicting the era of AI and journalism. Image | Gemini]

The adoption of Artificial Intelligence in Kenya is revolutionising industries by improving efficiency and supporting data-driven decision-making. The flipside is that its use has brought ethical questions to the fore.
Regardless of whether one supports or opposes it, what is not open for debate is that it is here to stay, and there is a need for artificial intelligence literacy among those deploying it.
AI literacy is more than simply acquiring the skills and knowledge to incorporate this emerging technology into the workflow; it is about learning how to conceptualise, design, train and deploy it responsibly.
In the media industry, the use of Artificial Intelligence — especially generative AI — poses many ethical questions.
Accuracy and public trust are paramount, and therefore the dilemma of whether to use it, how to use it, and where the ethical boundaries lie has never been more pressing.
If the media acts in the public interest, is the use of AI for the public good, or merely a strategic decision hinged on leveraging emerging technologies?
The truth is that media houses in Kenya are already using AI in their workflows, from the story-pitching and ideation stage through to news gathering, content production, dissemination, moderation, audience analysis, engagement and fact-checking.
The risks of using AI include algorithmic bias, hallucination, and the spread of disinformation and misinformation, all of which threaten accuracy and erode trust in the media.
The most pertinent question now is whether there is an ethics-by-design approach in place, and whether those deploying it are AI-fluent.
What does an ethics-by-design approach and AI fluency look like in practice? For instance, when deploying an AI tool for audience analysis, does the media management team worry more about the cost implications, or whether the tool will be transparent, fair, inclusive and protective of audience privacy?
Do they interrogate what data was used to train the AI tool and consider the biases it might perpetuate? Equally, when a journalist is developing a story and chooses to use one of the available generative AI tools, do they have the foresight to ask what the ethical implications are, and at what point do they pause and say: “Thank you, AI, for helping me think through my story.
I need to retain the authenticity of my work, so I will take it from here. I am willing to disclose your role in developing this piece, but I am accountable for the final product.”
The Code of Conduct for Media Practice in Kenya seeks to provide the ethical threshold for the use of AI by journalists and media enterprises in Kenya.
The code makes clear where responsibility ultimately lies and affirms the centrality of the human-in-the-loop concept.
Human oversight is emphasised as the guiding principle in the use of Artificial Intelligence. Part Four (Section 27) of the code, which addresses user-generated content, the use of Artificial Intelligence and other technologies, sets out a clear checklist that a media house must satisfy before deploying this technology.
At its heart, the ethical application of AI in media must begin from the understanding that at every stage there is a human directing proceedings, and that person must be accountable for how the technology is used.
Media houses must therefore develop AI policies that reflect their core values and the ethics of the profession. They must ensure that AI use is fair, unbiased, accurate and respectful of intellectual property rights and data privacy rights.
The principle of active disclosure dictates that when a journalist or media enterprise uses AI, they must inform their audience — particularly where it has been used to modify images, videos or editorial content.
The Media Guide on the Use of Artificial Intelligence, developed by the Media Council of Kenya, explores in greater depth the nuances of using AI in journalism and serves as a practical manual for the use of the technology within a Kenyan context.
It draws on the guidance of organisations such as the United Nations Educational, Scientific and Cultural Organisation (UNESCO) and the Paris Charter on AI and Journalism to provide a framework for journalists navigating AI use.
What is clear is that this technology will evolve and take on different dimensions as it advances towards the stages of Artificial General Intelligence and Artificial Superintelligence.
The need for journalists to develop AI fluency at each stage of this development, however, remains constant. Media enterprises and media development partners should invest in equipping journalists with the relevant AI literacy skills.
Rebecca Mutiso is the Manager, Accreditation and Compliance at the Media Council of Kenya