In 2001, the collapse of Enron sent shockwaves through the business world, prompting global accounting and regulatory reforms. More recently in South Africa, KPMG has been struggling to recover after being implicated in state capture through its ties to the Gupta family. The fallout saw the firm’s CEO, COO, chairperson, and five senior partners resign, and cost the company millions in reparations and reputational damage. Once again, the audit profession was in the spotlight, and accounting reforms followed.
I’d like to suggest that we need to start thinking about AI in a similar way – as something that requires better regulation and constant ethical oversight.
When I decided to do my MPhil thesis at GIBS on the topic “Ethical Considerations in the Implementation of AI Technologies for Business Process Management (BPM) in a Multinational Corporation: South Africa”, my aim was to explore the ethical dilemmas and challenges that arise as businesses adopt AI. These include ensuring data privacy, maintaining transparency regarding AI algorithms, and promoting equal access to AI-driven solutions. I wanted to examine how ethics can be integrated into the AI practices currently being implemented for BPM in multinationals.
However, I was unprepared for the scale of the problem I would uncover.

How AI is being used
Interviewees shared how AI is being used in ways that may unwittingly compromise data privacy, compliance, and accuracy – increasing business risk and, in one case, even costing jobs.
One interviewee told me how, when merging nine organisations and automating processes, there were significant job losses due to poor planning around skills assessments and operating model changes.
Another explained that, when dealing with clients in jurisdictions such as the UK, their bots didn’t recognise certain international regulatory frameworks. In such cases, human intervention was needed to override the AI or risk non-compliance with local laws.
A third organisation explained that they’d integrated an AI-powered tool into their business systems, allowing clients to conduct self-assessments for insurance claims using their smartphones. While impressive, the tool produced inaccurate assessments for several clients, which led to claims being unjustly denied. The organisation had to implement an additional validation step – ensuring a human verifies the accuracy of the outcomes.
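Such a validation step often amounts to a simple routing rule placed in front of the model’s output. The sketch below is purely illustrative – the `Assessment` structure, threshold, and routing labels are hypothetical, not taken from the organisation described – but it shows the pattern: denials and low-confidence results are held for human verification instead of being applied automatically.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    claim_id: str
    decision: str      # "approve" or "deny"
    confidence: float  # model's self-reported confidence, 0.0 to 1.0

def route(assessment: Assessment, threshold: float = 0.9) -> str:
    """Decide whether an AI claim assessment may be applied
    automatically or must first be verified by a human assessor."""
    # Never auto-deny: a wrongful denial is exactly the failure
    # mode the organisation had to guard against.
    if assessment.decision == "deny":
        return "human_review"
    # Auto-approve only when the model is sufficiently confident.
    if assessment.confidence >= threshold:
        return "auto_approve"
    return "human_review"

# A confident approval passes; everything else goes to a person.
print(route(Assessment("C-101", "approve", 0.95)))  # auto_approve
print(route(Assessment("C-102", "approve", 0.62)))  # human_review
print(route(Assessment("C-103", "deny", 0.99)))     # human_review
```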
Others flagged that some workers use AI tools such as ChatGPT for coding and debugging, unknowingly uploading proprietary source code into environments not approved for secure data handling. This breaches internal security protocols and increases the risk of cyber-attacks.
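One mitigation – sketched below purely as an illustration, with hypothetical patterns rather than any real organisation’s policy – is a lightweight screen that checks text for internal markers before it is sent to an external AI tool. Real controls would combine this with proper data loss prevention and approved, sandboxed tools.

```python
import re

# Hypothetical markers of internal material; a real policy would be
# far richer (data loss prevention, secrets scanning, allow-lists).
BLOCKLIST = [
    re.compile(r"@internal\.example\.com"),   # internal email domain
    re.compile(r"CONFIDENTIAL|PROPRIETARY"),  # classification banners
    re.compile(r"(?i)api[_-]?key\s*[:=]"),    # credential patterns
]

def safe_to_send(text: str) -> bool:
    """Return True only if no internal marker appears in the text."""
    return not any(pattern.search(text) for pattern in BLOCKLIST)

print(safe_to_send("How do I reverse a list in Python?"))    # True
print(safe_to_send("Debug this: API_KEY = 'sk-...' fails"))  # False
```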
The real-world ramifications of unchecked AI implementation are only just beginning to be felt. Many businesses are launching automations without strategies, policies, or sufficient oversight. I believe this lack of intention and accountability could have catastrophic consequences, particularly in sectors such as financial services. While the scope of my thesis did not extend to the public sector, similar challenges there could have even more far-reaching implications.
The devil’s in the data
Even seemingly neutral AI and data-driven systems can replicate and exacerbate societal biases when implemented without rigorous governance and ethical oversight. For example, a 2019 investigation by the Council for Medical Schemes into the fraud, waste and abuse (FWA) detection systems of major South African medical scheme administrators uncovered stark racial disparities. While no evidence of explicit racial bias in the algorithms was found, Black practitioners were 1.4 times more likely to be flagged for fraud than their non-Black counterparts – a gap the panel found could not plausibly be attributed to chance.
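To make the statistics concrete: a disparity like this is usually expressed as a ratio of flag rates between groups, and checked against chance with a two-proportion test. The counts below are entirely hypothetical – the panel’s underlying data are not reproduced here – but they show how a 1.4x ratio is computed and why a gap of that size across large samples is very unlikely to be random.

```python
import math

# Hypothetical counts for illustration only - not the panel's data.
flagged_a, total_a = 140, 1000  # group A: flagged / reviewed
flagged_b, total_b = 100, 1000  # group B: flagged / reviewed

rate_a, rate_b = flagged_a / total_a, flagged_b / total_b
print(f"flag-rate ratio: {rate_a / rate_b:.2f}")  # 1.40

# Two-proportion z-test: could a gap this size arise by chance?
pooled = (flagged_a + flagged_b) / (total_a + total_b)
se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
z = (rate_a - rate_b) / se
print(f"z = {z:.2f}")  # ~2.75: chance alone is very unlikely
```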
In late 2024, an AI system used by the UK government to detect welfare fraud was found to disproportionately recommend people from certain groups for investigation.
The risks of such issues arising are exponentially greater with unregulated AI usage. Scale, lack of transparency, and automation can quickly entrench discrimination in high-stakes decisions, such as credit scoring, loan approvals, or fraud detection. In banking, this could translate into entire demographics being unfairly excluded from financial services.
In 2019, Apple Card drew scrutiny when women reported being granted lower credit limits than men with similar or better credit profiles, prompting a regulatory investigation into whether the underlying algorithm relied on proxies that indirectly produced gender-based disparities. In South Africa, similar algorithmic opacity could further marginalise vulnerable groups.
As these instances escalate in frequency and severity, organisations risk an Enron-scale fallout with serious financial, legal, and reputational consequences.
In the public sector, similar biases in procurement or social grants systems could deepen inequality and spark legal challenges – all without clear accountability. Robust, transparent governance is not optional.
Ethical concerns
My research showed that ethics is often viewed as theoretical rather than translated into practical frameworks. Many of the organisations I studied relied on common sense or company culture rather than documentation or training. The major concerns I identified included:
- Absence of internal organisational frameworks governing AI use
- Gaps in leadership involvement
- Lack of AI integration into corporate strategy
- Insufficient training programmes for upskilling
- Failure to adopt effective change management strategies
- Absence of effective data management strategies
- Deficiencies in transparency and accountability (with responsibility defaulting to execution specialists rather than C-suite executives)
Governance gaps and organisational blind spots
The core issue is a lack of governance. Execution specialists may be making the decisions, but the C-suite should be accountable. Yet AI doesn’t even feature in most strategies or executive KPIs. While companies are eager to talk about AI innovation, AI is still often excluded from corporate strategy and leadership training.
As one interviewee told me, “It’s challenging to maintain ethical consistency, as there is no existing legal framework or set of policies governing ethics across financial institutions. Each institution defines its own ethical guidelines, which makes alignment and accountability more difficult.”
Without a top-down approach, there’s a lack of transparency in decision-making processes, insufficient employee education and upskilling, minimal change management efforts, and poor data management practices.
The result? High failure rates for AI projects, missed opportunities, and significant risk exposure. Without human validation and oversight, AI systems can make decisions that are biased, unethical, or simply incorrect.

Financial and operational risks
Some organisations see AI as a fast track to savings, but without proper implementation they risk substantial losses. Without a clear business case, use cases aligned to the organisation’s objectives, adequately trained users, and proper post-implementation support, the investment is unlikely to deliver value and may ultimately be wasted.
One interviewee explained how their financial institution had adopted AI automation without considering the impact of load-shedding. When the power went out, key systems failed. Business continuity was only preserved because of a backup in another country. That wasn’t strategy; it was luck.
Something many companies haven’t considered is AI’s potential role in fraud and cybercrime. If someone uploads proprietary data into AI tools, that data could be compromised or leaked.
Auditors – the new custodians of AI governance?
As my research has highlighted, there is a significant gap in the current AI governance landscape. In my opinion, the audit function is well placed to oversee AI governance.
Audit professionals are debating how their profession needs to evolve in the face of AI, focusing on how much of their work AI will automate away. I’d suggest they shift their focus to understanding how AI itself needs to be audited, and build that into the solutions they offer clients. This includes defining clear principles, policies, and methodologies for the ethical use and auditing of AI within organisations, as well as enforcing compliance through appropriate penalties for violations.
They also need to understand how AI intersects with cybersecurity and data protection, especially as most AI systems rely on cloud infrastructure.
Context matters
Across Africa, access to technology remains uneven. While some countries and companies lead in innovation, many struggle with basic connectivity. Any conversation about AI ethics must acknowledge these disparities and adapt accordingly – yet few do.
An interviewee reported that AI tools often lack relevant African data: “When working on a project in Zambia, the AI outputs were skewed toward developed economies. We had to manually gather local data to fill in the gaps.”
There is also a notable absence of formal national AI governing bodies to provide oversight and enforce compliance in Africa. While South Africa has published a national AI policy framework, it borrows heavily from international models and lacks contextual relevance or concrete implementation mechanisms.
In contrast, countries such as the UAE have taken progressive steps by appointing a Minister of AI to oversee the ethical and strategic use of AI. Establishing similar governance structures is crucial to ensure responsible AI development and alignment with national priorities.
Collaboration between private and public sectors is vital to drive AI investment and infrastructure. This includes accessible training, AI-focused startups, school curricula, and local research and development. Institutions such as GIBS have an important role to play in this regard.
During my research visit to Rwanda, I saw what’s possible. They’ve automated processes, reduced corruption, and ensured proper oversight by bringing the right people to the table and thoroughly analysing the implications of AI implementation.
Businesses should do the same by treating AI like any major investment. If leadership doesn’t understand what the AI is doing, how can they measure success or protect against risks?
Executives must rethink their roles. Digital transformation can’t be left to IT departments. It must be integrated into strategy and leadership development.
The risks of inaction are stark: regulatory breaches, data leaks, AI-driven fraud, broken systems, and failed projects. The future of responsible AI in Africa depends on intentional, ethical, and accountable leadership.
We must not assume AI can operate autonomously. Human oversight remains essential to ensure ethical, accurate, and aligned decision-making.
What needs to be done?
- Conduct thorough feasibility and impact assessments
- Establish a strategic implementation roadmap and adopt a staggered approach to ensure scalable and sustainable integration
- Establish clear internal ethical guidelines
- Mandate employee participation in AI training
- Establish governance committees with defined responsibilities. For example, use the Responsible, Accountable, Consulted, and Informed (RACI) project management tool to define who should be at the table when AI decisions are made, who approves decisions, who is consulted, who needs to be informed, who is responsible for execution, and what adoption means for each role (see the sketch after this list)
- Reduce reliance on consultants; involve employees early for user feedback, to fine-tune, and to improve usability
- Measure success and plan for disaster recovery
- Establish transparent implementation processes
- Ensure human oversight as a control measure for business-critical processes
- Continuously monitor, audit, and review AI systems
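To illustrate the RACI point from the list above: the decisions and role names in this sketch are hypothetical placeholders, not a standard, but the structure makes the key constraint explicit – every AI decision has exactly one accountable owner, which is precisely what defaulting responsibility to execution specialists breaks.

```python
# Illustrative RACI matrix for AI governance decisions.
# Decision and role names are hypothetical examples, not a standard.
raci = {
    "Approve an AI use case": {
        "R": "Head of BPM", "A": "COO",
        "C": ["Legal", "Risk"], "I": ["Internal audit"],
    },
    "Release a model to production": {
        "R": "Model owner", "A": "CIO",
        "C": ["Data privacy officer"], "I": ["Business users"],
    },
    "Review decisions flagged by the model": {
        "R": "Operations team", "A": "Head of Risk",
        "C": ["Model owner"], "I": ["C-suite"],
    },
}

def accountable_owner(decision: str) -> str:
    """Every AI decision must have exactly one accountable owner."""
    return raci[decision]["A"]

for decision in raci:
    print(f"{decision}: accountable = {accountable_owner(decision)}")
```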
Boitumelo Mahlaba is a tech enabler and AI enthusiast currently leading business transformation initiatives at Rand Merchant Bank. She did her MPhil dissertation with GIBS on the ethical considerations in implementing AI technologies for business process management.