Kenyan independence: a paper tiger against digital colonialism?
For Kenyan Independence (Jamhuri) Day on 12 December, paralegal Nick Queffurus discusses the history of Kenya’s digital democracy and how recent legal developments could offer protection to Kenyans who suffer online harm at the hands of tech giants.
Posted on 10 December 2021
I vividly remember arriving in Nairobi in March 2018 to find the former CEO of Cambridge Analytica splashed across the pages of The Standard newspaper.
Since then, much has been written about the company’s role in global politics over the past decade, with Kenyan activist Nanjala Nyabola having pointedly called the company’s activities an example of digital colonialism.
This refers to an extractive data collection model in which companies from the Global North take advantage of African nations without regard for the safety of their citizens and the stability of institutions.
When being interviewed by Channel 4 on this topic, opposition leader Raila Odinga noted there were sufficient “raw materials” (a proliferation of Kenyan social media users) for these companies to exploit.
In the most recent 2017 elections, a key aspect of the anti-opposition online campaign was to suggest that a vote for Odinga would lead Kenya to apocalyptic scenes, playing on fears of recent post-election violence and ethnic divisions of the type that Leigh Day brought to the attention of the English courts in AAA v Unilever PLC [2017] EWHC 371 (QB).
Nyabola gets to the heart of the matter when she asks: what does accountability for political misinformation look like when a British company uses an American platform to influence political discourse in a Kenyan election, resulting in deaths and destruction in Kenya?
Wind of change? Legislative developments
In his Channel 4 interview, Odinga highlighted the lack of a data protection legal framework in Kenya as a cause for concern. The following year the Kenyan government enacted the Data Protection Act (DPA) of 2019, which had been slowly making its way through parliament for several years.
Formally, the DPA shares many similarities with the GDPR and UK Data Protection Act. A recent judgment of the Kenyan High Court about the Huduma ID Card has further clarified how the Data Protection Act is meant to give effect to Article 31 (right to privacy) of Kenya’s Constitution.
The introduction of the DPA 2019 raises the question of whether it will protect against the kind of online political micro-targeting facilitated by data analytics companies like Cambridge Analytica in recent elections. Arguably, the DPA 2019 has made unlawful much of the political targeting and micro-targeting that was legal and rife during the 2013 and 2017 elections. This is due to the creation of an opt-in mechanism for processing personal data. However, the broad framing of the public interest exception could be used as a workaround to this requirement.
Considering the DPA purely as written does not take account of the political and economic pressures which could influence how it is enforced in practice. The Act’s impact could largely turn on its robust and independent implementation by the newly created Kenyan Data Protection Commissioner.
Additionally, tech companies have shown themselves willing to risk large fines from data regulators in the Global North. It remains to be seen whether enforcement will be more successful in the Kenyan context, which lacks the economic strength of the European Single Market.
Data protection is only one part of this jigsaw. The regulation of political speech should also factor into any consideration of the history of data-driven politics in Kenya. Content moderation is a thorny issue, and blunt instruments are best avoided in this space.
At the same time, Facebook whistle-blower Frances Haugen’s recent testimonies demonstrate how the current model of platform governance is unsustainable, especially in combustible political contexts.
An important, very recent development in this context is the set of claims being brought against Facebook in the UK and US on behalf of the Rohingya people. In Kenya, the non-profit technology company Ushahidi has shown, through its Umati project, how hate speech has spread online in recent elections.
In future, it will be interesting to see how Kenya reacts to the international growth of online harms regulation, such as the UK’s Online Safety Bill or Germany’s NetzDG, which aim to tackle the tension between freedom of expression and online harm.
Accountability – London calling?
The Cambridge Analytica scandal also illustrates the global reach of the UK’s data protection regime.
US citizen David Carroll – of Netflix documentary fame – sued Cambridge Analytica in the English High Court to recover his data from the now defunct company and complained successfully to the UK Information Commissioner’s Office.
The ICO in turn prosecuted the UK-based parent company of Cambridge Analytica. This was before the coming into force of the GDPR, under which sanctions are much stronger.
In the absence of a comprehensive online harms regime in Kenya, it is worth speculating about how the UK Online Safety Bill (OSB) could provide redress for Kenyan citizens in future. Section 3 of the OSB provides for wide application outside of the UK. To be caught by its provisions, a user-to-user service or search service must have links with the UK. A service has links with the UK if it has a significant number of UK users.
There has not been much indication so far of what constitutes a “significant” number of users. However, British-Kenyans living in the UK who feel that an international tech company is failing in its duty of care to protect them from hate speech on a Kenyan webpage could arguably be protected by the UK’s Bill.
As the Bill’s White Paper notes, the duty of care created by the OSB may help individuals bring private claims against tech companies, such as for negligently failing to protect them from online harm. It does this by creating a regulatory model which provides evidence and sets standards that could be relied on in a legal case. That being said, the OSB does not itself create new rights for individuals to sue companies, so any legal case would need to fit within the existing law of negligence.
It may seem surprising to turn to British legislation and courts to try to hold unscrupulous tech companies to account for harm caused by their actions in Kenya. However, Leigh Day has successfully represented Kenyan claimants in English courts for a variety of wrongs: from horrific colonial violence inflicted during the Mau Mau Emergency, to recent alleged victims of sexual and physical abuse at avocado farms in Murang’a County.
It is hoped that recent Kenyan regulatory and legislative developments will protect Kenyan citizens from the type of digital colonialism and exploitation described in this article, particularly with the 2022 elections looming large.
Yet if the judicial framework in Kenya is not yet ready to provide effective solutions to victims of online harm, then bringing such cases in the UK may provide an avenue to enable victims to obtain justice and redress.