News

May 21, 2021

Algorithmic Apartheid? African Lives Matter in Responsible AI Discourse.

What happens when machines are not given enough data to learn and accurately represent those on the African continent? 

Scholarship is on the rise revealing rampant racism in decision-making software. Institutions and big tech companies use algorithms to assist in recruitment and healthcare, to determine people’s creditworthiness, and to maintain protection online through various community standards and policies.

Artificial Intelligence (AI) is transforming lives all over the world. While such digital transformation is taking centre stage in digital rights debates, researchers argue that algorithmic systems can lead to biased decisions, with the encoding of existing human biases into machine learning being the most significant underlying cause. 

Computer scientist and founder of the Algorithmic Justice League (AJL), Joy Buolamwini, says a choice needs to be made, right now, in the digital sphere. “What’s happening is these technologies are being deployed widely without oversight, oftentimes covertly, so that by the time we wake up, it’s almost too late”, she cautions. Questions of representation are therefore central to data and algorithm ethics discourse, and rightfully so, because machines mirror society’s behaviour.

With such arguments at the fore, it becomes clear that previously marginalised communities are more likely to experience algorithmic biases. Africans continue to face discrimination offline in various sectors, and if machines continue to consume and process biased data, eliminating existing inequalities could become an even harder challenge. The time to start dissecting data and algorithm biases in Africa is therefore now, and digital rights activists need to prioritise this.

Algorithmic biases should concern Africa right now! 

Algorithms are simply defined as a process or set of rules to be followed. They aim to solve problems and make our lives easier. Everything we do in our daily lives, to an extent, follows an algorithm: from waking up to having breakfast and going to sleep. Our lives are designed to follow certain patterns or rules, and the same happens in machine learning. In AI, algorithms can be any form of automated instructions.
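To make that definition concrete, the short Python sketch below shows an algorithm in its simplest sense: a fixed set of hand-written rules applied in order. The scenario, field names and thresholds are hypothetical and purely illustrative; they are not drawn from any real credit-scoring system.

    # A minimal sketch of an algorithm as an explicit set of rules.
    # Field names and thresholds are hypothetical, for illustration only.
    def simple_credit_rule(monthly_income: float, existing_debt: float) -> str:
        """Follow fixed, hand-written rules in order and return a decision."""
        if monthly_income <= 0:
            return "reject"    # rule 1: no verifiable income
        if existing_debt / monthly_income > 0.5:
            return "reject"    # rule 2: debt-to-income ratio above a fixed cut-off
        return "approve"       # rule 3: otherwise approve

    print(simple_credit_rule(monthly_income=1200.0, existing_debt=300.0))  # -> approve

The point of the sketch is only that every outcome can be traced back to a rule someone wrote down; machine learning, discussed next, replaces those hand-written rules with patterns learned from data.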

Big tech companies use a process called machine learning (ML) to teach AI certain patterns about the society it is meant to ‘serve’. Quinyx’s Berend Berendsen defines ML as a set of algorithms that are fed structured data in order to complete a task.

Algorithms are constantly trained by feeding them data, such as biometrics (facial recognition, fingerprints, iris scans, voice, etc.), typing patterns, personal information such as dates of birth, and users’ behaviour online. IBM notes that, through the use of statistical methods, algorithms are trained to make classifications or predictions and to uncover key insights within data mining projects.
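As a rough illustration of what “training by feeding data” means, here is a minimal Python sketch using the scikit-learn library (which the article itself does not mention). The features, labels and numbers are invented: the model is simply shown a handful of labelled examples and then asked to classify a new one.

    # A minimal sketch of training a classifier on labelled examples.
    # All data here is tiny and synthetic; real systems learn from far larger,
    # and often far less transparent, datasets.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Each row is one hypothetical user: [typing_speed, hour_of_login]
    X_train = np.array([[45, 9], [50, 10], [48, 11], [80, 2], [75, 3], [82, 1]])
    y_train = np.array([0, 0, 0, 1, 1, 1])    # labels the model is asked to learn

    model = LogisticRegression()
    model.fit(X_train, y_train)               # "training": fit patterns in the data

    print(model.predict(np.array([[78, 2]])))  # classify a new, unseen example

Whatever patterns sit in the training data, representative or not, are exactly what the model reproduces, which is why the composition of that data matters so much.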

Platforms such as Facebook, Twitter, TikTok and Instagram not only use such algorithms to ‘tailor’ content to individual users, they also use algorithms to design and enforce their community standards and policies. However, for this to fully function, machines need to be provided with ample data.

AI ethicists argue that most algorithms do not represent the contexts we live in; for example, it is harder for them to recognise black people. Even though AI captures preferences and mimics human behaviour, such systems can inherit human biases. Therefore, algorithm developers should not only place emphasis on the technological design, but also closely monitor how data is prepared and processed. It is not just about feeding machines big data: if the data is inaccurate, unjust or unrepresentative, biased behaviours perpetuated through the black box will discriminate against millions of people in Africa.

Representation matters: Data colonialism and algorithmic biases 

In their book, The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism, scholars Nick Couldry and Ulises A. Mejias refer to this data colonialism as a newer, digital form of land grabbing. “The acquisition and construction of data for corporate use, that’s the land grabbing going on,” says Couldry, hinting that colonialism is the only word that can describe how big tech companies are utilising people’s data.

Colonialism is not a new concept in Africa, and people who already form part of previously marginalised communities risk being enveloped and trapped in emerging AI apartheid systems. This is already visible across various platforms. Africa has an estimated population of over 1.3 billion people across 54 countries, many of whom live in internet deserts. And although most African countries use English, French, Spanish or Portuguese as official languages, the continent is home to over 2,000 languages.

The question that should begin to concern digital activists on the continent is: with such diversity in Africa, whose knowledge is represented, and how is it disseminated through AI?

Voice-based interfaces like Google’s ‘Hey Google’, Amazon’s ‘Alexa’ and other virtual assistants are taught various languages. Currently, Apple’s ‘Siri’ speaks nine different English accents, the main ones being American, Australian, British, Indian, Irish and ‘South African’. Facebook currently uses AI to proactively detect hate speech in 40 languages; however, most voice assistants do not support African languages. Language inequalities risk excluding certain members of society.

Responsible AI: African lives matter in the digital sphere 

If we reinforce the wrong kind of machine learning, then we bolster past and current societal biases. Geographical biases in AI can harm those in the global south, and we need to constantly ask: who is teaching machines, and what type of knowledge is dominant?

AI is serving as a gatekeeper in access to healthcare, in hiring processes and in predictive policing. Much of what machines are fed about black people, however, comes from specific geographical areas. The data that currently teaches machines about representation might not be as inclusive and just as we would like it to be. Machines are taught by humans, and sadly the biases embedded in society make their way into decision-making software.

Algorithms don’t just follow rules, they follow patterns. With facial recognition, for example, misidentification is common because the original models were largely designed around white, male faces. Humans are not designed to fit into a single model, and the sector needs to confront that. Researchers have proposed algorithmic audits as one method that can reduce such biases.

In a research paper titled, Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing, researchers propose that these audits should serve “as a mechanism to check that the engineering processes involved in AI system creation and deployment meet declared ethical expectations and standards, such as organizational AI principles”.  
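An end-to-end audit of the kind the paper describes covers the whole engineering process, but one small, concrete step within it can be illustrated in a few lines of Python. The sketch below, using entirely invented records, compares a system’s error rates across two groups; a large gap is a signal that the system treats the groups unequally and needs further scrutiny.

    # A minimal, illustrative disparity check: compare error rates per group.
    # The records are hypothetical; a real audit would use the system's actual
    # predictions and outcomes and follow a documented framework.
    from collections import defaultdict

    # (group, model_prediction, true_outcome)
    records = [
        ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
        ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
    ]

    stats = defaultdict(lambda: [0, 0])        # group -> [errors, total]
    for group, predicted, actual in records:
        stats[group][0] += int(predicted != actual)
        stats[group][1] += 1

    for group, (errors, total) in stats.items():
        print(f"{group}: error rate {errors / total:.2f}")
    # group_a: error rate 0.25
    # group_b: error rate 0.50  <- the gap itself is the finding to investigate

This is only one narrow check among many an audit would run; the value lies in making such comparisons routine before and after a system is deployed.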

Perhaps now is the time for African governments to start discussing and implementing frameworks that deal with algorithmic auditing. Additionally, in order to fully tackle algorithmic biases, users need to be educated on how their data is processed and how machines translate and utilise it.

Those of us on the African continent are not just users. Our data is providing big tech companies with insights into our behavioural patterns. If such data is not inclusive and just, current algorithmic structures could reinforce AI apartheid systems. 


By Emsie Erastus | Paradigm Initiative Digital Rights and Inclusion Media Fellow 2021

