On 25 May 2020, two months after the UK was put into lockdown due to the Covid-19 pandemic, George Floyd, an unarmed Black man, was murdered by a police officer in Minneapolis, USA. His death reignited the Black Lives Matter movement and sparked protests across the globe, bringing renewed attention to the fact that innocent Black people continue to fall victim to police brutality.

Although conversations about race are nothing new, the murder of George Floyd significantly challenged how people perceive racism and disrupted the idea that racism towards people of colour is a rarity. It is important to remember that George Floyd was not the first innocent person to be murdered by the police; he followed in the footsteps of the likes of Philando Castile, Sandra Bland and Trayvon Martin (a list that is not exhaustive). These repeated tragedies raise questions about the racial and unconscious biases that exist and demonstrate how, throughout history, such biases and discrimination have had a profoundly negative impact on people of colour.


Algorithmic bias

The issue of racism is not always overt and needs to be examined more closely. One crucial point is that bias is not restricted to human prejudice, whether conscious or unconscious. Bias is now also arising within technologies that were previously viewed as impartial. Joy Buolamwini, a computer scientist and digital activist, has found in her research that Artificial Intelligence (AI) systems can, through algorithmic bias, perpetuate racism, sexism, ableism and other harmful forms of discrimination, presenting significant threats to our society. In a TED Talk, she stated that “algorithms, like viruses, can spread bias on a massive scale at a rapid pace”.

Machine learning algorithms learn from examples, and what a model learns is determined by the examples it has been fed. Bias in the data will therefore lead to biased algorithms. This bias typically takes one of two forms: sampling bias and historical bias. Sampling bias occurs when part of the population is under-represented in the training data, so the model sees fewer examples of that group and makes more errors on it. For example, one key reason that Google’s image-recognition algorithms inadvertently identified and tagged Black people as gorillas was insufficient training data on Black faces. Historical bias, on the other hand, occurs when algorithms learn from historical data and reproduce discrimination that was once the norm but is unacceptable in present society. As a result, Black and other minority communities that have historically been unduly targeted by law enforcement are now at risk of being assigned higher recidivism scores by algorithms.
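To make sampling bias concrete, here is a minimal, illustrative Python sketch using purely synthetic data (numpy and scikit-learn assumed; none of the names or numbers come from any real system discussed in this article). A single classifier is trained on data in which one group is heavily under-sampled, and its error rate is then compared across groups.

```python
# Illustrative sketch of sampling bias on synthetic data.
# Assumes numpy and scikit-learn are installed; all values are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Simulate one demographic group. Each group's features sit at a
    different location in feature space, so a model needs enough
    examples of both groups to classify both well."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# Group A dominates the training data; group B is heavily under-sampled.
Xa, ya = make_group(5000, shift=0.0)
Xb, yb = make_group(100, shift=3.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh, equally sized samples from each group.
for name, shift in [("group A", 0.0), ("group B", 3.0)]:
    Xt, yt = make_group(2000, shift)
    print(f"{name} error rate: {1 - model.score(Xt, yt):.3f}")
```

On a typical run, the under-sampled group shows a markedly higher error rate than the well-represented group, even though the model performs well “on average”.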


Machine learning algorithms and the GDPR

AI technologies and machine learning algorithms are undoubtedly vulnerable to bias and prejudice. Data protection laws, such as the General Data Protection Regulation (GDPR), therefore require the sensitive information that these technologies often process to be protected effectively.

Due to the fast-paced nature of technological development, emergent technologies often fall into a legal grey area. For example, in response to the Covid-19 pandemic, A-Level exams in the United Kingdom were cancelled and an algorithm was used instead to determine grades. This algorithm could have violated Article 22 of the GDPR, which gives individuals the right not to be subject to a decision “based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her”.

Similarly, in August 2020, the UK Court of Appeal found that South Wales Police’s (SWP) use of automated facial recognition technology was unlawful and did not meet data protection standards. In particular, the SWP had failed to carry out an adequate Data Protection Impact Assessment (DPIA), which the law requires in order to systematically analyse, identify and minimise the data protection risks of a project or plan.

It follows that before adopting sophisticated modern technologies, it is always important to consider their data protection implications. This applies especially to technologies with profiling capabilities, because poor data protection programmes may result in discrimination and other adverse consequences for individuals.


Why diversity matters in data protection

The GDPR requires the processing of personal data to be fair. Although it contains no statutory definition of fairness, the GDPR draws a connection between fairness and discrimination: it obliges data controllers to implement measures during profiling to prevent “potential risks” such as discriminatory effects on individuals on the basis of, amongst other things, racial or ethnic origin. Racial bias and discrimination therefore have no place within the data protection framework. The best way to eliminate discrimination is to ensure there is no bias in the collection or processing of personal data, and diversity is crucial in providing the range and assortment of data needed to create unbiased datasets for algorithms to learn from.

As rapid technological growth has an increasing impact on our society, it is critical to identify biases in their infancy. Robust data privacy programmes within high-tech developments provide transparency and help expose tainted data. Involving an independent Data Protection Officer (DPO) to oversee the privacy aspects and data assessments of high-tech systems helps to minimise discrimination and violations of data protection law.
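As an illustration of the kind of simple check such a programme might run, the sketch below (pandas assumed; the field names “ethnicity” and “approved” are hypothetical) compares the rate of positive automated decisions across groups. A wide gap between groups is a signal that the underlying data or model warrants closer review.

```python
# Illustrative audit sketch: compare positive automated decisions by group.
# The field names ("ethnicity", "approved") are hypothetical; pandas assumed.
import pandas as pd

def selection_rates(df, group_col="ethnicity", decision_col="approved"):
    """Share of positive decisions per group; a wide gap between groups
    is a signal that the data or model warrants closer review."""
    return df.groupby(group_col)[decision_col].mean()

# Toy records standing in for logged automated decisions.
decisions = pd.DataFrame({
    "ethnicity": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved":  [1,   1,   1,   0,   1,   0,   0,   0],
})
rates = selection_rates(decisions)
print(rates)
print(f"max disparity: {rates.max() - rates.min():.2f}")
```

A real audit would, of course, rely on lawfully collected and properly safeguarded data and on more rigorous fairness metrics; the point here is simply that disparities can be surfaced with very little effort once the right oversight is in place.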


HewardMills – your global Data Protection Office

HewardMills is working to break racial stereotypes whilst simultaneously helping other companies comply with data protection laws. HewardMills is a data protection company that provides DPO and other data protection support for organisations. The company was founded by Dyann Heward-Mills, one of a small number of Black women in leadership roles in the industry, who has been appointed as an Ethics expert for the European Commission in research and innovation. As a Black-owned business, HewardMills stands out as a truly diverse organisation: 40% of its employees are Black, with 36% holding leadership roles. These statistics matter because they show members of the Black community that their voices are being heard in industries where they have often been drowned out.

As demonstrated above, AI technologies and machine learning algorithms are affecting Black communities, as well as those from other ethnic minorities. HewardMills, and other companies with a similar vision and make-up, can help pave the way for the safer use of technology and for better practices in how companies store personal data. The ethnic diversity and multi-disciplinary nature of HewardMills enables the company to spot racial biases arising under the GDPR and other data protection laws more quickly than many other organisations. The company can therefore play an important role in efforts to eliminate racial discrimination in technology.

Since launching in 2018, HewardMills has achieved a number of milestones. It has established operations in the UK, Singapore, Germany, Switzerland, Ireland, the USA and Ghana. The company is also the registered DPO service in over 70 jurisdictions and is able to deliver high-quality work thanks to its highly skilled and trained professionals. In addition, it has published articles in Privacy Laws & Business on facial recognition, in SC Media UK on using data protection to safeguard cultural assets, and with the IAPP on protecting children’s data.

Additional blog contributions: Helga Turku, Data Protection and Privacy Director, and Peter Boaz, Data Protection and Privacy Consultant.