Exclusion risk: experts warn of discriminatory AI algorithms


The data used to train artificial intelligence is not yet sufficiently inclusive, creating a high risk of exclusion and discrimination. Experts warn that this lack of inclusive data could lead to discriminatory algorithms in education, the labour market, healthcare and (online) security. This is according to a new report by knowledge institute Movisie, commissioned by the Ministry of Education, Culture and Science.

Artificial intelligence is frequently deployed in the four fields studied (education, the labour market, healthcare and security), but the data on which AI systems are trained is insufficiently inclusive. Training often relies on datasets that lack diversity, in which LGBTI+ people, among others, are underrepresented. As a result, biases can enter the data, allowing systems to discriminate in each of these domains. In addition, accessible information about LGBTI+ persons is scarce, as algorithms often misclassify such content as pornography.

Lack of protection and privacy

AI copies processes from society and thereby also adopts the stereotypes and prejudices of its developers, which can lead to discrimination. This is a major concern, experts stress, because AI systems that train themselves can even reinforce that discrimination. They also warn about the lack of protection against online violence and the lack of clarity about how data on LGBTI+ persons is stored and used, which can violate the right to privacy. For example, an unsecured note about sexual orientation in an LGBTI+ person's file may be visible to others, even though this information is not always relevant.

Teachers can be relieved of administrative tasks

Besides risks, the researchers also see opportunities. Teaching professionals can be supported in administrative tasks, leaving more time for teaching. Recruitment and selection free of discrimination and prejudice is also seen as an opportunity. Within the healthcare theme, experts see opportunities to make care more inclusive and of higher quality by giving LGBTI+ persons a greater say. On the security theme, the extent of discriminatory comments online could be measured more accurately.

The report highlights that there is still little research and knowledge on the relationship between artificial intelligence and LGBTI+ empowerment. More research on inclusive AI is needed, according to the experts. They also call for a multi-stakeholder approach and for involving LGBTI+ communities from the outset in order to achieve more inclusive AI systems.

For more information, you can download the report Artificial intelligence and LGBTI+ empowerment: an exploration of opportunities and risks, stakeholders and possible interventions in four social sub-areas here.


© Dutch Tech On Heels - 2024