Racist & Sexist Computers - More Reasons to Use HUMAN Super Recognisers

On 16th July, the Washington Post reported:

As part of a recent experiment, scientists asked specially programmed robots to scan blocks with people’s faces on them, then put the “criminal” in a box. The robots repeatedly chose a block with a Black man’s face.

Those virtual robots, which were programmed with a popular artificial intelligence algorithm, were sorting through billions of images and associated captions to respond to that question and others, and may represent the first empirical evidence that robots can be sexist and racist, according to researchers. Over and over, the robots responded to words like “homemaker” and “janitor” by choosing blocks with women and people of color.

The study, released last month and conducted by institutions including Johns Hopkins University and the Georgia Institute of Technology, shows that the racist and sexist biases baked into artificial intelligence systems can translate into robots that use them to guide their operations.

See the full report via the link below. This is yet more reason to use HUMAN super recognisers, both to identify people AND to verify identifications made by AI.

Robots trained on AI exhibited racist and sexist behavior - The Washington Post