The social network Twitter has announced a new contest inviting researchers and hackers to identify and fix the racial bias in the algorithm it uses to crop images uploaded by users.
As announced on Twitter, the company that owns the social network has launched a new challenge for researchers, programmers and hackers: to identify and resolve the apparent racial bias that its algorithm introduces when cropping the images users upload to their posts.
In a post published on its blog, Twitter also explained that the bug bounty is intended to “identify potential harms of this algorithm, beyond what we could find ourselves”.
The winner of the challenge will receive a prize of $3,500, around 3,000 euros, and the runners-up in the competition will also be rewarded, with prizes of between $500 and $1,000.
Bug bounties, or “bug hunting” challenges, are competitions launched by companies that reward those who find bugs or security flaws in their technological infrastructure.
In a post published on its Twitter profile, the company announced the challenge:
Calling all bounty hunters – it’s officially go time! We’ve just released the full details of our algorithmic bias bounty challenge which is open through August 6. For more details on the challenge, head over to our blog 👇 https://t.co/foXUdMGwRc
— Twitter Engineering (@TwitterEng) July 30, 2021
“With this challenge, we aim to set a precedent at Twitter and across the industry for the proactive, community-driven identification of algorithmic harms,” the company added.
Last year, Business Insider recalls, doctoral student Colin Madland drew attention to this kind of problem by denouncing the bias of a Zoom algorithm that erased the faces of Black users when a virtual background was applied during video calls.
The problem of algorithmic bias, which Twitter has been struggling with for some years, concerns the choices made by the automatic tool used to crop user-uploaded images — choices that appear to be racially biased against Black people.
When choosing the most interesting part of an image for the post preview, Twitter's machine-learning algorithm privileges, for example, people's heads over their necks or foreheads.
The problem is that when an image contains the faces of people of different ethnicities, the algorithm seems invariably to choose the faces of white people as the “most interesting part” of the image.
Alerted by Colin Madland's post, several Twitter users ran tests on the algorithm's behavior, including programmer Tony Arcieri, who tried uploading an image containing photos of former president Barack Obama and US Senator Mitch McConnell.
Arcieri tested various combinations of Obama's and McConnell's positions, including a color-inverted version (and even changed the colors of the two men's ties). Invariably, Twitter's algorithm chose the senator's image over the former president's.
Other users tried to replicate Arcieri's experiment to see whether the bias in the microblogging network's algorithm was real. Apparently, it is.
Over the last few months, several analysts have tried to explain why Twitter's algorithm seems to favor white people. According to some of these experts, the reason is simple: algorithms are not created by machines but programmed by people, and Twitter's programmers are allegedly mostly white.
But this approach doesn't explain how the programmers would have introduced this peculiar preference of theirs into the algorithm, nor why Twitter cannot identify the lines of code responsible for the bias.
Another, more sophisticated explanation holds that the algorithm's artificial intelligence, based on machine learning, has probably identified patterns in the images that proved most appealing to Twitter's target audience, and “learned” to highlight white people in order to capture their attention. A question of numbers, therefore.
According to a third hypothesis, the explanation may be simpler still: the algorithm may be analyzing the lighting of the image and favoring its lighter areas. But such a bias would seem, at first glance, easy to resolve.
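To see why this third hypothesis would produce the observed behavior, consider a minimal sketch (this is purely illustrative and is not Twitter's actual cropping code): a cropper that simply picks the brightest window of an image would systematically prefer lighter regions, and therefore lighter faces.

```python
# Illustrative sketch of the "brightness" hypothesis only.
# NOT Twitter's real algorithm: a toy cropper that selects the
# horizontal band of an image with the highest mean brightness.

def brightest_crop(image, crop_height):
    """Return the starting row of the crop_height-row window with the
    highest mean brightness. `image` is a list of rows of grayscale
    pixel values (0 = black, 255 = white)."""
    best_start, best_mean = 0, -1.0
    for start in range(len(image) - crop_height + 1):
        window = image[start:start + crop_height]
        mean = sum(sum(row) for row in window) / (crop_height * len(window[0]))
        if mean > best_mean:
            best_start, best_mean = start, mean
    return best_start

# Toy image: a dark region (rows 0-1) above a light region (rows 2-3).
toy = [
    [20, 20, 20],
    [30, 30, 30],
    [220, 220, 220],
    [230, 230, 230],
]
print(brightest_crop(toy, 2))  # prints 2: the cropper picks the lighter rows
```

Under this hypothesis, the fix really would be straightforward: replace raw brightness with a saliency measure that does not depend on skin tone.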
Whatever its origin, the fact is that Twitter has so far failed to identify and resolve the problem. For a few thousand dollars, the company now hopes to do so with the help of a community of hackers, programmers, researchers and, of course, activists committed to banishing racism from its algorithm.