Image Credit: pixabay.com
Microblogging giant Twitter has announced a bug bounty program aimed at finding biases in its artificial intelligence (AI) algorithms. The company is offering rewards of up to $3,500 for identifying biases in its image-cropping algorithm, building on an initiative it announced in May.
“We were inspired by how the research and hacker communities helped the security field establish best practices for identifying and mitigating vulnerabilities in order to protect the public,” the company said in a blog post. “We want to cultivate a similar community, focused on ML ethics, so that we can identify a broader range of issues than we would be able to on our own. With this challenge, we aim to set a precedent at Twitter, and in the industry, for proactive and collective identification of algorithmic harms.”
The social media platform is sharing the code for the saliency model it uses to generate cropped versions of images on the platform. “Successful entries will consider both quantitative and qualitative methods in their approach,” the post said. Community members have been asked to submit their entries through HackerOne, a vulnerability coordination and bug bounty platform.
Twitter will announce the winners on August 8 at the AI Village at this year’s DEF CON conference. The winners will be invited to present their work at the event, and the first-place winner will receive $3,500.
The second- and third-place winners will receive $1,000 and $500, respectively. In addition, $1,000 awards will go to the “most innovative” and “most generalizable” entries, the latter being the entry that applies to the widest range of AI algorithms.
Technology companies such as Twitter regularly run bug bounty programs to keep their platforms secure, but programs aimed at finding algorithmic bias are fairly new.
The move is in line with remarks made by company CEO Jack Dorsey during the fourth-quarter earnings call in February. Dorsey proposed a marketplace approach to recommendation algorithms, giving users the opportunity to choose the type of algorithm they want to use.
“One of the things we raised last year, to address some of the issues facing Section 230 (of the US Communications Decency Act), is giving people more choice around the ranking algorithms they are using,” he said at the time. “You can imagine a more market-driven approach to algorithms, ones that we could host as well as participate in,” he added.
Bias and inaccuracy in the recommendation algorithms used by platforms such as Facebook, Twitter, and Google are also a major focus of upcoming regulation.
The controversy over Google’s dismissal of AI ethics researcher Timnit Gebru also raises the question of whether big tech companies are genuinely interested in uncovering algorithmic inaccuracies and biases, especially when doing so hurts their profits.