How to stop AI from recognizing your face in selfies

Fawkes has been downloaded nearly 500,000 times from the project website. One user has also built an online version, making it even easier for people to use (though Wenger will not vouch for third parties using the code, warning: “You don’t know what is happening to your data while that person is processing it”). There is no phone app yet, Wenger says, but there is nothing stopping someone from making one.

Fawkes may keep a new facial recognition system from recognizing you, such as the next Clearview. But it will not break existing systems that have already been trained on your unprotected images. The technology is improving all the time, however. Wenger thinks that a tool developed by Valeriia Cherepanova and her colleagues at the University of Maryland, one of the teams presenting at ICLR this week, might address this issue.

Called LowKey, the tool builds on Fawkes by perturbing images using a more powerful adversarial attack, one that also fools pretrained commercial models. Like Fawkes, LowKey is available online.
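The general mechanism behind this kind of adversarial “cloaking” can be sketched in a few lines. This is an illustrative toy only, not the actual Fawkes or LowKey algorithm: the real tools perturb pixels to fool deep face-recognition networks, while here a hypothetical linear “face matcher” (score = w · x) stands in so the idea of a small, bounded perturbation is easy to see.

```python
# Toy sketch of adversarial cloaking, assuming a made-up linear face matcher.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pretrained model: score = w . x, score > 0 means "same person".
w = rng.normal(size=64)

# A "selfie" feature vector that the model currently matches.
x = rng.normal(size=64)
if w @ x < 0:
    x = -x  # flip so the clean example starts out as a confident match

# FGSM-style cloak: step against the gradient of the score, capping the
# per-feature change at epsilon so the edit stays tiny.
epsilon = 0.5
delta = -epsilon * np.sign(w)  # gradient of (w @ x) w.r.t. x is just w
x_cloaked = x + delta

print("clean score:  ", w @ x)          # positive: recognized
print("cloaked score:", w @ x_cloaked)  # pushed toward "no match"
print("largest change:", np.abs(delta).max())  # bounded by epsilon
```

The key property, shared with the real tools, is that every individual change is capped (here by `epsilon`), so a human sees essentially the same image while the model’s score is pushed in the wrong direction.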

Erfani and her colleagues added a bigger twist. Together with Daniel Ma of Deakin University and researchers at the University of Melbourne and Peking University, Erfani developed a way to turn images into “unlearnable examples,” which effectively make an AI ignore your selfies entirely. “I think it’s great,” Wenger says. “Fawkes trains a model to learn something wrong about you, and this tool trains a model to learn nothing about you.”

My image (top), scraped from the internet, becomes an unlearnable example (bottom) that a facial recognition system will ignore. (Image credit: Sarah Erfani, Daniel Ma, and colleagues)

Unlike Fawkes and its successors, unlearnable examples are not based on adversarial attacks. Instead of introducing changes that force an AI to make a mistake, Ma’s team adds tiny changes that trick the AI into ignoring an image during training. When presented with the image later, its evaluation of what is in it will be no better than a random guess.

Unlearnable examples may prove more effective than adversarial attacks, since they cannot be trained against. The more adversarial examples an AI sees, the better it gets at recognizing them. But because Erfani and her colleagues stop an AI from training on the images in the first place, they claim this will not happen with unlearnable examples.
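The “learn nothing about you” effect can be illustrated with a shortcut-learning toy. This is not the error-minimizing-noise algorithm from the paper: instead of imperceptible pixel noise, it plants an exaggerated label-correlated pattern in one feature of a made-up training set, so the mechanism is visible. The model latches onto the pattern, barely learns the real signal, and its accuracy on clean test data drops.

```python
# Toy sketch of the shortcut effect behind unlearnable examples (illustrative
# only; the perturbation here is exaggerated, not imperceptible).
import numpy as np

rng = np.random.default_rng(1)
d = 50  # feature dimension of our toy "images"

def make_data(n):
    """Binary labels; only feature 0 genuinely carries the label."""
    y = rng.integers(0, 2, size=n)
    X = rng.normal(size=(n, d))
    X[:, 0] += 2.0 * (2 * y - 1)
    return X, y

def train_logreg(X, y, steps=500, lr=0.1):
    """Plain batch gradient descent on logistic loss."""
    w = np.zeros(d)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return float(np.mean(((X @ w) > 0) == y))

X_train, y_train = make_data(400)
X_test, y_test = make_data(400)

# Normal training: the model finds the real signal in feature 0.
w_clean = train_logreg(X_train, y_train)

# "Unlearnable" training set: feature 1 now perfectly predicts the label, so
# gradient descent drives the loss to near zero using it alone, and the real
# signal in feature 0 is barely learned.
X_poison = X_train.copy()
X_poison[:, 1] += 10.0 * (2 * y_train - 1)
w_poison = train_logreg(X_poison, y_train)

print("train acc on poisoned set:", accuracy(w_poison, X_poison, y_train))
print("clean-model test acc:     ", accuracy(w_clean, X_test, y_test))
print("poisoned-model test acc:  ", accuracy(w_poison, X_test, y_test))
```

Training accuracy on the poisoned set stays near perfect, which is exactly why the approach is hard to train against: from the model’s perspective, nothing went wrong.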

Still, Wenger concedes that an ongoing battle is under way. Her team recently noticed that Microsoft Azure’s facial recognition service was no longer fooled by some of their images. “It suddenly somehow became robust to cloaked images that we had generated,” she says. “We don’t know what happened.”

Microsoft may have changed its algorithm, or the AI may simply have seen so many images from people using Fawkes that it learned to recognize them. Either way, Wenger’s team released an update to their tool last week that works against Azure once again. “This is another cat-and-mouse arms race,” she says.

For Wenger, this is the story of the internet. “Companies like Clearview are capitalizing on what they perceive to be freely available data and using it to do whatever they want,” she says.

Regulation might help in the long run, but that will not stop companies from exploiting loopholes. “There is always going to be a disconnect between what is legally acceptable and what people actually want,” she says. “Tools like Fawkes fill that gap.”

“Let’s give people some power that they didn’t have before,” she says.
