Artificial intelligence consumes a lot of energy. Hackers can make it consume more.
The attack: But this kind of neural network means that if you change the input (such as an image), you change the amount of computation needed to process it. As researchers at the Maryland Cybersecurity Center outline in a new paper, being presented at an international research conference this week, this opens a loophole that hackers can exploit. By adding small amounts of noise to a network's inputs, they made the inputs harder for the network to process, driving up its computational load.
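The mechanism can be illustrated with a toy sketch of input-adaptive ("early exit") inference, the kind of architecture the attack targets. Everything here is illustrative and not from the paper: the model is reduced to a sequence of stages, each of which produces a confidence score, and inference stops as soon as a stage is confident enough. Noise suppresses confidence, so a perturbed input passes the early exits and burns through more stages.

```python
def stages_used(noise, num_stages=8, threshold=9):
    """Toy early-exit inference loop (hypothetical, for illustration only).

    Each stage costs one unit of compute and emits a confidence score
    (in tenths, to keep the arithmetic exact). Inference exits as soon
    as confidence reaches the threshold; `noise` models an adversarial
    perturbation that suppresses confidence at every stage.
    """
    cost = 0
    for stage in range(num_stages):
        cost += 1
        confidence = 8 + stage - noise  # toy head: deeper stages are more confident
        if confidence >= threshold:
            break  # early exit: easy inputs stop here and save compute
    return cost

clean = stages_used(noise=0)  # exits early: 2 stages
noisy = stages_used(noise=4)  # small perturbation forces 6 stages, 3x the compute
```

The point of the sketch is only that the exit decision depends on the input: an attacker who can nudge the input past the early exits controls the network's compute and energy cost, which is exactly the loophole described above.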
When the attacker was assumed to have full information about the neural network, the researchers could max out its energy draw. When the attacker was assumed to have little to no information, they could still slow the network's processing and increase its energy use by 20% to 80%. The reason, the researchers found, is that the attacks transfer well across different types of neural networks. Designing an attack for one image-classification system is enough to disrupt many others, says Yiğitcan Kaya, a doctoral student and co-author of the paper.
Precautions: This kind of attack is still somewhat theoretical. Input-adaptive architectures are not yet commonly deployed in real-world applications. But the researchers believe that will quickly change under industry pressure to deploy lighter-weight neural networks, such as those used in smart-home and other IoT devices. Professor Tudor Dumitraş, who advised the research, says more work is needed to understand how much damage this kind of threat could do. But, he adds, the paper is a first step toward raising awareness: "What's important to me is to bring to people's attention that this is a new threat model, and this type of attack can be carried out."