A diverse team of experts develops a defense system for neural networks

A diverse team of engineers, biologists, and mathematicians at the University of Michigan has developed a neural network defense system modeled on the adaptive immune system. The system can defend neural networks against a range of adversarial attacks.

Malicious actors can subtly alter the inputs of a deep learning algorithm to steer it toward the wrong output, a serious problem for applications such as image identification, machine vision, natural language processing (NLP), language translation, and more.
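To make the idea concrete, here is a minimal sketch of one common way such an input perturbation can be crafted, assuming a generic PyTorch image classifier; the fast-gradient-sign step shown is an illustration only and not the specific attacks or defenses studied in this work.

```python
# A minimal sketch of an adversarial perturbation (fast-gradient-sign style),
# assuming a PyTorch classifier `model` and a correctly labeled example `x, y`.
# Illustrative only; not the attacks or defenses evaluated in the RAILS paper.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return x plus a small perturbation that tends to flip the model's
    prediction while keeping the input visually almost unchanged."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then keep pixels valid.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```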

Robust immunity-based learning system

The newly developed defense system is called the Robust Adversarial Immune-inspired Learning System (RAILS). The paper was published in IEEE Access.

Alfred Hero, the John H. Holland Distinguished University Professor at the University of Michigan, co-led the work.

“RAILS represents the first-ever adversarial learning approach that draws inspiration from the adaptive immune system, which works differently from the innate immune system,” Hero said.

The team found that deep neural networks, which are already inspired by the brain, can also mimic the biological process of the mammalian immune system. This immune system generates new cells designed to defend against specific pathogens.

Indika Rajapakse, associate professor of computational medicine and bioinformatics, also co-led the study.

“The immune system is made for surprises. It has an amazing design and will always find a solution,” Rajapakse said.

Mimicking the immune system

RAILS mimics the natural defenses of the immune system, allowing it to identify and address suspicious inputs to a neural network. The biology team first studied how the adaptive immune system of mice responds to an antigen, and then built a model of that process.
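The adaptive immune system responds to a specific pathogen by generating many mutated variants of its defensive cells and keeping the ones that bind best. As a rough analogue of how such a process could classify a suspicious input, the sketch below clones and mutates stored exemplars, keeps the clones closest to the query in feature space, and takes a majority vote. The function name and parameters are assumptions for illustration; this is not the RAILS algorithm as published.

```python
# Conceptual clone-mutate-select sketch for labeling a query input.
# This is an illustration of the immune-system analogy described above,
# not the published RAILS implementation.
import numpy as np

def immune_inspired_label(query_feat, exemplar_feats, exemplar_labels,
                          n_clones=20, mutation_scale=0.1, n_survivors=50,
                          seed=0):
    rng = np.random.default_rng(seed)
    clones, labels = [], []
    for feat, lab in zip(exemplar_feats, exemplar_labels):
        # "Clonal expansion": many noisy copies of each stored exemplar.
        noise = rng.normal(scale=mutation_scale, size=(n_clones, feat.shape[0]))
        clones.append(feat + noise)
        labels.append(np.full(n_clones, lab))
    clones = np.vstack(clones)
    labels = np.concatenate(labels)
    # "Selection": keep the clones with the highest affinity to the query,
    # i.e. the smallest distance in feature space.
    dist = np.linalg.norm(clones - query_feat, axis=1)
    survivors = np.argsort(dist)[:n_survivors]
    # The surviving clones vote on the predicted label for the query.
    vals, counts = np.unique(labels[survivors], return_counts=True)
    return vals[np.argmax(counts)]
```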

Stephen Lindsly, then a doctoral student in bioinformatics, analyzed the resulting data. Lindsly also acted as a translator between the biologists and the engineers, which allowed Hero’s team to model the biological process on computers by building the biological mechanisms into the code.

The RAILS defenses were then tested against adversarial inputs.

“We weren’t sure we really captured the biological process until we compared the learning curves from RAILS to those taken from experiments,” Hero said. “They were exactly the same.”

RAILS outperformed two of the most common machine learning approaches currently used to combat adversarial attacks: Robust Deep k-Nearest Neighbor and convolutional neural networks.
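For context, deep k-nearest-neighbor methods generally classify a query by majority vote among the training examples whose deep feature embeddings are closest to it. The sketch below illustrates that generic idea only; it is not the Robust Deep k-Nearest Neighbor defense compared against in the paper.

```python
# Generic deep k-nearest-neighbor classification sketch: vote among the
# training points closest to the query in the network's feature space.
import numpy as np

def deep_knn_predict(query_feat, train_feats, train_labels, k=5):
    # Distances are measured between deep feature embeddings, not raw pixels.
    dist = np.linalg.norm(train_feats - query_feat, axis=1)
    nearest = np.argsort(dist)[:k]
    vals, counts = np.unique(train_labels[nearest], return_counts=True)
    return vals[np.argmax(counts)]
```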

Ren Wang is a researcher in electrical and computer engineering. He was largely responsible for the development and implementation of the software.

“A very promising part of this work is that our general framework can defend against different types of attacks,” Ren Wang said.

The researchers then used image identification as a test case, evaluating RAILS against eight types of adversarial attacks across several datasets. RAILS showed improvement in all cases, including protection against the Projected Gradient Descent attack, the most damaging type of adversarial attack. RAILS also improved overall accuracy.
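A Projected Gradient Descent attack repeatedly nudges an input along the loss gradient and then projects it back into a small ball around the original image. Below is a minimal sketch, assuming a PyTorch classifier and illustrative parameter values rather than the settings used in the paper.

```python
# Minimal Projected Gradient Descent (PGD) attack sketch for a PyTorch
# classifier. Parameter values are illustrative assumptions.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=0.03, alpha=0.007, steps=10):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            # Gradient-sign step, then project back into the epsilon-ball
            # around the clean input and clip to the valid pixel range.
            x_adv = x_adv + alpha * grad.sign()
            x_adv = torch.clamp(x_adv, x - epsilon, x + epsilon)
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```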

“It’s an amazing example of using math to understand this beautiful dynamic system,” Rajapakse said. “We may be able to take what we’ve learned from RAILS and help reprogram the immune system to work faster.”
