
February 16, 2022

Cornell Researchers Train Physical Systems, Revolutionize Machine Learning


A Cornell research group led by Prof. Peter McMahon, applied and engineering physics, has successfully trained a variety of physical systems to perform machine learning computations in the same way a computer does. The researchers achieved this by turning physical systems, such as an electrical circuit or a Bluetooth speaker, into physical neural networks — physical counterparts of the brain-inspired algorithms that allow computers to recognize patterns in artificial intelligence.

Machine learning is at the forefront of scientific endeavors today. It is used for a host of real-life applications, from Siri to search optimization to Google Translate. However, energy consumption is a major issue in the field: running the neural networks that form the basis of machine learning on conventional chips uses an immense amount of energy, and this inefficiency severely limits how far machine learning can scale.

The research group has taken the first step towards solving this problem by focusing on the convergence of the physical sciences and computation. 

The physical systems that McMahon and his team have trained — a simple electrical circuit, a speaker and an optical network — have identified handwritten numbers and spoken vowel sounds with a high degree of accuracy, and more efficiently than conventional computers.

According to the group's recent paper in Nature, “Deep Physical Neural Networks Trained with Backpropagation,” conventional neural networks are usually built by stacking layers of mathematical functions. This approach belongs to a subset of machine learning known as deep learning, in which the algorithms are loosely modeled on the human brain and the networks are expected to learn in a broadly similar way.
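The phrase “layers of mathematical functions” can be taken quite literally. The sketch below is a hypothetical illustration in Python with NumPy, not code from the paper: it stacks three simple functions to turn a flattened image of a handwritten digit into ten scores, one per possible digit. In the physical networks, a tunable physical process, such as the speaker or optical setup described above, plays the role of each of these layers.

```python
import numpy as np

# Hypothetical sketch: a conventional deep network is a stack of simple
# mathematical functions ("layers"). Sizes and weights here are arbitrary
# and untrained; the point is only the structure.

rng = np.random.default_rng(0)

def layer(x, weights):
    # One layer: a linear transformation followed by a simple nonlinearity.
    return np.tanh(weights @ x)

# Three stacked layers map a flattened 28x28 image (784 pixels) to
# 10 scores, one for each possible digit.
w1 = rng.normal(size=(128, 784))
w2 = rng.normal(size=(64, 128))
w3 = rng.normal(size=(10, 64))

image = rng.random(784)  # stand-in for a real handwritten-digit image
scores = layer(layer(layer(image, w1), w2), w3)
print("predicted digit:", int(np.argmax(scores)))
```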

“Deep learning is usually driven by mathematical operations. We decided to make a physical system do what we wanted it to do – more directly,” said co-author and postdoctoral researcher Tatsuhiro Onodera. 

A physical neural network built using a speaker. Credit: Robert Kurcoba/Cornell University

This novel approach results in a much faster and more energy-efficient way of executing machine learning operations, offering an alternative to the energy-intensive demands of conventional neural networks.

It might seem as though this energy-efficiency advantage would be limited to small computations, which do not require much energy to begin with. In fact, the opposite holds: the larger the computation, the greater the efficiency gain, according to Onodera.

The potential of these physical neural networks extends beyond saving energy. According to McMahon, larger and more complex physical systems would have the ability to operate with much bigger data sets and with greater accuracy. 

Further, it is possible to connect a series of different physical systems together. For example, a speaker could be connected with an electrical circuit to obtain a more complex system with greater potential. 

“As you make the system bigger, it is more intelligent,” Onodera said. “The range of things it can accomplish is more versatile.”

Most of these physical systems can perform all of the functions needed for a machine learning computation on their own, just as conventional systems do. When fed handwritten numbers for image classification, for example, the physical networks extract the spatial features of each image and determine which digit it shows, in the same way a conventional neural network would.

The team also theorizes that many problems associated with training conventional networks — such as the error signal unintentionally shrinking or blowing up as it is fed back through the network — would go away in the case of physical networks.
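The “feedback process” here refers to backpropagation, the procedure named in the paper's title, in which an error signal is passed backward through the layers to adjust them. A well-known difficulty, which this passage appears to describe, is that the signal can shrink toward zero or grow without bound as it crosses many layers. The toy calculation below is a hypothetical illustration of that effect, not an analysis from the study.

```python
# Toy illustration of an error signal shrinking as it is passed backward
# through many layers. The 0.5 scaling per layer is an arbitrary stand-in
# for the derivative terms a real backpropagation pass multiplies together.

error_signal = 1.0
per_layer_factor = 0.5

for depth in range(1, 21):
    error_signal *= per_layer_factor
    if depth in (5, 10, 20):
        print(f"after {depth} layers the error signal is {error_signal:.6f}")

# Output:
# after 5 layers the error signal is 0.031250
# after 10 layers the error signal is 0.000977
# after 20 layers the error signal is 0.000001
```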

“If you look at each individual component [of the physical system], it might be doing something completely different,” said co-author and postdoctoral researcher Logan Wright. “It gets from Point A to Point B, but the trajectory is potentially completely different.” 

Even if the physical systems undergo some wear and tear that disrupts their computational abilities, they can always be retrained, nullifying the ill effects of any physical damage.

Currently, the physical neural networks are only capable of a feed-forward process, in which signals travel through the system in a single pass. They cannot yet operate like recurrent neural networks, which feed their outputs back into themselves and update their internal state as new data arrives. Onodera, however, expressed optimism about training these systems to execute such a recurrent feedback process.
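The difference can be sketched in a few lines. In a feed-forward pass the signal travels through once and nothing is remembered; a recurrent network feeds its output back in as part of the next input, carrying a state over time. The code below is a hypothetical Python illustration of that distinction, with arbitrary sizes and weights, and is not drawn from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
w_in = rng.normal(size=(8, 4))    # input weights (sizes chosen arbitrarily)
w_back = rng.normal(size=(8, 8))  # feedback weights, used only by the recurrent version

def feed_forward(x):
    # Signal passes through once; the result depends only on this input.
    return np.tanh(w_in @ x)

def recurrent(inputs):
    # The output at each step is fed back in, so the network keeps a state
    # that depends on everything it has seen so far.
    state = np.zeros(8)
    for x in inputs:
        state = np.tanh(w_in @ x + w_back @ state)
    return state

sequence = [rng.random(4) for _ in range(5)]
print(feed_forward(sequence[0]))  # uses a single input
print(recurrent(sequence))        # uses the whole sequence, step by step
```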

Physical neural networks are still a novel approach to machine learning, but they could change the face of the field in the future. One key reason for this potential, Wright wrote, is that these systems replicate the way our brains work more closely than other kinds of neural networks do.

Different types of physical systems are suited to different kinds of operations and learning computations. It might, however, take a while for these physical networks to integrate widely into a machine learning ecosystem that is still largely driven by conventional neural networks.

“The brain evolved, [to the point] where the physics and the algorithms are all intertwined,” Wright said. “This is what we are moving closer to – physical algorithms instead of just hardware or software.”