A study recently published in PLOS Computational Biology is the latest addition to the literature exploring how to make artificial neural networks (ANNs) more efficient by designing them to mimic human neurological patterns. In this new study, the authors found that designing ANNs as "spiking networks," that is, networks that transmit information between nodes in discrete bursts of activity rather than as a continuous stream of values, helps prevent them from "forgetting" previously learned data.
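To make the "bursts rather than a continuous stream" idea concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, the textbook building block of spiking networks. This is an illustration only, not the model from the study; the function name, parameters, and constants are all invented for the example.

```python
# Minimal leaky integrate-and-fire neuron (illustrative, not the study's model).
# The membrane potential v leaks toward rest, integrates input current, and
# emits a discrete spike when it crosses a threshold -- so information leaves
# the neuron as bursts (spikes), not as a continuous value.
def lif_spikes(current, v_rest=0.0, v_thresh=1.0, leak=0.9):
    v = v_rest
    spikes = []
    for i in current:
        v = leak * v + i          # decay toward rest, then integrate the input
        if v >= v_thresh:
            spikes.append(1)      # fire a spike...
            v = v_rest            # ...and reset the potential
        else:
            spikes.append(0)
    return spikes

# A constant drive charges the neuron up until it fires, resets, and repeats,
# producing a regular spike train.
print(lif_spikes([0.3] * 12))  # → [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1]
```

Downstream neurons would see only this train of 0s and 1s, which is what makes spiking networks sparser and potentially more efficient than conventional ANNs that pass dense floating-point activations at every step.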
A problem with many ANNs is that they "forget," or catastrophically overwrite, old information when trained on new data sequentially. The University of California San Diego research team, including neuroscience graduate student Ryan Golden and sleep researcher Maxim Bazhenov, PhD, along with several colleagues, built an ANN that mimics the way the human brain consolidates learned information into memories during sleep, something state-of-the-art networks still struggle to do. According to the study, "Interleaving new task training with periods of off-line reactivation, mimicking biological sleep, mitigated catastrophic forgetting."
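Catastrophic forgetting is easy to demonstrate even in the simplest possible model. The toy sketch below, which is a deliberately minimal illustration and not the study's spiking network, trains a one-weight linear model on one task and then on a conflicting task; the second round of training wipes out the first.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two conflicting toy "tasks" for a one-weight linear model:
# task A is y = 2x, task B is y = -2x.
x = rng.uniform(-1.0, 1.0, size=200)
task_a_y = 2.0 * x
task_b_y = -2.0 * x

def train(w, xs, ys, lr=0.2, epochs=100):
    """Plain gradient descent on mean squared error for y ~ w * x."""
    for _ in range(epochs):
        grad = np.mean(2.0 * (w * xs - ys) * xs)
        w -= lr * grad
    return w

def loss(w, xs, ys):
    return float(np.mean((w * xs - ys) ** 2))

w = train(0.0, x, task_a_y)            # learn task A first
loss_a_before = loss(w, x, task_a_y)   # near zero: task A is mastered

w = train(w, x, task_b_y)              # then train on task B alone
loss_a_after = loss(w, x, task_a_y)    # task A performance collapses

print(loss_a_before, loss_a_after)
```

Because training on task B pulls the single weight all the way to the task B solution, the error on task A explodes: the old knowledge is overwritten rather than integrated. Interleaving offline "sleep" phases between tasks, as the study describes, is one strategy for protecting the earlier solution while the new one is learned.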
This study is not the first to explore the subject. In 2020, researchers at Los Alamos National Laboratory in New Mexico implemented spiking neural networks on neuromorphic hardware. They found that when a network learned continuously without time to sleep, its neurons began firing constantly regardless of the input, essentially producing nonsense disconnected from the learned data. Only after the network was exposed to signals comparable to the brain waves humans produce during sleep, that is, after it was allowed to "sleep," did it become stable again.
Beyond reinforcing the importance of good sleep for living organisms, this research could prove influential in the future development of artificial neural networks.