SEOULTECH Researchers Develop VFF-Net, A Revolutionary Alternative to Backpropagation That Transforms AI Training
The proposed algorithm applies the concept of forward-forward networks to CNNs, moving away from traditional back-propagation and its limitations
SEOUL, South Korea, Oct. 16, 2025 /PRNewswire/ -- Deep neural networks (DNNs), which power modern artificial intelligence (AI) models, are machine learning systems that learn hidden patterns from various types of data, be it images, audio or text, to make predictions or classifications. DNNs have transformed many fields with their remarkable prediction accuracy. Training DNNs typically relies on back-propagation (BP). While it has become indispensable for the success of DNNs, BP has several limitations, such as slow convergence, overfitting, high computational requirements, and its black-box nature. Recently, forward-forward networks (FFNs) have emerged as a promising alternative in which each layer is trained individually, bypassing BP. However, applying FFNs to convolutional neural networks (CNNs), which are widely used for image analysis, has proven difficult.
To address this challenge, a research team led by Mr. Gilha Lee and Associate Professor Hyun Kim from the Department of Electrical and Information Engineering at Seoul National University of Science and Technology has developed a new training algorithm, called visual forward-forward network (VFF-Net). The team also included Mr. Jin Shin. Their study was made available online on June 16, 2025, and published in Volume 190 of the journal Neural Networks on October 1, 2025.
Explaining the challenge of applying FFNs to CNN training, Mr. Lee says, "Directly applying FFNs for training CNNs can cause information loss in input images, reducing accuracy. Furthermore, for general-purpose CNNs with numerous convolutional layers, individually training each layer can cause performance issues. Our VFF-Net effectively addresses these issues."
VFF-Net introduces three new methodologies: label-wise noise labelling (LWNL), cosine similarity-based contrastive loss (CSCL), and layer grouping (LG). In LWNL, the network is trained on three types of data: original images without any noise, positive images with correct labels, and negative images with incorrect labels. This helps eliminate the loss of pixel information in the input images.
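A minimal sketch of how the three input types described above might be assembled, assuming labels are embedded as fixed per-class noise patterns; the paper's exact label-wise noise scheme may differ:

    import torch

    NUM_CLASSES = 10
    IMG_SHAPE = (3, 32, 32)   # e.g. CIFAR-10

    # Hypothetical fixed per-class noise patterns used to embed a
    # label into an image; VFF-Net's actual scheme may differ.
    label_patterns = torch.randn(NUM_CLASSES, *IMG_SHAPE)

    def embed_label(images, labels, scale=0.1):
        # Add the pattern of each sample's class to its image.
        return images + scale * label_patterns[labels]

    def make_lwnl_batch(images, labels):
        clean = images                             # original, noise-free images
        pos = embed_label(images, labels)          # correct labels embedded
        # Shift every label by a nonzero offset to guarantee a wrong class.
        offset = torch.randint(1, NUM_CLASSES, labels.shape)
        neg = embed_label(images, (labels + offset) % NUM_CLASSES)
        return clean, pos, neg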
CSCL modifies the conventional goodness-based greedy algorithm, applying a contrastive loss function based on the cosine similarity between feature maps. Essentially, it compares two feature representations by the direction of their activation patterns rather than their magnitude. This helps preserve the meaningful spatial information necessary for image classification. Finally, LG solves the problem of individual layer training by grouping layers with the same output characteristics and adding auxiliary layers, significantly improving performance.
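One plausible form of such a cosine-similarity contrastive loss, sketched under the assumption that positive feature maps should align with clean ones and negative maps should not (the paper's exact formulation may differ):

    import torch
    import torch.nn.functional as F

    def cscl_loss(feat_pos, feat_neg, feat_clean, margin=0.5):
        # Hypothetical cosine-similarity contrastive loss: pull the
        # positive feature maps toward the clean ones and push the
        # negative maps away, comparing direction rather than magnitude.
        f_pos = feat_pos.flatten(1)
        f_neg = feat_neg.flatten(1)
        f_clean = feat_clean.flatten(1)
        sim_pos = F.cosine_similarity(f_pos, f_clean, dim=1)
        sim_neg = F.cosine_similarity(f_neg, f_clean, dim=1)
        # Encourage sim_pos toward 1 and sim_neg below the margin.
        return (1 - sim_pos).mean() + F.relu(sim_neg - margin).mean()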
Thanks to these innovations, VFF-Net significantly improves image classification performance compared to conventional FFNs. For a CNN model with four convolutional layers, test errors on the CIFAR-10 and CIFAR-100 datasets were reduced by 8.31% and 3.80%, respectively. Additionally, the fully connected layer-based VFF-Net achieved a test error of just 1.70% on the MNIST dataset.
"By moving away from BP, VFF-Net paves the way toward lighter and more brain-like training methods that do not need extensive computing resources," says Dr. Kim. "This means powerful AI models could run directly on personal devices, medical devices, and household electronics, reducing reliance on energy-intensive data centres and making AI more sustainable."
Overall, VFF-Net could make AI training faster and cheaper while enabling more natural, brain-like learning, paving the way for more trustworthy AI systems.
Reference
Title of original paper: VFF-Net: Evolving forward–forward algorithms into convolutional neural networks for enhanced computational insights
Journal: Neural Networks
DOI: 10.1016/j.neunet.2025.107697
About Seoul National University of Science and Technology (SEOULTECH)
Website: https://en.seoultech.ac.kr/
Contact:
Eunhee Lim
82-2-970-9166
402714@email4pr.com
SOURCE Seoul National University of Science and Technology