Meet ConvNeXt V2: An AI Model Using Masked Autoencoders to Improve ConvNet Performance and Scalability
The field of computer vision has made remarkable progress in the past decade, and this progress is mainly due to the advent of convolutional neural networks (CNNs). CNNs' remarkable ability to process 2D data, thanks to their hierarchical feature extraction mechanism, is a key factor in their success.
Modern CNNs have come a long way since their introduction: updated training recipes, data augmentation, improved network design paradigms, and more. The literature is full of success stories of proposals that make CNNs more powerful and efficient.
On the other hand, the open-source side of the computer vision field has contributed major improvements. Feature learning has become more efficient thanks to widely available pre-trained large-scale vision models, so training from scratch is no longer necessary for most vision tasks.
Today, the performance of vision models mainly depends on three factors: the chosen neural network architecture, training method, and training data. Advances in any of these three can significantly improve overall performance.
Of the three, innovations in network architecture have played the most important role in this progress. CNNs eliminate the need for manual feature engineering by enabling general-purpose feature learning. Not long ago, the Transformer architecture, a breakthrough originating in natural language processing, moved into the vision domain. Transformers have been very successful, thanks to their ability to scale strongly with data and model size. More recently, the ConvNeXt architecture was introduced; it modernizes traditional convolutional networks and shows that pure convolutional models can also scale.
However, there is a small problem here. All of this "advancement" is measured by performance on a single computer vision task: supervised image recognition on ImageNet. It remains the most common benchmark for exploring the design space of neural network architectures.
On the other hand, some researchers are looking for a different way to teach neural networks to process images. Instead of using labeled images, they use a self-supervised approach in which the network must figure out the image content by itself. Masked autoencoders are one of the most popular ways to achieve this; they are based on the masked language modeling techniques widely used in natural language processing.
It is possible to mix and match different techniques when training a neural network, but it is tricky. In principle, ConvNeXt can be combined with masked autoencoders. However, since masked autoencoders are designed to work best with Transformers on sequential data, using them with convolutional networks can be prohibitively expensive computationally. The design may also be incompatible with convolutional networks because the sliding-window mechanism lets information cross the boundary between masked and visible regions. Previous research has shown that it is difficult to achieve good results when using self-supervised methods such as masked autoencoders with convolutional networks. It is therefore important to remember that different architectures can exhibit different feature learning behaviors that affect the quality of the final results.
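The sliding-window problem can be seen with a toy 1-D example (a NumPy sketch, not from the paper): a plain dense convolution straddles the mask boundary, so "hidden" positions still receive information from their visible neighbors.

```python
import numpy as np

# Toy illustration of why dense convolution clashes with masking:
# the last three entries are "masked" (set to zero), but a sliding
# kernel still mixes visible values into the masked region.
signal = np.array([1.0, 2.0, 3.0, 0.0, 0.0, 0.0])  # last 3 entries masked
kernel = np.ones(3) / 3.0                           # simple averaging filter
dense_out = np.convolve(signal, kernel, mode="same")
# dense_out[3] is nonzero: information leaked across the mask boundary.
```

A sparse convolution, by contrast, computes outputs only at visible positions, avoiding both the leakage and the wasted computation.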
This is where ConvNeXt V2 comes into play. It co-designs the architecture and the learning framework, using masked autoencoders with ConvNeXt to achieve results similar to those obtained with Transformers. This is a step toward making mask-based self-supervised learning effective for ConvNeXt models.
Designing a masked autoencoder for ConvNeXt was the first challenge, and the researchers solved it in a clever way: they treat the masked input as a set of sparse patches and use sparse convolutions to process only the visible parts. In addition, the Transformer decoder in the masked autoencoder is replaced by a single ConvNeXt block, making the whole structure fully convolutional and improving pre-training efficiency.
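The masking step can be sketched as follows. This is a minimal NumPy illustration of hiding a random subset of patches; the function name, patch size, and mask ratio are illustrative choices (the paper masks at the patch level with a high ratio), and the actual model then runs sparse convolutions over only the visible patches rather than convolving the zeroed image.

```python
import numpy as np

def random_patch_mask(img, patch=32, mask_ratio=0.6, seed=0):
    """Zero out a random subset of non-overlapping patches, mimicking the
    masking stage of a fully convolutional masked autoencoder.
    img: (H, W, C) array. Returns the masked image and a boolean grid
    where True marks a visible patch."""
    h, w, _ = img.shape
    gh, gw = h // patch, w // patch
    n = gh * gw
    rng = np.random.default_rng(seed)
    masked_idx = rng.choice(n, size=int(n * mask_ratio), replace=False)
    visible = np.ones(n, dtype=bool)
    visible[masked_idx] = False  # False = hidden patch
    out = img.copy()
    for i in range(n):
        if not visible[i]:
            r, c = divmod(i, gw)
            out[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch, :] = 0.0
    return out, visible.reshape(gh, gw)
```

In the real pipeline, a sparse convolution library would operate on the visible patches only, so the masked regions cost no computation at all.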
Finally, a global response normalization (GRN) layer is added to the framework to enhance feature competition across channels. Notably, this change is effective when the model is pre-trained with a masked autoencoder, which suggests that reusing a fixed architecture designed under supervised learning may be suboptimal for self-supervised pre-training.
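The GRN operation itself is simple. Below is a minimal NumPy sketch following the three steps described in the paper (global feature aggregation, divisive normalization across channels, and calibration with a residual connection); it assumes NHWC tensor layout, and `gamma` and `beta` stand in for the learnable per-channel parameters.

```python
import numpy as np

def global_response_norm(x, gamma, beta, eps=1e-6):
    """Global Response Normalization (GRN) sketch for an NHWC tensor x.
    gamma and beta play the role of learnable affine parameters."""
    # 1) Global aggregation: L2 norm over spatial dims -> (N, 1, 1, C)
    gx = np.sqrt(np.sum(x ** 2, axis=(1, 2), keepdims=True))
    # 2) Divisive normalization: each channel norm relative to the mean norm
    nx = gx / (np.mean(gx, axis=-1, keepdims=True) + eps)
    # 3) Calibrate the input and keep a residual connection back to x
    return gamma * (x * nx) + beta + x
```

With `gamma = beta = 0` the residual connection makes the layer an identity, which is how the learnable parameters are typically initialized so the layer starts out as a no-op.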
ConvNeXt V2 is designed for self-supervised learning and improves performance when used with masked autoencoders: pre-training with a fully convolutional masked autoencoder can significantly improve the performance of pure convolutional networks.
Check out the Paper and GitHub. All credit for this study goes to the researchers on this project. Also, don't forget to join our Reddit page, Discord channel, and email newsletter, where we share the latest AI research news, cool AI projects, and more.
Ekrem Çetinkaya received his B.Sc. in 2018 and M.Sc. in 2019 from Ozyegin University, Istanbul, Turkey. He wrote his M.Sc. thesis on image denoising using deep convolutional networks. He is currently pursuing a Ph.D. at the University of Klagenfurt, Austria, and works as a researcher on the ATHENA project. His research interests include deep learning, computer vision, and multimedia networking.