Dr. Fenglei Fan, The Chinese University of Hong Kong
In recent years, a plethora of deep artificial neural networks have been developed, with impressive successes in many mission-critical tasks. However, to date the design of these networks has focused on architectural innovations, such as shortcut connections; indeed, neural architecture search explores networks of similar topological types. Almost exclusively, mainstream network models are constructed from neurons of the same type, each composed of two parts: an inner (linear) combination and a nonlinear activation. Although a conventional network does simulate certain important aspects of a biological neural network, such as hierarchical representation and attention mechanisms, a conventional network and a biological neural system are fundamentally different in terms of neuronal diversity and complexity. In particular, a biological neural system coordinates numerous types of neurons, which together give rise to all kinds of intellectual behaviors. Given that artificial networks were invented to mimic the biological neural system, the essential role of neuronal diversity should be taken into account in deep learning research.
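To make the two-part decomposition concrete, here is a minimal sketch of the conventional neuron described above, i.e., an inner (linear) combination followed by a nonlinear activation. The ReLU choice and all parameter values are illustrative, not taken from any specific paper:

```python
import numpy as np

def conventional_neuron(x, w, b):
    """A conventional neuron: inner combination w.x + b, then a nonlinearity."""
    z = np.dot(w, x) + b        # inner (linear) combination
    return max(z, 0.0)          # nonlinear activation (here: ReLU)

# Example usage with illustrative weights.
x = np.array([1.0, -2.0, 3.0])
w = np.array([0.5, 0.25, -1.0])
print(conventional_neuron(x, w, b=0.5))  # -> 0.0, since 0.5 - 0.5 - 3.0 + 0.5 < 0
```

Every neuron in a mainstream network computes this same functional form; only the weights differ, which is exactly the homogeneity that the neuronal-diversity line of work questions.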
This page summarizes Dr. Fenglei Fan's work on neuronal diversity.
Towards NeuroAI: Introducing Neuronal Diversity into Artificial Neural Networks, Feng-Lei Fan, Yingxin Li, Hanchuan Peng, Tieyong Zeng, Fei Wang, arXiv preprint arXiv:2301.09245. [paper]
Quadratic Graph Attention Network (Q-GAT) for Robust Construction of Gene Regulatory Networks, Hui Zhang, Xuexin An, Qiang He, Yudong Yao, Feng-Lei Fan, Yueyang Teng, arXiv preprint arXiv:2303.14193. [paper]
One Neuron Saved Is One Neuron Earned: On Parametric Efficiency of Quadratic Networks, Feng-Lei Fan, Hang-Cheng Dong, Zhongming Wu, Lecheng Ruan, Tieyong Zeng, Yiming Cui, Jing-Xiao Liao, arXiv preprint arXiv:2303.06316. [paper]
Heterogeneous Autoencoder Empowered by Quadratic Neurons, JX Liao, BJ Hou, HC Dong, H Zhang, J Ma, J Sun, S Zhang*, FL Fan*, arXiv preprint arXiv:2204.01707. This paper proposes a heterogeneous model that combines conventional neurons and quadratic neurons, achieving state-of-the-art performance in anomaly detection. [paper] [Github]
Attention-Embedded Quadratic Network (Qttention) for Effective and Interpretable Bearing Fault Diagnosis, JX. Liao, HC. Dong, ZQ. Sun, J. Sun, S. Zhang* and FL. Fan*, IEEE Transactions on Instrumentation and Measurement, vol. 72, pp. 1-13, 2023, Art no. 3511113, doi: 10.1109/TIM.2023.3259031. This paper reveals that a quadratic neuron can induce an attention mechanism, whereas a conventional neuron has no such advantage. [paper][Github]
Universal approximation with quadratic deep networks, FL Fan, J Xiong, G Wang, Neural Networks, 124, 383-392. This paper demonstrates the approximation superiority and uniqueness of quadratic networks. [paper]
Fuzzy Logic Interpretation of Quadratic Networks, FL Fan, G Wang, Neurocomputing, 374, 10-21. This paper provides a novel angle for understanding quadratic networks: each quadratic neuron is viewed as a fuzzy logic gate. [paper]
Generalized Backpropagation Algorithm for Training Second-Order Neural Networks, FL Fan, W Cong, G Wang, International Journal for Numerical Methods in Biomedical Engineering, 34(5), e2956. This paper designs a backpropagation algorithm for training quadratic networks. [paper]
A New Type of Neurons for Machine Learning, FL Fan, W Cong, G Wang, International Journal for Numerical Methods in Biomedical Engineering, 34(2), e2920. This paper introduces the quadratic neuron (Hello, world!). [paper]
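The quadratic neuron studied in the papers above replaces the inner product of a conventional neuron with a quadratic function of the input. A minimal sketch, assuming the commonly used formulation σ((w_rᵀx + b_r)(w_gᵀx + b_g) + w_bᵀ(x⊙x) + c); the parameter names here are illustrative, not the papers' exact notation:

```python
import numpy as np

def quadratic_neuron(x, wr, br, wg, bg, wb, c):
    """Sketch of a quadratic neuron: the product of two linear terms,
    plus a weighted sum of the element-wise squares, then a nonlinearity."""
    quad = (np.dot(wr, x) + br) * (np.dot(wg, x) + bg) + np.dot(wb, x * x) + c
    return max(quad, 0.0)  # ReLU activation, chosen for illustration

# A single quadratic neuron can compute the product x1 * x2 exactly,
# something no single conventional neuron can represent.
x = np.array([2.0, 3.0])
out = quadratic_neuron(x,
                       wr=np.array([1.0, 0.0]), br=0.0,
                       wg=np.array([0.0, 1.0]), bg=0.0,
                       wb=np.zeros(2), c=0.0)
print(out)  # -> 6.0, i.e., x1 * x2
```

Setting wr and wg to pick out different coordinates makes the neuron a multiplication gate, which hints at the parametric efficiency and expressivity results established in the papers above.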
Goal
This tutorial is intended for Artificial Intelligence researchers and practitioners, as well as domain experts interested in neuronal diversity in deep learning. After this tutorial, the participants will 1) have a comprehensive understanding of the basic concepts; 2) learn the major recent advances in this field; 3) explore novel research opportunities in this area and master how to use, or even design, algorithms with new types of neurons for tasks of interest.
Outline
Slides
Towards NeuroAI: Introducing Neuronal Diversity into Deep Learning (AAAI 23) [OneDrive] [GoogleDrive]
Perspective
Towards NeuroAI: Introducing Neuronal Diversity into Artificial Neural Networks [paper]
Presenters
Dr. Fenglei Fan is currently a Research Assistant Professor in the Department of Mathematics, The Chinese University of Hong Kong. His primary research interests lie in deep learning theory and methodology, neuroscience, and image processing. He has authored 16 papers in flagship AI and medical imaging journals. He was the recipient of the IBM AI Horizon Scholarship, which covered his tuition and living expenses during his PhD research, and was selected for the 2021 International Neural Network Society Doctoral Dissertation Award. He has served as a PC member for many conferences, such as IJCAI and AAAI. His representative works are introducing neuronal diversity into deep learning and the equivalence of width and depth of neural networks.
Dr. Fei Wang is currently an Associate Professor of Health Informatics in the Department of Population Health Sciences, Weill Cornell Medicine, Cornell University. His major research interest is data mining and its applications in health data science. He has published more than 250 papers in AI and medicine, which have received more than 21.3K citations (Google Scholar); his h-index is 70. His papers have won 8 best paper awards at top international conferences on data mining and medical informatics. His team won the championship of the NIPS/Kaggle Challenge on Classification of Clinically Actionable Genetic Mutations in 2017 and the Parkinson's Progression Markers Initiative data challenge organized by the Michael J. Fox Foundation in 2016. Dr. Wang is the recipient of the NSF CAREER Award in 2018 and the inaugural Research Leadership Award at the IEEE International Conference on Health Informatics (ICHI) 2019. Dr. Wang chairs the Knowledge Discovery and Data Mining working group of the American Medical Informatics Association (AMIA).