Brain tumors are among the most critical and life-threatening conditions in clinical neurology, and early, accurate diagnosis is essential to improve patient outcomes and guide effective treatment. Magnetic resonance imaging (MRI) has emerged as the primary non-invasive imaging modality for brain tumor assessment, offering superior soft-tissue contrast and multi-sequence capabilities. However, manual interpretation of MRI scans by radiologists remains time-consuming, subjective, and prone to inter-observer variability, particularly in resource-limited clinical settings. In recent years, deep learning (DL) techniques, especially convolutional neural networks (CNNs) and vision transformers (ViTs), have demonstrated remarkable potential in automating brain tumor classification from MRI data.
This keynote presents a comprehensive overview of recent advances in applying DL architectures to the early detection and classification of brain tumors, encompassing both multi-class classification (glioma, meningioma, pituitary tumor, and no tumor) and binary glioma grading (high-grade vs. low-grade gliomas). We review the evolution of classification performance across three recent papers from our research team. The first study (2023) evaluated seven CNN architectures, including InceptionV3, ResNet50, Xception, InceptionResNetV2, MobileNetV2, and EfficientNetB0, on the Msoud MRI dataset (7,023 images, four classes), achieving a best accuracy of 97.12% with InceptionV3 using transfer learning and 5-fold cross-validation. The second study (2025) introduced transformer-based architectures alongside advanced CNNs, comparing DeiT3_base_patch16_224, Xception41, Inception_v4, and Swin_tiny_patch4_window7_224 on the same Msoud dataset. The Swin Transformer achieved the highest classification accuracy of 99.24%, with balanced precision, recall, F1-score, and Matthews correlation coefficient (MCC) of 0.9898, demonstrating the advantage of hierarchical shifted-window self-attention over traditional CNN feature extraction.
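Reporting MCC alongside precision, recall, and F1 guards against accuracy inflation on imbalanced classes. A minimal sketch of how such a balanced metric suite can be computed with scikit-learn is shown below; the four-class labels here are made-up toy data, not results from the studies:

```python
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, matthews_corrcoef)

# Hypothetical 4-class labels: 0=glioma, 1=meningioma, 2=pituitary, 3=no tumor.
y_true = np.array([0, 0, 1, 1, 2, 2, 3, 3, 0, 1, 2, 3])
y_pred = np.array([0, 0, 1, 1, 2, 3, 3, 3, 0, 1, 2, 3])  # one misclassification

acc  = accuracy_score(y_true, y_pred)
# Macro averaging weights every class equally, matching "balanced" reporting.
prec = precision_score(y_true, y_pred, average="macro")
rec  = recall_score(y_true, y_pred, average="macro")
f1   = f1_score(y_true, y_pred, average="macro")
mcc  = matthews_corrcoef(y_true, y_pred)  # robust single-number summary

print(f"acc={acc:.4f} prec={prec:.4f} rec={rec:.4f} f1={f1:.4f} mcc={mcc:.4f}")
```

Unlike plain accuracy, MCC stays low when a model succeeds only on the majority class, which is why it is a useful companion metric for medical datasets.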
Furthermore, the best-performing model was successfully deployed on embedded AI platforms (NVIDIA Jetson AGX Xavier and Jetson Orin Nano), validating the feasibility of real-time, edge-based clinical inference. The third study (2026) focused on binary glioma classification using the BraTS 2019 dataset with patient-wise data separation, evaluating six DL architectures: DeiT3, Inception_v4, Xception41, Swin Transformer, ConvNeXtV2_tiny, and EfficientNet_B0. DeiT3 achieved the highest accuracy of 99.40% with only 25% of the training data, demonstrating the remarkable capability of vision transformers to generalize effectively under data-limited conditions.
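Patient-wise data separation matters because multiple MRI slices from the same patient are highly correlated; splitting at the slice level leaks patient-specific appearance into the test set and inflates accuracy. A minimal sketch of a leakage-free split using scikit-learn's GroupShuffleSplit follows; the patient IDs and slice counts are hypothetical, not the BraTS 2019 layout:

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Hypothetical metadata: 20 patients, 8 MRI slices each (160 samples total).
patient_ids = np.repeat(np.arange(20), 8)
rng = np.random.default_rng(0)
labels = np.repeat(rng.integers(0, 2, size=20), 8)  # HGG=1 / LGG=0, fixed per patient

# Split on patient groups, so every slice of a patient lands in one partition.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=42)
train_idx, test_idx = next(splitter.split(patient_ids, labels, groups=patient_ids))

# Sanity check: no patient contributes slices to both train and test.
overlap = set(patient_ids[train_idx]) & set(patient_ids[test_idx])
print(len(train_idx), len(test_idx), overlap)
```

The same grouping idea extends to cross-validation via GroupKFold, which is the natural companion when patient-wise separation must hold across every fold.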
Everardo Inzunza-González received his Ph.D. in Electrical Sciences from the Universidad Autónoma de Baja California (UABC), Mexico, in 2013; his M.Sc. in Electronics and Telecommunications from the Centro de Investigación Científica y de Educación Superior de Ensenada in 2001; and his B.Sc. in Electronics Engineering from the Culiacán Institute of Technology in 1999. He is a full-time professor and researcher in electronics engineering at the Facultad de Ingeniería, Arquitectura y Diseño at UABC, Mexico, and holds a Level 2 distinction in the Mexican National System of Researchers (SNII). In 2021, he received the Academic Merit Recognition from UABC in the area of Engineering and Technology for his outstanding academic career in teaching, research, and dissemination. He has co-authored over 60 scholarly works, including 47 research articles in indexed journals, 7 conference papers, 5 book chapters, 2 patents granted by the Mexican Institute of Industrial Property (IMPI), and 1 co-edited book published by CRC Press (Taylor & Francis). His work has accumulated more than 1,600 citations, with an H-index of 19 on Google Scholar. Dr. Inzunza-González also serves as a Guest Editor and Reviewer for several international journals. His research interests span artificial intelligence, data science, machine learning, deep learning, the Internet of Things, cybersecurity, electronic instrumentation, wireless communication, image and signal processing, wireless sensor networks, pattern recognition, wearable devices, and edge computing devices.
The Microsoft CMT service was used for managing the peer-reviewing process for this conference. This service was provided for free by Microsoft and they bore all expenses, including costs for Azure cloud services as well as for software development and support.