Please use this identifier to cite or link to this item: https://dspace.univ-ouargla.dz/jspui/handle/123456789/40042
Full metadata record
DC Field | Value | Language
dc.contributor.author | Aiadi, Oussama | -
dc.contributor.author | Benzine, Mohammed Aymen | -
dc.contributor.author | Kiouer, Issam Eddine | -
dc.date.accessioned | 2026-01-21T12:01:17Z | -
dc.date.available | 2026-01-21T12:01:17Z | -
dc.date.issued | 2025 | -
dc.identifier.citation | FACULTY OF NEW TECHNOLOGIES OF INFORMATION AND COMMUNICATION | en_US
dc.identifier.uri | https://dspace.univ-ouargla.dz/jspui/handle/123456789/40042 | -
dc.description | Artificial Intelligence and Data Science | en_US
dc.description.abstract | The increasing demand for deploying deep learning models on mobile and edge devices has brought the need for lightweight and efficient neural networks to the forefront of research. This is particularly crucial in medical imaging, where real-time and accurate diagnostic support is required. However, most high-performing models rely on large architectures, making them impractical in constrained environments. This thesis proposes an enhanced knowledge distillation (KD) framework that builds compact student models while preserving high diagnostic performance for brain tumor classification. The approach leverages a training-free student selection method based on the DisWOT score and introduces two key enhancements: Layer-wise Relevance Propagation (LRP) instead of Grad-CAM for fine-grained semantic supervision, and cosine similarity instead of L2 loss for robust, scale-invariant feature alignment. Experimental evaluations on a publicly available brain MRI dataset demonstrate that the improved KD pipeline significantly reduces parameter count and computational cost with minimal loss in accuracy relative to the teacher models. Our best student model achieved 95.35% classification accuracy while using over 85% fewer parameters than the teacher model. The results substantiate the benefit of high-order distillation signals for generalization and training stability. In conclusion, the proposed framework offers an effective and efficient approach to developing interpretable lightweight models, with strong potential for real-world deployment in medical and other resource-constrained scenarios. | en_US
dc.description.sponsorship | DEPARTMENT OF COMPUTER SCIENCE AND INFORMATION TECHNOLOGY | en_US
dc.language.iso | en | en_US
dc.publisher | UNIVERSITY OF KASDI MERBAH OUARGLA | en_US
dc.subject | Knowledge Distillation (KD) | en_US
dc.subject | Lightweight Models | en_US
dc.subject | Brain Tumor Classification | en_US
dc.subject | Layer-wise Relevance Propagation (LRP) | en_US
dc.subject | Cosine Similarity | en_US
dc.title | Improving Brain Tumor Classification Using Knowledge Distillation | en_US
dc.type | Thesis | en_US
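To illustrate the feature-alignment change described in the abstract above (cosine similarity in place of L2 loss), the following is a minimal, hypothetical PyTorch sketch. It is not the thesis implementation; the feature shapes and the projection layer are assumptions made only for the example.

```python
import torch
import torch.nn.functional as F

def cosine_alignment_loss(student_feat: torch.Tensor,
                          teacher_feat: torch.Tensor) -> torch.Tensor:
    # Flatten to (batch, features) and L2-normalize, so the loss depends
    # only on feature direction, not magnitude (scale-invariant alignment).
    s = F.normalize(student_feat.flatten(1), dim=1)
    t = F.normalize(teacher_feat.flatten(1), dim=1)
    # 1 - cosine similarity, averaged over the batch.
    return (1.0 - (s * t).sum(dim=1)).mean()

def l2_alignment_loss(student_feat: torch.Tensor,
                      teacher_feat: torch.Tensor) -> torch.Tensor:
    # Conventional L2 (MSE) feature matching, sensitive to feature scale.
    return F.mse_loss(student_feat, teacher_feat)

if __name__ == "__main__":
    # Hypothetical pooled features from matched teacher/student layers.
    student = torch.randn(8, 256)
    teacher = torch.randn(8, 512)
    # Student and teacher widths usually differ; a learned projection
    # maps student features into the teacher's feature space.
    proj = torch.nn.Linear(256, 512)
    print("cosine:", cosine_alignment_loss(proj(student), teacher).item())
    print("l2:    ", l2_alignment_loss(proj(student), teacher).item())
```

Because the cosine form normalizes both feature vectors, the alignment gradient is not dominated by differences in activation magnitude between teacher and student, which is the scale-invariance property the abstract refers to.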
Appears in Collections: Département d'informatique et technologie de l'information - Master

Files in This Item:
File | Description | Size | Format
BENZINE-KIOUER.pdf | Artificial Intelligence and Data Science | 3,17 MB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.