Please use this identifier to cite or link to this item: https://dspace.univ-ouargla.dz/jspui/handle/123456789/40231
Full metadata record
DC Field: Value [Language]
dc.contributor.author: Khaldi, Belal
dc.contributor.author: Khouildat, Houria
dc.contributor.author: Hideb, Hadjer
dc.date.accessioned: 2026-02-03T08:21:19Z
dc.date.available: 2026-02-03T08:21:19Z
dc.date.issued: 2025
dc.identifier.citation: FACULTY OF NEW TECHNOLOGIES OF INFORMATION AND COMMUNICATION [en_US]
dc.identifier.uri: https://dspace.univ-ouargla.dz/jspui/handle/123456789/40231
dc.description: Artificial Intelligence and Data Science [en_US]
dc.description.abstract: This research aims to explore and develop an integrated framework that combines Federated Learning (FL) with Explainable Artificial Intelligence (XAI), with the objective of enhancing data privacy and achieving transparency in AI decision-making, particularly within sensitive environments such as Internet of Things (IoT) networks and the Internet of Medical Things (IoMT). The study is motivated by the growing need for intelligent solutions that not only deliver high performance but also respect ethical and legal standards regarding data confidentiality and users' right to understand automated decisions. The work addresses the core challenges faced by federated learning, especially the difficulty of applying traditional interpretability techniques given the decentralized nature of the data and the lack of unified access to a global model. It also discusses the inherent tension between strong privacy-preserving techniques (such as differential privacy and encryption) and the need for human-understandable explanations that clarify the model's reasoning process. In this context, the study adopts a convolutional neural network (CNN) architecture deployed in a federated learning environment and evaluates a range of prominent FL strategies (e.g., FedAvg, FedOpt, FedAdam) across scenarios involving varied data distributions (balanced, random, skewed), synchronization modes (synchronous and asynchronous), and different numbers of participating clients. Special attention is given to integrating visual interpretability tools such as GradCAM and NormGrad, which highlight the input regions that most influence the model's predictions. The proposed framework also considers performance-related aspects such as computational efficiency, communication cost, number of message exchanges, training time, and privacy protection.
This project aspires to contribute to the design of decentralized, interpretable AI models capable of balancing predictive accuracy with data protection and ethical transparency, laying the foundation for responsible applications in fields such as medicine, cybersecurity, and financial services. [en_US]
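For readers unfamiliar with the aggregation strategies named in the abstract, a minimal sketch of FedAvg (the baseline strategy evaluated here) is shown below: the server takes a weighted average of client model parameters, weighted by each client's local dataset size. The function name and toy data are illustrative assumptions, not taken from the thesis itself.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated Averaging: weighted mean of client parameters,
    with weights proportional to each client's local dataset size."""
    total = sum(client_sizes)
    # Each client's model is a list of numpy arrays (one per layer).
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(len(client_weights[0]))
    ]

# Toy example: two clients with single-layer "models".
w_a = [np.array([1.0, 1.0])]  # client A, 1 local sample
w_b = [np.array([3.0, 3.0])]  # client B, 3 local samples
merged = fedavg([w_a, w_b], client_sizes=[1, 3])
print(merged[0])  # pulled toward client B: [2.5 2.5]
```

Strategies such as FedOpt and FedAdam replace this plain averaging step with a server-side adaptive optimizer applied to the aggregated update, which is one axis of comparison in the evaluation described above.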
dc.description.sponsorship: Department of Computer Science and Information Technology [en_US]
dc.language.iso: en [en_US]
dc.publisher: UNIVERSITY OF KASDI MERBAH OUARGLA [en_US]
dc.subject: Federated Learning [en_US]
dc.subject: Privacy Preservation [en_US]
dc.subject: Explainable Artificial Intelligence (XAI) [en_US]
dc.subject: IoT Networks [en_US]
dc.subject: Performance Evaluation in Federated Learning [en_US]
dc.title: Towards Privacy-Preserving Federated Learning with Explainable AI: Applications in IoT Networks [en_US]
dc.type: Thesis [en_US]
Appears in Collections:Département d'informatique et technologie de l'information - Master

Files in This Item:
File: KHOUILDAT-HIDEB.pdf
Description: Artificial Intelligence and Data Science
Size: 3,77 MB
Format: Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.