Please use this identifier to cite or link to this item: https://dspace.univ-ouargla.dz/jspui/handle/123456789/40231
Title: Towards Privacy-Preserving Federated Learning with Explainable AI: Applications in IoT Networks
Authors: Khaldi, Belal
Khouildat, Houria
Hideb, Hadjer
Keywords: Federated Learning
Privacy Preservation
Explainable Artificial Intelligence (XAI)
IoT Networks
Performance Evaluation in Federated Learning
Issue Date: 2025
Publisher: UNIVERSITY OF KASDI MERBAH OUARGLA
Citation: FACULTY OF NEW TECHNOLOGIES OF INFORMATION AND COMMUNICATION
Abstract: This research aims to develop an integrated framework that combines Federated Learning (FL) with Explainable Artificial Intelligence (XAI), with the objective of enhancing data privacy and achieving transparency in AI decision-making, particularly in sensitive environments such as Internet of Things (IoT) networks and the Internet of Medical Things (IoMT). The study is motivated by the growing need for intelligent solutions that not only deliver high performance but also respect ethical and legal standards regarding data confidentiality and users' right to understand automated decisions. The work addresses the core challenges faced by federated learning, especially the difficulty of applying traditional interpretability techniques given the decentralized nature of the data and the lack of unified access to a global model. It also discusses the inherent tension between strong privacy-preserving techniques (such as differential privacy and encryption) and the need for human-understandable interpretations that clarify the model's reasoning process. In this context, the study adopts a convolutional neural network (CNN) architecture deployed in a federated learning environment, evaluating a range of prominent FL strategies (e.g., FedAvg, FedOpt, FedAdam) across scenarios involving varied data distributions (balanced, random, skewed), synchronization modes (synchronous and asynchronous), and different numbers of participating clients. Special attention is given to integrating visual interpretability tools such as Grad-CAM and NormGrad, which highlight the input regions that most influence the model's predictions. The proposed framework also considers performance-related aspects such as computational efficiency, communication cost, number of message exchanges, training time, and privacy protection.
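As a rough illustration of the aggregation step underlying strategies such as FedAvg (a minimal sketch for context, not code from the thesis; the function name and toy data are hypothetical), each client trains locally and the server averages the resulting parameters, weighted by each client's local sample count:

```python
def fedavg_aggregate(client_weights, client_sizes):
    """FedAvg server step: sample-size-weighted average of client parameters.

    client_weights: one flat list of floats per client (same length each).
    client_sizes:   number of local training samples per client; clients
                    with more data contribute proportionally more.
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum((n / total) * w[i] for w, n in zip(client_weights, client_sizes))
        for i in range(dim)
    ]

# Two hypothetical clients sharing a tiny two-parameter "model":
# client A (1 sample) holds [1.0, 1.0], client B (3 samples) holds [3.0, 3.0].
global_weights = fedavg_aggregate([[1.0, 1.0], [3.0, 3.0]], [1, 3])
# 0.25 * 1.0 + 0.75 * 3.0 = 2.5 for each parameter
```

Variants such as FedOpt and FedAdam keep this weighted aggregation but treat the averaged client update as a pseudo-gradient fed into a server-side optimizer.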
This project aspires to contribute toward the design of decentralized, interpretable AI models capable of balancing predictive accuracy with data protection and ethical transparency, laying the foundation for responsible applications in fields such as medicine, cybersecurity, and financial services.
Description: Artificial Intelligence and Data Science
URI: https://dspace.univ-ouargla.dz/jspui/handle/123456789/40231
Appears in Collections:Département d'informatique et technologie de l'information - Master

Files in This Item:
File: KHOUILDAT-HIDEB.pdf
Description: Artificial Intelligence and Data Science
Size: 3,77 MB
Format: Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.