Please use this identifier to cite or link to this item: https://dspace.univ-ouargla.dz/jspui/handle/123456789/35008
Full metadata record
DC Field | Value | Language
dc.contributor.author | Boukhamla, Akram Zine Eddine | -
dc.contributor.author | BOUREGA, Lokmane | -
dc.contributor.author | GHERBI, Leila | -
dc.date.accessioned | 2023-11-14T08:44:41Z | -
dc.date.available | 2023-11-14T08:44:41Z | -
dc.date.issued | 2023 | -
dc.identifier.uri | https://dspace.univ-ouargla.dz/jspui/handle/123456789/35008 | -
dc.description.abstract | Federated learning (FL) facilitates cross-domain machine learning applications and has been widely studied. However, the original FL remains vulnerable to poisoning and inference attacks, which hinders its real-world deployment. It is therefore essential to design trustworthy federated learning (TFL) to address users' concerns. In this paper, we aim to provide a well-researched picture of the security and privacy issues in FL that can bridge the gap to TFL. First, we define the desired goals and critical requirements of TFL, examine the FL model from the adversary's perspective, and infer the roles and capabilities of potential adversaries. We then summarize the current mainstream attack and defense methods and analyze the characteristics of the different approaches. Based on this prior knowledge, we propose directions for realizing TFL that deserve future attention. | en_US
dc.language.iso | en | en_US
dc.publisher | UNIVERSITY OF KASDI MERBAH OUARGLA | en_US
dc.subject | Trust | en_US
dc.subject | Federated Learning | en_US
dc.subject | Privacy | en_US
dc.title | Trust-based in Federated Learning | en_US
dc.type | Thesis | en_US
Appears in Collections: Département d'informatique et technologie de l'information - Master

Files in This Item:
File | Description | Size | Format
BOUREGA-GHERBI.pdf | - | 3,9 MB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.