Please use this identifier to cite or link to this item: https://dspace.univ-ouargla.dz/jspui/handle/123456789/30512
Full metadata record
DC Field | Value | Language
dc.contributor.author | Bensaci, Ramla | -
dc.contributor.author | Khaldi, Belal | -
dc.date.accessioned | 2022-09-15T09:45:13Z | -
dc.date.available | 2022-09-15T09:45:13Z | -
dc.date.issued | 2022 | -
dc.identifier.uri | https://dspace.univ-ouargla.dz/jspui/handle/123456789/30512 | -
dc.description | Intelligence Artificielle et Technologies de l'Information | en_US
dc.description.abstract | As imaging equipment has advanced, the number of photo collections has grown rapidly, making manual annotation impractical and necessitating accurate, time-efficient image annotation systems. We consider the fundamental computer vision problem of image annotation, where an image must be automatically tagged with a set of discrete labels that best describe its semantics. As more digital images become available, image annotation can support the automatic archival and retrieval of large image collections. It can also assist other visual learning tasks, such as image captioning, scene recognition, and multi-object recognition, and it lies at the heart of image understanding. A substantial body of literature on automatic image annotation (AIA) has been proposed, primarily based on probabilistic modelling and classification-based approaches. This research surveys image annotation approaches published in the last 20 years. In this thesis, we study the image annotation task from two aspects. First, we review machine learning models and AIA techniques with respect to their underlying theory, feature extraction methods, annotation accuracy, and datasets. Second, we address the annotation task with a CNN-kNN framework: a hybrid approach that combines the advantages of CNNs with conventional concept-to-image assignment approaches. J-image segmentation (JSEG) is first used to segment the image into a set of homogeneous regions. A CNN is employed to produce a rich feature descriptor per region. A vector of locally aggregated descriptors (VLAD) encoding is then applied to the extracted features to generate compact, unified descriptors. Next, the not-too-deep (N2D) clustering algorithm is run to define the local manifolds constituting the feature space, and finally, semantic relatedness is computed for both image-concept and concept-concept pairs using kNN regression to better capture what concepts mean and how they relate. (A minimal illustrative sketch of this pipeline follows the metadata record below.) In a comprehensive experimental evaluation, our method outperformed a wide range of recent related works, yielding F1 scores of 58.89% and 80.24% on the Corel 5k and MSRC v2 datasets, respectively. It also demonstrated a relatively high capacity for learning more concepts with higher accuracy, reaching N+ values of 212 and 22 on Corel 5k and MSRC v2, respectively. | en_US
dc.language.iso | en | en_US
dc.publisher | University Kasdi Merbah Ouargla | en_US
dc.relation.ispartofseries | 2022; | -
dc.subject | Automatic image annotation | en_US
dc.subject | machine learning techniques | en_US
dc.subject | Image segmentation | en_US
dc.subject | feature extraction | en_US
dc.subject | deep learning | en_US
dc.subject | CNN | en_US
dc.subject | Annotation automatique des images | en_US
dc.subject | techniques d'apprentissage automatique | en_US
dc.subject | Segmentation d'images | en_US
dc.subject | extraction de caractéristiques | en_US
dc.subject | Apprentissage en profondeur | en_US
dc.title | Using machine learning techniques for automatic annotation of personal image collections | en_US
dc.type | Thesis | en_US
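
The abstract above outlines a multi-stage pipeline: JSEG region segmentation, one CNN descriptor per region, VLAD encoding, N2D clustering into local manifolds, and kNN regression of image-concept relatedness. The following is a minimal, illustrative Python sketch of that flow, not the authors' implementation: JSEG and the per-region CNN descriptors are replaced by synthetic local descriptors, plain k-means stands in for N2D clustering, and all sizes, variable names, and the 0.5 decision threshold are assumptions made for illustration. Only the VLAD encoding and the kNN-regression scoring step follow the abstract directly.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)

def vlad_encode(local_descriptors, codebook):
    """VLAD: aggregate residuals of local descriptors to their nearest visual word."""
    centers = codebook.cluster_centers_
    k, d = centers.shape
    words = codebook.predict(local_descriptors)
    vlad = np.zeros((k, d))
    for w in range(k):
        members = local_descriptors[words == w]
        if len(members):
            vlad[w] = (members - centers[w]).sum(axis=0)
    vlad = vlad.ravel()
    vlad = np.sign(vlad) * np.sqrt(np.abs(vlad))    # power normalization
    norm = np.linalg.norm(vlad)
    return vlad / norm if norm > 0 else vlad        # L2 normalization

# Synthetic stand-in for "JSEG regions -> one CNN descriptor per region".
n_images, n_regions, dim, n_concepts = 60, 8, 32, 5
region_descs = [rng.normal(size=(n_regions, dim)) for _ in range(n_images)]
concept_matrix = rng.integers(0, 2, size=(n_images, n_concepts)).astype(float)

# Visual-word codebook over all region descriptors, then one VLAD vector per image.
codebook = KMeans(n_clusters=4, n_init=10, random_state=0).fit(np.vstack(region_descs))
X = np.array([vlad_encode(d, codebook) for d in region_descs])

# Stand-in for N2D clustering: partition the VLAD space into local manifolds.
manifolds = KMeans(n_clusters=3, n_init=10, random_state=0)
manifold_id = manifolds.fit_predict(X)

# Score a query image: encode it, find its manifold, then kNN-regress concept
# relatedness against the images of that manifold only.
query = vlad_encode(rng.normal(size=(n_regions, dim)), codebook)
q_manifold = manifolds.predict(query.reshape(1, -1))[0]
in_manifold = manifold_id == q_manifold
knn = KNeighborsRegressor(n_neighbors=int(min(5, in_manifold.sum())))
knn.fit(X[in_manifold], concept_matrix[in_manifold])
scores = knn.predict(query.reshape(1, -1))[0]
print("predicted concept indices:", np.flatnonzero(scores >= 0.5))

Restricting the kNN regressor to the query's manifold mirrors the abstract's idea of local manifolds in the feature space; a global regressor over all images would be the simpler alternative.
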
Appears in Collections: Département d'informatique et technologie de l'information - Doctorat

Files in This Item:
File | Description | Size | Format
RAMLA_BENSACI_Doctorat.pdf |  | 2,28 MB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.