Please use this identifier to cite or link to this item: https://dspace.univ-ouargla.dz/jspui/handle/123456789/36824
Full metadata record
DC Field  Value  Language
dc.contributor.advisor  Oussama Aiadi  -
dc.contributor.author  Rezzag Bedida, Tahar  -
dc.contributor.author  Hammouya, Abdeldjalil  -
dc.date.accessioned  2024-09-24T08:32:40Z  -
dc.date.available  2024-09-24T08:32:40Z  -
dc.date.issued  2024  -
dc.identifier.citation  FACULTY OF NEW INFORMATION AND COMMUNICATION TECHNOLOGIES  en_US
dc.identifier.uri  https://dspace.univ-ouargla.dz/jspui/handle/123456789/36824  -
dc.description  Artificial Intelligence and Data Science  en_US
dc.description.abstract  Early detection of polyps in the colon is crucial for preventing colorectal cancer, the second leading cause of cancer-related deaths globally. However, accurate identification of polyps can be challenging due to factors like subtle visual cues, variable lighting, and human fatigue. This work aims to adapt the Segment Anything Model (SAM) to segment colonoscopy polyps by replacing its encoder with a lightweight convolutional neural network. Additionally, we strive to enhance the model's accuracy and automation through zero-shot learning. This approach uses a pre-trained object detection model together with a K-means clustering algorithm to extract the bounding-box prompt, which serves as auxiliary information for SAM, thereby improving its performance on unseen polyp data without the need for fine-tuning or manual prompt design. The proposed method reduces the number of SAM encoder parameters from 91M to 3M. It demonstrates superior performance compared to some existing approaches that fine-tune SAM with a large number of parameters. This work offers a contribution to computer-aided polyp detection and paves the way for more efficient and accurate polyp segmentation systems, ultimately improving early cancer diagnosis and patient care.  en_US
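The prompt-extraction step summarized in the abstract (a detector proposes candidate boxes, K-means selects a bounding-box prompt for SAM) can be sketched as follows. This is a minimal illustrative sketch, not the thesis's actual pipeline: the function name `kmeans_box_prompt`, the choice of raw box corners as clustering features, and k=2 are all assumptions for demonstration.

```python
import numpy as np

def kmeans_box_prompt(boxes, k=2, iters=20, seed=0):
    """Cluster candidate boxes (x1, y1, x2, y2) with K-means and return
    the mean box of the largest cluster as a bounding-box prompt for SAM.
    Hypothetical helper; feature choice and k are illustrative assumptions."""
    boxes = np.asarray(boxes, dtype=float)
    rng = np.random.default_rng(seed)
    centers = boxes[rng.choice(len(boxes), size=k, replace=False)]
    for _ in range(iters):
        # Assign each box to its nearest center (Euclidean distance in box space).
        d = np.linalg.norm(boxes[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centers; keep the old center if a cluster empties.
        centers = np.stack([
            boxes[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
    # Take the most populated cluster as the consensus polyp region.
    best = np.bincount(labels, minlength=k).argmax()
    return boxes[labels == best].mean(axis=0)

# Example: three detections agree on one region, one is a spurious outlier.
boxes = [(10, 10, 50, 50), (12, 11, 52, 49), (11, 9, 51, 51), (200, 200, 240, 240)]
prompt = kmeans_box_prompt(boxes)  # ≈ [11, 10, 51, 50]
```

In this sketch the outlier detection is isolated in its own cluster, so the returned prompt averages only the agreeing boxes; the resulting (x1, y1, x2, y2) array is the kind of auxiliary box prompt SAM's mask decoder accepts.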
dc.description.sponsorship  Department of Computer Science and Information Technology  en_US
dc.language.iso  en  en_US
dc.publisher  KASDI MERBAH UNIVERSITY OUARGLA  en_US
dc.subject  Medical Imaging  en_US
dc.subject  Polyp Segmentation  en_US
dc.subject  Segment Anything Model (SAM)  en_US
dc.subject  Vision Transformers (ViTs)  en_US
dc.title  Improving SAM model for medical image segmentation  en_US
dc.type  Thesis  en_US
Appears in Collections:Département d'informatique et technologie de l'information - Master

Files in This Item:
File  Description  Size  Format
REZZAGBEDIDA_HAMMOUYA.pdf  Artificial Intelligence and Data Science  12,72 MB  Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.