Please use this identifier to cite or link to this item: https://dspace.univ-ouargla.dz/jspui/handle/123456789/40056
Full metadata record
DC Field: Value (Language)
dc.contributor.author: Mechalkh, Charaf Eddine
dc.contributor.author: Fennouh, Marya Douniazad
dc.date.accessioned: 2026-01-26T11:31:03Z
dc.date.available: 2026-01-26T11:31:03Z
dc.date.issued: 2025
dc.identifier.citation: FACULTY OF NEW TECHNOLOGIES OF INFORMATION AND COMMUNICATION (en_US)
dc.identifier.uri: https://dspace.univ-ouargla.dz/jspui/handle/123456789/40056
dc.description: Artificial Intelligence and Data Science (en_US)
dc.description.abstract: The rapid growth in data generation has led to an increasing demand for efficient data compression techniques. Traditional compression methods, such as Huffman coding, LZ-based algorithms, and arithmetic coding, have proven effective in reducing file sizes. However, these techniques often fail to account for the contextual nature of data, which can limit their performance when handling complex, variable-length content such as text, images, or multimodal data. In recent years, Large Language Models (LLMs) have demonstrated impressive capabilities in understanding and generating human-like text, making them a promising candidate for enhancing compression techniques through context-awareness.

LLMs, with their ability to process large amounts of sequential data and recognize patterns, offer significant potential for improving compression by leveraging context in a more dynamic and adaptive manner. Unlike traditional methods that rely on fixed algorithms, LLM-based compression can adjust to the content being compressed, leading to more efficient encoding and potentially higher compression ratios.

This thesis explores the potential of LLMs in context-aware compression. We investigate how LLMs, specifically GPT-like models, can be integrated into compression pipelines to optimize encoding strategies based on the context within the data. Our objectives are to assess the advantages of LLM-enhanced compression methods compared to traditional techniques and to demonstrate how context-awareness can lead to more efficient compression, particularly in complex or varied datasets. The results of our study show that LLM-based approaches can outperform traditional methods in certain scenarios, offering promising avenues for future research and practical applications in data compression. (en_US)
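The abstract's central claim can be illustrated with a minimal, hypothetical Python sketch (not the thesis's implementation): a context-conditioned symbol model assigns higher probabilities to predictable symbols, and since an arithmetic coder's output approaches the ideal code length of -log2(p) bits per symbol, better context modeling directly yields smaller outputs. Here a toy adaptive order-k character model stands in for an LLM's next-token predictions; all function names are illustrative assumptions.

```python
import math
from collections import Counter, defaultdict

def static_code_length(text):
    """Ideal code length (bits) under a fixed order-0 frequency model,
    the regime classic Huffman/arithmetic coders operate in."""
    freq = Counter(text)
    total = len(text)
    return sum(-math.log2(freq[c] / total) for c in text)

def contextual_code_length(text, order=2):
    """Ideal code length (bits) under an adaptive order-k context model:
    a toy stand-in for an LLM's context-aware next-symbol probabilities
    driving an arithmetic coder."""
    counts = defaultdict(Counter)   # context string -> next-symbol counts
    alphabet = set(text)
    bits = 0.0
    for i, c in enumerate(text):
        ctx = text[max(0, i - order):i]
        ctx_counts = counts[ctx]
        # Laplace smoothing keeps unseen symbols encodable (p > 0)
        p = (ctx_counts[c] + 1) / (sum(ctx_counts.values()) + len(alphabet))
        bits += -math.log2(p)
        ctx_counts[c] += 1          # adapt the model as we encode
    return bits

# On repetitive text, conditioning on context sharpens predictions,
# so the context model needs far fewer bits than the order-0 model.
sample = "abracadabra " * 50
print(f"order-0 model: {static_code_length(sample):.0f} bits; "
      f"order-2 context model: {contextual_code_length(sample):.0f} bits")
```

The same mechanism is what an LLM-based pipeline exploits: replacing the toy count model with a neural next-token distribution sharpens the probabilities further, at the cost of far more compute per symbol.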
dc.description.sponsorship: Department Of Computer Science And Information Technology (en_US)
dc.language.iso: en (en_US)
dc.publisher: UNIVERSITY OF KASDI MERBAH OUARGLA (en_US)
dc.subject: Large Language Models (en_US)
dc.subject: LLMs (en_US)
dc.subject: Context-Aware (en_US)
dc.subject: Text Compression (en_US)
dc.subject: Compression (en_US)
dc.title: Exploring the Use of Large Language Models for Lossless Text Compression (en_US)
dc.type: Thesis (en_US)
Appears in Collections: Département d'informatique et technologie de l'information - Master

Files in This Item:
File: FENNOUH.pdf
Description: Artificial Intelligence and Data Science
Size: 896,24 kB
Format: Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.