Please use this identifier to cite or link to this item: https://dspace.univ-ouargla.dz/jspui/handle/123456789/33974
Title: Using deep learning in: dynamic multi-objective optimization problems.
Authors: Cheriet, Abdelhakim
Boukraa, Ilyes Ali
Keywords: Dynamic multi-objective optimization
pymoo, evolutionary algorithms
deep learning
recurrent neural networks
semi-supervised learning
Bayesian optimization
Issue Date: 2023
Publisher: KASDI MERBAH UNIVERSITY - OUARGLA
Abstract: This work introduces a novel approach for addressing limitations and exploring the potential of Recurrent Neural Networks (RNNs) in Dynamic Multi-Objective Optimization Problems (DMOOPs). The study conducts a comprehensive performance comparison between RNN models and the Dynamic Non-dominated Sorting Genetic Algorithm II (DNSGA-II), utilizing DNSGA-II for semi-supervision of the RNN models. Their performance is evaluated using key metrics such as Inverted Generational Distance (IGD) and Mean Inverted Generational Distance (MIGD). Extensive experiments are conducted on the CEC2018 benchmarks, which specifically target continuous and unconstrained DMOOPs. Solutions generated by both approaches are evaluated against the true Pareto Front (PF) provided by the CEC2018 benchmarks. The findings demonstrate that DNSGA-II generally outperforms RNN models in achieving lower IGD values. However, the competitive performance of RNN models, particularly with respect to MIGD, suggests their potential as an alternative to DNSGA-II in specific scenarios. Notably, the research highlights the RNN model’s efficient adaptation to changes in DMOOPs, leveraging matrix multiplications for efficient identification of the next Pareto Set (PS). The thesis delves into an in-depth exploration of widely adopted performance measures and evaluation metrics in Dynamic Multi-Objective Optimization (DMOO), with a specific focus on IGD and MIGD. These metrics provide objective and quantitative assessments of solution quality, convergence, and diversity within optimization algorithms. Additionally, Bayesian Optimization (BO) is thoroughly investigated as a technique for optimizing model hyper-parameters, aiming to enhance the performance and efficiency of Deep Learning (DL) models by effectively addressing challenges like overfitting and achieving faster convergence through the incorporation of early stopping strategies.
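The IGD and MIGD metrics central to the evaluation above can be sketched in a few lines of NumPy. This is a minimal illustration, not the thesis's or the CEC2018 suite's actual implementation; the function names `igd` and `migd` and the array-based interface are our assumptions. It assumes the true PF and the obtained solutions are given as 2-D arrays of objective vectors.

```python
import numpy as np

def igd(pf_true, solutions):
    """Inverted Generational Distance: for each point of the true Pareto
    Front, take the Euclidean distance to its nearest obtained solution,
    then average those distances (lower is better)."""
    # Pairwise distances: shape (|PF|, |solutions|) via broadcasting.
    dists = np.linalg.norm(pf_true[:, None, :] - solutions[None, :, :], axis=2)
    return dists.min(axis=1).mean()

def migd(pf_true_per_step, solutions_per_step):
    """Mean IGD across the environment changes of a dynamic problem:
    the average of per-time-step IGD values."""
    return float(np.mean([igd(p, s)
                          for p, s in zip(pf_true_per_step, solutions_per_step)]))
```

A solution set that exactly covers the true PF yields an IGD of 0; MIGD simply averages this per-change score over the run, which is why it rewards the fast re-adaptation the abstract attributes to the RNN models.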
The thesis underscores the importance of considering problem characteristics and emphasizes the accessibility of open-source libraries such as pymoo, which accelerate the development and implementation of optimization algorithms for DMOOPs. In summary, this research significantly contributes to advancing our understanding of the capabilities of RNN models in DMOO and offers valuable insights into their comparative performance with DNSGA-II. The findings strongly emphasize the need to consider diverse optimization techniques and performance metrics when tackling DMOOPs. Furthermore, the obtained results lay a solid foundation for future research focused on further exploring and refining the application of RNN models, including the proposed surrogate for DNSGA-II, in the context of DMOOPs, thus ensuring continuous improvements in their effectiveness and applicability.
Description: People’s Democratic Republic of Algeria Ministry of Higher Education and Scientific Research KASDI MERBAH UNIVERSITY - OUARGLA Faculty of New Technologies of Information and Telecommunication Department of Computer Science and Information Technology
URI: https://dspace.univ-ouargla.dz/jspui/handle/123456789/33974
Appears in Collections:Département d'informatique et technologie de l'information - Master

Files in This Item:
File         Description  Size     Format
BOUKRAA.pdf               2,41 MB  Adobe PDF  View/Open


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.