Dimo Brockhoff, Tea Tusar “Benchmarking Multiobjective Optimizers 2.0”
Benchmarking is an important part of algorithm design, selection, and recommendation. In this tutorial, we will discuss the past and future of benchmarking multiobjective optimizers. In particular, we will discuss how the comparison of multiobjective algorithms can be reduced to single-objective comparisons of quality-indicator values, and thus how all methodologies and tools from the single-objective domain, such as empirical distributions of runtimes, become available. We will also discuss the advantages and drawbacks of some widely used multiobjective test suites that we have all become familiar with over the years, and explain how we can do better: by going back to the roots of what a multiobjective problem is in practice, namely the simultaneous optimization of multiple objective functions. Finally, we will discuss recent advances in the visualization of (multiobjective) problem landscapes and compare the previous and newly proposed benchmark problems in the context of those landscape visualizations.
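The reduction of multiobjective comparisons to single-objective ones, and the resulting empirical distributions of runtimes, can be illustrated with a minimal sketch. All function names and the toy data below are assumptions for illustration, not the tutorial's actual tooling: each run records a (decreasing) quality-indicator history, we note when each run first reaches an indicator target, and then build the empirical distribution of these runtimes over a set of budgets.

```python
import numpy as np

def runtimes_to_target(indicator_histories, target):
    """First evaluation count at which each run reaches `target`
    (np.inf if never reached); lower indicator values are better."""
    runtimes = []
    for history in indicator_histories:
        hits = [evals for evals, value in history if value <= target]
        runtimes.append(hits[0] if hits else np.inf)
    return np.array(runtimes)

def empirical_runtime_distribution(runtimes, budgets):
    """Fraction of runs that reached the target within each budget."""
    runtimes = np.asarray(runtimes, dtype=float)
    return np.array([np.mean(runtimes <= b) for b in budgets])

# Toy data: three runs of a decreasing indicator (e.g. a hypervolume deficit),
# each entry is (number of evaluations, indicator value at that point)
histories = [
    [(10, 0.5), (100, 0.1), (1000, 0.01)],
    [(10, 0.6), (100, 0.2), (1000, 0.05)],
    [(10, 0.4), (100, 0.09)],
]
rts = runtimes_to_target(histories, target=0.1)
ecdf = empirical_runtime_distribution(rts, budgets=[10, 100, 1000])
```

Once the multiobjective performance is summarized this way, any single-objective comparison tool applies unchanged, since each run has been reduced to a single runtime value.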
Amiram Moshaiov “Evolutionary Multi-Concept Optimization”
Evolutionary multi- and many-objective optimization algorithms (EMaOAs) iteratively evolve a set of solutions towards a good Pareto-front approximation. The availability of multiple solution sets over successive generations makes EMaOAs amenable to the application of machine learning (ML) for different pursuits. This tutorial will begin by highlighting existing studies on ML-based enhancements for EMaOAs, before focusing on the recently proposed innovized progress operators within the gamut of reference-vector (RV) based EMaOAs. This will include a detailed discussion of how the convergence and diversity capabilities of RV-EMaOAs can be simultaneously enhanced by learning efficient search directions through a judicious mapping of inter- and intra-generational solutions, respectively. Results on hard-to-solve test problems will demonstrate the utility of the above approach in light of the convergence-diversity balance, the ML-based risk-reward tradeoff, and the avoidance of extra solution evaluations. The tutorial will conclude by proposing a list of ML-based enhancements that could be explored in the future.
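The idea of learning search directions from inter-generational solutions can be sketched crudely as follows. This is only a stand-in for illustration, not the innovized progress operators themselves (which train an actual ML model): it pairs each parent with its nearest offspring from the next generation and averages the resulting improvement vectors into one normalized direction. All names and the toy data are assumptions.

```python
import numpy as np

def learn_progress_direction(parents, offspring):
    """Average the parent-to-nearest-offspring improvement vectors
    between two successive generations and normalize the result.
    (Illustrates only the data an ML-based operator would learn from.)"""
    improvements = []
    for p in parents:
        nearest = offspring[np.argmin(np.linalg.norm(offspring - p, axis=1))]
        improvements.append(nearest - p)
    direction = np.mean(improvements, axis=0)
    norm = np.linalg.norm(direction)
    return direction / norm if norm > 0 else direction

# Toy data: offspring uniformly shifted towards a better region
parents = np.array([[1.0, 1.0], [2.0, 2.0]])
offspring = np.array([[0.8, 0.8], [1.8, 1.8]])
direction = learn_progress_direction(parents, offspring)
```

A direction learned this way could then be used to nudge candidate solutions without extra solution evaluations, which is the kind of saving the abstract refers to.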