Confirmed speakers


Laura Cruz Reyes

ITCM of TecNM, Mexico


Michael Emmerich

Leiden University, The Netherlands


Una-May O'Reilly

Massachusetts Institute of Technology, USA

Lee Spector

Amherst College, USA




1) Laura Cruz Reyes
Where is research going in multi-objective evolutionary optimization incorporating user preferences?

Abstract: A crucial step in solving a multi-objective optimization problem (MOOP) is to identify a set of conflicting solutions in which improving one objective worsens the performance of the others. This set of optimal solutions in the objective space is the Pareto front. However, obtaining this front does not solve a MOOP because there is usually no single optimal solution. The decision maker (DM) must provide information about their preferences to choose and implement only one solution, the most preferred. In multi-objective metaheuristic approaches, preference information is used to guide the search toward the DM's region of interest and to find the best compromise solution. Of interest for this talk is the incorporation of user preferences in multi-objective evolutionary algorithms (EAs). Multi-objective evolutionary optimization began to be recognized as a research area in the late 1990s. The first attempt to incorporate preferences in an EA dates back to 1993; in 2000, still very few researchers addressed this issue, but in recent years research has increased, yielding important advances. The first part of this talk will review some of these advances, emphasizing the most recent and important ones. The second part will present a critical analysis of the research developed so far. The third and last part will mention some of the future research challenges.
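As a minimal illustration of the Pareto-front concept mentioned in the abstract (a sketch for readers new to the topic, not material from the talk; function names and the sample points are invented, and minimization of all objectives is assumed):

```python
def dominates(a, b):
    """True if a Pareto-dominates b (minimization): a is no worse in
    every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the subset of points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

solutions = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
print(pareto_front(solutions))  # (3, 4) and (5, 5) are dominated
```

Every solution on the resulting front is a trade-off: improving it in one objective necessarily worsens another, which is why the DM's preferences are needed to pick a single solution.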

Short bio: Laura Cruz-Reyes received the Ph.D. degree in computer science from the National Center for Research and Technological Development, the MS degree in computer science and the MS degree in information systems from the Technological Institute of Monterrey, and a BS degree in Chemical Engineering from the Technological Institute of Ciudad Madero. She is a full-time Professor of Computer Science at the Technological Institute of Ciudad Madero of the National Mexican Institute of Technology. At this institution, she heads the consolidated research group on Intelligent Optimization. She is a member of the National System of Researchers of the government agency CONACYT, at Level III, the highest level. She is a member of the Mexican Academy of Computing (AMEXCOMP), the Mexican Society of Operations Research (SMIO), the Mexican Society of Artificial Intelligence (SMIA), and the Mexican Society of Computer Science (SMCC). Her work has focused on modeling and solving complex optimization problems (NP-hard) in environments with many objectives, dynamic conditions, uncertainty, and preferences, supported mainly by the disciplines of artificial intelligence and operations research. In this context, her main research interests include metaheuristics, machine learning, fuzzy logic, multicriteria decision making, and logistics. She has served as Guest Editor and referee for national and international journals in artificial intelligence and optimization.



2) Michael Emmerich

From Darwin to Newton: Evolutionary and Numerical Methods for Indicator-based Pareto Front Approximation

Abstract: The concept of indicator-based optimization originated in the field of population-based metaheuristics for multi-objective optimization. The key idea is to measure the quality of a Pareto front approximation using a performance indicator, and such indicators have since been important for measuring the performance of metaheuristics on benchmark problems, where the goal is to achieve a good approximation in terms of coverage of, and closeness to, a Pareto front. Indicator-based multiobjective optimization methods use the indicator directly to guide their search, for instance by measuring individual contributions of points to the indicator in the fitness assignment step of evolutionary algorithms. More recently, the idea of interpreting a population as a vector in a higher-dimensional space of concatenated points gave rise to the development of numerical methods (gradient-based and Newton methods) for approximating the Pareto front using a population (set vector). These methods not only achieve good closeness to the Pareto front but also good coverage of the dominated subspace. In particular, indicators based on the Lebesgue integral over the dominated subspace (e.g., the hypervolume indicator) and indicators inspired by the Hausdorff metric (e.g., the Delta p metric) have been successfully developed. We show how to compute the gradient and Hessian matrix of such indicators with respect to a population of points, and how to hybridize these numerical methods successfully with indicator-based metaheuristics. The talk will show an example of the fruitful interplay between research in evolutionary stochastic algorithms and research in deterministic numerical methods, and it will point to various new perspectives for research on set-oriented optimization.
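For readers unfamiliar with the hypervolume indicator mentioned in the abstract, here is an illustrative two-objective sketch (not material from the talk; the function name, the sample front, and the reference point are invented for the example, and minimization is assumed). In 2-D, the Lebesgue measure of the dominated region can be computed with a simple sweep:

```python
def hypervolume_2d(front, ref):
    """Hypervolume (area) dominated by a mutually non-dominated 2-D
    front, bounded by reference point `ref`, under minimization.
    Sweep the points in ascending order of the first objective; for a
    non-dominated front the second objective then strictly decreases."""
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(front):
        hv += (ref[0] - f1) * (prev_f2 - f2)  # area of the new slab
        prev_f2 = f2
    return hv

front = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0)]
print(hypervolume_2d(front, ref=(5.0, 5.0)))  # 12.0
```

A larger hypervolume means the front is closer to the true Pareto front and covers it more broadly, which is what makes the indicator a useful target both for fitness assignment and, as the talk discusses, for gradient-based refinement of a whole population at once.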


Short bio: Michael Emmerich is a German-born computer scientist who currently lives in Finland and in The Netherlands. Since 2016 he has been an Associate Professor at Leiden University, The Netherlands, where he leads the Multicriteria Optimization and Decision Analytics group, and since 2019 he has been a visiting researcher in the Multiobjective and Industrial Optimization Group at Jyvaskyla University, Finland. He is also Lead AI Scientist at a provider of AI solutions in the Nordic countries. He received his Doctorate in Natural Sciences from the Technical University of Dortmund on the topic of Gaussian processes for surrogate-assisted multiobjective design optimization (2005), under the supervision of Prof. Dr.-Ing. H.-P. Schwefel. He also worked as a visiting fellow at the Center for Applied Systems Analysis, ICD e.V. Dortmund, the Institut für Erstarrung unter Schwerelosigkeit e.V. (Aachen), the Institute for Fundamental Research of Matter (FOM) Amsterdam, the University of the Algarve, IST Lisbon, and Jyvaskyla University. His main contributions are in the fields of indicator-based multiobjective optimization, computational geometry algorithms for performance indicator computations, and chemistry and engineering design optimization. He has coordinated four Lorentz Center workshops and was general chair of three scientific conferences (Global Optimization Workshop 2018 (LeGO), EVOLVE 2013 (Leiden, The Netherlands), and EVOLVE 2015 (Iasi)), as well as organizer and co-initiator of the Modern Machine Learning Technologies (MoMLeT) workshop, held annually in Ukraine since 2019. He is general chair of the forthcoming EMO 2023 conference, to be held in Leiden. Together with co-authors, he has published five books, 52 journal papers, and more than 120 conference papers, of which five received a best paper award. He has successfully supervised more than 10 Ph.D. students on topics of multiobjective optimization.


3) Una-May O'Reilly

Modeling Adversarial Dynamics

Abstract: My interest is in the intelligence of adversaries, particularly how they learn from their conflicts and how their strategic and tactical adaptive behavior can be modeled. I investigate a variety of machine learning methods to model adversarial dynamics. I will describe algorithmic frameworks we have developed for cyber security which draw upon symbolic behavioral descriptions and evolutionary adaptation.


Short bio: Una-May O'Reilly is the founder and leader of the AnyScale Learning For All (ALFA) group at the Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory. ALFA focuses on artificial adversarial intelligence through machine learning and evolutionary algorithm lenses. She received the EvoStar Award for Outstanding Achievements in Evolutionary Computation in Europe in 2013. She was a Fellow of the International Society for Genetic and Evolutionary Computation, now ACM SIGEVO, where the fellowship transferred to special recognition for contributions. She has served as Vice-Chair of ACM SIGEVO. She is the area editor for Data Analytics and Knowledge Discovery for Genetic Programming and Evolvable Machines (Kluwer) and for ACM Transactions on Evolutionary Learning and Optimization, and an editor for Evolutionary Computation (MIT Press).


4) Lee Spector

Honor all the things: A lexicase selection manifesto

Abstract: Usually, we care about a lot of things. That is, we have many objectives. Even when we describe a goal in terms of a single objective, that single objective is often an aggregate of many components, such as measures of performance on individual tests. The widespread, long-standing practice for dealing with such compound objectives is to focus on aggregates, either with a single measure of overall performance or with a collection of measures, each of which aggregates sub-objectives of a specific kind. In this talk I will make the case for an alternative approach that avoids any form of aggregation of objectives or sub-objectives. The lexicase parent selection algorithm, developed for evolutionary computation, is an example of this approach. Rather than aggregating errors on individual tests into an overall "fitness" measure on which selection is then based, lexicase selection uses all of the individual test errors, without aggregation, as the basis for selection. It does this by filtering candidate parents by performance on individual tests, considered sequentially in random order. In doing so, it "honors" all of the individual tests and all possible collections of tests when they appear together at the beginning of test sequences. This can have dramatic effects on search performance, significantly increasing the probability that a solution will be found and decreasing the effort required to find it. I will present an overview of recent work on lexicase selection and speculate about the future of methods that similarly "honor all the things."
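The filtering procedure the abstract describes can be sketched in a few lines (an illustrative sketch for readers, not the speaker's reference implementation; the population and error values are invented for the example):

```python
import random

def lexicase_select(population, errors, rng=random):
    """Select one parent via lexicase selection. `errors[i]` holds the
    per-test errors of population[i] (lower is better). Test cases are
    considered one at a time in random order; at each step, only
    candidates with the best error on that case survive. No errors are
    ever aggregated into a single fitness value."""
    candidates = list(range(len(population)))
    cases = list(range(len(errors[0])))
    rng.shuffle(cases)
    for case in cases:
        best = min(errors[i][case] for i in candidates)
        candidates = [i for i in candidates if errors[i][case] == best]
        if len(candidates) == 1:
            break
    return population[rng.choice(candidates)]

pop = ["a", "b", "c"]
errs = [[0, 3, 1],   # "a" is best on test 0
        [2, 0, 0],   # "b" is best on tests 1 and 2
        [1, 1, 1]]   # "c" is never best, so it is never selected
print(lexicase_select(pop, errs))
```

Note that "c" has a decent aggregate error but is never the best on any single test, so lexicase never selects it, whereas specialists "a" and "b" are each selected depending on which test comes first in the shuffled order.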


Short bio: Lee Spector is the Class of 1993 Professor of Computer Science at Amherst College, an Adjunct Professor in the College of Information and Computer Sciences at the University of Massachusetts, Amherst, and an Emeritus Professor of Computer Science at Hampshire College. He received a B.A. in Philosophy from Oberlin College and a Ph.D. in Computer Science from the University of Maryland, College Park. His areas of teaching and research include evolutionary computation, quantum computation, and intersections of computer science, cognitive science, and the arts. He is the Editor-in-Chief of the journal Genetic Programming and Evolvable Machines (published by Springer), an Associate Editor for the IEEE Transactions on Evolutionary Learning and Optimization, and a member of the editorial board of Evolutionary Computation (published by MIT Press). He is a member of the ACM SIGEVO executive committee and he was named a Fellow of the International Society for Genetic and Evolutionary Computation. He has won several other awards and honors, including two gold medals in the Human Competitive Results contest of the Genetic and Evolutionary Computation Conference, and the highest honor bestowed by the National Science Foundation for excellence in both teaching and research, the NSF Director's Award for Distinguished Teaching Scholars.