EVOLUTIONARY COMPUTATION

Cross-Representation Genetic Programming: A Case Study on Tree-Based and Linear Representations
Huang Z, Mei Y, Zhang F, Zhang M and Banzhaf W
Existing genetic programming (GP) methods are typically designed around a single representation, such as a tree-based or linear representation. These representations show different pros and cons in different domains. However, due to the complicated relationship between representations and the fitness landscapes of GP, it is hard to determine intuitively which GP representation is most suitable for solving a given problem. Evolving programs (or models) with multiple representations simultaneously makes it possible to search over different fitness landscapes, since the representation largely determines the search space, which in turn defines the fitness landscape. Fully exploiting the latent synergies among different GP representations might therefore help GP search for better solutions. However, the existing GP literature rarely investigates the effective simultaneous evolution of multiple representations. To fill this gap, this paper proposes a cross-representation GP algorithm based on tree-based and linear representations, two commonly used GP representations. In addition, we develop a new cross-representation crossover operator to harness the interplay between tree-based and linear representations. Empirical results show that sharing learned knowledge between basic tree-based and linear representations successfully improves the effectiveness of GP over using solely a tree-based or linear representation in solving symbolic regression and dynamic job shop scheduling problems.
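A toy sketch of one direction of such an exchange (illustrative only, not the paper's operator): a subtree of the tree-based parent is serialized to postfix and appended to the linear parent together with a combining operator, which keeps the resulting postfix program syntactically valid. The function names and the nested-tuple tree encoding are assumptions made for illustration.

```python
import random

def subtree_nodes(tree):
    """All subtrees of a nested-tuple tree: a leaf, or (op, left, right)."""
    if not isinstance(tree, tuple):
        return [tree]
    op, left, right = tree
    return [tree] + subtree_nodes(left) + subtree_nodes(right)

def to_postfix(tree):
    """Serialize a nested-tuple tree into a linear (postfix) program."""
    if not isinstance(tree, tuple):
        return [tree]
    op, left, right = tree
    return to_postfix(left) + to_postfix(right) + [op]

def tree_to_linear_crossover(tree_parent, linear_parent):
    """Serialize a random subtree of the tree parent and splice it into the
    linear parent; appending one binary operator keeps the postfix program
    valid (the stack depth returns to one)."""
    donor = random.choice(subtree_nodes(tree_parent))
    return linear_parent + to_postfix(donor) + ['+']

tree = ('+', 'x', ('*', 'x', 'y'))
linear = ['x', 'y', '+']            # postfix for x + y
print(tree_to_linear_crossover(tree, linear))
```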
Runtime Analysis of Evolutionary Diversity Optimization on the Multi-objective (LeadingOnes, TrailingZeros) Problem
Antipov D, Neumann A, Neumann F and Sutton AM
Diversity optimization is the class of optimization problems in which we aim to find a diverse set of good solutions. One of the frequently used approaches to solve such problems is to use evolutionary algorithms that evolve a desired diverse population. This approach is called evolutionary diversity optimization (EDO). In this paper, we analyze EDO on a three-objective function LOTZ_k, which is a modification of the two-objective benchmark function (LeadingOnes, TrailingZeros). We prove that the GSEMO computes a set of all Pareto-optimal solutions in O(kn^3) expected iterations. We also analyze the runtime of the GSEMO_D algorithm (a modification of the GSEMO for diversity optimization) until it finds a population with the best possible diversity for two different diversity measures: the total imbalance and the sorted imbalances vector. For the first measure we show that the GSEMO_D optimizes it in O(kn^2 log n) expected iterations (which is asymptotically faster than the upper bound on the runtime until it finds a Pareto-optimal population), and for the second measure we show an upper bound of O(k^2 n^3 log n) expected iterations. We complement our theoretical analysis with an empirical study, which shows a very similar behavior for both diversity measures. The results of experiments suggest that our bounds for the total imbalance measure are tight, while the bounds for the imbalances vector are too pessimistic.
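For reference, the underlying bi-objective benchmark is easy to state and compute; the sketch below evaluates (LeadingOnes, TrailingZeros) for a bit string (the three-objective LOTZ_k variant analyzed in the paper modifies this base function and is not restated here).

```python
def leading_ones(x):
    """Number of consecutive ones at the start of the bit string."""
    count = 0
    for bit in x:
        if bit != 1:
            break
        count += 1
    return count

def trailing_zeros(x):
    """Number of consecutive zeros at the end of the bit string."""
    count = 0
    for bit in reversed(x):
        if bit != 0:
            break
        count += 1
    return count

def lotz(x):
    """Bi-objective (LeadingOnes, TrailingZeros) value of a bit string."""
    return (leading_ones(x), trailing_zeros(x))

print(lotz([1, 1, 0, 1, 0, 0]))  # (2, 2)
```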
On Stochastic Operators, Fitness Landscapes, and Optimization Heuristics Performances
Aboutaib B, Verel S, Fonlupt C, Derbel B, Liefooghe A and Ahiod B
Stochastic operators are the backbone of many optimization algorithms. While existing theoretical analyses study the asymptotic runtime of such algorithms, characterizing their performance through fitness landscape analysis remains far less developed. The fitness landscape approach considers multiple characteristics to understand and explain an optimization algorithm's performance or the difficulty of an optimization problem, and hence can provide a richer explanation. This paper analyzes the fitness landscapes of stochastic operators by focusing on the number of local optima and their relation to optimization performance. The search spaces of two combinatorial problems are studied: the NK-landscape and the Quadratic Assignment Problem, using binary string-based and permutation-based stochastic operators. The classical bit-flip search operator is considered for binary strings, and a generalization of the deterministic exchange operator is devised for permutation representations. We study small instances, ranging from randomly generated to real-like instances, and large instances from the NK-landscape. For large instances, we propose using an adaptive walk process to estimate the number of locally optimal solutions. Given that stochastic operators are usually used within population-based and single-solution-based evolutionary optimization algorithms, we contrast the performance of the (μ+λ)-EA and an Iterated Local Search against the landscape properties of large instances of the NK-landscape. Our analysis shows that characterizing the fitness landscapes induced by stochastic search operators can effectively explain the optimization performance of the algorithms we considered.
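The adaptive-walk estimator for the number of local optima can be sketched generically. Below is a minimal version on bit strings with the one-bit-flip neighborhood, where the first-improvement rule and the number of walks are illustrative choices rather than the paper's exact protocol.

```python
import random

def adaptive_walk(x, fitness):
    """First-improvement hill climb under one-bit flips until a local optimum."""
    x = list(x)
    improved = True
    while improved:
        improved = False
        for i in random.sample(range(len(x)), len(x)):  # random neighbor order
            y = x[:]
            y[i] ^= 1
            if fitness(y) > fitness(x):
                x = y
                improved = True
                break
    return tuple(x)

def estimate_local_optima(n, fitness, walks=1000):
    """Estimate the number of local optima by counting distinct walk endpoints."""
    optima = set()
    for _ in range(walks):
        start = [random.randint(0, 1) for _ in range(n)]
        optima.add(adaptive_walk(start, fitness))
    return len(optima)
```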
Deep-ELA: Deep Exploratory Landscape Analysis with Self-Supervised Pretrained Transformers for Single- and Multiobjective Continuous Optimization Problems
Seiler MV, Kerschke P and Trautmann H
In many recent works, the potential of Exploratory Landscape Analysis (ELA) features to numerically characterize single-objective continuous optimization problems has been demonstrated. These numerical features provide the input for all kinds of machine learning tasks in the domain of continuous optimization problems, ranging, for example, from High-level Property Prediction to Automated Algorithm Selection and Automated Algorithm Configuration. Without ELA features, analyzing and understanding the characteristics of single-objective continuous optimization problems is, to the best of our knowledge, very limited. Yet, despite their usefulness, as demonstrated in several past works, ELA features suffer from several drawbacks. These include, in particular, (1) a strong correlation between multiple features, as well as (2) their very limited applicability to multiobjective continuous optimization problems. As a remedy, recent works proposed deep learning-based approaches as alternatives to ELA. In these works, among others, point-cloud transformers were used to characterize an optimization problem's fitness landscape. However, these approaches require a large amount of labeled training data. Within this work, we propose a hybrid approach, Deep-ELA, which combines (the benefits of) deep learning and ELA features. We pre-trained four transformers on millions of randomly generated optimization problems to learn deep representations of the landscapes of continuous single- and multiobjective optimization problems. Our proposed framework can either be used out of the box for analyzing single- and multiobjective continuous optimization problems, or subsequently fine-tuned to various tasks focusing on algorithm behavior and problem understanding.
Exploring Automated Algorithm Design Synergizing Large Language Models and Evolutionary Algorithms: Survey and Insights
Yu H and Liu J
Designing algorithms for optimization problems, whether heuristic or meta-heuristic, often relies on manual design and domain expertise, limiting scalability and adaptability. The integration of Large Language Models (LLMs) and Evolutionary Algorithms (EAs) presents a promising way to overcome these limitations and make optimization more automated: LLMs function as dynamic agents capable of generating, refining, and interpreting optimization strategies, while EAs explore complex search spaces efficiently through evolutionary operators. Since this synergy enables a more efficient and creative search process, we first review important developments in this direction and then summarize an LLM-EA paradigm for automated optimization algorithm design. We conduct an in-depth analysis of innovative methods for four key EA modules, namely individual representation, selection, variation operators, and fitness evaluation, addressing challenges related to optimization algorithm design, particularly from the perspective of LLM prompts, and analyze how the prompt flow evolves with the evolutionary process, adjusting based on evolutionary feedback (e.g., population diversity, convergence rate). Furthermore, we analyze how LLMs, through flexible prompt-driven roles, introduce semantic intelligence into fundamental EA characteristics, including diversity, convergence, adaptability, and scalability. Our systematic review and thorough analysis of the paradigm can help researchers better understand the current research and boost the development of synergizing LLMs with EAs for automated optimization algorithm design.
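As a schematic of the surveyed LLM-EA paradigm (not a specific method from the survey), the sketch below uses an LLM as a variation operator over candidate heuristics represented as text; call_llm is a hypothetical stub standing in for any chat-completion client, and the feedback string illustrates how prompts can be conditioned on evolutionary state.

```python
import random

def call_llm(prompt: str) -> str:
    """Hypothetical stub for an LLM call; replace with a real API client."""
    raise NotImplementedError

def llm_variation(parent_code: str, feedback: str) -> str:
    """Ask the LLM to refine a candidate heuristic, conditioning the prompt
    on evolutionary feedback such as the current best fitness."""
    prompt = (
        "Improve the following optimization heuristic.\n"
        f"Current heuristic:\n{parent_code}\n"
        f"Search feedback: {feedback}\n"
        "Return only the revised code."
    )
    return call_llm(prompt)

def llm_ea(population, evaluate, generations=50):
    """Minimal (mu + lambda)-style loop with LLM-driven variation."""
    for _ in range(generations):
        scored = sorted(population, key=evaluate, reverse=True)
        feedback = f"best={evaluate(scored[0]):.3f}, pop_size={len(scored)}"
        parent = random.choice(scored[: max(1, len(scored) // 2)])
        child = llm_variation(parent, feedback)
        population = scored[:-1] + [child]   # replace the worst individual
    return max(population, key=evaluate)
```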
GESR: A Geometric Evolution Model for Symbolic Regression
Ma Z and Zhong J
Symbolic regression is a challenging task in machine learning that aims to automatically discover highly interpretable mathematical equations from limited data. Keen efforts have been devoted to addressing this issue, yielding promising results. However, there are still bottlenecks that current methods struggle with, especially when dealing with datasets that correspond to intricate mathematical expressions. In this work, we propose a novel Geometric Evolution Symbolic Regression (GESR) algorithm. Leveraging geometric semantics, the process of symbolic regression in GESR is transformed into an approximation to a unimodal target in n-dimensional semantic space. Then, three key modules are presented to enhance the approximation: (1) a new semantic gradient concept, proposed from the observation of inaccurate approximation results within semantic backpropagation, to assist the exploration in the semantic space and improve the accuracy of semantic approximation; (2) a new geometric semantic search operator, tailored for efficiently approximating the target formula directly in the sparse semantic space, to obtain more accurate and interpretable solutions under strict program size constraints; (3) the Levenberg-Marquardt algorithm with L1 regularization, used for the adjustment of expression structures and the optimization of global subtree weights to assist the proposed geometric semantic search operator. Assisted by these modules, GESR achieves state-of-the-art accuracy on the SRSD benchmark datasets. The implementation is available at https://github.com/MZT-srcount/GESR.
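For background, the classical geometric semantic crossover (the backdrop for the new operator proposed here, not the operator itself) produces an offspring whose semantics lie on the segment between the parents' semantics, which is what renders the target unimodal in semantic space; a minimal numeric sketch:

```python
import numpy as np

def semantics(program, X):
    """Semantics of a program = its output vector on the training inputs."""
    return np.array([program(x) for x in X])

def geometric_semantic_offspring(s1, s2, r):
    """Offspring semantics on the segment between parent semantics s1, s2
    (0 <= r <= 1); in tree form this corresponds to r*p1 + (1-r)*p2."""
    return r * s1 + (1.0 - r) * s2

# The semantic distance to the target is a convex function of r,
# so the search in semantic space faces a unimodal landscape.
X = [0.0, 1.0, 2.0]
target = np.array([0.0, 1.0, 4.0])            # semantics of x**2
s1 = semantics(lambda x: x, X)                # parent 1: f(x) = x
s2 = semantics(lambda x: 3 * x, X)            # parent 2: f(x) = 3x
print(geometric_semantic_offspring(s1, s2, 0.5))  # [0. 2. 4.]
```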
R2 v2: The Pareto-compliant R2 Indicator for Better Benchmarking in Bi-objective Optimization
Schäpermeier L and Kerschke P
In multi-objective optimization, set-based quality indicators are a cornerstone of benchmarking and performance assessment. They capture the quality of a set of tradeoff solutions by reducing it to a scalar number. One of the most commonly used set-based metrics is the R2 indicator, which describes the expected utility of a solution set to a decision-maker under a distribution of utility functions. Typically, this indicator is applied by discretizing the latter distribution, yielding a weakly Pareto-compliant indicator. As a consequence, adding a nondominated or dominating solution to a solution set may, but does not have to, improve the indicator's value. In this paper, we reinvestigate the R2 indicator under the premise that we have a continuous, uniform distribution of (Tchebycheff) utility functions. We analyze its properties in detail, demonstrating that this continuous variant is indeed Pareto-compliant, that is, any beneficial solution will improve the metric's value. Additionally, we provide efficient computational procedures that (a) compute this metric for bi-objective problems in O(N log N) time, and (b) perform incremental updates to the indicator whenever solutions are added to (or removed from) the current set of solutions, without needing to recompute the indicator for the entire set. As a result, this work contributes to the state of the art in Pareto-compliant unary performance metrics, such as the hypervolume indicator, offering an efficient and promising alternative.
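For reference, the commonly used discretized R2 variant that the paper replaces with a continuous distribution of Tchebycheff utility functions can be sketched as follows (minimization and a utopian point z* are assumed):

```python
import numpy as np

def r2_indicator(solutions, weights, z_star):
    """Discretized R2 for minimization: average, over a finite set of weight
    vectors, of the best (smallest) Tchebycheff utility achieved by the set.
    Lower values indicate a better solution set."""
    solutions = np.asarray(solutions, dtype=float)
    total = 0.0
    for w in weights:
        tcheb = np.max(w * np.abs(solutions - z_star), axis=1)
        total += tcheb.min()
    return total / len(weights)

# Uniformly spread weights for a bi-objective problem.
k = 101
weights = [np.array([i / (k - 1), 1 - i / (k - 1)]) for i in range(k)]
front = [[0.2, 0.8], [0.5, 0.5], [0.9, 0.1]]
print(r2_indicator(front, weights, z_star=np.zeros(2)))
```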
Fast Pareto Optimization Using Sliding Window Selection for Problems with Deterministic and Stochastic Constraints
Neumann F and Witt C
Submodular optimization problems play a key role in artificial intelligence, as they capture many important problems in machine learning, data science, and social networks. Pareto optimization using evolutionary multi-objective algorithms such as GSEMO (also called POMC) has been widely applied to solve constrained submodular optimization problems. A crucial factor determining the runtime these evolutionary algorithms need to obtain good approximations is their population size, which usually grows with the number of trade-offs that the algorithms encounter. In this paper, we introduce a sliding-window speed-up technique for recently introduced algorithms. We first examine the setting of deterministic constraints, for which bi-objective formulations have been proposed in the literature. We prove that our technique eliminates the population size as a factor negatively impacting the runtime bounds of the classical GSEMO algorithm and achieves the same theoretical performance guarantees as previous approaches within less computation time. Our experimental investigations for the classical maximum coverage problem confirm that the sliding-window technique clearly leads to better results for a wide range of instances and constraint settings. Having shown that the sliding-window approach leads to significant improvements for bi-objective formulations, we examine how to speed up a recently introduced 3-objective formulation for stochastic constraints. We show through theoretical and experimental investigations that the sliding-window approach also leads to significant improvements for such 3-objective formulations, as it allows for a more tailored parent selection that matches the optimization progress of the algorithm.
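The following sketch conveys the sliding-window idea in spirit (it is not the paper's exact procedure): parent selection is restricted to solutions whose constraint cost falls in a small window that moves with the elapsed iterations, so the number of candidate parents no longer scales with the population size.

```python
import random

def sliding_window_parent(population, t, t_max, max_cost, width=1):
    """Pick a parent among solutions whose cost lies inside a window that
    slides from 0 to max_cost as iteration t advances. Each individual is
    assumed to be stored as a (solution, cost) pair."""
    center = int(max_cost * t / t_max)
    window = [s for s, c in population if abs(c - center) <= width]
    if not window:  # fall back to uniform selection if the window is empty
        window = [s for s, _ in population]
    return random.choice(window)
```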
P-NP Instance Decomposition Based on the Fourier Transform for Solving the Linear Ordering Problem
Benavides X, Hernando L, Ceberio J and Lozano JA
The Fourier transform over finite groups has proved to be a useful tool for analyzing combinatorial optimization problems. However, few heuristic and metaheuristic algorithms proposed in the literature utilize the information provided by this technique to guide the search process. In this work, we address this research gap through a case study of the Linear Ordering Problem (LOP). Based on the Fourier transform, we propose an instance decomposition strategy that divides any LOP instance into the sum of two LOP instances associated with a P and an NP-Hard optimization problem, respectively. By linearly aggregating the instances obtained from the decomposition, it is possible to create artificial instances with modified proportions of the P and NP-Hard components. Our experiments show that increasing the weight of the P component leads to a less rugged fitness landscape that is better suited to local search-based optimization. We take advantage of this phenomenon by presenting a new metaheuristic algorithm called P-Descent Search (PDS). The proposed method first optimizes a surrogate instance with a high proportion of the P component, and then gradually increases the weight of the NP-Hard component until the original instance is reached. The multi-start version of PDS shows promising and predictable performance that appears to be correlated with specific characteristics of the problem, which could open the door to automatic tuning of its hyperparameters.
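The aggregation step is easy to make concrete: given the two cost matrices produced by the decomposition, a family of artificial instances is obtained by blending them with a weight alpha, and each instance is evaluated with the usual LOP objective. A minimal sketch (the matrix names are illustrative):

```python
import numpy as np

def blend_instances(C_p, C_np, alpha):
    """Artificial LOP instance mixing the P component C_p and the NP-hard
    component C_np; alpha = 1 recovers the full NP-hard share, alpha = 0
    keeps only the P component."""
    return (1.0 - alpha) * C_p + alpha * C_np

def lop_objective(C, perm):
    """LOP value of an ordering: sum of the entries above the diagonal
    after permuting rows and columns according to `perm`."""
    P = C[np.ix_(perm, perm)]
    return np.triu(P, k=1).sum()
```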
All-Quadratic Mixed-Integer Problems: A Study on Evolution Strategies and Mathematical Programming
Zepko G and Shir OM
Mixed-integer (MI) quadratic models subject to quadratic constraints, known as All-Quadratic MI Programs, constitute a challenging class of NP-complete optimization problems. The particular scenario of unbounded integers defines a subclass that holds the distinction of being even undecidable. This complexity suggests a possible soft spot for Mathematical Programming (MP) techniques, which otherwise constitute a good choice for treating MI problems. We consider the task of minimizing MI convex quadratic objective and constraint functions with unbounded decision variables. Given the theoretical weakness of white-box MP solvers on such models, we turn to black-box meta-heuristics of the Evolution Strategies (ESs) family and question their capacity to solve this challenge. Through an empirical assessment of all-quadratic test-cases, across varying Hessian forms and condition numbers, we compare the performance of the CPLEX solver to modern MI ESs, which handle constraints by penalty. Our systematic investigation begins where the CPLEX solver encounters difficulties (timeouts as the search-space dimensionality increases, D ≥ 30), and we report in detail on the D = 64 case. Overall, the empirical observations confirm that black-box and white-box solvers can be competitive over this MI problem class, attaining similar objective function values in 67% of the cases from a fixed-budget perspective. Despite consistently terminating in timeouts, CPLEX demonstrated performance superior or comparable to the MI ESs in 98% of the cases. This trend is flipped when unboundedness is amplified by a significant translation of the optima, leaving CPLEX inferior in 81% of the cases. We also conclude that conditioning and separability are not intuitive predictors of the hardness of this MI problem class.
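Constraint handling by penalty, as used by the MI ESs considered here, can be sketched generically; the quadratic penalty form, the static coefficient, and the rounding step below are illustrative choices, not the exact mechanism of the compared algorithms.

```python
import numpy as np

def penalized_objective(f, constraints, x, coeff=1e3):
    """Objective f(x) plus a quadratic penalty on violated constraints
    g_i(x) <= 0; the coefficient is an illustrative static choice."""
    violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
    return f(x) + coeff * violation

# Example: minimize a convex quadratic subject to a quadratic constraint,
# with the first variable treated as an (unbounded) integer.
f = lambda x: x[0] ** 2 + 2.0 * x[1] ** 2
g = lambda x: x[0] ** 2 + x[1] ** 2 - 4.0        # g(x) <= 0
x = np.array([3.0, 1.0])                          # infeasible point
x[0] = round(x[0])                                # MI handling: round integer part
print(penalized_objective(f, [g], x))             # 11 + 1000 * 36
```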
On the Use of the Doubly Stochastic Matrix Models for the Quadratic Assignment Problem
Santucci V and Ceberio J
Permutation problems have captured the attention of the combinatorial optimization community for decades due to the challenge they pose. Although their solutions are naturally encoded as permutations, the information that must be exploited to optimize them can vary substantially from problem to problem. In this paper, we consider the Quadratic Assignment Problem (QAP) as a case study and propose using Doubly Stochastic Matrices (DSMs) under the framework of Estimation of Distribution Algorithms. To that end, we design efficient learning and sampling schemes that enable an effective iterative update of the probability model. Experiments conducted on commonly adopted QAP benchmarks show that doubly stochastic matrices are preferable to four other probability models for permutations, both in terms of effectiveness and computational efficiency. Moreover, additional analyses performed on the structure of the QAP and the Linear Ordering Problem (LOP) show that DSMs are not only well suited to assignment problems, but also have interesting capabilities for ordering problems such as the LOP. The paper concludes with a description of potential uses of DSMs in other optimization paradigms, such as genetic algorithms or model-based gradient search.
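One natural way to sample permutations from a doubly stochastic matrix is to fix positions one at a time, masking already-used columns and renormalizing the remaining row mass; the sketch below shows this scheme (the paper's exact sampling procedure may differ).

```python
import numpy as np

def sample_permutation(dsm, rng=np.random.default_rng()):
    """Sample a permutation from a doubly stochastic matrix row by row:
    entry dsm[i, j] is (proportional to) the probability that item i is
    placed at position j, conditioned on positions still being free."""
    n = dsm.shape[0]
    free = np.ones(n, dtype=bool)
    perm = np.empty(n, dtype=int)
    for i in range(n):
        probs = np.where(free, dsm[i], 0.0)
        probs = probs / probs.sum()          # renormalize over free positions
        j = rng.choice(n, p=probs)
        perm[i] = j
        free[j] = False
    return perm

uniform_dsm = np.full((4, 4), 0.25)          # uniform model over permutations
print(sample_permutation(uniform_dsm))
```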
Illuminating the Diversity-Fitness Trade-Off in Black-Box Optimization
Santoni ML, Raponi E, Neumann A, Neumann F, Preuss M and Doerr C
In real-world applications, users often favor structurally diverse design choices over a single high-quality solution. It is therefore important to provide multiple solutions that decision makers can compare and further explore based on additional criteria. Alongside the existing approaches of evolutionary diversity optimization, quality diversity, and multimodal optimization, this paper presents a fresh perspective on this challenge by considering the problem of identifying a fixed number of solutions with a pairwise distance above a specified threshold while maximizing their average quality. We obtain first insights into these objectives by performing subset selection on the search trajectories of several well-established search heuristics, whether or not they were specifically designed with diversity in mind. We emphasize that the main goal of our work is not to present a new algorithm but to understand the capability of off-the-shelf algorithms to quantify the trade-off between the minimum pairwise distance within batches of solutions and their average quality. We also analyze how this trade-off depends on the properties of the underlying optimization problem. A possibly surprising outcome of our empirical study is the observation that naive uniform random sampling establishes a very strong baseline for our problem, and is hardly ever outperformed by the search trajectories of the considered heuristics. We interpret these results as a motivation to develop algorithms tailored to producing diverse solutions of high average quality.
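The problem statement suggests a simple baseline for the subset-selection step: scan candidates in order of decreasing quality and accept one only if it keeps all pairwise distances above the threshold. This is a sketch of that greedy baseline (with points as a NumPy array), not the paper's evaluation pipeline.

```python
import numpy as np

def greedy_diverse_subset(points, qualities, k, d_min):
    """Select up to k points greedily by quality, subject to all pairwise
    Euclidean distances between selected points staying >= d_min."""
    order = np.argsort(qualities)[::-1]          # best quality first
    chosen = []
    for i in order:
        p = points[i]
        if all(np.linalg.norm(p - points[j]) >= d_min for j in chosen):
            chosen.append(i)
            if len(chosen) == k:
                break
    return chosen
```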
Analysis and simplification of the winner of the CEC 2022 optimization competition on single objective bound constrained search
Biedrzycki R
Extending state-of-the-art evolutionary algorithms is a widespread research direction. This trend has resulted in algorithms that give good results but are complex and challenging to analyze. One of these algorithms is EA4Eig - the winner of the CEC 2022 competition on single objective bound constrained search. The algorithm internally uses four optimization algorithms with modified components. This paper presents an analysis of EA4Eig and proposes a simplified version thereof exhibiting better optimization performance. The analysis found that the original source code contains errors that impact the algorithm's rank. The code was corrected, and the CEC 2022 competition ranking was recalculated. The impact of individual EA4Eig components on its performance was empirically analyzed. As a result, the algorithm was simplified by removing two of them. The best remaining component was analyzed further, which made it possible to remove some unnecessary and harmful code. Several versions of the algorithm were created and tested, varying in the degree of simplification. The simplest of them is implemented in 244 lines of C++ code, whereas the original implementation used 716 lines of Matlab code. Further analyses focused on the parameters of the algorithm. The constants hidden in the source code were named and treated as additional configurable parameters that underwent tuning. The ablation analyses showed that two of these hidden parameters had the most significant impact on the improvement achieved by the tuned version. The results of the original and simplified versions were compared on CEC 2022 and BBOB benchmarks. The results confirm that the simplified version is better than the original one on both these benchmarks.
Genetic Programming with Tabu List for Dynamic Flexible Job Shop Scheduling
Zhang F, Ardeh MA, Mei Y and Zhang M
Dynamic flexible job shop scheduling (DFJSS) is an important combinatorial optimisation problem, requiring simultaneous decision-making for machine assignment and operation sequencing in dynamic environments. Genetic programming (GP), as a hyper-heuristic approach, has been extensively employed for acquiring scheduling heuristics for DFJSS. A drawback of GP for DFJSS is its weak exploration ability, indicated by a rapid loss of diversity during the evolutionary process. This paper proposes an effective GP algorithm with tabu lists that capture information about explored areas and guide GP towards unexplored areas, improving GP's exploration ability and thereby its effectiveness. First, we use phenotypic characterisation to represent the behaviour of tree-based GP individuals for DFJSS as vectors. Then, we build tabu lists that contain the phenotypic characterisations of individuals explored at the current generation and across generations, respectively. Finally, newly generated offspring are compared against the phenotypic characterisations stored in the tabu lists. If an individual is unseen in the tabu lists, it is kept to form the new population at the next generation; otherwise, it is discarded. We have examined the proposed GP algorithm in nine different scenarios. The findings indicate that the proposed algorithm outperforms the compared algorithms in the majority of scenarios. The proposed algorithm maintains a diverse and well-distributed population during the evolutionary process of GP. Further analyses show that the proposed algorithm does cover a large search area to find effective scheduling heuristics by focusing on unseen individuals.
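The tabu mechanism can be sketched compactly: an individual's phenotypic characterisation is the vector of decisions it makes on a fixed sample of decision situations, and an offspring is kept only if that vector is unseen. The encoding below (rules as priority functions over candidate feature vectors) is an illustrative stand-in for the DFJSS setting.

```python
def phenotype(rule, situations):
    """Phenotypic characterisation: the rule's decision in each sampled
    decision situation (here, the index of the chosen candidate)."""
    return tuple(
        min(range(len(cands)), key=lambda i: rule(cands[i]))
        for cands in situations
    )

def filter_offspring(offspring, situations, tabu):
    """Keep only behaviourally unseen offspring; update the tabu set."""
    kept = []
    for rule in offspring:
        key = phenotype(rule, situations)
        if key not in tabu:
            tabu.add(key)
            kept.append(rule)
    return kept
```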
BlindSMOTE: Synthetic minority oversampling based only on evolutionary computation
García-Pedrajas NE, Cuevas-Muñoz JM and de Haro-García A
One of the most common problems in data mining applications is the uneven distribution of classes, which appears in many real-world scenarios. The class of interest is often highly underrepresented in the given dataset, which harms the performance of most classifiers. One of the most successful methods for addressing the class imbalance problem is to oversample the minority class using synthetic samples. Since the original algorithm, the synthetic minority oversampling technique (SMOTE), introduced this method, numerous versions have emerged, each of which is based on a specific hypothesis about where and how to generate new synthetic instances. In this paper, we propose a different approach based exclusively on evolutionary computation that imposes no constraints on the creation of new synthetic instances. Majority class undersampling is also incorporated into the evolutionary process. A thorough comparison involving three classification methods, 85 datasets, and more than 90 class-imbalance strategies shows the advantages of our proposal.
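For context, the interpolation step of the original SMOTE, the kind of hard-coded generation hypothesis that BlindSMOTE dispenses with, is a one-liner: a synthetic sample is placed at a random point on the segment between a minority instance and one of its minority-class nearest neighbours.

```python
import numpy as np

def smote_sample(x, neighbors, rng=np.random.default_rng()):
    """Classic SMOTE: interpolate between a minority instance x and a
    randomly chosen minority nearest neighbour."""
    nn = neighbors[rng.integers(len(neighbors))]
    gap = rng.random()                       # uniform in [0, 1)
    return x + gap * (nn - x)

x = np.array([1.0, 2.0])
neighbors = np.array([[2.0, 2.0], [1.0, 3.0]])
print(smote_sample(x, neighbors))            # a point on a segment from x
```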
MO-SMAC: Multi-objective Sequential Model-based Algorithm Configuration
Rook JG, Benjamins C, Bossek J, Trautmann H, Hoos HH and Lindauer M
Automated algorithm configuration aims at finding well-performing parameter configurations for a given problem, and it has proven effective in many AI domains, including evolutionary computation. Initially, the focus was on excelling in a single performance objective, but, in reality, most tasks have a variety of (conflicting) objectives. The surging demand for trustworthy and resource-efficient AI systems makes this multi-objective perspective even more prevalent. We propose a new general-purpose multi-objective automated algorithm configurator by extending the widely used SMAC framework. Instead of finding a single configuration, we search for a non-dominated set that approximates the actual Pareto set. We propose a pure multi-objective Bayesian Optimisation approach for obtaining promising configurations, using the predicted hypervolume improvement as the acquisition function. We also present a novel intensification procedure to efficiently handle the selection of configurations in a multi-objective context. Our approach is empirically validated and compared across various configuration scenarios in four AI domains, demonstrating superiority over baseline methods, competitiveness with MO-ParamILS on individual scenarios, and the best overall performance.
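The acquisition step can be made concrete for two objectives: the predicted hypervolume improvement of a candidate is the hypervolume of the current non-dominated front with the candidate's predicted objective vector added, minus the hypervolume without it. A minimal bi-objective sketch for minimization follows; the sweep-based hypervolume routine is a standard construction, not MO-SMAC's implementation.

```python
def hypervolume_2d(points, ref):
    """Hypervolume (to be maximized) of a bi-objective minimization set
    w.r.t. reference point `ref`, computed by sweeping sorted points."""
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y < prev_y:                       # skip dominated points
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

def hypervolume_improvement(candidate, archive, ref):
    """Gain in dominated hypervolume from adding `candidate`."""
    return hypervolume_2d(archive + [candidate], ref) - hypervolume_2d(archive, ref)

archive = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
print(hypervolume_improvement((1.5, 1.5), archive, ref=(5.0, 5.0)))  # 2.25
```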
Solving Many-objective Optimization Problems based on PF Shape Classification and Vector Angle Selection
Wu YT, Ge FZ, Chen DB and Shi L
Most many-objective evolutionary algorithms (MaOEAs) adopt a pre-assumed Pareto front (PF) shape, instead of the true PF shape, to balance convergence and diversity in high-dimensional objective space, resulting in insufficient selection pressure and poor performance. To address these shortcomings, we propose MaOEA-PV, based on PF shape classification and vector angle selection. The three innovations of this paper are as follows: (I) a new method for PF classification; (II) a new fitness function that combines convergence and diversity indicators, thereby enhancing the quality of parents during mating selection; and (III) the selection of individuals exhibiting the best convergence to add to the population, overcoming the lack of selection pressure during environmental selection. Subsequently, the max-min vector angle strategy is employed: the solutions with the highest diversity and the least convergence are selected based on the max and min vector angles, respectively, which balances convergence and diversity. The performance of the algorithm is compared with those of five state-of-the-art MaOEAs on 41 test problems and 5 real-world problems with up to 15 objectives. The experimental results demonstrate the competitiveness and effectiveness of the proposed algorithm.
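The vector-angle machinery is straightforward to sketch: the angle between two objective vectors measures how differently two solutions trade off the objectives, and a max-min-angle rule keeps the selected set spread out. The snippet below is a generic illustration, not the full MaOEA-PV environmental selection.

```python
import numpy as np

def vector_angle(u, v):
    """Angle (radians) between two objective vectors."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def max_min_angle_pick(candidates, selected):
    """Among candidates, pick the one whose smallest angle to the already
    selected solutions is largest (i.e., the most diverse direction)."""
    def min_angle(c):
        return min(vector_angle(c, s) for s in selected)
    return max(candidates, key=min_angle)
```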
Beyond Landscape Analysis: DynamoRep Features For Capturing Algorithm-Problem Interaction In Single-Objective Continuous Optimization
Cenikj G, Petelin G, Doerr C, Korošec P and Eftimov T
The representation of optimization problems and algorithms in terms of numerical features is a well-established tool for comparing optimization problem instances, for analyzing the behavior of optimization algorithms and the quality of existing problem benchmarks, as well as for automated per-instance algorithm selection and configuration approaches. Extending purely problem-centered feature collections, our recently proposed DynamoRep features provide a simple and inexpensive representation of the algorithm-problem interaction during the optimization process. In this paper, we conduct a comprehensive analysis of the predictive power of the DynamoRep features for the problem classification, algorithm selection, and algorithm classification tasks. In particular, the features are evaluated for classifying problem instances into problem classes from the BBOB (Black Box Optimization Benchmarking) suite, for selecting the best algorithm to solve a given problem from a portfolio of three algorithms (Differential Evolution, Evolutionary Strategy, and Particle Swarm Optimization), as well as for distinguishing these algorithms based on their trajectories. We show that, despite being much cheaper to compute, they can yield results comparable to those obtained using state-of-the-art Exploratory Landscape Analysis features.
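As a reminder of what the representation looks like, DynamoRep features are simple descriptive statistics of the population's coordinates and fitness values, computed per generation of a short trajectory and concatenated; a minimal sketch under that description (the exact statistics and trajectory length used in the paper may differ):

```python
import numpy as np

def dynamorep_features(trajectory):
    """trajectory: list of (X, f) per generation, where X is the population
    matrix (pop_size x dim) and f the fitness vector. Returns concatenated
    min/max/mean/std statistics for each coordinate and for the fitness."""
    stats = []
    for X, f in trajectory:
        for arr in (*X.T, f):                # each coordinate, then fitness
            stats.extend([arr.min(), arr.max(), arr.mean(), arr.std()])
    return np.array(stats)
```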
The Cost of Randomness in Evolutionary Algorithms: Crossover Can Save Random Bits
Kneissl C and Sudholt D
Evolutionary algorithms make countless random decisions during selection, mutation, and crossover operations. These random decisions require a steady stream of random numbers. We analyze the expected number of random bits used throughout a run of an evolutionary algorithm and refer to this as the cost of randomness. We give general bounds on the cost of randomness for mutation-based evolutionary algorithms using 1-bit flips or standard mutations, using either a naive or a common, more efficient implementation that uses Θ(log n) random bits per mutation. Uniform crossover is a potentially wasteful operator, as the number of random bits used equals the Hamming distance of the two parents, which can be up to n. However, we show for a (2+1) Genetic Algorithm that is known to optimize the test function ONEMAX in roughly (e/2) n ln n expected evaluations, twice as fast as the fastest mutation-based evolutionary algorithms, that the total cost of randomness during all crossover operations on ONEMAX is only Θ(n). A more pronounced effect is shown for the common test function JUMP_k, where there is an asymptotic decrease both in the number of evaluations and in the cost of randomness. Consequently, the use of crossover can reduce the cost of randomness below that of the fastest evolutionary algorithms that only use standard mutations.
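The crossover observation has a direct implementation flavour: positions where the parents agree need no random decision, so a lazy implementation spends exactly one random bit per differing position, i.e., the Hamming distance. A small sketch that also counts the bits consumed:

```python
import random

def uniform_crossover(p1, p2):
    """Lazy uniform crossover: flip a coin only where the parents differ.
    Returns the offspring and the number of random bits consumed."""
    child, bits_used = [], 0
    for a, b in zip(p1, p2):
        if a == b:
            child.append(a)                  # no randomness needed
        else:
            child.append(a if random.getrandbits(1) else b)
            bits_used += 1
    return child, bits_used

child, cost = uniform_crossover([1, 1, 0, 0], [1, 0, 0, 1])
print(cost)                                  # Hamming distance = 2
```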
Quality Diversity under Sparse Interaction and Sparse Reward: Application to Grasping in Robotics
Huber J, Helenon F, Coninx M, Amar FB and Doncieux S
Quality-Diversity (QD) methods are algorithms that aim to generate a set of diverse and high-performing solutions to a given problem. Originally developed for evolutionary robotics, most QD studies are conducted on a limited set of domains, mainly locomotion, where the fitness and the behavior signal are dense. Grasping is a crucial task for manipulation in robotics. Despite the efforts of many research communities, this task is yet to be solved. Grasping combines challenges that are unprecedented in the QD literature: it suffers from reward sparsity, behavioral sparsity, and behavior space misalignment. The present work studies how QD can address grasping. Experiments have been conducted with 15 different methods on 10 grasping domains, corresponding to 2 different robot-gripper setups and 5 standard objects. The obtained results show that MAP-Elites variants that prioritize the selection of successful solutions outperform all the compared methods on the studied metrics by a large margin. We also found experimental evidence that sparse interaction can lead to deceptive novelty. To our knowledge, the ability to efficiently produce examples of grasping trajectories demonstrated in this work has no precedent in the literature.
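For readers unfamiliar with the family, the MAP-Elites skeleton underlying the best-performing variants fits in a few lines: an archive maps discretized behaviour descriptors to the best solution found per cell. The successful_first flag hints at the success-prioritizing selection reported as decisive here and is an illustrative simplification.

```python
import random

def map_elites(init, mutate, evaluate, descriptor, iterations, successful_first=True):
    """Minimal MAP-Elites: archive of elites keyed by discretized behaviour.
    evaluate() returns (fitness, success_flag); descriptor() returns a cell key."""
    archive = {}                       # cell -> (fitness, success, solution)
    for _ in range(iterations):
        if archive:
            pool = [v for v in archive.values() if v[1]] if successful_first else []
            parent = random.choice(pool or list(archive.values()))[2]
            x = mutate(parent)
        else:
            x = init()
        fit, success = evaluate(x)
        cell = descriptor(x)
        if cell not in archive or fit > archive[cell][0]:
            archive[cell] = (fit, success, x)
    return archive
```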