CEC 2021 Tutorial on Evolutionary Machine Learning

IEEE Congress on Evolutionary Computation (CEC)

28 June - 1 July 2021

Tutorial Title

Evolutionary Machine Learning (T-22)


Evolutionary Machine Learning (EML), the fusion of Evolutionary Computation and Machine Learning, has been recognized as a rapidly growing research area that combines these powerful search and learning mechanisms. Many specific branches of EML, with different learning schemes and different ML problem domains, have been proposed. These branches seek to address common challenges.

Consequently, various insights that address principal issues of the EML paradigm are worth "transferring" across these specific challenges. The goal of our tutorial is to present advanced techniques from specific EML branches and then to distill them into common insights for the EML paradigm as a whole. First, we introduce the common challenges in the EML paradigm and discuss how various EML branches address them. Then, as detailed examples, we present two major approaches to EML: evolutionary rule-based learning (i.e., Learning Classifier Systems) as a symbolic approach, and evolutionary neural networks as a connectionist approach.

Our tutorial is organized not only for beginners but also for experts in the EML field. For beginners, it offers a gentle introduction to EML, from the basics to recent challenges. For experts, the two specialized talks present the most recent advances in evolutionary rule-based learning and in evolutionary neural networks. Additionally, we will discuss how the insights behind these techniques can be reused in other EML branches, shaping new directions for EML techniques.


This tutorial includes three talks.

Talk1: A Gentle Introduction to Evolutionary Machine Learning

The first talk will be a gentle introduction to EML, giving an overview of the paradigm of EML including the following topics:

It will be delivered by Associate Prof. Will Browne, who recently co-authored the first textbook on LCSs, 'Introduction to Learning Classifier Systems' (Springer, 2017). The other two talks focus on the two major approaches to EML: evolutionary rule-based machine learning (i.e., Learning Classifier Systems) and evolutionary neural networks. These talks cover the basics and advances of both techniques, including recent applications.

Talk2: Evolutionary Rule-based Learning – A Learning Classifier System's Way

The second talk, delivered by Prof. Nakata, focuses on the Learning Classifier System, the origin of evolutionary rule-based machine learning (ERML). Evolutionary rule-based learning techniques, e.g. Learning Classifier Systems (LCSs), are powerful techniques that have received sustained research attention over nearly four decades. Since Holland's formalization of the GA and his conceptualization of the first LCS, the LCS paradigm has broadened greatly into a framework encompassing many algorithmic architectures, rule encodings, rule discovery mechanisms, and additional integrated heuristics. This specific kind of EML technique holds great potential for application to various problem domains such as behavior modeling, regression, classification, and prediction. These systems uniquely benefit from their adaptability, flexibility, minimal assumptions, and interpretability. In this talk, the focus will be on the key issues of evolutionary rule-based machine learning, i.e. the rule-generalization hypothesis/algorithm, rule-learning theory, and rule-evolution architecture. A summary of the latest work in the LCS field over the last decade will include the following more specific content:
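To make the rule-generalization idea concrete, here is a minimal, hypothetical Python sketch (not an implementation of any specific LCS; the rule set, state, and mutation rate are invented for illustration). It uses the classic ternary condition alphabet {0, 1, '#'}, where '#' is a "don't care" symbol that makes a rule more general, and shows how a match set is formed and how a GA-style mutation can generalize a condition:

```python
import random

def matches(condition, state):
    """A rule matches a state when every non-'#' symbol agrees."""
    return all(c == '#' or c == s for c, s in zip(condition, state))

def generalize(condition, p=0.3, rng=random):
    """GA-style generalization: flip specific symbols to '#' with
    probability p, yielding a more general offspring condition."""
    return ''.join('#' if c != '#' and rng.random() < p else c
                   for c in condition)

# Toy population: ternary conditions mapped to actions (illustrative only).
rules = {'01#0': 'A', '0##0': 'A', '1###': 'B'}
state = '0110'

# Match set: all rules whose condition covers the current state.
match_set = {cond: act for cond, act in rules.items() if matches(cond, state)}
print(match_set)  # {'01#0': 'A', '0##0': 'A'}
```

A full LCS such as XCS would additionally maintain fitness and prediction estimates per rule and apply subsumption and deletion; this sketch isolates only the matching and generalization steps.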

Talk3: Evolutionary Neural Networks

The third talk, delivered by Prof. Shirakawa, focuses on evolutionary neural networks (ENNs), including their extension to deep learning. Deep neural networks (DNNs) have seen great success in various tasks such as image recognition. In NNs, gradient information for the weight parameters can be computed by back-propagation, but it is hard to obtain for the structure parameters. As EC techniques can be viewed as flexible (black-box) optimizers, optimizing the structure with EC techniques is a promising approach. Typical ENN methods (e.g., neuro-evolution) optimize both the weights and the structure of an NN using the EC approach. In this talk, we start with a brief review of the history and concept of ENNs, and then discuss the difficulty of structure and weight optimization in DNNs. After that, several ENN algorithms are introduced and classified from an optimization perspective. Finally, we present recent trends and research directions in evolutionary deep neural networks. This talk covers the recent trends in ENNs together with traditional approaches and their applications, including the following topics:

Tutorial Presenters