Our research interests lie in the design, analysis, and understanding of general-purpose, principled single- and multi-objective optimization algorithms, where parallelism and distributed computation are expected to play a crucial role, both to take full advantage of the continuing growth of high-performance, large-scale computing facilities, and to accelerate the design of novel, effective approaches for tackling increasingly complex and massive optimization problems. In this context, our contributions can be classified along three main axes.

The first axis is the design of highly scalable parallel and distributed algorithms for exact (complete) optimization. More precisely, given a large-scale heterogeneous environment (clusters, grids), where heterogeneity can occur either at the node level (multi-core, multi-GPU) or at the communication level (network links, latency), we leverage state-of-the-art HPC techniques based on the work-stealing paradigm in order to adaptively and dynamically manage the irregular workload of parallel branch-and-bound, taken as a representative example of highly unbalanced tree-search algorithms.

The second axis is the design of online adaptive operator selection techniques targeting high-level heuristic search algorithms (local search, evolutionary algorithms, metaheuristics, etc.), focusing on a number of novel abstract distributed models and benchmark optimization scenarios, namely the so-called heterogeneous island model and the fitness cloud model. This is tightly related to a more general research line that we are actively developing: the selection, parameter setting, and automated configuration of optimization algorithms using techniques inspired by machine learning and artificial intelligence (multi-armed bandits, portfolio design, hyper-heuristics, offline tuning, etc.).

The last axis is evolutionary multi-objective optimization.
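As a toy illustration of the work-stealing discipline invoked in the first axis, the sketch below simulates several branch-and-bound workers in a single thread: each worker owns a double-ended queue, pops its own newest node (depth-first), and, when it runs dry, steals the oldest node from a random non-empty victim. The knapsack instance, the bound, and the scheduler are all hypothetical illustrations, not code from the thesis.

```python
import random
from collections import deque

# Toy 0/1 knapsack instance (hypothetical data, for illustration only).
VALUES = [10, 7, 4, 3, 2]
WEIGHTS = [5, 4, 3, 2, 1]
CAPACITY = 8

def bound(level, value, weight):
    """Optimistic bound: add all remaining values, ignoring their weights."""
    return value + sum(VALUES[level:])

def solve(num_workers=4, seed=0):
    rng = random.Random(seed)
    # One double-ended queue per worker; owners push/pop at the bottom
    # (LIFO, depth-first), thieves steal from the top (oldest nodes,
    # hence the largest subtrees) -- the classic work-stealing discipline.
    deques = [deque() for _ in range(num_workers)]
    deques[0].append((0, 0, 0))  # (level, value, weight): the root node
    best = 0
    while any(deques):
        for w in range(num_workers):
            if deques[w]:
                node = deques[w].pop()            # own work: newest node
            else:
                victims = [v for v in range(num_workers) if deques[v]]
                if not victims:
                    continue
                node = deques[rng.choice(victims)].popleft()  # steal: oldest node
            level, value, weight = node
            if weight > CAPACITY or bound(level, value, weight) <= best:
                continue                          # prune infeasible/dominated subtree
            if level == len(VALUES):
                best = max(best, value)           # leaf: record incumbent
                continue
            # Branch: exclude, then include, item `level`.
            deques[w].append((level + 1, value, weight))
            deques[w].append((level + 1, value + VALUES[level],
                              weight + WEIGHTS[level]))
    return best
```

Because pruning is sound under any exploration order, the result is independent of the number of workers and of which nodes get stolen; only the workload balance changes.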
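The bandit-based operator selection mentioned in the second axis can be sketched with a UCB1-style selector: each search operator is an arm, and the fitness gain it produces is the reward. The operator set, the Gaussian reward model, and all constants below are assumptions made for this minimal sketch, not details from the text.

```python
import math
import random

class UCBOperatorSelector:
    """Adaptive operator selection as a multi-armed bandit (UCB1 style):
    exploit the operator with the best average reward so far, while an
    exploration bonus keeps rarely tried operators from being starved."""

    def __init__(self, num_operators, c=math.sqrt(2)):
        self.c = c
        self.counts = [0] * num_operators    # times each operator was applied
        self.sums = [0.0] * num_operators    # cumulative reward per operator
        self.total = 0

    def select(self):
        # Play every operator once before trusting the statistics.
        for op, n in enumerate(self.counts):
            if n == 0:
                return op
        return max(
            range(len(self.counts)),
            key=lambda op: self.sums[op] / self.counts[op]
            + self.c * math.sqrt(math.log(self.total) / self.counts[op]),
        )

    def update(self, op, reward):
        self.counts[op] += 1
        self.sums[op] += reward
        self.total += 1

# Toy demonstration with simulated rewards (hypothetical numbers): operator 1
# yields a higher average fitness gain, and the bandit concentrates on it.
rng = random.Random(42)
sel = UCBOperatorSelector(num_operators=2)
for _ in range(500):
    op = sel.select()
    reward = rng.gauss(0.3 if op == 0 else 0.7, 0.1)
    sel.update(op, reward)
```

In a real search, `reward` would be derived from the improvement the applied operator achieves on the current solution, and the statistics could be shared or kept local in a distributed (e.g., island-model) setting.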
In particular, we consider a state-of-the-art decomposition-based framework and address its core algorithmic components, as well as its distributed and parallel design. Decomposition uses scalarizing functions to transform the original problem into a number of single-objective sub-problems that are solved cooperatively in parallel. We show that decomposition, besides leading to powerful divide-and-conquer algorithms that can flexibly fit the distributed and large-scale nature of a compute platform, is a key concept allowing us to leverage well-established evolutionary algorithms from both single- and multi-objective optimization.
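A minimal sketch of the scalarization idea, assuming a weighted Tchebycheff function with uniformly spread weight vectors on a toy bi-objective problem (the problem, the weights, and the brute-force solver are all illustrative assumptions):

```python
# Weighted Tchebycheff scalarization: g(x | w, z*) = max_i w_i * |f_i(x) - z*_i|.
# Each weight vector defines one single-objective sub-problem; solving the
# sub-problems (here by brute force on a toy instance) yields solutions
# spread along the Pareto front. In a parallel design, each sub-problem
# could be assigned to its own worker.

def tchebycheff(objs, weights, ideal):
    return max(w * abs(f - z) for f, w, z in zip(objs, weights, ideal))

# Toy bi-objective problem on x in [0, 1]: f1(x) = x^2, f2(x) = (x - 1)^2.
def evaluate(x):
    return (x * x, (x - 1.0) * (x - 1.0))

ideal = (0.0, 0.0)  # ideal point z*: the best value of each objective alone
grid = [i / 1000.0 for i in range(1001)]

# Decompose into five sub-problems, one per weight vector.
solutions = []
for k in range(5):
    w = (k / 4.0, 1.0 - k / 4.0)
    best = min(grid, key=lambda x: tchebycheff(evaluate(x), w, ideal))
    solutions.append(best)
```

Sweeping the weight from (0, 1) to (1, 0) moves the optimum of the sub-problem from x = 1 (best f2) to x = 0 (best f1), tracing the trade-off between the two objectives.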
defended on 11/12/2017