Season's Greetings

edited by S. Kaiser, A.R. Gottu Mukkula, T. Ebrahim and Prof. S. Engell


Dear co-workers, project partners, colleagues, friends, and former members of the dyn and pas groups!

We hope that this message reaches you in good health and good spirits, despite the risks and restrictions of another Corona winter! Sometimes one would wish that every decision maker had to take (and pass!) a compulsory short course on the fundamentals of feedback control, covering just some basics: what runaway is, why it is better to use indicators that react fast rather than indicators that provide information weeks later when the problems have accumulated, and why, when controlling a system whose dynamics you do not fully understand, it is better to use small probing moves and to react quickly to the outcomes than to apply bang-bang control...

Looking back on 2021, we believe that our groups managed the fully online semesters very well with good outcomes, thanks to the enormous efforts of all group members. The only issue for which we did not come up with solutions that were received favourably by all was how to offer online exams assuring fairness, but also flexibility.

This semester we switched back to classroom teaching, at least until now, which the students seem to enjoy very much. From the start of the semester, we have been using all channels of communication: live lectures, live broadcasting via Zoom, and recordings for asynchronous and repeated use. We believe in the maturity of the students to use these opportunities in the way that fits them best. After the experience of the online semesters, the interaction in the lectures has so far been livelier than ever, and certainly our teaching materials have also improved.

With the new lecture “Machine Learning Methods for Engineers”, the PAS group introduced a course entirely dedicated to machine learning, which has been very well received by the students.

Regarding research, the DFG Transregio InPROMPT will end in mid-2022 after 12.5 successful years. The achievements are collected in an impressive book. The KEEN project continues to provide a stimulating collective learning experience about the potential and limitations of machine learning when applied to real-world examples and data, both for the partners from industry and academia, and has already triggered interesting methodological work.

In the new EU project Circular Foam that started in October 2021, our groups will contribute to the development of large-scale chemical recycling solutions for polyurethane foam from refrigerators and construction waste, in particular to system-wide modelling, simulation and optimization.

The pas group successfully applied for funding in the framework of a new Priority Program (SPP) of DFG, dealing with machine learning in chemical engineering. The project will explore new ideas to achieve safe reinforcement learning for the optimal startup and operation of complex chemical processes.

Just today, we got the news that SIMPLI-DEMO, the successor project of SIMPLIFY, will receive funding from the EU for 4 years, providing a jump-start for the PAS group in the application domain of the production of particles and highly viscous materials.

You can find a lot more information on research, people, publications etc. on our Season's Greetings web page.

Unfortunately, due to the pandemic, we were not able to plan a live event in 2021; we hope for an opportunity to meet many of you in person in 2022!

We would like to thank all group members, project partners and colleagues for the pleasant and rewarding collaborations in 2021, and we wish you enjoyable holidays and a successful and happy year 2022 in good health!

Sebastian Engell and Sergio Lucia

EU-Project Circular Foam Started

Closing the materials cycle for rigid polyurethane foams: This is the ambitious goal of the new pan-European "CIRCULAR FOAM" project. The EU-funded lighthouse project brings together 22 partners from 9 countries from industry, academia and society. Within four years, the consortium will jointly establish a complete circular value chain for raw materials for rigid polyurethane foams used as insulation material in refrigerators and the construction industry. Once implemented across Europe, the system could help to save 1 million tons of waste, 2.9 million tons of CO2 emissions and 150 million euros in incineration costs annually, starting in 2040.

The CIRCULAR FOAM project aims at bringing multiple improvements to the existing material cycle and building a new sustainable circular ecosystem for rigid polyurethane foam. Besides the development of two novel chemical recycling routes for end-of-life materials, waste collection systems as well as dismantling, sorting and logistics solutions will be set up and demonstrated. The different elements will be combined into an optimized systemic solution, based on integrated system modelling and simulation.

In CIRCULAR FOAM, we will develop an integrated system-wide modelling, simulation and optimization framework into which models of the different elements, collection and separation of waste, logistics, chemical processing and separation and feed-in into industrial sites can be embedded. We will also contribute to the design of the chemical processing and separation steps, in particular with respect to robustness and flexibility in terms of the throughput and the composition of the feed streams.

More information:

InPROMPT Project

After more than 12 years, the SFB Transregio 63 “Integrated Chemical Processes in Liquid Multiphase Systems” will end in 2022. The Transregio currently consists of 14 research projects with researchers from TU Berlin, TU Dortmund, OVGU Magdeburg, HTW Berlin, TU Darmstadt, KIT, MPI Magdeburg and HS Anhalt. The goal is to understand and develop processes based on reactive multiphase systems. All levels of the process, from the molecular level to the design and operation of the processes in miniplants, are considered.

The results of the research projects within InPROMPT will be published in the book “Integrated Chemical Processes in Liquid Multiphase Systems” by De Gruyter. The book covers the fundamentals of the thermodynamics of multiphase systems, kinetic modeling, and the modelling of mass transfer in multiphase systems. It discusses how three types of phase systems, thermomorphic multiphase systems, microemulsion systems and Pickering emulsions, can be characterized and which aspects have to be considered for process design. Tools for process systems engineering, including modelling and simulation, process optimization and model-based monitoring, are presented, and in the final chapter, the integration of the tools into a comprehensive design methodology is described.

The dyn group contributed sections on surrogate modeling of thermodynamic equilibria, optimization under uncertainty in process development, iterative real-time optimization applied to the hydroformylation of 1-dodecene in a TMS-system on miniplant scale, and on the integrated model-based process design methodology.

A final colloquium of the Transregio is planned for March 31 – April 1, 2022 at DECHEMA, Frankfurt. We would like to thank all colleagues from the Transregio SFB for the pleasant and productive collaboration!

Image of the miniplant with highlighted process units.

  • M. Jokiel et al., "Miniplant-Scale Evaluation of a Semibatch-Continuous Tandem Reactor System for the Hydroformylation of Long-Chain Olefins", Industrial & Engineering Chemistry Research, 58(7), pp. 2471-2480, 2019.
  • SIMPLIFY - Sonication and microwave processing of material feedstock

    The SIMPLIFY project is an EU Innovation Action aiming at the electrification of the chemical industry. Until April 2023, a consortium of 11 European organizations including the dyn group will focus on the development of flexible electrified continuous processes for three industrial case studies.

    One of the three case studies is the production of polyurethanes which are used as rheology modifiers in water-based paints. These rheology modifiers are conventionally produced in batches of several cubic meters and directly formulated with water within the same vessel. This production process takes several hours per batch. Within the SIMPLIFY project, the transition to a continuous production of these paint thickeners by reactive extrusion on a twin-screw extruder is investigated, which offers numerous advantages. This transition significantly reduces the time and costs associated with cleaning and enables a fully electrified production using renewable energy sources.

    The dyn group contributes its expertise in the fields of process modelling, automation and control to the project. A twin-screw extruder model has been extended to account for the reactive extrusion process and is combined with a new model of the chemical system based on the experimental work carried out by the project partners. Model-based optimization is used for the decisions on the optimal screw design and the operating region. The experimental validation of the results and the process demonstration are carried out at Fraunhofer ICT in Pfinztal on an 18 mm Leistritz Maxx extruder, as shown in Figure 1. In collaboration with Fraunhofer ICT, a process automation and measurement concept for this extruder was implemented and validated. In this concept, all process values are controlled and recorded centrally by a soft PLC using OPC UA. With this automation system, it was possible to demonstrate a stable production of 4 kg/h of product over a duration of 8 hours this year. Furthermore, the technical requirements are now met to implement advanced process control in the coming year. Recent theoretical investigations of the application of MAWQA to reactive extrusion processes showed that this control method is well suited for the reactive extrusion process and offers major economic and ecological benefits.

    Figure 1: Picture of the reactive extrusion setup with temperature, pressure and viscosity measurements (left) and the ultrasound sonotrode (right) realized at Fraunhofer ICT. © Fraunhofer ICT.


  • M. Cegla and S. Engell, 2021, "Reliable Modelling of Twin-screw Extruders by Integrating the Backflow Cell methodology into a Mechanistic Model". Computer Aided Chemical Engineering 48, 175-180.
  • M. Cegla and S. Engell, 2021, "Application of Model Predictive Control to the reactive extrusion of e-Caprolactone in a twin-screw extruder". IFAC-PapersOnLine 54 (3), 225–230.
  • Cegla,M. and Engell,S. 2022, "Application of Real-Time Optimization with Modifier Adaptation to the Reactive Extrusion of Hydrophobically Modified Ethoxylated Urethanes". In Review.
  • KEEN-TP7: Self-optimizing plants - Dynamic gray-box modeling of fermentation processes

    KEEN connects 20 partners (industrial end users, solution providers and scientific institutions) with the objective of introducing artificial intelligence (AI) technologies and methods in the process industry and of evaluating and realizing their technical, economic, and social potential.

    The goal of the work on self-optimizing plants in KEEN is to improve process operations using machine learning (ML) models within advisory systems or in closed-loop control. In this context, it is crucial to ensure the reliability of the model predictions, especially for applications in feedback control.

    So-called gray-box models, in which mechanistic model parts are combined with ML models, are a promising direction to increase the reliability and interpretability of ML models. Among others, we consider models where the dynamics are described by mechanistic relations, but some embedded variables are represented by ML models. An example is a (bio-)chemical reaction system where the underlying reactions are known, but the dependency of a reaction rate on temperature or concentration is unknown and modelled by an ML model. Because the ML model is embedded in the dynamic equations, it is challenging to choose an appropriate ML model structure along with its parameters: the dynamic behavior of the system has to be simulated in order to evaluate the fitness of each combination of model structure and parameters.

    We approach this problem in several steps. First, we estimate which values the ML models should output in order to accurately describe the experimental data. This is done by replacing the complex ML model with a continuous piecewise-linear function of time, which depends only on its values at the knot points. These knot-point values are much easier to fit than, e.g., a neural net. In the next step, the ML submodels can be trained using any ML toolbox, with the values of the piecewise-linear functions as training data. Finally, a full parameter estimation is performed on the basis of a dynamic simulation of the complete model. This procedure is illustrated in Fig. 1.

    Figure 1: Steps of decomposed parameter estimation problem for parameterizing dynamic gray-box models with embedded machine learning models
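The steps above can be sketched on a toy problem (all numbers are hypothetical, and a polynomial stands in for the ML submodel; for this simple system, the simulated state is linear in the knot values, so the piecewise-linear fit reduces to ordinary least squares):

```python
import numpy as np

# Toy system (hypothetical): dc/dt = -r(c), true rate r(c) = k*c with k = 0.5.
# We pretend r is unknown and recover it with the three-step procedure.
k_true = 0.5
t = np.linspace(0.0, 4.0, 41)
c_data = 1.0 * np.exp(-k_true * t)          # "measured" concentrations

# Step 1: represent the unknown rate as a piecewise-linear function of time.
# Since dc/dt = -r(t), c(t) is linear in the knot values, so they can be
# estimated by linear least squares against the data.
knots = np.linspace(0.0, 4.0, 9)

def pwl_basis(tq, knots):
    """Hat-function basis: entry (i, j) weights knot j at time tq[i]."""
    B = np.zeros((len(tq), len(knots)))
    for j in range(len(knots)):
        B[:, j] = np.interp(tq, knots, np.eye(len(knots))[j])
    return B

B = pwl_basis(t, knots)
dt = t[1] - t[0]
# c(t_i) - c0 = -(cumulative integral of the PWL rate), trapezoidal rule
A = -np.cumsum(0.5 * (B[1:] + B[:-1]) * dt, axis=0)
A = np.vstack([np.zeros(len(knots)), A])
r_knots, *_ = np.linalg.lstsq(A, c_data - c_data[0], rcond=None)

# Step 2: train a surrogate r_hat(c) on the pairs (c(t_i), r_pwl(t_i)).
# A neural net from any ML toolbox could be used; a polynomial stands in here.
r_pwl = B @ r_knots
coef = np.polyfit(c_data, r_pwl, deg=1)

# Step 3: full simulation of the assembled gray-box model (explicit Euler).
c_sim = [c_data[0]]
for _ in t[:-1]:
    c_sim.append(c_sim[-1] - dt * np.polyval(coef, c_sim[-1]))
c_sim = np.array(c_sim)
print("recovered slope (should be close to k = 0.5):", coef[0])
print("max simulation error:", np.abs(c_sim - c_data).max())
```

In the real methodology the final step refines all parameters jointly against a dynamic simulation; here the simulation merely checks the fit.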

    We have investigated different algorithmic options and obtained promising results for the fermentation of a sporulating bacterium [1, 2]. The methodology is applied to real-world experimental data obtained at pilot-plant scale at Evonik in Hanau.

    Figure 2: Picture of the investigated pilot plant. Image by Air Liquide

    In one of the investigated use cases, the dyn group collaborates with Air Liquide on the optimal control of a pre-reforming process. Air Liquide provides data from a pilot plant located at their “Innovation Campus” in Frankfurt, where ML-based control algorithms can also be tested. In an adiabatic pre-reformer, higher hydrocarbons are catalytically converted into H2/syngas. The pre-reforming reactor is installed upstream of the main reformer, improving the overall process in terms of efficiency, catalyst lifetime and feedstock flexibility. The goal in KEEN is to increase the energy and material efficiency and to reduce the greenhouse gas emissions by improved process control. The dyn group develops data-based (ML) models of the pilot plant from plant data of the controlled plant supplied by Air Liquide. The developed models will be used in a model predictive controller. Additionally, a mechanistic model of the process has been designed to serve as a benchmark. The solutions will be applied to the pilot plant in the coming years.


  • [1] Winz, J., & Engell, S. (2021). A methodology for gray-box modeling of nonlinear ODE systems. Submitted to ESCAPE32.
  • [2] Winz, J., & Engell, S. (2021). A methodology for reliable dynamic nonlinear gray-box modeling. Submitted to DYCOPS2022.
  • OptiProd project

    Make-and-pack process plants make up a significant share of the production plants in the process industry. Their competitiveness, productivity, and resource efficiency largely depend on the quality of the production scheduling. Today, production scheduling is often done manually, which is tedious, expensive, and inefficient. For these reasons, there is an increasing industrial drive towards automated and optimal scheduling solutions.

    A schematic representation of a two-stage formulation and filling plant that is investigated as a case study in the OptiProd.NRW project is shown in Figure 1. It consists of a formulation stage, which is decoupled from the downstream filling stage by a set of buffer tanks that are connected through the transfer panel. In addition, the logistics of raw materials and final products as well as the shift schedules of the operators must be considered.

    The difficulty of such scheduling problems is the tremendous degree of detail that must be considered to guarantee the feasibility of the plans on the shop-floor level. To capture the complex interactions precisely, we make use of the commercial simulation software from INOSIM. It provides a powerful and flexible formalism to capture the features of industrial production processes and can be configured via an easy-to-use interface that enables the design of models without deep knowledge of simulation techniques. The INOSIM software can simulate simple scheduling rules, but it does not offer an optimization of schedules.

    Figure 1: Schematic representation of the industrial formulation plant.

    During the OptiProd.NRW project, an optimization framework [1] is being developed to generate schedules automatically. The framework is based on customizable evolutionary algorithms that propose schedules which are evaluated by the INOSIM simulation model and iteratively improved based on the simulation results.

    The simulation-optimization approach under development will generate schedules that are validated by a very detailed simulation model and that optimize the operation of the plant. In 2022, a testing phase with the industrial partner is planned. INOSIM intends to commercialize the project results thereafter.
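The simulation-optimization loop can be sketched in a few lines, with a greedy list-scheduling stub standing in for the detailed INOSIM simulation model (job data and algorithm settings are hypothetical):

```python
import random

# Hypothetical data: seven batches with given processing times, two parallel units.
random.seed(1)
jobs = [3, 5, 2, 7, 4, 6, 1]           # processing times of 7 batches
N_UNITS = 2                             # two parallel formulation lines

def evaluate(order):
    """'Simulate' the schedule: greedy list scheduling, return the makespan."""
    finish = [0] * N_UNITS
    for j in order:
        u = finish.index(min(finish))   # assign the batch to the next free unit
        finish[u] += jobs[j]
    return max(finish)

def crossover(a, b):
    """Order crossover: keep a slice of parent a, fill the rest in b's order."""
    i, k = sorted(random.sample(range(len(a)), 2))
    head = a[i:k]
    return head + [j for j in b if j not in head]

# Evolutionary loop: propose schedules, evaluate them by 'simulation', improve.
pop = [random.sample(range(len(jobs)), len(jobs)) for _ in range(20)]
for _ in range(50):
    pop.sort(key=evaluate)
    elite = pop[:10]
    children = [crossover(*random.sample(elite, 2)) for _ in range(10)]
    for c in children:                  # mutation: swap two positions
        if random.random() < 0.3:
            i, k = random.sample(range(len(c)), 2)
            c[i], c[k] = c[k], c[i]
    pop = elite + children

best = min(pop, key=evaluate)
print("best makespan:", evaluate(best))  # total work is 28, so 14 is a lower bound
```

In the actual framework, the fitness evaluation is the full INOSIM simulation of the plant, and the encoding and operators are tailored to the industrial scheduling problem.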


  • [1] Klanke, C., Bleidorn, D., Koslowski, C., Sonntag, C., Engell, S., 2021. Simulation-based Scheduling of a Large-scale Industrial Formulation Plant Using a Heuristics-assisted Genetic Algorithm, in: GECCO'21: Proceedings of the Genetic and Evolutionary Computation Conference Companion, Association for Computing Machinery, pp. 1587–1595.
  • HyPro Project

    Motivation

    Blast furnace-based steelmaking is the dominant route for the worldwide production of steel, accounting for 70% of the final product. An industrial blast furnace is a huge gas-solid reactor that typically produces around 4 million tons of liquid iron per year, with annual total CO2 emissions of more than 7 million tons. The stable, economically optimal, and environmentally friendly operation of blast furnaces is a challenge due to the complexity of the multi-phase and multi-scale physical and chemical phenomena, the lack of direct measurements of key inner variables, and the occurrence of a wide range of unknown disturbances. Blast furnaces are operated in a semi-automated manner, and the quality of the control depends on the skills and dedication of the operators. The steel industry is strongly interested in better support for the operators or in full automation to achieve a more energy-efficient and stable operation of blast furnaces.


    A main objective of the HyPro project is to develop new control strategies that achieve an improved energetic efficiency. In collaboration with partners such as VDEh-Betriebsforschungsinstitut GmbH (BFI) and thyssenkrupp Steel Europe AG, we developed a hybrid dynamic model-based control scheme for achieving the desired operational objectives of a blast furnace.


    Within the hybrid model, we integrated a first-principles model of medium complexity with a data-based dynamic neural network model in order to combine the advantages of the two types of models. The hybrid dynamic model provides insights into the blast furnace operation status in terms of thermal state, safety, productivity, and efficiency by predicting quantities at the furnace boundaries such as the molten iron and slag quality indices and the off-gas analysis parameters (temperature (TG), pressure drop (DP), and efficiency factor (η)). Free-run simulation results of the multi-step ahead predictions of the hot metal silicon content ([Si]) and slag basicity (SB) are shown in Fig. 2. Using this model, an optimizing model predictive controller (MPC) was developed for controlling the hot metal silicon content and the slag basicity at their desired set-points, subject to operational constraints. This controller regulates the gas-phase variables to counteract the process disturbances that are caused by variations in the solid feed. Low values of [Si] and SB lead to an improved energy efficiency of the blast furnace process. The simulation results of the control scheme are shown in Fig. 3.

    Fig. 2. Multi-step ahead prediction of silicon content ([Si]) and slag basicity (SB) by the hybrid model.
    Fig. 3. Simulation results of the MPC scheme.
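The structure of such a hybrid model can be sketched on synthetic data (this is not the actual blast-furnace model; a simple regression stands in for the dynamic neural network, and all numbers are invented for illustration):

```python
import numpy as np

# Serial hybrid model sketch: a simplified first-principles prediction is
# corrected by a data-based residual model trained on the mismatch.
rng = np.random.default_rng(0)

u = rng.uniform(0.8, 1.2, size=(200, 1))             # manipulated input (hypothetical)
y_true = 0.5 / u[:, 0] + 0.1 * np.sin(3 * u[:, 0])   # "plant": a quality index

def mechanistic(u):
    """First-principles part: captures only the dominant 1/u trend."""
    return 0.5 / u[:, 0]

# Data-based part: learn the residual that the physics does not explain.
# A neural network would be used in practice; linear regression on simple
# features stands in here.
residual = y_true - mechanistic(u)
X = np.hstack([u, u**2, np.ones_like(u)])
theta, *_ = np.linalg.lstsq(X, residual, rcond=None)

def hybrid(u):
    X = np.hstack([u, u**2, np.ones_like(u)])
    return mechanistic(u) + X @ theta

err_mech = np.abs(mechanistic(u) - y_true).max()
err_hyb = np.abs(hybrid(u) - y_true).max()
print(f"max error mechanistic: {err_mech:.3f}, hybrid: {err_hyb:.3f}")
```

The hybrid prediction is strictly better than the purely mechanistic one on the training data, which illustrates why combining the two model types is attractive for quantities such as [Si] and SB.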


  • Azadi, P., Klock, R. and Engell, S., "Model Predictive Control of Molten Iron and Slag Quality Indices in a Large-scale Ironmaking Blast Furnace using a Hybrid Dynamic Model", submitted to IFAC MMM2022.
  • Azadi, P., Winz, J., Leo, E., Klock, R., and Engell, S., "A hybrid dynamic model for the prediction of molten iron and slag quality indices of a large-scale blast furnace", 2022, Computers & Chemical Engineering, 156, 107573.
  • Azadi, P., Klock, R., and Engell, S., "Efficient Utilization of Active Carbon in a Blast Furnace through a Black-Box Model-Based Optimizing Control Scheme", IFAC-PapersOnLine, 2021, 54(3), 128-133.
  • Azadi, P., Minaabad, S. A., Bartusch, H., Klock, R., and Engell, S., "Nonlinear Prediction Model of Blast Furnace Operation Status", In Computer Aided Chemical Engineering (Vol. 48, pp. 217-222), 2020.
  • dyn Members 2021

    We all would like to thank our partners, colleagues, students and alumni for their support and the fruitful collaboration all throughout 2021. We wish you a bright, happy and successful year 2022!

    Farewell to our colleagues

    Fabian Schweers

    Fabian Schweers started his research career with the dyn group in 2015. He worked in the EU project CONSENS. Since November 2021, Fabian has been employed at BP Gelsenkirchen.

    Afaq Ahmad

    Afaq did his BSc in electrical and electronic engineering, focusing on automation and control. Later, he completed his master's in Automation & Robotics in Dortmund, during which he focused on process automation. He then started his research career with the dyn group in May 2016 as a research associate, where he focused on real-time optimization. He worked in the DACH, DAAD-PPP, CoPro, and KEEN projects. Since November 2021, Afaq has been employed at BASF SE in large capital projects.

    Jesus David Hernandez Ortiz

    Jesus joined the dyn group in early 2017 as a Marie-Curie fellow. His research focused on the modeling and optimization of the electric arc furnace steelmaking process. As a Marie-Curie Early Stage Researcher, Jesus spent most of his time at Acciai Speciali Terni in Italy. Since August 2020, he has been employed at BASF SE as an E&I engineer.

    New dyn Members

    Marion Fiebiger

    Marion started working as Prof. Engell's administrative assistant in April 2021. After getting to know the administrative work for the various research projects of the dyn and PAS groups and managing the finances for the chair, she accepted a position at the Department of Educational Sciences and Psychology of TU Dortmund. We will miss her, and she will keep the collegial work atmosphere in good memory.

    Engelbert Pasieka

    Engelbert Pasieka studied Chemical Engineering at TU Dortmund, completing both the Bachelor and the Master program. He conducted his master thesis, titled “Metaheuristic Optimal Batch Production Scheduling with Direct and Heuristics-based Encodings”, in cooperation with INOSIM Software GmbH. He joined the group in August 2021 and is currently involved in the OptiProd.NRW project.

    New funded DFG Project

    A new project entitled “Safe Reinforcement Learning for Start-up and Operation of Chemical Processes” has been awarded by the DFG this year. The project is part of the Priority Program 2331 “Machine Learning in Chemical Engineering. Knowledge Meets Data: Interpretability, Extrapolation, Reliability, Trust”. Our project will explore how promising approaches from the field of reinforcement learning can be combined with ideas from robust predictive control to obtain algorithms that can achieve safe operation. The algorithms will be demonstrated for the start-up and operation of a complex distillation column, together with our project partners at TU Berlin (Prof. Repke).

    Machine learning methods for engineers

    During the summer semester of 2021, we created a new course called “Machine learning methods for engineers”. This lecture is the first one at the Department that is completely focused on machine learning. It covers important basics on probability and optimization as well as efficient implementations and current limitations of machine learning. The course was very well received by the students and will be continued as an elective at the master level.

    PAS Members

    Lukas Lüken and Moritz Heinlein joined the newly founded Laboratory for Process Automation Systems (PAS) at TU Dortmund this year.

    Lukas Lüken

    Lukas Lüken received his M.Sc. in Automation Engineering from RWTH Aachen University in January 2021. His master thesis considered the nonlinear model predictive control (NMPC) of a large-scale air separation unit using model reduction techniques and surrogate modeling with neural networks. He started working as a research associate at PAS in February 2021. His research interests lie at the intersection of control engineering, optimization and machine learning. His main focus is on directly incorporating optimization problems such as MPC into end-to-end learning frameworks for efficient and robust learning and control in complex environments. He sincerely thanks his colleagues at DYN and PAS for the warm welcome and looks forward to great learning opportunities.

    Moritz Heinlein

    Moritz Heinlein studied Chemical Engineering in Dortmund from 2015 to 2021, completing both the Bachelor and the Master program. His Bachelor thesis is titled “Theoretische Betrachtung kinetischer Modelle für die Methanchlorierung und Pyrolyse” (Theoretical analysis of kinetic models for methane chlorination and pyrolysis). He completed his master thesis, titled “Comparison of Robust Subspace Predictive Control Methods for Non-linear Systems”, at the PAS chair. In November 2021, he joined the PAS group as a research associate. Currently, his research focusses on reducing the number of branches in the scenario tree for nonlinear multi-stage MPC by exploiting certain system properties.

    Workshop on Robust Model Predictive Control at Uni Freiburg

    DFG Workshop Robust Model Predictive Control (2021): The Laboratory of Process Automation Systems (PAS) met the Systems Control and Optimization Laboratory at Universität Freiburg.

    Felix Fiedler, Benjamin Karg, Lukas Lüken and Sergio Lucia from the Laboratory of Process Automation Systems visited the Systems Control and Optimization (Syscop) Laboratory (Prof. Moritz Diehl) at the University of Freiburg for a workshop on robust model predictive control.
    The workshop is part of our ongoing collaborations with Syscop in the context of the DFG project Robust MPC with high-dimensional uncertainty. During the three-day event (21.-23.06.2021), all participants had the opportunity to present their current research in the field of robust model predictive control.

    do-mpc developer conference at TU Dortmund

    Prof. Dr.-Ing. Sergio Lucia during the presentation.

    The first do-mpc developer conference was hosted from 13.09.-15.09.2021 at TU Dortmund. Over the course of three days, the Laboratory of Process Automation Systems hosted multiple online and offline events for developers, users and supporters of do-mpc. The conference was a great success and we are already looking forward to the next iteration.

    pas @ Process Control Conference 2021

    M.Sc. Benjamin Karg during the presentation.

    From June 1st to 4th, Benjamin Karg represented the pas group at the 23rd International Conference on Process Control in Bratislava, Slovakia. The conference was held in a fully virtual format. pas contributed the following paper to the conference:

  • Benjamin Karg and Sergio Lucia, "Reinforced approximate robust nonlinear model predictive control".
  • pas @ ECC 2021

    Felix Fiedler represented the pas group at the European Control Conference from June 29th to July 2nd. Originally, the conference was planned to take place in Rotterdam, Netherlands, but it was held in a virtual format due to the Covid-19 pandemic. He presented the paper:

  • Felix Fiedler and Sergio Lucia, "On the relationship between data-enabled predictive control and subspace predictive control".
  • Publications

    Journal Articles 2021
    Karg, B., Alamo, T., Lucia, S.:
    Probabilistic performance validation of deep learning-based robust NMPC controllers
    International Journal of Robust and Nonlinear Control, vol. 31, no. 18, pp. 8855-8876
    Solving nonlinear model predictive control problems in real time is still an important challenge despite recent advances in computing hardware, optimization algorithms and tailored implementations. This challenge is even greater when uncertainty is present due to disturbances, unknown parameters or measurement and estimation errors. To enable the application of advanced control schemes to fast systems and on low-cost embedded hardware, we propose to approximate a robust nonlinear model controller using deep learning and to verify its quality using probabilistic validation techniques. We propose a probabilistic validation technique based on finite families, combined with the idea of generalized maximum and constraint backoff to enable statistically valid conclusions related to general performance indicators. The potential of the proposed approach is demonstrated with simulation results of an uncertain nonlinear system.
    Karg, B., Lucia, S.:
    Approximate moving horizon estimation and robust nonlinear model predictive control via deep learning
    Computers & Chemical Engineering, vol. 148, 107266
    Karg, B., Lucia, S.:
    Model Predictive Control for the Internet of Things
    Recent Advances in Model Predictive Control: Theory, Algorithms, and Applications, pp. 165-189
    In this chapter, we argue that model predictive control (MPC) can be a very powerful technique to mitigate some of the challenges that arise when designing and deploying control algorithms in the context of the internet of things. The development of new low-power communication technologies and the widespread availability of sensing and computing capabilities, which are a characteristic of the internet of things, enables the consideration of a large amount of interconnection and feedback loops. However, this also introduces important challenges such as the very limited communication capabilities of low-power wide area networks or the limited computational resources of low-cost micro-controllers. We argue that, as a predictive control scheme, MPC is a powerful technique to deal with limited communication capabilities and can be naturally extended to the context of distributed control, for cases where all the sensing information cannot be centralized. We also present an approach to approximate the solution of MPC problems using deep neural networks, rendering the embedded implementation of complex controllers possible even on very limited hardware. The ideas are illustrated with an example of distributed model predictive control of a smart building.
    Conference Articles 2021
    Fiedler, F., Lucia, S.:
    On the relationship between data-enabled predictive control and subspace predictive control
    ECC 2021: European Control Conference
    Data-enabled predictive control (DeePC) is a recently proposed approach that combines system identification, estimation and control in a single optimization problem, for which only recorded input/output data of the examined system is required. The same premise holds for the subspace predictive control (SPC) method in which a multi-step prediction model is identified from the same data as required for DeePC. This model is then used to formulate a similar optimal control problem. In this work we investigate the relationship between DeePC and SPC. Our primary contribution is to show that SPC is equivalent to DeePC in the deterministic case. We also show the equivalence of both methods in a special case for the non-deterministic formulation. We investigate the advantages and shortcomings of DeePC as opposed to SPC with and without measurement noise and illustrate them with a simulation example.
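    The SPC side of this comparison can be illustrated in a few lines of numpy: Hankel matrices are built from recorded input/output data and the multi-step predictor is identified by least squares. The scalar system, horizons, and data below are purely illustrative; in this noise-free LTI setting the identified predictor is exact, which corresponds to the deterministic case in which equivalence to DeePC holds:

```python
import numpy as np

def hankel(x, L):
    """Stack all length-L windows of the sequence x as columns."""
    return np.stack([x[i:i + L] for i in range(len(x) - L + 1)], axis=1)

# Recorded I/O data from a simple scalar system x+ = 0.8 x + 0.5 u (illustrative).
rng = np.random.default_rng(0)
u = rng.standard_normal(200)
y = np.zeros(200)
for k in range(199):
    y[k + 1] = 0.8 * y[k] + 0.5 * u[k]

Tini, N = 2, 3                # past window (implicit state estimate) and horizon
Up, Uf = np.vsplit(hankel(u, Tini + N), [Tini])
Yp, Yf = np.vsplit(hankel(y, Tini + N), [Tini])

# SPC: identify the multi-step predictor Yf ~ K [Up; Yp; Uf] by least squares.
K = Yf @ np.linalg.pinv(np.vstack([Up, Yp, Uf]))

# Predict the response to a known past and a candidate future input sequence.
u_f = np.array([1.0, 0.0, 0.0])
y_pred = K @ np.concatenate([u[:Tini], y[:Tini], u_f])
```

    Comparing `y_pred` with a direct simulation of the system confirms the exactness of the least-squares predictor on noise-free data.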
    Karg, B., Lucia, S.:
    Reinforced approximate robust nonlinear model predictive control
    PC 2021: 23rd International Conference on Process Control
    Model predictive control (MPC) has established itself as the standard advanced process control method. However, solving the resulting optimization problems in real-time can be challenging, especially when uncertainty is explicitly considered in a robust nonlinear predictive control approach. An increasingly popular alternative to avoid the online solution of the resulting optimization problems is to approximate their solution using neural networks. The networks are trained using many solutions of the MPC problem for different system states and therefore this approach is often called imitation learning. Controllers obtained via imitation learning have important drawbacks, since it is difficult to learn behaviors that are not well represented in the data and they must be redesigned from scratch when the control task changes. In this work, we show that these two drawbacks can be mitigated by combining imitation learning and concepts from reinforcement learning. The central idea is to use imitation learning as a very good initialization of a control policy that is iteratively updated using reinforcement learning, taking advantage of the fact that an explicit and differentiable expression of the approximate MPC controller is available. The efficacy of the combination of the two learning paradigms is highlighted via simulations of a semi-batch industrial polymerization reactor.
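    The two-step idea can be caricatured with a scalar linear plant, a linear policy standing in for the neural network, and a known gain standing in for the sampled MPC law (all numbers invented for illustration):

```python
import numpy as np

# Scalar plant x+ = a*x + b*u with quadratic stage cost (all values invented).
a, b, q, r = 1.2, 1.0, 1.0, 0.1

def closed_loop_cost(gain, x0=1.0, steps=30):
    """Simulated closed-loop cost of the linear policy u = -gain * x."""
    x, cost = x0, 0.0
    for _ in range(steps):
        u = -gain * x
        cost += q * x ** 2 + r * u ** 2
        x = a * x + b * u
    return cost

# Step 1, imitation: fit the policy to sampled expert state/input pairs.
rng = np.random.default_rng(1)
k_expert = 0.9                             # stands in for the sampled MPC law
xs = rng.standard_normal(50)
k = -(xs @ (-k_expert * xs)) / (xs @ xs)   # least-squares fit of the gain

# Step 2, reinforcement: improve the gain directly on the closed-loop
# objective by finite-difference gradient descent on the explicit policy.
eps, lr = 1e-4, 1e-3
for _ in range(200):
    grad = (closed_loop_cost(k + eps) - closed_loop_cost(k - eps)) / (2 * eps)
    k -= lr * grad
```

    The fine-tuned gain outperforms the imitated expert on the closed-loop objective, which is the benefit the paper obtains by combining the two paradigms.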


    Master Theses 2021
    Rohit Panindra:
    Efficient Data Sampling for Learning-based Controllers
    supervised by: Benjamin Karg
    Model predictive control (MPC) is a popular optimisation-based control approach which allows the direct consideration of multivariate, nonlinear systems and arbitrary constraints. But solving the underlying optimal control problem can be prohibitive, especially when the considered system requires a very fast sampling time or when the computational requirements for solving the optimization problem cannot be met. Various methods to reduce the computational complexity have been proposed, amongst which deep neural networks (DNN) were established as promising candidates due to their ability to learn complex functions from data. The quality and amount of data necessary for obtaining the desired approximation quality is crucial for the application of DNN as imitators of MPC. Since generating data means solving many complex optimization problems, the goal of this work is to reduce the amount of data necessary to obtain a desired performance of the learning-based controller compared to the approaches commonly used in the literature. In this work, random sampling, quasi-random sampling, sampling from closed-loop trajectories, and active learning are investigated to identify the most efficient method. Quasi-random sampling involves (scrambled) Halton and (randomised) Sobol sequences, and the batch Bayesian optimisation via local penalization (BBO-LP) method is adopted for active learning. A modification of the BBO-LP method is presented to overcome the problem of sampling infeasible states. The methods are analysed for a 2-dimensional nonlinear CSTR system with one control input and a 14-dimensional spring-mass system with 4 control inputs.
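    As an illustration of the quasi-random option, a Halton sequence can be generated in plain numpy and scaled to a box in state space from which training states are drawn; the bounds and dimensions below are invented, not those of the case studies in the thesis:

```python
import numpy as np

def radical_inverse(i, base):
    """Van der Corput radical inverse of the integer i in the given base."""
    f, result = 1.0, 0.0
    while i > 0:
        f /= base
        result += f * (i % base)
        i //= base
    return result

def halton(n, bases=(2, 3)):
    """First n points of the Halton sequence (one prime base per dimension)."""
    return np.array([[radical_inverse(i, b) for b in bases]
                     for i in range(1, n + 1)])

def scale(points, lower, upper):
    """Map unit-cube samples to the state-space box of interest."""
    lower, upper = np.asarray(lower), np.asarray(upper)
    return lower + points * (upper - lower)

# e.g. 8 low-discrepancy candidate states for a 2-dimensional state space
states = scale(halton(8), lower=[0.1, 100.0], upper=[2.0, 150.0])
```

    Low-discrepancy points fill the box more evenly than pseudo-random draws, which is why quasi-random sampling is attractive when every sample requires solving an MPC problem.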
    Moritz Heinlein:
    Comparison of Robust Subspace Predictive Control Methods for non-linear Systems
    supervised by: Felix Fiedler
    Subspace predictive control (SPC) is a linear predictive control approach that creates a linear multi-step-ahead prediction model through projections of the input/output data of a system. It is thus easy to implement and can even be solved explicitly under certain assumptions. Due to the linear nature of the model, the applicability to nonlinear systems is restricted. However, related linear predictive control algorithms such as the recently proposed data-enabled predictive control have shown good results in nonlinear case studies. In this work, the performance of SPC is compared to other linear predictive control algorithms by means of case studies with nonlinear systems, in which it partly achieves a performance similar to nonlinear model predictive control with the full system model. The main contributions of this work are two new approaches to further robustify SPC in terms of closed-loop convergence and constraint satisfaction. Regularized SPC uses ridge regression in the parameter estimation step to better condition the parameters. Better convergence rates can be achieved, as demonstrated in the application to a nonlinear Lotka-Volterra model. Multicluster SPC uses within-cluster regression to sort the input/output data into clusters according to their proximity and linear prediction accuracy. In this way, the dynamics of different linearizations of the nonlinear system can be captured. The multiple models are implemented similarly to multi-stage MPC. Although no stability guarantees can be given, this leads to better constraint satisfaction and convergence rates when the number of clusters can be chosen high, as shown by means of the same Lotka-Volterra model.

    Journal Articles 2021

    Azadi, P., Winz, J., Leo, E., Klock, R., Engell, S.:
    A hybrid dynamic model for the prediction of molten iron and slag quality indices of a large-scale blast furnace
    Computers & Chemical Engineering, vol. 156, pp. 107573, Full Paper
    The stable, economically optimal, and environmentally friendly operation of blast furnaces is still a challenge. Blast furnaces consume huge amounts of energy and are among the biggest sources of CO2 in the metal industry. The operation of industrial blast furnaces is challenging because of their sheer size, multi-phase and multi-scale physics and chemistry, slow dynamics with response times of 8 hours and more, and the lack of direct measurements of most of the important inner variables. Model-based schemes are prime candidates for providing the missing information and improving the operation. However, only recently have such schemes been applied successfully, and there is still a lot of room for improvement. The spatial extension, the lack of precise mechanistic knowledge about the chemical and physical phenomena, and the presence of unmeasured disturbances make the application of first-principles models to process operations extremely challenging. In this work, a hybrid dynamic model is developed for the prediction of the hot metal silicon content and the slag basicity in the blast furnace process. These two variables are the key indicators of the internal process conditions, and the ultimate goal of our work is to control them by a model-based scheme. The core relationships between the process variables are imposed by a first-principles-based steady-state model, and a parallel data-based model represents the process dynamics and compensates for the deficiencies of the mechanistic model. Validation results for real plant measurements of a world-scale blast furnace show that the hybrid model is more accurate than the rigorous model and a stand-alone data-based model in long-term predictions of the dynamic behavior of the process.
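    The parallel hybrid structure can be illustrated with a toy example: a deliberately imperfect steady-state mechanistic map provides the core input/output relation, and a linear data-based part fitted to plant data supplies the dynamic correction. Everything below is invented for illustration; the blast furnace model itself is of course far more elaborate:

```python
import numpy as np

def mechanistic(u):
    """Stand-in first-principles steady-state relation."""
    return 2.0 * u + 1.0

# "Plant" data: the true process has first-order dynamics that the
# mechanistic steady-state model does not describe.
rng = np.random.default_rng(0)
u = rng.uniform(0.0, 1.0, 300)
y = np.zeros(301)
for k in range(300):
    y[k + 1] = 0.7 * y[k] + 0.3 * mechanistic(u[k])

# Data-based part: fit a linear correction for what the mechanistic
# one-step prediction misses.
c = y[1:] - mechanistic(u)
phi = np.stack([y[:-1], u, np.ones_like(u)], axis=1)
theta, *_ = np.linalg.lstsq(phi, c, rcond=None)

def hybrid_predict(y_now, u_now):
    """One-step-ahead parallel hybrid prediction."""
    return mechanistic(u_now) + theta @ np.array([y_now, u_now, 1.0])
```

    On this noise-free toy problem the hybrid prediction is exact, whereas the mechanistic part alone is systematically wrong during transients.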
    Klanke, C., Yfantis, V., Corominas, F., Engell, S.:
    Short-term scheduling of make-and-pack processes in the consumer goods industry using discrete-time and precedence-based MILP models
    Computers & Chemical Engineering, vol. 154, pp. 107453, Full Paper
    This work deals with the short-term scheduling of a two-stage continuous make-and-pack process with finite intermediate buffer and sequence-dependent changeovers from the consumer goods industry. In the present layout of the plant under consideration, the two stages, product formulation and packaging, are directly coupled, i.e. the products of the formulation stage go directly to their dedicated unit in the packaging stage. For different products, either the formulation or the packaging stage can be the bottleneck due to stage- and product-dependent processing rates. A gain in productivity can be obtained if the two stages are decoupled by a buffer so that the formulation lines and the packaging lines can both run at full capacity. To evaluate the benefit of introducing a buffer between the stages, a rigorous discrete-time mixed-integer linear programming (MILP) model was developed. As the results of the discrete-time model were unsatisfactory with respect to total plant downtime due to changeovers and idle times, a two-step solution strategy including a second, immediate-precedence-based MILP model was developed. As the problem is intractable for the planning horizons of interest, an order-decomposition strategy for both models, enhanced by several heuristics, was incorporated in the solution strategy. It is demonstrated that the redesign of the production plant yields significant productivity improvements that can be realized using the proposed scheduling approach. The computational results on several real cases show that the increased modeling and development effort of the two-step solution strategy pays off in terms of solution quality and computation times.
    Leo, E., Engell, S.:
    Condition-based maintenance optimization via stochastic programming with endogenous uncertainty
    Computers & Chemical Engineering, vol. 156, pp. 107550, Full Paper
    In this work we address the challenge of integrating production planning and maintenance optimization for a process plant. We consider uncertain predictions of the equipment degradation by adopting a stochastic programming formulation with decision-dependent uncertainty. The probability distribution of the uncertain parameters, in this work the remaining useful time of the plant, depends on the operating conditions of the plant; this dependency is modeled by embedding a prognosis model, the Cox model, into the optimization problem. A separation of the variables is suggested to decompose the MINLP formulation via two different primal decomposition algorithms. We provide computational results and compare the performance of the proposed decompositions with the global solver BARON enhanced with a custom branching priority strategy.
    Leo, E., Dalle Ave, G., Harjunkoski, I., Engell, S.:
    Stochastic short-term integrated electricity procurement and production scheduling for a large consumer
    Computers & Chemical Engineering, vol. 145, pp. 107191, Full Paper
    This paper addresses the problem faced by large electricity consumers of simultaneously determining the optimal day-ahead electricity procurement and the optimal energy-aware production schedule. The inherent uncertainty of the problem, due to the bidding process in the day-ahead market, is dealt with by means of the stochastic programming modeling framework. In particular, a two-stage problem is formulated with the aim of establishing the optimal bidding strategy and the optimal production schedule while hedging against price uncertainty. The optimal integrated solution is defined to minimize the overall cost and to control the risk of high-cost scenarios due to uncertain price peaks. The stochastic model is solved with a scenario-decomposition approach. Extensive numerical experiments have been carried out to assess the performance of the proposed decision approach. The results collected for an industrially relevant case study show the superiority of the proposed methodology in comparison with a deterministic approach.
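    The two-stage structure can be sketched as a small deterministic-equivalent linear program (with invented numbers, not the paper's market model): the day-ahead purchase is the first-stage decision and the per-scenario balancing purchases are the recourse:

```python
import numpy as np
from scipy.optimize import linprog

p = np.array([0.3, 0.4, 0.3])            # scenario probabilities
demand = np.array([80.0, 100.0, 120.0])  # uncertain demand per scenario
c_da, c_bal = 50.0, 80.0                 # day-ahead vs. balancing price

# Decision vector [x, y_0, y_1, y_2]: minimize c_da*x + sum_s p_s*c_bal*y_s
cost = np.concatenate([[c_da], p * c_bal])
# Coverage in every scenario: x + y_s >= demand_s  ->  -x - y_s <= -demand_s
A_ub = np.hstack([-np.ones((3, 1)), -np.eye(3)])
res = linprog(cost, A_ub=A_ub, b_ub=-demand, bounds=[(0, None)] * 4)
x_opt = res.x[0]   # optimal day-ahead purchase, hedged against demand risk
```

    With these numbers the newsvendor-type trade-off puts the optimal day-ahead purchase at the middle demand scenario: buying beyond it costs more than the expected balancing price of the rare high-demand case.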
    Gottu Mukkula, A.R., Mateáš, M., Fikar, M., Paulen, R.:
    Robust multi-stage model-based design of optimal experiments for nonlinear estimation
    Computers & Chemical Engineering, vol. 155, pp. 107499, Full Paper
    We study approaches to the robust model-based design of experiments in the context of maximum-likelihood estimation. These approaches provide robustification of model-based methodologies for the design of optimal experiments by accounting for the effect of the parametric uncertainty. We study the problem of robust optimal design of experiments in the framework of nonlinear least-squares parameter estimation using linearized confidence regions. We investigate several well-known robustification frameworks in this respect and propose a novel methodology based on multi-stage robust optimization. The proposed methodology aims at problems, where the experiments are designed sequentially with a possibility of re-estimation in-between the experiments. The multi-stage formalism aids in identifying experiments that are better conducted in the early phase of experimentation, where parameter knowledge is poor. We demonstrate the findings and effectiveness of the proposed methodology using four case studies of varying complexity.
    Gottu Mukkula, A.R., Engell, S.:
    Handling measurement delay in iterative real-time optimization methods
    Processes, vol. 9, no. 10, pp. 1800, Full Paper
    This paper is concerned with the real-time optimization (RTO) of chemical plants, i.e., the optimization of the steady-state operating points during operation, based on inaccurate models. Specifically, modifier adaptation is employed to cope with the plant-model mismatch, which corrects the plant model and the constraint functions by bias and gradient correction terms that are computed from measured variables at the steady-states of the plant. This implies that the sampling time of the iterative RTO scheme is lower-bounded by the time to reach a new steady-state after the previously computed inputs were applied. If analytical process measurements (PAT technology) are used to obtain the steady-state responses, time delays occur due to the measurement delay of the PAT device and due to the transportation delay if the samples are transported to the instrument via pipes. This situation is quite common because the PAT devices can often only be installed at a certain distance from the measurement location. The presence of these time delays slows down the iterative real-time optimization, as the time from the application of a new set of inputs to receiving the steady-state information increases further. In this paper, a proactive perturbation scheme is proposed to efficiently utilize the idle time by intelligently scheduling the process inputs taking into account the time delays to obtain the steady-state process measurements. The performance of the proposed proactive perturbation scheme is demonstrated for two examples, the Williams–Otto reactor benchmark and a lithiation process. The simulation results show that the proposed proactive perturbation scheme can speed up the convergence to the true plant optimum significantly.
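    The underlying modifier-adaptation iteration (without the proactive perturbation scheme that is the contribution of this paper) fits in a few lines: at each plant steady state, a gradient modifier computed from plant measurements corrects the model cost before it is re-optimized. Plant, model, and tuning below are illustrative:

```python
import numpy as np
from scipy.optimize import minimize_scalar

f_plant = lambda u: (u - 2.0) ** 2         # true plant cost, optimum at u = 2
f_model = lambda u: (u - 1.0) ** 2 + 0.5   # mismatched nominal model

def plant_gradient(u, h=1e-3):
    """Plant gradient estimated from steady-state measurements."""
    return (f_plant(u + h) - f_plant(u - h)) / (2 * h)

u, K = 0.0, 0.5                            # initial input and filter gain
for _ in range(30):
    lam = plant_gradient(u) - 2.0 * (u - 1.0)     # gradient modifier
    modified = lambda v, u0=u, l=lam: f_model(v) + l * (v - u0)
    u_next = minimize_scalar(modified, bounds=(-5.0, 5.0), method="bounded").x
    u += K * (u_next - u)                  # filtered input update
```

    Despite the wrong model, the iterates converge to the plant optimum u = 2, because the first-order correction matches the plant and model gradients at each iterate; each iteration requires waiting for a plant steady state, which is exactly where the measurement delays addressed in the paper hurt.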
    Rahimi-Adli, K., Leo, E., Beisheim, B., Engell, S.:
    Optimisation of the operation of an industrial power plant under steam demand uncertainty
    Energies, vol. 14, no. 21, pp. 7213, Full Paper
    The operation of on-site power plants in the chemical industry is typically determined by the steam demand of the production plants. This demand is uncertain due to deviations from the production plan and fluctuations in the operation of the plants. The steam demand uncertainty can result in an inefficient operation of the power plant due to a surplus or deficiency of steam that is needed to balance the steam network. In this contribution, it is proposed to use two-stage stochastic programming on a moving horizon to cope with the uncertainty. In each iteration of the moving horizon scheme, the model parameters are updated according to the new information acquired from the plants and the optimisation is re-executed. Hedging against steam demand uncertainty results in a reduction of the fuel consumption and a more economic generation of electric power, which can result in significant savings in the operating cost of the power plant. Moreover, unplanned load reductions due to lack of steam can be avoided. The application of the new approach is demonstrated for the on-site power plant of INEOS in Köln, and significant savings are reported in exemplary simulations.
    Janus, T., Engell, S.:
    Iterative Process Design with Surrogate-Assisted Global Flowsheet Optimization
    Chemie Ingenieur Technik, vol. 93, no. 12, pp. 2019-2028, Full Paper
    Flowsheet optimization is an important part of process design, where commercial process simulators are widely used due to their extensive library of models and their ease of use. However, applying a framework for global flowsheet optimization on top of them is computationally expensive. Based on machine learning methods, we added mechanisms for the rejection and generation of candidates to a framework for global flowsheet optimization. These extensions halve the time needed for optimization, so that the framework can be integrated into a workflow for iterative process design.
    Winz, J., Nentwich, C., Engell, S.:
    Surrogate Modeling of Thermodynamic Equilibria: Applications, Sampling and Optimization
    Chemie Ingenieur Technik, vol. 93, no. 12, pp. 1898-1906, Full Paper
    Models based on first principles are an effective way to model chemical processes. The quality of these models depends critically on the accurate description of the thermodynamic equilibria. This is provided by modern thermodynamic models, e.g., PC-SAFT, but these come at a high computational cost, which makes process optimization challenging. This can be addressed by using surrogate models to approximate the equilibrium calculations. A high accuracy of the surrogate model can be achieved by carefully choosing the points at which the original function is evaluated to create the data for the training of the surrogate models, called sampling. Using a case study, different approaches to sampling are discussed and evaluated, with a focus on new approaches to adaptive sampling.
    Subramanian, S., Lucia, S., Paulen, R., Engell, S.:
    Tube-enhanced multi-stage model predictive control for flexible robust control of constrained linear systems with additive and parametric uncertainties
    International Journal of Robust and Nonlinear Control, vol. 31, no. 9, pp. 4458-4487, Full Paper
    The trade-off between optimality and complexity has been one of the most important challenges in the field of robust model predictive control (MPC). To address the challenge, we propose a flexible robust MPC scheme by synergizing the multi-stage and tube-based MPC approaches. The key idea is to exploit the nonconservatism of the multi-stage MPC and the simplicity of the tube-based MPC. The proposed scheme provides two options for the user to determine the trade-off depending on the application: the choice of the robust horizon and the classification of the uncertainties. Beyond the robust horizon, the branching of the scenario-tree employed in multi-stage MPC is avoided with the help of tubes. The growth of the problem size with respect to the number of uncertainties is reduced by handling small uncertainties via an invariant tube that can be computed offline. This results in linear growth of the problem size beyond the robust horizon and no growth of the problem size concerning small magnitude uncertainties. The proposed approach helps to achieve a desired trade-off between optimality and complexity compared to existing robust MPC approaches. We show that the proposed approach is robustly asymptotically stable. Its advantages are demonstrated for a CSTR example.
    Subramanian, S., Abdelsalam, Y., Lucia, S., Engell, S.:
    Robust Tube-Enhanced Multi-Stage NMPC with Stability Guarantees
    IEEE Control Systems Letters, vol. 6, pp. 1112-1117, Full Paper
    We propose a robust Nonlinear Model Predictive Control (NMPC) scheme that provides an improved trade-off between optimality and complexity when compared to other available strategies. Two controllers are employed in the proposed framework: A multi-stage primary controller that optimizes a given objective in the presence of uncertainties with tightened constraints and a multi-stage ancillary controller that tracks the predicted tree of state and input trajectories of the primary controller. The primary controller optimizes the original objective by considering different realizations of the most significant uncertainties in the predictions. The ancillary controller provides robustness against other uncertainties by tracking the predicted tree of state and input trajectories of the primary controller. We establish sufficient conditions for closed-loop stability. The advantages of the scheme are demonstrated for a continuous stirred tank reactor (CSTR) example.
    Hernández, J.D., Onofri, L., Engell, S.:
    Numerical Estimation of the Geometry and Temperature of An Alternating Current Steelmaking Electric Arc
    Steel Research International, vol. 92, no. 3, 2000386, Full Paper
    A channel arc model (CAM) that predicts the temperature and the geometry of an electric arc from its voltage and impedance set-points is presented. The core of the model is a nonlinear programming (NLP) formulation that minimizes the entropy production of a plasma column, the physical and electrical properties of which satisfy the Elenbaas–Heller equation and Ohm's law. The radiative properties of the plasma are approximated utilizing the net emission coefficient (NEC), and the NLP is solved using a global numerical solver. The effects of the voltage and impedance set-points on the length of the electric arc are studied, and a linear formula that estimates the length of the arc in terms of its electrical set-points is deduced. The length of various electric arcs is measured in a fully operative electric arc furnace (EAF), and the results are used to validate the proposed models. The errors in the predictions of the models are 0.5 and 0.4 cm. In comparison, the existing empirical and Bowman formulae estimate the length of the experimental arcs with errors of 2.1 and 2.6 cm. A simplified formula to estimate the temperature of an electric arc in terms of its electrical set-points is also presented.

    Conference Papers 2021

    Klanke, C., Bleidorn, D., Koslowski, C., Sonntag, C., Engell, S.:
    Simulation-based scheduling of a large-scale industrial formulation plant using a heuristics-assisted genetic algorithm
    GECCO '21: Proceedings of the Genetic and Evolutionary Computation Conference Companion, pp. 1587-1595, Full Paper
    Research has brought forth several promising approaches, e.g. mixed-integer linear programming or constraint programming, to represent industrial batch production plants and to optimize their production schedules. But still, after decades of research in the field, scheduling in industry is done (semi-)manually in almost all cases. The main reasons for this, besides the intrinsic combinatorial complexity, are that model development and maintenance require expert knowledge. This work tackles these challenges by using a simulation-based optimization approach in which a genetic algorithm provides high-level encodings of schedules and an industrial-strength discrete-event simulator provides a detailed model of the plant and is used as a fitness evaluator. To enable this approach to solve problems of industrial complexity, scheduling heuristics are used to reduce the search space to a reasonable size that still contains the important degrees of freedom, so that optimal or at least high-quality solutions are obtained. The case study considered here is an industrial two-stage formulation plant which is modeled in detail, down to the level of shift models of the operators and mass balances from source to sink, thus ensuring that the computed schedules are directly applicable at the real-world plant.
    Gerlich, S., Arab, H., Buchholz, M., Engell, S.:
    Experimental application of individual column state and parameter estimation in SMB processes to an amino acid separation
    ADCHEM 2021: 16th IFAC Symposium on Advanced Control of Chemical Processes, Full Paper
    The simulated moving bed (SMB) process is a highly efficient continuous chromatographic separation process. Due to its hybrid process dynamics, which lead to discontinuities and sharp fronts in the state trajectories, optimal SMB process operation is challenging. The process performance can be improved by applying model-based optimizing control methods, for which online information about the states and the individual column parameters is required. The strategy for simultaneous state and parameter estimation used here exploits the switching nature of the SMB process. The successful experimental application of the strategy is demonstrated for the continuous separation of two amino acids on an SMB pilot plant, where extra-column equipment effects need to be considered.
    Cegla, M., Engell, S.:
    Application of model predictive control to the reactive extrusion of ε-caprolactone in a twin-screw extruder
    ADCHEM 2021: 16th IFAC Symposium on Advanced Control of Chemical Processes, Full Paper
    In this work, the control of the reactive extrusion of ε-caprolactone in a twin-screw extruder using nonlinear model predictive control with a tracking objective is investigated. For this, the modeling of the extrusion process using a one-dimensional mechanistic model is presented and extended to reactive extrusion systems. A novel modeling approach that describes the pressure as a differential state is proposed in order to be able to use efficient optimization methods. Results for two scenarios, a change of the product quality and a change of the throughput, are shown. The tuning of the controller is discussed and the benefits over traditional control methods are elaborated.
    Azadi, P., Klock, R., Engell, S.:
    Efficient utilization of active carbon in a blast furnace through a black-box model-based optimizing control scheme
    ADCHEM 2021: 16th IFAC Symposium on Advanced Control of Chemical Processes, Full Paper
    The daily operation of blast furnaces in the steel industry is only partly automated. The thermal control of the process is still carried out manually by the operators. Their decisions may lead to an oversupply of carbon-based fuels, causing surplus production of carbon monoxide. The unexploited excess of carbon monoxide in the iron oxide reduction reactions increases the total carbon supply and hence the cost and the CO2 emissions. To maximize the carbon monoxide efficiency in the reduction reactions, the authors propose a dynamic optimizing control scheme and evaluate its performance by simulation studies using real operational data. The optimizer adjusts the fast dynamics of the blast furnace to counteract the inefficient utilization of carbon monoxide that is influenced by the slow dynamics, subject to process productivity and safety constraints. Simulation results demonstrate that the control scheme can lead to full conversion in the reduction reactions as well as a reduction of the total carbon supply.
    Dewasme, L., Vande Wouwer, A., Letchindjio, C.G.F., Ahmad, A., Engell, S.:
    Maximum-likelihood extremum seeking control of microalgae cultures
    ADCHEM 2021: 16th IFAC Symposium on Advanced Control of Chemical Processes, Full Paper
    This paper proposes a model-free extremum seeking control (ESC) approach to optimize the productivity of continuous cultures of microalgae, considering the dilution rate and the light intensity as manipulated variables and the biomass concentration as the single measurement. The resulting two-input single-output optimization problem is first solved using a recursive least-squares strategy based on the representation of the process by a Hammerstein block-oriented model. In order to cope with the presence of noise on the regressor variables (input and output signals), the problem is then reformulated as a maximum-likelihood estimation problem, which is solved on a moving horizon. Simulation results demonstrate the performance of the method.
    Abdelsalam, Y., Subramanian, S., Aboelnour, M., Engell, S.:
    Adaptive tube-enhanced multi-stage nonlinear model predictive control
    ADCHEM 2021: 16th IFAC Symposium on Advanced Control of Chemical Processes, Full Paper
    A robust adaptive controller for nonlinear plants with parametric uncertainties, additive disturbances, and state estimation errors based on the tube-enhanced multi-stage (TEMS) nonlinear model predictive (NMPC) framework is proposed. In TEMS NMPC, primary multi-stage NMPC is used to achieve robustness against the uncertainties which have a large effect on the evolution of the state of the plant, and ancillary multi-stage NMPC is used to track the predictions of the primary controller to counteract the effect of the small uncertainties. We propose updating, at each time step, the uncertainty set considered by the scenario trees of the primary and ancillary controllers with a tighter non-falsified uncertainty set, which results from solving a guaranteed parameter estimation (GPE) optimization problem. This produces significant performance improvements over the non-adaptive implementation as will be shown on the Williams-Otto continuous stirred tank reactor (CSTR) case study.
    Klanke, C., Bleidorn, D.R., Yfantis, V., Engell, S.:
    Combining Constraint Programming and Temporal Decomposition Approaches - Scheduling of an Industrial Formulation Plant
    CPAIOR 2021: Integration of Constraint Programming, Artificial Intelligence, and Operations Research, Full Paper
    This contribution deals with the development of a Constraint Programming (CP) model and solution strategy for a two-stage industrial formulation plant with parallel production units for crop protection chemicals. Optimal scheduling of this plant is difficult: a high number of units and operations have to be scheduled while at the same time a high degree of coupling between the operations is present due to the need for synchronizing charging and discharging operations. In the investigated problem setting, the formulation lines produce several intermediates that are filled into a variety of types of final containers by filling stations. Formulation lines and filling stations each consist of parallel, non-identical sets of equipment units. Buffer tanks are used to decouple the two stages and to increase the capacity utilization of the overall plant. The CP model developed in this work solves small instances of the scheduling problem monolithically. To deal with large instances, a decomposition algorithm is developed: the overall set of batches is divided into subsets which are scheduled iteratively. The algorithm is designed in a moving horizon fashion in order to counteract the disadvantages of the limited lookahead that order-based decomposition approaches typically suffer from. The results show that the complex scheduling problem can be solved within acceptable solution times and that the proposed moving horizon strategy (MHS) yields additional benefits in terms of solution quality.
    Kaiser, S., Menzel, T., Engell, S.:
    Focusing experiments in the early phase process design by process optimization and global sensitivity analysis
    ESCAPE 2021: 31st European Symposium on Computer Aided Process Engineering, Full Paper
    Accurate process models, which are the key to a reliable model-based process design, usually need to be identified on the basis of expensive laboratory experiments. In this work, we present an integrated methodology that focuses these experiments on the most relevant model parameters by combining a global sensitivity analysis with optimal design of experiments. We apply the methodology to the homogeneously catalyzed hydroformylation of 1-dodecene as an example process. The comparison with an ordinary optimal experimental design and a factorial design shows that this approach can reduce the experimental effort. Furthermore, we compare the use of a local and a global sensitivity analysis; the global sensitivity analysis can indeed enhance the process design.
    Winz, J., Engell, S.:
    Optimization based sampling for gray-box modeling using a modified upper confidence bound acquisition function
    ESCAPE 2021: 31st European Symposium on Computer Aided Process Engineering, Full Paper
    Chemical process simulations rely on the accurate representation of thermodynamic phenomena. Complex models like the Perturbed-Chain Statistical Associating Fluid Theory (PC-SAFT) provide such accurate descriptions, but due to their implicit formulation, process optimization based on such models is computationally very demanding. This issue can be avoided by surrogate modeling, where a data-based model approximates the costly computation. When setting up a surrogate model, the question arises which data to collect to fit the surrogate. In previous work, methods have been developed that combine sampling with optimization in order to only collect data in regions of interest for the optimization. These methods, however, mostly assume that the surrogate model describes the objective function. In this work, an extension to gray-box models is proposed.
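The flavor of acquisition-driven sampling can be sketched with a plain kriging model and a confidence-bound criterion. This is a generic illustration, not the modified acquisition function of the paper; the RBF kernel, length scale, candidate grid and kappa are arbitrary choices:

```python
import numpy as np

def rbf(a, b, ls=0.5):
    """Squared-exponential kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-6):
    """Plain Gaussian-process (kriging) posterior mean and std."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_query, x_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    v = np.linalg.solve(K, Ks.T)
    var = np.clip(np.diag(rbf(x_query, x_query)) - np.sum(Ks * v.T, axis=1),
                  0.0, None)
    return mean, np.sqrt(var)

def next_sample(x_train, y_train, x_cand, kappa=2.0):
    """Confidence-bound acquisition for minimization: sample where
    mean - kappa * std is lowest (optimistic under uncertainty)."""
    mean, std = gp_posterior(x_train, y_train, x_cand)
    return x_cand[np.argmin(mean - kappa * std)]

f = lambda x: (x - 0.3) ** 2            # stand-in for a costly model
x_train = np.array([0.0, 0.5, 1.0])
x_cand = np.linspace(0.0, 1.0, 101)
x_next = next_sample(x_train, f(x_train), x_cand)
```

The gray-box extension of the paper changes what the surrogate represents, not this sample-then-refit loop.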
    Semrau, R., Tamagnini, F., Tatulea-Codrean, A., Engell, S.:
    Application of Constrained EKF based State Estimation to a Coiled Flow Inverter Copolymerization Reactor
    ESCAPE 2021: 31st European Symposium on Computer Aided Process Engineering, Full Paper
    The quality of the reconstruction of the states of the controlled system is a key factor for the performance of nonlinear model-based control. In this work, the design and experimental evaluation of a Constrained Extended Kalman Filter (CEKF) for a continuous copolymerization process is presented. The experimental set-up and the model are introduced. The performance of the CEKF scheme with a systematic tuning and a sampling-based constraint-handling approach is tested in simulation studies, and the CEKF formulation is validated on experimental data.
    Elsheikh, M., Hille, R., Tatulea-Codrean, A., Krämer, S.:
    A comparative review of multi-rate moving horizon estimation schemes for bioprocess applications
    ESCAPE 2021: 31st European Symposium on Computer Aided Process Engineering, Full Paper
    Advanced control and monitoring of bioprocesses depend on accurate state and parameter information. At the same time, bioprocesses are well known for their time-varying behavior and the difficulty of obtaining online measurements of the important process states. The selection and the tuning of the estimation algorithms are therefore crucial to the design of reliable monitoring tools. In this work, we discuss the design of several Moving Horizon Estimation schemes for a class of bioprocesses. We compare the algorithms in terms of the arrival cost computation and address the implementation of a multi-rate measurement structure. The tuning of the estimators and further details are illustrated using a cell culture case study, where we show that not all the estimators under investigation can cope equally well with process uncertainty and multi-rate measurements. As an outcome of the comparative study, we provide a set of guidelines for selecting an appropriate estimator for bioprocess applications.
    Ebrahim, T., Engell, S.:
    Robust Model Predictive Control for Switched Nonlinear Dynamic Systems
    ECC 2021: European Control Conference
    This paper discusses a new approach to model predictive control (MPC) of switched nonlinear dynamic systems. Optimal control schemes that are based on relaxation followed by integrality restoration have been proven to be computationally efficient in handling switched systems and are therefore promising candidates for use in MPC algorithms. The main disadvantage of such schemes, however, is the inability to guarantee optimality or even feasibility of the generated solution after the integrality restoration step. To solve this problem, in this paper an upper bound of the expected integer approximation error is computed and integrated as an additive disturbance into the relaxed model used in the predictions. By using robust MPC schemes, e.g. multi-stage MPC, the resulting uncertain system can be handled such that feasibility of the switched system is guaranteed. The development of the scheme is described and its performance is illustrated via simulation studies of a nonlinear switched system with parametric uncertainty.

    Conference Presentations 2021

    Filippo Tamagnini:
    Dynamic model of a batch evaporator for the controlled production of nanoparticles via the sol-gel route
    13th ECCE and 6th ECAB
    Nanoparticles are a class of materials with interesting properties that make them suitable as building blocks for the production of pigments, nanofluids or photocatalysts. The sol-gel route is a popular production process for these particles due to the mild conditions within the equipment. In this route, a precursor is hydrolysed in water, and nanoparticles are formed as a result of agglomeration and deagglomeration. Typically, in order to control the final size of the nanoparticles, a distillation step is required to reduce the content of the byproducts of the hydrolysis reaction that could hinder the stability of the product. The possibility to produce nanoparticles with controlled characteristics, in terms of particle size and/or crystallinity, is very appealing, as it makes it possible to fine-tune the properties of whichever material is produced downstream. Due to the typical scale of production and the limitations in handling the growing nanoparticles during the first stage of the synthesis (namely, the formation of a viscous gel), the production is realized in batch reactors, which makes it difficult to apply control techniques beyond predefined temperature profiles. The subject of this contribution is the formulation of a simple but comprehensive dynamic model (mass and energy balances coupled with moment balance equations for tracking the evolution of the particle size distribution) for the optimization of the operation of such equipment, and the definition of a procedure for its practical parameterization. The application of the model in conjunction with advanced control techniques (trajectory optimization and tracking, state estimation) is then discussed and the optimization potential is assessed.
    Stefanie Kaiser:
    Accelerating the early phase in process development by integrating experimental work, modeling and optimization
    13th ECCE and 6th ECAB
    The early phase in the development of new chemical processes is crucial, as it has a large impact on the final investment and production costs while at the same time the available information is limited and decisions have to be taken under uncertainty. Often experimental investigations are necessary but time-consuming, while there is pressure to reduce the time-to-market. Two general approaches for process design are the heuristics-based approach and the optimization-based approach. In the first one, which is widely used in industry, decisions are made based on expert knowledge combined with experimental work. The second one, which has long been advocated in research, has a large potential as it can explore the full design space and can lead to novel solutions. However, the prerequisite for the application of the optimization-based approach is the availability of accurate models that describe the chemical and physical phenomena. Developing such models is challenging, in particular for processes that involve several phases, so in design optimization one has to consider uncertainties in the models. Steimel and Engell (2016) proposed a two-stage optimization procedure for superstructure optimization under uncertainty. It is based on a representation of the uncertainty by a set of scenarios of the uncertain model parameters. In the first stage, the design degrees of freedom that cannot be adapted during plant operation are optimized. In the second stage, the operational degrees of freedom of the plant (e.g. reaction temperatures, feed rates or reflux ratios) are adjusted to the realization of the uncertainties, as the control systems or the operators will do during operation.
Nonetheless, the selection of the right process equipment and its sizing depends on the magnitude of the uncertainties in the models, and experimental work is required to reduce these in order to arrive at a near-optimal design. In this work, we present an integrated approach that combines superstructure optimization under uncertainty with sensitivity analysis and optimal design of experiments. The goal is to identify one design that is cost-optimal for the considered realizations of the parametric uncertainties in the process model. As this usually is not possible for early-phase process models with large uncertainties, the models need to be refined. For this purpose, the parameters that influence the process cost the most are identified using a sensitivity analysis and then determined further by experiments that are planned using a modified optimal experimental design. We demonstrate the application of the proposed procedure for the design of the homogeneously catalyzed hydroaminomethylation of 1-decene in a thermomorphic solvent system. The reaction system consists of two subsequent reactions, the hydroformylation of 1-decene and the reductive amination of undecanal (Bianga et al., 2020). For modelling the phase behavior of the solvent system, thermodynamic models are essential. In this work, the equation of state PC-SAFT is used to model gas solubilities and liquid-liquid equilibria. Since the direct use of PC-SAFT in the optimization is infeasible due to the large computational effort, surrogate models are used as proposed in Nentwich and Engell (2019). A superstructure that includes different design alternatives, including conducting the reaction as two subsequent reactions or as a tandem reaction, is optimized, and it is shown that the parametric uncertainties make a selection of one of the design alternatives impossible, and hence a further refinement of the models is needed.
The impact of the model parameters is analyzed and the identification of one optimal design by application of the proposed methodology is presented.
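The two-stage structure described above can be illustrated on a toy problem. The cost function, design set and scenario set below are entirely hypothetical; only the structure (a first-stage design fixed across all scenarios, a second-stage operational variable adapted per scenario) follows the two-stage formulation of Steimel and Engell (2016):

```python
def two_stage_design(designs, operations, scenarios, cost):
    """Toy two-stage optimization under uncertainty: stage 1 picks the
    design d (fixed for all scenarios); stage 2 adapts the operational
    degree of freedom z to each scenario s (recourse). Returns the design
    minimizing the scenario-averaged cost."""
    def expected_cost(d):
        total = 0.0
        for s in scenarios:
            total += min(cost(d, z, s) for z in operations)  # recourse per scenario
        return total / len(scenarios)
    return min(designs, key=expected_cost)

# illustrative cost: capacity d (investment), load z, uncertain demand s;
# operating above the installed capacity incurs a large penalty
cost = lambda d, z, s: 2.0 * d + (z - s) ** 2 + (10.0 if z > d else 0.0)
designs = [1.0, 2.0, 3.0]
operations = [0.5 * i for i in range(7)]        # z in {0, 0.5, ..., 3}
scenarios = [1.0, 2.0, 3.0]
best = two_stage_design(designs, operations, scenarios, cost)
```

In the actual superstructure optimization, both stages are continuous/discrete NLP decisions rather than small enumerable sets, but the nesting of the scenario-wise inner minimization is the same.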
    Joschka Winz:
    Model based optimal design of dynamic experiments in gray-box and black-box modeling of fermentation processes
    13th ECCE and 6th ECAB
    The modeling of fermentations of sporulating bacteria is a challenging task. The first challenge is to characterize the large and complex network of biochemical reactions that occur inside the cells, including the substrate conversion to biomass and the production of messenger substances that communicate the initiation of sporulation. Secondly, the sporulation process itself is also difficult to describe as it occurs over different stages in which the bacteria change structurally. A promising modeling approach is the use of data-based machine learning models to describe some or all of the mentioned phenomena. Data-based models can be used to describe the overall process in a black-box fashion or be parts of gray-box models where some elements represent the knowledge of the biochemical transformations and are complemented by data-based models of phenomena for which structural relationships are not known. In data-based modeling, the acquisition of training data is a key aspect for the prediction accuracy of the models. In this work, different methods for designing experiments are compared, e.g. amplitude-modulated pseudo-random binary sequences (APRBS) and model-based optimal design of dynamic experiments. In the latter case, the trajectories of the input variables of batch processes are optimized with the goal to minimize the expected variance of the model parameters. By a simulation study of a real process that is operated by Evonik, it is shown that the prediction accuracy of gray-box models is higher compared to purely black-box models. Furthermore, the model accuracy can be improved by using an optimal experimental design that exploits knowledge about the process to compute an input trajectory that yields maximum information about the unknown parameters.
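The criterion behind model-based optimal experimental design can be sketched for a one-parameter model, where D-optimality reduces to maximizing a scalar Fisher information (equivalently, minimizing the parameter variance). The growth model and candidate sampling times below are hypothetical:

```python
from math import exp
from itertools import combinations

def fim(times, theta, sigma=0.1):
    """Scalar Fisher information for the hypothetical model y(t) = exp(theta*t)
    with measurement noise std sigma; the sensitivity is dy/dtheta = t*exp(theta*t)."""
    return sum((t * exp(theta * t)) ** 2 for t in times) / sigma ** 2

def best_design(candidate_times, n_samples, theta_nominal):
    """D-optimal selection of sampling times from a candidate set
    (for one parameter: maximize information = minimize variance)."""
    return max(combinations(candidate_times, n_samples),
               key=lambda design: fim(design, theta_nominal))

design = best_design([0.5, 1.0, 1.5, 2.0], 2, theta_nominal=0.5)
```

For dynamic experiments with full input trajectories, the enumeration is replaced by a dynamic optimization over the inputs, but the information-maximization objective is the same.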
    Anwesh Reddy Gottu Mukkula:
    Application of Iterative Real-time Optimization for a Homogeneously Catalyzed Reductive Amination Process in a Miniplant
    13th ECCE and 6th ECAB
    In the context of the DFG Collaborative Research Center InPROMPT, novel homogeneously catalyzed processes in multiphase systems are developed, in particular hydroformylations and aminations. These processes are tested in long-term operations in mini- and pilot plants, where also innovative strategies for their control and optimal operation are implemented. In this contribution, we discuss the optimal operation of a continuously operated homogeneously catalyzed reductive amination (RA) process. The miniplant consists of a CSTR, a decanter, and a membrane separator; the reaction is catalyzed by an organometallic catalyst system, which makes it possible to perform the reaction under mild conditions with a high selectivity towards the target product. The reaction is performed in a thermomorphic multicomponent solvent (TMS) system for efficient recovery and reuse of the expensive rhodium catalyst. Real-time optimization denotes a model-based upper-layer optimization of the steady-state operating conditions of industrial processes based on rigorous models to improve the profitability of the process, taking into account safety constraints, product quality specifications and process limitations. It is commonly applied in the petrochemical industry, but its potential has not been exploited much for smaller-scale productions. One of the reasons for this is that its performance depends on the accuracy of the process model, and developing an accurate process model is expensive and time-consuming. Even if a model is available, its predictions will deviate from the behavior of the real process. Then the steady state computed by solving a model-based optimization problem is not optimal for the real process and sometimes is even infeasible. In practice, this is partially remedied by the estimation of key model parameters from the available plant data, but this is only effective if the model is structurally correct, and it may fail if e.g. side-reactions are not included.
Therefore, iterative real-time optimization (RTO) methods have been developed which drive the process to the true plant optimum in the presence of structural and parametric plant-model mismatch. Among these, the so-called modifier adaptation-based methods rely on the correction of the model gradients by estimated plant gradients which are determined from observations of the behavior of the plant (Gao & Engell, 2005). A further development is modifier adaptation with quadratic approximation (MAWQA) (Gao, Wenzel, & Engell, 2016), which makes use of quadratic approximations (QA) of the response surface to reduce the effect of measurement noise. For a fast and smooth convergence to the optimum of the true plant, it is necessary that the optimum of the corrected plant model satisfies the optimality conditions of the true plant. This condition is called model adequacy. The scheme developed by Gottu Mukkula & Engell (2020) ensures that the nominal model is adequate for all possible inputs in the operating region. In this work, we demonstrate the application of this scheme to identify the optimal inputs for the reductive amination process in a miniplant (Künnemann, et al., 2020). For the nominal model, the reaction kinetics are taken from Kirschtowski, Jameel, Stein, Seidel-Morgenstern, & Hamel (2021), and surrogate models of the predictions of the equation of state PC-SAFT are used to model the liquid-liquid equilibria in the decanter. The nominal model is corrected by the modifier adaptation-based iterative RTO method using the plant measurements, and the optimal inputs for the process are identified.
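A minimal sketch of the modifier-adaptation principle, assuming a scalar input, quadratic toy functions, and a grid search in place of an NLP solver (the actual MAWQA scheme with quadratic approximations is considerably more elaborate):

```python
def modifier_adaptation(plant, model, u0, iters=20, h=1e-4):
    """First-order modifier adaptation in the spirit of Gao & Engell (2005):
    at each iterate, the model gradient is corrected towards the measured
    plant gradient (finite differences here), and the modified model is
    re-optimized; at convergence, the iterate satisfies the plant's
    first-order optimality conditions despite structural mismatch."""
    grad = lambda f, u: (f(u + h) - f(u - h)) / (2 * h)
    u = u0
    for _ in range(iters):
        lam = grad(plant, u) - grad(model, u)        # first-order modifier
        mod = lambda v: model(v) + lam * (v - u)     # gradient-corrected model
        candidates = [i / 1000 for i in range(-2000, 2001)]  # grid on [-2, 2]
        u = min(candidates, key=mod)
    return u

plant = lambda u: (u - 0.8) ** 2          # true (unknown) plant
model = lambda u: (u - 0.2) ** 2 + 0.1    # structurally mismatched model
u_opt = modifier_adaptation(plant, model, u0=0.0)
```

Despite the offset and shifted optimum of the model, the corrected iteration converges to the plant optimum at u = 0.8.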
    Joschka Winz:
    Data generation for using surrogate models for approximation of thermodynamic equilibria
    In many models of chemical processes, an accurate description of the thermodynamics is of central importance. Modern thermodynamic models such as the Perturbed-Chain Statistical Associating Fluid Theory (PC-SAFT) provide an accurate description but are computationally expensive, which is an obstacle, for example, in the optimization of process models. To overcome this challenge, surrogate models can be used which approximate the thermodynamic equilibrium computations and can be evaluated quickly. The use of surrogate models introduces an approximation error. Since evaluating the function to be approximated is time-consuming, it is necessary to get by with as little training data as possible. Several approaches are available. For the goal of generating a globally accurate surrogate model, adaptive sampling can be used, in which the predictions of the surrogate models are used to concentrate the data generation where the prediction error is high. Another approach is data augmentation, in which additional data are generated from an existing set of training data. For example, in the case of equilibrium data one can exploit the fact that all feed compositions on a tie line yield the same phase compositions. In the case of process optimization, it is of central importance that the surrogate model is accurate at the optimum. Therefore, alternative sampling methods are used to concentrate the training data in the region where the process optimum lies. This work shows the possible savings in function calls of a thermodynamic model achieved by adaptive sampling and data augmentation for different objectives. Furthermore, difficulties and challenges in the application of these methods are presented.
    Maximilian Cegla:
    Process intensification of the reactive extrusion of color additives by the use of ultrasound – modeling and optimization
    Color additives such as wetting agents, dispersants or thickeners are produced worldwide on the scale of millions of tons. The current state of the art is the production of polyurethane thickeners in batch mode. Within the project SIMPLIFY, the conversion to a continuous production of these additives by reactive extrusion on a twin-screw extruder is investigated. This conversion makes it possible to increase productivity, since changeover times are eliminated, to fully electrify the process in order to improve sustainability, and to access new product classes with higher viscosity. In the production in the twin-screw extruder, molten long-chain polyethylene glycol is first mixed with a diisocyanate in the feed. In the first stage, this mixture reacts to a long-chain, viscous prepolymer. In the further course of the extrusion, a short-chain alcohol is dosed in to stop the chain growth and to adjust the hydrophilic properties of the final product. The use of ultrasound was investigated in both reaction steps of the process, since these are mass-transfer limited due to the high molecular weights of the reactants. Ultrasound at 20 kHz with various amplitudes was introduced locally into the process via a sonotrode in the extruder barrel. It was shown experimentally that the reaction rate is increased by the use of ultrasound. These effects were subsequently represented in the mathematical model of the process, and after optimization of this model a significant increase in throughput by the use of ultrasound in the optimal operating mode was demonstrated.
    Sebastian Engell:
    Iterative Real-time Optimization of a Continuous Lithiation Process Based on Compact NMR Spectroscopy
    ACHEMA Pulse 2021

    Master Theses 2021

    Uma Jaya Ravali Theeda:
    Hybrid modelling of batch distillation processes of polymer solutions
    supervised by: Joschka Winz
    A process model representing the dynamics of a chemical process is a prerequisite for applications like process optimization and control. Such a model can be either mechanistic, adhering to physical laws, or data-based, using historical plant data. A mechanistic model of the batch distillation of polymer mixtures requires the description of thermodynamic quantities that capture the non-ideal behaviour of the polymer. Conventionally, such thermodynamic properties are estimated from experimental studies of the separation mixture at hand. The presence of different lengths of polymer chains, copolymers and monomers makes such mixtures complex, and conducting experiments on such a variety of species is expensive. In this work, a hybrid modelling approach that combines an approximate mechanistic model with data-driven components is explored to estimate the influence of the polymer. An artificial neural network is used as the data-based component to estimate the polymer interactions and is combined with a mechanistic model to predict the reboiler temperatures as the polymer accumulates over time. The hybrid model is implemented with three types of one-step prediction modes. The predictions reproduce the dynamics of the reboiler temperatures with a low plant-model mismatch and a smoothing effect. Furthermore, the model is applied to untrained batches that are operated at slightly different conditions and shows acceptable reboiler temperature predictions.
    Felix Riedl:
    Modular Surrogate Models for Flowsheet Optimization
    supervised by: Tim Janus
    Design optimization problems of chemical processes are characterized by a large number of discrete and continuous design decisions, highly non-linear models and multi-modal continuous subspaces. The design decisions are crucial for the long-term profitability of the chemical production process because the decisions made in the design phase typically define 80% or more of the operating costs of the process. Evolutionary algorithms (EA) assisted by neural networks have been applied to global flowsheet optimization with commercial process simulators; the performance of the optimization improves significantly as soon as the neural networks are applied. In this work, a new sampling strategy based on a principal component analysis is introduced that allows a faster generation of training data by cutting the flowsheet into its individual unit operations, and, based on this, a surrogate network that approximates the original flowsheet. For each unit operation a modular surrogate is trained, and on a small case study this modular surrogate flowsheet outperforms a black-box approach that replaces the whole flowsheet by a single surrogate in the early iterations of the EA. The sampling strategy saves more than 70% of the time needed for training data generation.
    Engelbert Erwin Pasieka:
    Metaheuristic Optimal Batch Production Scheduling with Direct and Heuristics-based Encodings
    supervised by: Christian Klanke
    A simulation-based genetic algorithm is presented that solves a real-world batch production scheduling problem resembling a hybrid flow shop with multiple stages, unrelated parallel machines, and various constraints. In total, six problem instances are considered in this case study, which differ in the number of orders and the objectives. Optimal production scheduling has been researched extensively, but there is still a gap between theory and application. This is due to the fact that solution approaches from research are often difficult to implement in practice, and that many problems studied in academia involve simplified models that rarely correspond to processes that occur in reality. Simulations are able to represent realistic models of processes and can be integrated into genetic algorithms as schedule builders. This simulation-based genetic algorithm approach offers many opportunities to bring theory and practice closer together. In this work, a commercial scheduling software was used which, together with a genetic algorithm, optimized the degrees of freedom of the problem at hand such that it could be solved efficiently and with good quality. Different encodings for the genetic algorithm were tried and the best set was selected for further parameter tuning. The solutions of the problem instances obtained with the simulation-based genetic algorithm were compared to solutions obtained by a Mixed Integer Linear Programming (MILP) based decomposition approach, which served as a benchmark. Through an appropriate selection of chromosomes, operators and parameters, the simulation-based genetic algorithm came close to the benchmark results in terms of solution quality for many problem instances. Some problem instances could not be solved well, but for these suitable heuristic improvement strategies are presented.
    Jiadi Yang:
    ENMPC based flexible production of zeolites in a COBR
    supervised by: Robin Semrau
    Zeolites are widely used industrial adsorbents and catalysts that are conventionally produced in hydrothermal batch processes. However, a continuous process possesses several advantages over batch processes, such as enhanced mass and heat transfer. In this work, a special continuous crystallizer, the Continuous Oscillatory Baffled Reactor (COBR) with electric heating coils, is studied for the flexible production of zeolites. Electricity is a secondary energy source produced by converting primary energy sources such as coal, natural gas, solar or wind energy. The increasing share of renewables in the grid to mitigate climate change leads to strong variations of the electricity price, which drives the electrification and flexibilization of the process industry. Conventionally, hard constraints are imposed on the process variables and cannot be violated, while integral constraints confine the variables over the entire horizon within a certain range, allowing more flexibility. Thus, a flexibilization of the zeolite production towards a fully dynamic operation with integral constraints is studied to cope with the electricity price fluctuations. The optimal heating distribution of the tubular reactor model with 30 discretized electrical heaters was investigated at steady state, leading to heating mainly at the reactor inlet. Then, the heating system is simplified to three heating coils whose lengths were optimized. The static economic behaviour of the process is further investigated with the simplified model, leading to a steady-state dependency on the flow rate. Integral constraints are employed in the dynamic optimization to relax the process limits while guaranteeing the average production rate, allowing the process variables to violate the former hard constraints and leading to a better optimization result. Moreover, two terminal hold-up constraints are used for a sell-off effect at the end of the prediction horizon.
Furthermore, combining two repeated integral horizons with two end constraints yields the best optimization results in this work. Additionally, flexible energy prices drive the plant away from its original steady state of continuous production, so dynamic optimizations are performed for step changes of the energy price. Since the directions and magnitudes of the changes can differ, several dynamic optimizations are scheduled, performed, and evaluated based on their corresponding optimal profit. The results prove to be beneficial compared to the nominal profit, especially when the energy price is far away from the constraint and the change magnitude is large.
    Qiunan Zhang:
    Electricity price forecasting for scheduling the production of a process plant in Italy
    supervised by: Filippo Tamagnini
    The electricity price is the core subject in the power market environment, and all participants are urged to develop effective and accurate electricity price forecasting models. From the point of view of buyers, the market clearing price of electricity is one of the main drivers of the unit cost of production and operation activities. Electricity price forecasting makes it possible for buyers to control their own production cost, while for power generation companies it makes it possible to formulate optimal bidding strategies to maximize their profits. Therefore, electricity price forecasting is of great significance for all participants. This thesis consists of two parts: electricity price forecasting and the optimization of the scheduling plan of a benchmark process plant. First, a model is developed to forecast the unified market clearing price of the Italian power market. ARIMA, ANN, and ANN in conjunction with SVD models with different influential variables, including historical price and load data, weather factors, the social-economic indicator FTSE, and the prices of fuel and natural gas, are developed for price forecasting. Prediction indicators such as the MAE and MAPE are compared in order to choose the model with the best forecasting accuracy. The MAE and MAPE of the forecasting results show that the ANN model in conjunction with SVD has the highest forecasting accuracy. Finally, a simple scheduling model including constraints such as working days, minimum and maximum batch sizes, the maximum inventory of the warehouse and the order demand is developed to demonstrate the profitability of using electricity price forecasting to organize the production. The results show that the profit of the optimal scheduling plan based on the predicted price can increase substantially with respect to a non-optimal scheduling plan that disregards the electricity price fluctuations.
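As a minimal illustration of the autoregressive part of such forecasting models (not the thesis' ARIMA/ANN setup), an AR(p) model can be fitted by least squares and evaluated with the MAPE; the demo series below is synthetic and noise-free, so the coefficients are recovered exactly:

```python
import numpy as np

def fit_ar(series, p):
    """Least-squares fit of an AR(p) model
    y[t] = c + a1*y[t-1] + ... + ap*y[t-p]; returns [c, a1, ..., ap]."""
    X = np.column_stack([series[p - k - 1: len(series) - k - 1] for k in range(p)])
    X = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(X, series[p:], rcond=None)
    return coef

def forecast_one_step(series, coef):
    """One-step-ahead prediction from the fitted coefficients."""
    p = len(coef) - 1
    return coef[0] + coef[1:] @ series[-1: -p - 1: -1]

def mape(actual, predicted):
    """Mean absolute percentage error in percent."""
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

# synthetic noise-free AR(1) series: y[t] = 0.5 + 0.8 * y[t-1]
y = [1.0]
for _ in range(49):
    y.append(0.5 + 0.8 * y[-1])
series = np.array(y)
coef = fit_ar(series, p=1)
pred = forecast_one_step(series, coef)
```

Exogenous inputs (load, weather, fuel prices) extend the regressor matrix X with additional columns; the least-squares fit is unchanged.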

    Bachelor Theses 2021

    Lutz Vogel-Lackenberg:
    Surrogate assisted optimization of modular gray-box models
    supervised by: Joschka Winz
    Flowsheet simulation is a widespread tool in the process industry for process synthesis and optimization. A key part is the precise description of the thermodynamic phase equilibria in each unit operation. Modern models used for such applications, like the PC-SAFT equation of state, tend to be complex and therefore computationally demanding, which limits their application in process optimization. Data-based surrogate models provide a computationally cheap alternative and are used to replace the original models in the flowsheet simulation to make the process optimization tractable. A surrogate-assisted optimization algorithm with successive bound contraction of the independent input variables of the surrogate models was evaluated. The optimization was implemented with two different embedded surrogate models (kriging models and artificial neural networks), and the accuracy and the efficiency in terms of the number of original function calls were evaluated. A comparison to an LHS-design approach was conducted.
    Jan Seemann:
    Simulation and Heuristic Scheduling of Industrial Make-and-Pack Processes
    supervised by: Christian Klanke
    This thesis deals with the modeling of a case study on the production of washing-machine pods. The goal of the thesis is to develop an efficient model of the process and to achieve the shortest possible makespan. The process is a make-and-pack process that is divided into three sections: a production section, a storage section, and a packing section. The storage section serves to decouple the formulation and packing sections, which increases the overall productivity of the plant but complicates the production planning. The process was modeled using a discrete-event simulation by the company Inosim. In addition, the sequence of the orders was modified with a sorting algorithm in order to improve the load distribution over the subplants. Two methods for determining the waiting times were developed and tested. The correctness of the simulation model was first checked against values from the literature. The results of the sorting algorithm developed in this thesis were compared with the optimization-based approach from the literature. A significant improvement over a naive dispatching of the orders was observed; compared with the literature results, however, the results still show some potential for improvement.
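    The load-balancing idea behind such a sorting heuristic can be illustrated with a generic longest-processing-time-first (LPT) list scheduler; the job data and the two-machine setting below are purely hypothetical and only stand in for the parallel units of the case study, not for the algorithm developed in the thesis:

```python
import heapq

def lpt_schedule(jobs, n_machines):
    """Sort jobs by decreasing duration, then always assign the next job
    to the machine that becomes free earliest (a classic list-scheduling
    heuristic for balancing load across parallel units)."""
    loads = [0.0] * n_machines
    heap = [(0.0, m) for m in range(n_machines)]
    heapq.heapify(heap)
    assignment = {}
    for job, duration in sorted(jobs.items(), key=lambda kv: -kv[1]):
        load, m = heapq.heappop(heap)      # least-loaded machine
        assignment[job] = m
        loads[m] = load + duration
        heapq.heappush(heap, (loads[m], m))
    return assignment, max(loads)

# Hypothetical job durations (e.g. hours per batch)
jobs = {"A": 5.0, "B": 3.0, "C": 4.0, "D": 2.0, "E": 2.0}
assignment, makespan = lpt_schedule(jobs, 2)
print(assignment, makespan)
```

    Sorting the orders before dispatching them, as in the thesis, typically shortens the makespan considerably compared with naive first-come-first-served loading, although it does not guarantee the optimum.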

    Arnold Eucken Medal for Sebastian Engell

    On the occasion of the 2021 annual meeting of the ProcessNet-Fachgemeinschaft (ProcessNet professional group) “Prozess-, Apparate- und Anlagentechnik (Process, Apparatus and Plant Technology)”, Professor Dr.-Ing. Sebastian Engell was presented with the Arnold Eucken Medal, endowed in 1956 by the German Association for Process Engineering (GVT), during the virtual opening and plenary session on November 22, 2021.

    Prof. Engell is honored for his outstanding work on the dynamics, automation and optimal control of process engineering processes, complex topics to which he has dedicated his entire professional career. His developments in methods of control engineering and mathematical optimization have introduced a paradigm shift in the operation of process plants. With his work, he has made a major contribution to optimizing complex process engineering processes in real time and to making these methods suitable for broad industrial use.

    Doctoral degree awarded to Corina Nentwich

    The examination committee congratulating Corina Nentwich

    On May 19, 2021, Corina Nentwich, who had been supervised by Prof. Engell, finished the procedure for her doctoral degree with the oral examination at TU Dortmund University. The examination took place under Covid-19 conditions with only the examiners present in a lecture hall and the public attending via Zoom. Thanks to the great preparation by Maximilian Cegla and Tim Janus, the web-based format worked out very well.

    Corina Nentwich, a former member of the DYN group at TU Dortmund and currently employed at Evonik Technology & Infrastructure GmbH, obtained the Dr.-Ing. degree for her dissertation “Surrogate modeling of phase equilibrium calculations using adaptive sampling”. Congratulations to Corina Nentwich!

    Svetlana Klessova received PhD degree in Management Science under co-supervision of Prof. Engell

    Svetlana Klessova after the PhD defense with her two supervisors (left and right) and the President of the PhD jury, Dr. Amel Attour (in black)

    On July 12, 2021, Svetlana Klessova, who was involved in management and impact maximisation activities in several of our EU projects in the past, obtained a PhD in Management Science from Université Côte d’Azur, France. The title of her thesis is: “How to improve the performance of collaborative innovation projects: the role of the architecture, of the size, and of the collaboration processes” (Comment améliorer la performance des projets d’innovation collaboratifs – le rôle de l’architecture, de la taille et des processus de collaboration). The dissertation was supervised by Prof. Catherine Thomas, GREDEC, Université Côte d’Azur, and by Prof. Engell. She did the PhD research besides her professional work at an innovation management company in Sophia Antipolis.

    International Summer Program ISP 2021

    Last year, our annual International Summer Program could not take place due to the Covid-19 pandemic. This year, 10 students from our overseas partner universities in Asia and America were welcomed to Dortmund in the first virtual edition of the International Summer Program. The ISP, organized by the dyn chair in close collaboration with the faculty of American Studies and the International Office, provided the students with both an interesting learning environment for various academic subjects and a balanced cultural program. While everyone expressed their regrets that traveling to Germany was not possible, the participants gave the program top grades for the effort that had been invested, which underlines the acceptance and the value of the program for the international activities of the faculty and of the university.

    dyn @ ESCAPE31

    From June 6th to June 9th, the dyn members Robin Semrau, Filippo Tamagnini, Stefanie Kaiser and Joschka Winz represented the group at the 31st European Symposium on Computer-Aided Process Engineering (ESCAPE31). ESCAPE31 was originally planned to take place in Istanbul, Turkey, but due to the pandemic it was held in a virtual format. The following contributions were presented at the conference:

  • Robin Semrau, Filippo Tamagnini, Alexandru Tatulea-Codrean, and Sebastian Engell, "Application of Constrained EKF Based State Estimation to a Coiled Flow Inverter Copolymerization Reactor".
  • Stefanie Kaiser and Sebastian Engell, "Focusing experiments in the early phase process design by process optimization and global sensitivity analysis".
  • Joschka Winz and Sebastian Engell, "Optimization based sampling for gray-box modeling using a modified upper confidence bound acquisition function".

    dyn @ ADCHEM 2021

    The 11th IFAC Symposium on Advanced Control of Chemical Processes (ADCHEM 2021) was held from June 13 to June 16, 2021. In view of the COVID-19 pandemic, it was decided to move the conference to a purely virtual format. Many current and former group members represented the dyn group. The following contributions were presented at the conference:

  • S. Gerlich, H. Arab, P. Buchholz and S. Engell, "Experimental application of individual column state and parameter estimation in SMB processes to an amino acid separation".
  • P. Azadi, R. Klock and S. Engell, "Efficient Utilization of Active Carbon in a Blast Furnace through a Black-Box Model-Based Optimizing Control Scheme".
  • Y. Abdelsalam, S. Subramanian, M. Aboelnour and S. Engell, "Adaptive tube-enhanced multi-stage nonlinear model predictive control".
  • M. Cegla and S. Engell, "Application of Model Predictive Control to the reactive extrusion of ε-Caprolactone in a twin-screw extruder".

    dyn @ ECC 2021

    Taher Ebrahim represented the dyn group at the European Control Conference from June 29th to July 2nd. Originally the conference was planned to take place in Rotterdam, Netherlands, but it was held in a virtual format due to the Covid-19 pandemic. He presented the paper:

  • Taher Ebrahim and Sebastian Engell, "Robust Model Predictive Control for Switched Nonlinear Dynamic Systems".

    dyn @ CPAIOR 2021

    In the hot midsummer of Vienna, the 18th International Conference on the Integration of Constraint Programming, Artificial Intelligence and Operations Research (CPAIOR 2021) took place at TU Wien as a hybrid conference with around 20 participants who were present in person. Interesting interdisciplinary presentations, ranging from purely theoretical contributions to industry-applicable solutions in the fields of scheduling, machine learning, constraint satisfaction and many more, made this event special.

  • Klanke, C., Bleidorn, D.R., Yfantis, V. and Engell, S., "Combining Constraint Programming and Temporal Decomposition Approaches - Scheduling of an Industrial Formulation Plant", 2021, Lecture Notes in Computer Science 12735, pp. 133–148.

    dyn @ GECCO'21

    From July 10th to 14th, Christian Klanke represented the dyn group at the Genetic and Evolutionary Computation Conference (GECCO'21) in Lille, France. The conference was held in a fully virtual format.

  • Klanke, C., Bleidorn, D., Koslowski, C., Sonntag, C. and Engell, S., "Simulation-based Scheduling of a Large-scale Industrial Formulation Plant Using a heuristics-assisted Genetic Algorithm".

    dyn @ ECCE/ECAB 2021

    From September 20th to 23rd, Filippo Tamagnini, Stefanie Kaiser and Joschka Winz represented the dyn group at the 13th European Congress of Chemical Engineering and 6th European Congress of Applied Biotechnology (ECCE/ECAB 2021). The conference could not take place in Berlin, Germany, but was held in a virtual format. The dyn group contributed the following talks to the program:

  • Filippo Tamagnini, "Dynamic model of a batch evaporator for the controlled production of nanoparticles via the sol-gel route".
  • Stefanie Kaiser, "Accelerating the early phase in process development by integrating experimental work, modeling and optimization".
  • Joschka Winz, Uwe Piechottka, Supasuda Assawajaruwan, Sebastian Engell, "Model based optimal design of dynamic experiments in gray-box and black-box modeling of fermentation"

    dyn @ CDC 2021

    From December 13th to 17th, Sankaranarayanan Subramanian attended the 60th Conference on Decision and Control (CDC 2021). The conference was planned to take place in Austin, Texas, USA, but like most other conferences this year, it took place in a fully virtual format. Mr. Subramanian presented the following journal article:

  • S. Subramanian, Y. Abdelsalam, S. Lucia and S. Engell, "Robust Tube-Enhanced Multi-Stage NMPC With Stability Guarantees".

    dyn @ PAAT 2021

    On November 22nd and 23rd, Joschka Winz and Maximilian Cegla represented the dyn group at PAAT 2021 (Jahrestreffen der ProcessNet-Fachgemeinschaften “Prozess-, Apparate- und Anlagentechnik”, the annual meeting of the ProcessNet professional groups “Process, Apparatus and Plant Technology”). Due to the Covid-19 situation, it was organized as an online event. Chemical engineers, plant constructors, process engineers, and technical chemists from science and industry had the opportunity to present research results, discuss requirements from industrial practice, and jointly develop solutions for new processes in the chemical process industries and other sectors. The dyn group contributed the following talks to the program:

  • Joschka Winz and Sebastian Engell, "Data generation for using surrogate models for approximation of thermodynamic equilibria".
  • M. Cegla, A. Buczko, S. Kemmerling, M. Gillock-Karner, T. Dreyer and S. Engell, "Prozessintensivierung der Reaktivextrusion von Farbadditiven durch den Einsatz von Ultraschall – Modellierung und Optimierung".