Methods, Processes, Languages and Tools for Engineering Trustworthy Systems
In our everyday life we become increasingly dependent on systems that are controlled by software. As a consequence, we expect this software to execute trustworthily with respect to functional and non-functional requirements. To cope with the increasing complexity of systems and the ever-growing importance of satisfying these requirements, we develop software engineering methods, processes, languages, and tools, resulting in an engineering approach to software development. An engineering approach is characterized by predictable, foreseeable results already at design time.
We, the Software Quality and Architecture Group, deal with the fulfillment of functional as well as non-functional requirements during the design of systems. Such requirement fulfillment is an imperative prerequisite for trustworthy systems. In our research, systems are specified and analyzed using model-based or model-driven approaches. For this purpose, developers use modeling languages such as UML2 (Unified Modeling Language, v2.x) and DSLs (domain-specific languages).
The chair is founded on three pillars: quality analysis of functional and non-functional properties of software, engineering methods for software development, and reengineering of existing systems.
In the area of quality analysis, we focus on the following characteristics: performance, reliability, maintainability, scalability, elasticity, cost-effectiveness, as well as security and safety. In model-based analysis we use and extend formal analysis models such as Markov chains, queuing networks, and stochastic process algebras. If the analysis algorithms require it, we also employ system simulations. In addition, we develop approaches for measurement-based quality analysis, which requires efficient instrumentation of the systems.
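As a minimal illustration of model-based performance analysis, the closed-form formulas of an M/M/1 queue relate a system's arrival rate and service rate to its utilization and mean response time; all numbers below are hypothetical and serve only to show the kind of prediction such models allow:

```python
# Illustrative M/M/1 queuing model: closed-form performance metrics.
def mm1_metrics(arrival_rate: float, service_rate: float) -> dict:
    """Mean performance metrics of an M/M/1 queue (requires utilization < 1)."""
    rho = arrival_rate / service_rate  # utilization of the server
    if rho >= 1:
        raise ValueError("system is unstable: utilization >= 1")
    mean_jobs = rho / (1 - rho)                        # mean jobs in system
    mean_response = 1 / (service_rate - arrival_rate)  # mean response time
    return {"utilization": rho,
            "mean_jobs": mean_jobs,
            "mean_response_time": mean_response}

# Example: 8 requests/s arriving at a server that completes 10 requests/s
print(mm1_metrics(8.0, 10.0))
```

Already at design time, such a model predicts that the server would run at 80% utilization with a mean response time of 0.5 s, before any code exists.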
Developing high-quality software at today's level of system complexity requires engineering methods for software development. Such methods first systematically capture the system to be implemented in models and then transform these models, as automatically as possible, into executable code. When modeling the system, components or services are used for reasons of reusability and hierarchical, scalable analyzability. To assure specified quality levels during operations, i.e., while the software is in production use, analyses such as monitoring as well as automated problem detection, prediction, and resolution support system maintenance.
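The model-to-code idea can be sketched with a toy example; the minimal metamodel below (a component with named operations) is assumed for illustration and does not correspond to any specific tool:

```python
# Sketch: generating executable code from a (toy) component model.
# The metamodel here is assumed: a component name plus operation names.
model = {"component": "Thermostat", "operations": ["read_temp", "set_target"]}

def generate_class(m: dict) -> str:
    """Transform the model into Python source code for a class skeleton."""
    lines = [f"class {m['component']}:"]
    for op in m["operations"]:
        lines += [f"    def {op}(self):", "        pass"]
    return "\n".join(lines)

code = generate_class(model)
print(code)
exec(code)          # the generated code is directly executable
t = Thermostat()    # instances of the generated class can be created
```

Real model-driven toolchains work the same way in principle, only with far richer metamodels and template engines instead of string concatenation.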
If models or components are not available, they must first be extracted from legacy applications before they can be used in reengineering activities. We develop methods and tools that support software architects as much as possible in the extraction of these models by applying a combination of static and dynamic analyses, e.g., based on code and measurements.
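As a small illustration of static model extraction, the following sketch recovers a call-dependency model from a hypothetical legacy code fragment using Python's ast module; real extraction tools combine such static views with dynamic measurements:

```python
# Sketch: static extraction of a call-dependency model from source code.
import ast

# Hypothetical legacy code fragment to analyze
source = """
def load():
    return parse(read())

def parse(data):
    return data

def read():
    return "raw"
"""

calls = {}
tree = ast.parse(source)
for fn in ast.walk(tree):
    if isinstance(fn, ast.FunctionDef):
        # Collect the names of all functions called inside this function
        callees = {n.func.id for n in ast.walk(fn)
                   if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)}
        calls[fn.name] = sorted(callees)

print(calls)  # {'load': ['parse', 'read'], 'parse': [], 'read': []}
```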
The chair currently deals with three central research topics: multi- and many-core systems, energy-efficient execution, and coordinated self-adaptation of systems.
Multi- and many-core systems are systems whose main processors (CPUs) consist of several computing units (cores). Such systems are nowadays state of the art. In such systems, the CPU can work massively in parallel, provided that appropriately designed software is used. We want to support this design with models and analysis approaches. Multi-core systems currently have dozens of computing units; CPU-integrated accelerators such as GPUs already have hundreds or thousands of units, in which case we speak of many-core systems. Modeling and analysis of such systems is still in its infancy and can be compared to the level of machine-oriented languages such as assembler: higher abstractions, usually available in programming languages, are often missing, and modeling languages have been designed for sequential computers. Our research is concerned with how the effective use of multi- and many-core systems can be assured by means of models at a higher level of abstraction.
The goal of energy-efficient execution is to operate data centers with the minimum possible power consumption for a given workload. For this purpose, processing steps have to be optimized while avoiding unnecessary computations and waiting times. Additionally, computing units or entire nodes have to be powered down when they are not needed, but in such a way that they are available again whenever they are needed.
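A back-of-the-envelope sketch of the power-down decision: a sleep/wake transition itself costs energy, so switching a node off only pays when the expected idle interval exceeds a break-even time. All figures below are assumed for illustration:

```python
# Sketch (assumed figures): when does powering a node down save energy?
def break_even_seconds(idle_power_w: float,
                       sleep_power_w: float,
                       transition_energy_j: float) -> float:
    """Idle duration above which sleeping saves energy overall."""
    savings_per_second = idle_power_w - sleep_power_w  # W saved while asleep
    return transition_energy_j / savings_per_second

# Example: node idles at 120 W, sleeps at 10 W,
# and one sleep/wake cycle costs an extra 3300 J
t = break_even_seconds(120.0, 10.0, 3300.0)
print(t)  # 30.0 -> power down only if the expected idle time exceeds 30 s
```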
Modern systems operate in increasingly unpredictable environments and require the ability to adapt to their environment at runtime. One of the techniques used for this is self-adaptation. Unfortunately, self-adaptation mechanisms are often built into systems without knowledge of other such mechanisms in the same system. This is why we are investigating coordinated self-adaptation, in which all self-adaptation mechanisms are coordinated and subordinated to a global system goal. The aim is to prevent uncoordinated self-adaptation from leading to unpredictable misbehavior, which can cost lives in embedded, safety-critical systems or cause high financial damage in information systems.
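The coordination idea can be sketched minimally, with entirely hypothetical mechanisms and goal function: each mechanism proposes an adaptation together with its predicted effect, and a central coordinator selects the proposal that best serves the global goal instead of letting each mechanism act on its own:

```python
# Sketch (all names hypothetical): subordinating local adaptation proposals
# to one global system goal.
def coordinate(proposals: list, global_goal) -> dict:
    """Pick the proposal whose predicted effect best serves the global goal."""
    return max(proposals, key=lambda p: global_goal(p["predicted_effect"]))

# Proposals from two independent self-adaptation mechanisms
proposals = [
    {"action": "scale_out",      "predicted_effect": {"latency_ms": 40, "cost": 12}},
    {"action": "reduce_quality", "predicted_effect": {"latency_ms": 60, "cost": 5}},
]

# Global goal: first keep latency under 50 ms, then minimize cost
def goal(effect: dict) -> tuple:
    meets_latency = effect["latency_ms"] <= 50
    return (meets_latency, -effect["cost"])

print(coordinate(proposals, goal)["action"])  # scale_out
```

Without the coordinator, the cheaper "reduce_quality" mechanism could fire on its own and violate the latency goal; subordination to the global goal prevents exactly this kind of conflict.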
Modern continuous software engineering paradigms such as DevOps aim for increased speed and frequency of software delivery. This poses serious challenges to quality assurance methods designed for traditional environments. On the other hand, these paradigms employ innovative practices, such as automation and live experimentation, that provide opportunities for novel quality assurance methods. We aim to integrate quality assurance methods into DevOps practices and to employ DevOps practices for improved quality assurance.
The chair for Software Quality and Architecture currently applies its research results in three main application areas: cloud computing, high-performance computing (HPC), and mechatronic systems.
In cloud computing, virtually unlimited computing, storage, and network resources are provided to systems. These resources can be leased, configured, reconfigured, and released again by systems at runtime through self-adaptation and virtualization. This high degree of flexibility requires precise analysis of the system design in order to ensure user satisfaction while at the same time reducing operating costs. Applications developed for cloud computing typically use architectural styles, such as microservices, which exploit all the available features of the cloud. Monitoring these applications includes end-to-end monitoring of user request processing across all layers of an application.
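The end-to-end idea can be sketched by propagating a single trace id through all layers that process a request, so that per-layer measurements can later be correlated; the layer names below are hypothetical:

```python
# Sketch: end-to-end monitoring via trace-id propagation across layers.
import time
import uuid

def traced(layer_name: str, trace_id: str, work):
    """Run one layer's work and record its duration under a shared trace id."""
    start = time.perf_counter()
    result = work()
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"trace={trace_id} layer={layer_name} took={elapsed_ms:.3f} ms")
    return result

trace_id = uuid.uuid4().hex  # one id accompanies the request end to end
payload = traced("gateway", trace_id,
                 lambda: traced("service", trace_id,
                                lambda: traced("database", trace_id,
                                               lambda: "row-42")))
print(payload)
```

Grouping the emitted records by `trace=` reconstructs the full path of one user request through the gateway, service, and database layers.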
HPC employs dedicated, highly complex, and extremely powerful compute clusters to solve computing-intensive problems. The processing of these problems must therefore be planned systematically. In particular, errors in the processing must be detected at an early stage in order to be able to stop faulty, and therefore unnecessary, computations, which also saves valuable computing time and energy. Currently, we are particularly interested in the early detection of such erroneous calculations based on abnormal system behavior, e.g., extremely increased memory consumption. Models of the normal behavior of the software help to identify abnormal situations.
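A deliberately simple sketch of this idea, with made-up measurements: learn the mean and spread of memory usage from healthy runs, then flag live values that lie far above the normal range:

```python
# Sketch (made-up data): detect abnormal memory usage against a simple
# statistical model of normal behavior.
import statistics

def train_normal_model(samples: list):
    """Mean and standard deviation of memory usage under normal operation."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value: float, mean: float, std: float, k: float = 3.0) -> bool:
    """Flag values more than k standard deviations above the normal mean."""
    return value > mean + k * std

# Hypothetical training data: memory usage in MB during healthy runs
normal_runs = [510, 495, 502, 508, 499, 505, 501, 497]
mean, std = train_normal_model(normal_runs)

print(is_anomalous(512, mean, std))  # False: within normal variation
print(is_anomalous(900, mean, std))  # True: extreme increase -> likely faulty
```

Production approaches use far richer behavior models, but the principle is the same: the model of normal behavior turns raw monitoring data into an early abort signal for faulty computations.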
Mechatronic systems, or cyber-physical systems (CPS), are systems that are strongly connected to and interact with their physical environment. Examples are autonomous vehicles, smart grids, and robots. Such systems are developed interdisciplinarily, together with experts from fields such as mechanical engineering, control theory, and computer science, as they comprise components from the respective disciplines. In the physical world, discrete state changes such as "on" or "off" are only possible to a limited extent. For example, the software can decide within a few nanoseconds to stop an autonomous car, i.e., to command its state to go from "driving" to "standing", but the actual braking maneuver, e.g., the deceleration from 100 km/h to 0 km/h, is continuous and takes far longer than nanoseconds. Since these systems directly affect the fate of people and companies, it is particularly important to design them to be reliable, safe, efficient, and therefore trustworthy.