Overview

The aim of the DFG-funded research group OC-Trust is to improve the trustworthiness of Organic Computing Systems in order to enable their use in open, heterogeneous, safety-critical and user-centered scenarios. The group also investigated to what extent trust, as a constitutive element of technical systems, can contribute to improving their robustness and efficiency. Methods, models, algorithms and user interfaces were developed that make it possible to take trust into account in system design, to analyze the trustworthiness of systems, to measure trust at runtime, and to adapt systems with respect to different aspects of trust.

 

Description

Five teams from Augsburg and Hanover were involved in the research group. Its spokesperson was Prof. Dr. Wolfgang Reif of the University of Augsburg. The following teams worked together in the research group:

  • Formal Analysis and Software Architectures for Trustworthy Organic Computing
    Prof. Dr. Wolfgang Reif, Chair of Software Engineering, University of Augsburg, Spokesperson
  • HCI Design for Trustworthy Organic Computing
    Prof. Dr. Elisabeth André, Chair for Multimedia Concepts and Applications, University of Augsburg
  • Formation of Self-Organizing Trust Communities - Top-down
    Prof. Dr. Christian Müller-Schloer, Department of System and Computer Architecture, University of Hannover
  • Formation of Self-Organizing Trust Communities - Bottom-up
    Prof. Dr. Jörg Hähner, Chair of Organic Computing, University of Augsburg
  • Trust Relationships among the Autonomous Units of OC Systems
    Prof. Dr. Theo Ungerer, Chair of Computer Science and Communication Systems, University of Augsburg

Organic Computing systems are highly dynamic: they consist of a multitude of changeable components and operate in a constantly changing environment. This gives rise to a variety of desirable properties, including the ability to self-heal, self-adapt and self-configure. However, classical techniques for the analysis and design of software systems are not suited to such system structures. Completely new aspects, such as emergent behavior and the extreme changeability of OC systems, call for a rethink and the development of new mechanisms. In addition to formal methods for functional correctness, safety and security, these include methods for trust-based interaction between system components, the runtime monitoring of predefined guidelines, and algorithms that take trust aspects into account in self-organizing systems. The interface to the user, in particular, can no longer be realized with classical methods. Here, questions arise concerning the representation of self-organizing system structures, the adaptive presentation of information on different types of displays and, not least, the privacy of data.

 

Trustworthy Organic Computing Systems differ from conventional Organic Computing Systems, such as those considered in the DFG Priority Programme 1183, above all in their openness, in the heterogeneity of the agents involved and in the consideration of human users. A further feature is the greater autonomy of the agents, resulting from an increasing shift of decisions from design time to runtime, which in the extreme case leads to purely selfish agent behavior. The assumption of good will, i.e. that the agents are in principle willing to cooperate, can therefore no longer be maintained. All these factors increase the uncertainty about the system, its environment and future developments, and emergent behavior becomes harder to control.

 

In the first phase of the project, which started on 01.10.2009, basic technologies were developed to deal with these problems. These include formal methods for controlling emergent behavior, communities of trusted agents, models for measuring user trust and availability, algorithms that include trust values in their calculations, as well as construction methods and adaptive user interfaces. These technologies were evaluated in open, heterogeneous Organic Computing systems.

 

During the second and third phases, these technologies were expanded and the complexity of the problems considered was increased. The focus was on dealing with uncertain data, conflicts, interactions and an increasing hierarchization of the systems. These aspects were covered by work carried out in close cooperation between the project groups. In addition, application aspects increasingly came to the fore. For this purpose, the techniques from the groups were combined and implemented in demonstrators.


Funded by
German Research Foundation (DFG)

Key facts

Start date: 01.10.2009
End date: 31.07.2017

 

Fields of application and case studies

The OC-Trust research group produced several case studies. The fields of application considered have in common that applying Organic Computing principles and taking trustworthiness into account yield clear advantages over conventional approaches. The following sections describe these fields of application and the problems found in them, and outline the solutions developed in OC-Trust.

 

Within the scope of the project, the described solutions were implemented as application platforms. Each platform makes the concepts of its field of application available to the applications built on it and serves as a basis for further applications in the same field. Individual applications were implemented as demonstrators in the first project phase. The figure on the right shows the structure of the application platforms on top of the common infrastructure, the applications based on them, and the responsibilities of the individual project groups.

 

Architecture and Applications in OC-Trust © University of Augsburg

Field of application: Open Desktop Grid Computing (Group Müller-Schloer/Hähner)

In a desktop grid computing system, networked computing resources are used for distributed computations on large amounts of data. Frameworks such as Condor or BOINC make this functionality generically available and can be used to execute arbitrary, distributable computational tasks. Tasks are distributed centrally and the computed results are then collected centrally. In practice, these systems are mainly used in the scientific field and within a single administrative domain. The trustworthiness of the participants is therefore usually irrelevant.

 

In contrast, in the general case of Volunteer Computing, resources are provided by individuals from different administrative domains who have no relationship to each other. Computational tasks can be created by all participants. Essential problems in such networks are unreliable participants, who accept work orders but do not complete them, and participants who have tasks computed in the network but do not provide resources themselves. Existing systems assume that participants are in principle benevolent and therefore counter these phenomena either not at all or only insufficiently.

 

The Trusted Computing Grid (TCG) makes these problems manageable. The participants in the TCG measure the trustworthiness of the other participants and make this data available to each other. This makes it possible to search specifically for trustworthy partners and to exclude egoistic participants. In addition, the agents can adapt their behavior in such a way that the efficiency and robustness of the TCG are increased. Because of its proximity to existing systems, the developed principles can be transferred relatively quickly into concrete software and their benefits realized. One concrete application investigated is distributed face recognition in the TCG.
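As a rough illustration of this idea, the following sketch shows how a grid participant could combine its own experience with reputation values shared by others in order to select trustworthy workers and exclude egoistic ones. The class names, the smoothing-based update rule and the thresholds are assumptions made for illustration and do not reproduce the algorithms actually used in the TCG.

```python
# Sketch of trust-based worker selection in an open desktop grid,
# in the spirit of the TCG. Update rule, weights and thresholds are
# illustrative assumptions.

class GridAgent:
    def __init__(self, agent_id, initial_trust=0.5):
        self.agent_id = agent_id
        self.initial_trust = initial_trust
        self.trust = {}  # locally observed trust ratings for other participants

    def rate(self, worker_id, job_completed):
        """Update the local rating after a work order was accepted by a worker."""
        old = self.trust.get(worker_id, self.initial_trust)
        observation = 1.0 if job_completed else 0.0
        # Exponential smoothing: completed jobs raise trust, abandoned jobs lower it.
        self.trust[worker_id] = 0.8 * old + 0.2 * observation

    def select_workers(self, candidates, shared_reputation, k=3, threshold=0.4):
        """Pick the k most trustworthy candidates, combining own experience
        with reputation values shared by other participants."""
        def combined(worker_id):
            own = self.trust.get(worker_id, self.initial_trust)
            shared = shared_reputation.get(worker_id, self.initial_trust)
            return 0.5 * own + 0.5 * shared

        trusted = [c for c in candidates if combined(c) >= threshold]
        return sorted(trusted, key=combined, reverse=True)[:k]


# Egoistic workers that accept but rarely finish jobs drop below the
# threshold and are no longer chosen as partners.
submitter = GridAgent("alice")
for _ in range(5):
    submitter.rate("bob", job_completed=True)
    submitter.rate("eve", job_completed=False)
print(submitter.select_workers(["bob", "eve", "carol"], shared_reputation={}))
# -> ['bob', 'carol']
```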

 

Field of application: Decentralised control of energy supply (Group Reif)

The energy market is undergoing the greatest upheaval in its history. The massive increase in renewable power sources and use-dependent energy producers (summarized as stochastic producers), as well as in controllable micro-producers, confronts the current technologies for controlling controllable power plants with major problems: on the one hand, the sheer number of power plants is becoming harder to manage; on the other hand, the fluctuations in the output of stochastic producers are difficult to predict and to compensate.

 

Organic Computing systems offer a solution to this dilemma. An essential feature of these systems is the transfer of decisions to the agents and the use of local knowledge. This fits the distributed nature of modern energy supply and makes it possible to cope with the high dynamics and constantly changing situations. However, in a domain as critical as energy supply, special precautions must be taken to ensure security of supply and to avoid damage.

 

The Trusted Energy Grid (TEG) provides methods for compensating uncertainties with the help of confidence values and for enabling autonomous control of controllable power plants. At the same time, the use of behavior corridors ensures operational safety and functional correctness. Through self-organization, a System of Systems emerges that is divided into Autonomous Virtual Power Plants (AVPPs). Within the AVPPs, efficient scheduling can be carried out that takes into account the trustworthiness of the participants as well as the forecasts and quality criteria measured for them. The TEG thus represents a vision of the future for solving an important societal issue that is also highly relevant from a systems engineering and economic perspective.
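The following sketch illustrates, under strong simplifications, how planning within an AVPP could discount the self-reported forecasts of stochastic producers by their measured trustworthiness and keep controllable plants within their behavior corridors. The producer data, the weighting and the greedy dispatch rule are assumptions for illustration, not the TEG's actual planning algorithms.

```python
# Sketch of trust-aware dispatch within an Autonomous Virtual Power Plant.
# Producer data, the trust weighting and the greedy dispatch rule are
# illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Producer:
    name: str
    controllable: bool
    forecast_kw: float   # self-reported output forecast for the next period
    trust: float         # measured forecast reliability in [0, 1]
    min_kw: float = 0.0  # behavior corridor: permitted output range
    max_kw: float = 0.0

def dispatch(producers, demand_kw):
    """Cover the demand with controllable plants, discounting the forecasts
    of stochastic producers by their trust and respecting behavior corridors."""
    stochastic = [p for p in producers if not p.controllable]
    controllable = [p for p in producers if p.controllable]
    # Trust-weighted expectation of the uncontrollable production.
    expected = sum(p.forecast_kw * p.trust for p in stochastic)
    residual = max(demand_kw - expected, 0.0)
    schedule = {}
    # Assign set-points to the most trusted controllable plants first,
    # never leaving a plant's permitted output range.
    for p in sorted(controllable, key=lambda p: p.trust, reverse=True):
        setpoint = min(max(residual, p.min_kw), p.max_kw)
        schedule[p.name] = setpoint
        residual = max(residual - setpoint, 0.0)
    return schedule

avpp = [
    Producer("wind_farm", controllable=False, forecast_kw=300, trust=0.7),
    Producer("biogas_1", controllable=True, forecast_kw=0, trust=0.95, min_kw=50, max_kw=400),
    Producer("biogas_2", controllable=True, forecast_kw=0, trust=0.60, min_kw=50, max_kw=400),
]
print(dispatch(avpp, demand_kw=600))
# -> {'biogas_1': 390.0, 'biogas_2': 50}
```

In this example the wind forecast is only partially relied upon because of its lower trust value, so the controllable plants are scheduled to cover a correspondingly larger residual demand.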

 

Simulation to control the decentralized energy supply in the Trusted Energy Grid © University of Augsburg

Field of application: Adaptive public multi-user interfaces (Group André)

The privacy and transparency requirements of a system are particularly high when users employ public displays, alone or together with others, to visualize and edit private data. If, for example, an unknown person enters the interaction area, it must be ensured that the private data is no longer displayed. At the same time, the user must be able to follow the autonomous changes resulting from the system's self-adaptation and, if necessary, reverse them.

 

Multi-User Multi-Display Environments (MUMD) lie in this area of tension between privacy, transparency, controllability and ease of use. Taking the social relations between the users of such a system into account makes it possible to distinguish trust-critical situations, in which the privacy of the data must be protected, from uncritical ones and to react accordingly. Applications can be found in Ambient Assisted Living and wherever public space is enriched with interactive devices that can be coupled with applications on private end devices. One example is the Friend Finder, a system that can be used in large public facilities or at trade fairs to locate acquaintances on the premises.
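A minimal sketch of such an adaptation is given below: the situation is classified as trust-critical on the basis of the social relations between the owner of the data and the bystanders, private widgets are moved to the owner's private device if necessary, and every autonomous change is logged so that the user can follow and reverse it. The relation model, the threshold and the display names are assumptions for illustration rather than the project's actual implementation.

```python
# Sketch of privacy adaptation in a multi-user multi-display environment.
# The social-relation model, thresholds and display names are illustrative
# assumptions.

PRIVATE, PUBLIC = "private", "public"

def is_trust_critical(owner, bystanders, relations, min_trust=0.6):
    """A situation is trust-critical if any person in the interaction area
    is unknown to the owner or trusted below the threshold."""
    return any(relations.get((owner, b), 0.0) < min_trust for b in bystanders)

def place_widgets(owner, widgets, bystanders, relations):
    """Decide per widget whether it may stay on the public display or must
    move to the owner's private device, and log every autonomous change so
    the user can follow and, if desired, reverse it."""
    critical = is_trust_critical(owner, bystanders, relations)
    placement, log = {}, []
    for name, sensitivity in widgets.items():
        if critical and sensitivity == PRIVATE:
            placement[name] = "owner_device"
            log.append(f"moved '{name}' to your device (unknown person nearby)")
        else:
            placement[name] = "public_display"
    return placement, log

relations = {("alice", "bob"): 0.9}   # alice knows and trusts bob
widgets = {"friend_map": PUBLIC, "chat_history": PRIVATE}
print(place_widgets("alice", widgets, ["bob", "stranger"], relations))
# chat_history moves to alice's device because an unknown person is nearby
```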

 

Interactions with the Friend Finder via public and private displays © University of Augsburg

Common Basis: Infrastructure for Trusted Systems (Group Ungerer)

The proposed solutions build on common elements. All applications rely on agents that communicate with each other. The agents measure, store, process and analyze trust values. They also need services to locate other agents, to store data persistently in the system, and to retrieve data about the availability of other agents.

 

At the infrastructure level, trust values can be used to improve the trustworthiness of a system. At this level, for example, reliability can be increased by identifying unreliable nodes, i.e. nodes that repeatedly cannot be reached. Important services are then migrated to reliable nodes. In addition, procedures can be set up in the infrastructure to store trust values and evaluate them with application-specific metrics.
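A minimal sketch of this mechanism, assuming a simple windowed availability measure and a greedy migration rule (window size, threshold and node names are illustrative, not the actual infrastructure implementation):

```python
# Sketch of availability monitoring and service migration at the
# infrastructure level. Window size, threshold and node names are
# illustrative assumptions.

from collections import defaultdict, deque

class AvailabilityMonitor:
    def __init__(self, window=20, min_availability=0.8):
        self.history = defaultdict(lambda: deque(maxlen=window))
        self.min_availability = min_availability

    def record_probe(self, node, reachable):
        """Record the outcome of one reachability probe for a node."""
        self.history[node].append(1 if reachable else 0)

    def availability(self, node):
        probes = self.history[node]
        return sum(probes) / len(probes) if probes else 1.0

    def unreliable_nodes(self):
        """Nodes that were repeatedly unreachable within the observation window."""
        return [n for n in self.history
                if self.availability(n) < self.min_availability]

def migrate_services(services, monitor):
    """Move services hosted on unreliable nodes to the most available node."""
    bad = set(monitor.unreliable_nodes())
    good = sorted((n for n in monitor.history if n not in bad),
                  key=monitor.availability, reverse=True)
    return {svc: (good[0] if host in bad and good else host)
            for svc, host in services.items()}

monitor = AvailabilityMonitor()
for _ in range(10):
    monitor.record_probe("node_a", True)
    monitor.record_probe("node_b", False)   # node_b keeps timing out
print(migrate_services({"naming_service": "node_b"}, monitor))
# -> {'naming_service': 'node_a'}
```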

 

The Trust Enabling Middleware (TEM) provides these services. In cooperation with all groups, interfaces were developed that allow the individual application platforms to be built on top of the TEM. The TEM provides an agent abstraction as well as an infrastructure and primitives for communication. Availability data of the individual agents are collected across the entire infrastructure and can be evaluated and reused by the platforms. In addition, trust values collected in an application can be stored in the TEM and retrieved and evaluated as required.
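The following sketch indicates how an application platform could store trust observations in a middleware component and evaluate them with an application-specific metric. The class and method names are assumptions made for illustration and do not reflect the TEM's actual interfaces.

```python
# Sketch of storing and evaluating trust values through a middleware
# component. Class and method names are illustrative assumptions and do
# not reflect the TEM's actual interfaces.

from collections import defaultdict
from statistics import mean

class TrustStore:
    """Middleware-side storage for trust observations about agents."""

    def __init__(self):
        self._observations = defaultdict(list)   # (agent, facet) -> values

    def report(self, agent_id, facet, value):
        """Store one trust observation, e.g. for the facet 'reliability'."""
        self._observations[(agent_id, facet)].append(value)

    def evaluate(self, agent_id, facet, metric=mean):
        """Evaluate the stored observations with an application-specific metric."""
        values = self._observations.get((agent_id, facet), [])
        return metric(values) if values else None

# An application platform can plug in its own metric, for example one that
# weights recent observations more strongly than older ones.
def recency_weighted(values, alpha=0.5):
    estimate = values[0]
    for v in values[1:]:
        estimate = (1 - alpha) * estimate + alpha * v
    return estimate

store = TrustStore()
for v in (1.0, 1.0, 0.0, 0.0):        # the agent became unreliable recently
    store.report("worker_7", "reliability", v)
print(store.evaluate("worker_7", "reliability"))                    # 0.5
print(store.evaluate("worker_7", "reliability", recency_weighted))  # 0.25
```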

Team

Director
Institute for Software & Systems Engineering
Senior Researcher
Institute for Software & Systems Engineering

Chairholder
Chair for Human-Centered Artificial Intelligence
Full Professor
Chair of Organic Computing
Retired Professor
Chair for Embedded Systems

Prof. Dr. Christian Müller-Schloer

Former Professor
Leibniz University Hannover

Phone: +49 511 762 19730

Institute for Software & Systems Engineering

The Institute for Software & Systems Engineering (ISSE), directed by Prof. Dr. Wolfgang Reif, is a scientific institution within the Faculty of Applied Computer Science of the University of Augsburg. In research, the institute pursues both fundamental and application-oriented research in all areas of software and systems engineering. In teaching, it contributes to the further development of the relevant course offerings of the faculty and the university.
