The Real Time Systems Group focuses on the following areas: the planning of complex technical systems, the modelling and analysis of discrete-event systems with formal methods, software development methods and tools for automation technology, and the programming and testing of embedded, networked control devices under the aspects of real time, reliability and security.
The applications range from systems automation with industrial programmable logic controllers to the discrete-event (reactive) control of autonomous mobile robots with embedded microcontrollers and real-time operating systems.
Currently, the RTS research projects can be assigned to the following areas:
MOBILIZE - Mobility in Engineering and Science
Researcher: M. Sc. Sebastian Kleinschmidt
Especially for the state of Lower Saxony, the future of mobility is a key issue. The "MOBILIZE - Mobility in Engineering and Science" research initiative lays the foundations for further mobility research and, in particular, promotes interdisciplinary cooperation, since future innovations can be expected especially at the boundaries between the various disciplines. MOBILIZE comprises several research initiatives; "Mobile Human" is one of them. Together with the Institute of Microelectronic Systems (IMS) and the Institute of Photogrammetry and GeoInformation (IPI), the RTS is working on the topic "Algorithms and Architectures for Machine Learning based Real Time Sensor Data Fusion".
DFG Research Training Group – Integrity and Collaboration in dynamic SENSor networks (i.c.sens)
Researcher: M. Sc. Raphael Voges
The Research Training Group GRK2159 (i.c.sens) investigates concepts for ensuring the integrity of collaborative systems in dynamic sensor networks. Collaborative and autonomous systems are currently entering everyday life, e.g. in flexible factory automation, through service and home robotics, or as autonomous vehicles. Consequently, these systems operate close to or even in direct interaction with humans, which creates new risks. In order to prevent damage and accidents, the systems should detect failures and warn their vicinity in time, i.e. guarantee the integrity of their correct operation, in particular the integrity of their navigation information. In addition, an increasing number of co-existing autonomous systems enables collaboration and cooperation, but also calls for coordination.
Automated Driving in Industrial Environments
In many industrial areas, vehicles are still driven manually through the environment. However, the proportion of electronic components in vehicles is increasing steadily, so most actuators can already be controlled electronically. Nevertheless, not all vehicles have suitable sensors to let them drive automatically. This project investigates how vehicles can be retrofitted with a mobile sensor unit so that automated driving in an industrial environment becomes possible.
Starting in January 2015, SmokeBot is a project funded by the EU within the scope of the Horizon 2020 programme. It is being realized together with partners from Sweden, Austria and the United Kingdom, as well as the Fraunhofer Institute for High Frequency Physics and Radar Techniques. The aim of SmokeBot is to improve the environment perception of mobile robots in scenarios with low visibility. L3S is responsible for research topics such as sensor data fusion (light-based sensors and radar), thermography, (hazard) situation analysis and information modelling.
Harsh conditions such as rain, snow and fog, as well as situational occurrences such as fire-related smoke and dust, significantly decrease the quality and usability of traditional, light-based sensor modalities. Robots for emergency response and disaster management address an emerging field with a high demand for risk reduction for first-responder personnel as well as for reduced mission costs. These applications require new perceptual and cognitive robotic capabilities. The limits of traditional sensors when dealing with only partial or erroneous information, as well as demands on mechanical, electronic and thermal robustness, restrict the use of current robotic platforms.
The aim of SmokeBot is to improve the environment perception of mobile robots in smoke-filled scenarios. Existing sensor technology such as LiDAR and RGB cameras cannot cope with such demanding conditions. The focus is on civil robots supporting fire brigades in search and rescue missions, e.g. in post-disaster management operations. Here you can see our SmokeBot video.
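The kind of per-beam fallback such a fusion might use can be sketched as follows. This is a purely illustrative example with an assumed echo-quality threshold, not the SmokeBot fusion algorithm:

```python
# Illustrative sketch (an assumption, not the SmokeBot method): trust the
# precise LiDAR range while its echo quality is good, otherwise fall back
# to the coarser but smoke-penetrating radar range for the same beam.

def fused_range(lidar_range, lidar_echo_quality, radar_range, quality_min=0.3):
    """Select a usable range reading for one beam direction."""
    if lidar_range is not None and lidar_echo_quality >= quality_min:
        return lidar_range          # clear air: LiDAR is more precise
    return radar_range              # smoke/dust: radar still sees through

clear = fused_range(4.98, 0.9, 5.2)   # clear conditions: LiDAR value used
smoky = fused_range(None, 0.0, 5.2)   # dense smoke, no LiDAR echo: radar used
```

In a real system the threshold would be calibrated per sensor, and the two modalities would more likely be fused probabilistically than switched hard.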
Article: SMOKEBOT undergoes a trial by fire
As part of the USBV-Inspektor project, L3S is developing a multimodal sensor suite together with the North Rhine-Westphalia State Office of Criminal Investigation, Fraunhofer FHR, ELP GmbH and Hentschel System GmbH. This sensor suite consists of a millimetre-wave scanner, a 3D range scanner and a high-resolution camera. The system perceives the internal and external geometry of suspicious objects and generates high-definition images to support the local forces and collect additional evidence.
Suitcases, bags or backpacks left lying around unsupervised are part of daily life. Even though most abandoned luggage items turn out to be harmless, they can be the cause of a large-scale police operation. The objective of the USBV-Inspektor project is to develop a sensor suite which can be mounted on the end effector of a remotely operated robot, enabling local forces to inspect suspicious objects without placing themselves in danger. The operator controls the robot from a safe distance and receives a 3D visualization of the environment. Furthermore, the data from the sensor suite can be used to preserve evidence for criminal proceedings and to facilitate the threat assessment of suspicious items by providing additional 3D information about the crime scene and the contents of the object.
Mobile Service Robots
Autonomous forklift – a Revolution on the Factory Floor
The Realtime Systems Group (RTS) of the Leibniz Universität Hannover cooperates with STILL GmbH in order to facilitate flexible navigation and materials handling.
An overriding aim of the project is to reduce costs by using flexible automated materials handling vehicles – both in warehouse and production facilities. Economic gains can be expected, for example, by considerably reducing the risk of accidentally dropped material or misplaced goods. Furthermore, little or no infrastructure changes (such as track guides) are necessary. The possibilities for an autonomous forklift truck are enormous, since the transition between manual and automated operation can be optimised. Furthermore, the transport processes can be reproduced – even those in which a high degree of care is needed when positioning.
Navigation of Autonomous Systems in Outdoor Environments
Service robots autonomously mow greens and pull out weeds in large parks or sports facilities. Tourists are comfortably brought to places of interest or accompanied at a fair. Disabled and elderly people find help with everyday routines. There are countless examples like these in which service robots support individuals. To solve the given tasks efficiently, the robot always has to know its present location and is thus constantly looking for the answer to the question: "Where am I?".
To find the answer to this question, we develop new procedures and mathematical models for navigating autonomous systems. In this case, we focus on navigation in unknown, natural surroundings. In contrast to indoor navigation, where the environment is structured by straight walls, doors and flat floors, the obstacles and objects available for navigation in outdoor environments are much more complex. The form and surface of objects (trees, bushes, buildings etc.) are irregular and change over time. Influences such as wind can hardly be described by unambiguous mathematical models. By fusing different sensors such as GPS, compass, laser scanner, wheel sensors and gyroscopes, we succeed in autonomously generating environment models of the working area. Using these maps, localization is possible with a tolerance of only a few centimetres.
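The fusion principle can be illustrated with a minimal one-dimensional Kalman filter that blends drifting wheel-odometry increments with absolute position fixes (e.g. from GPS). This is only a sketch of the general technique; the noise parameters are assumed values, not the group's actual filter:

```python
# Minimal 1D Kalman filter fusing odometry predictions with absolute fixes.
# Illustrative only: q and r are assumed process/measurement noise variances.

def kalman_step(x, p, u, z, q=0.05, r=1.0):
    """One predict/update cycle.

    x, p : current position estimate and its variance
    u    : odometry displacement since the last step (prediction input)
    z    : absolute position measurement (e.g. a GPS fix)
    """
    # Predict: integrate the odometry; uncertainty grows by process noise.
    x_pred = x + u
    p_pred = p + q
    # Update: blend in the absolute fix, weighted by the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

# Example: the robot drives 1 m per step; the odometry increment is slightly
# biased, while the absolute fixes are taken as exact here for clarity.
x, p = 0.0, 1.0
for step in range(1, 6):
    x, p = kalman_step(x, p, u=1.02, z=float(step))
```

The absolute fixes keep the biased odometry from drifting away, while the odometry smooths the estimate between fixes; the same predict/update structure generalizes to full 2D/3D pose estimation.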
Recent disasters, such as the Fukushima catastrophe, the attack on the WTC, the Gulf of Mexico oil spill, or the Mont Blanc Tunnel fire, have shown that post-disaster management tasks still require considerable human intervention, even though the humans entering affected areas risk their health and lives, causing human tragedy and immense cost for national economies. With a view to keeping these dangers and costs at bay, and to increasing the effectiveness of disaster management operations, our long-term vision is that teams of autonomous robots equipped with sensors, manipulators, and communication capabilities will be able to enter dangerous, polluted or contaminated areas and to manage all the necessary tasks (e.g. search for survivors, evacuate injured persons, remove safety-critical materials, clean up) without the need for explicit human assistance. This inspires the notion of Robotic FireFighters (RFF), in analogy to human fire brigades, where firemen (and firewomen) work together towards the common goal of getting a disaster under control.
To obtain 3D environmental data, the Institute for Systems Engineering has developed a continuously rotating 3D scanner consisting of a Scan Drive system especially designed for this application and a SICK laser scanner. The sensor can be used on mobile robot platforms for environment perception and navigation. It takes a complete picture of its surroundings every 2 seconds, with more than 13,000 measuring points acquired every second. These points are synchronized with the rotation angle of the 2D scanner and the current position and orientation of the robot to generate accurate 3D data even while the robot is moving. Evaluation of the obtained data is performed on a Scalable Processing Box (SPB) on the basis of our robot operating system. Using this real-time operating system, it is possible to avoid systematic measurement errors which usually occur due to runtime differences during data processing.
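The synchronization of 2D scan readings with the drive's rotation angle can be sketched geometrically as follows. This is an illustrative model with an assumed axis convention (scan plane vertical, drive axis vertical), not the Scan Drive's actual code:

```python
import math

# Illustrative geometry sketch: a 2D scanner measures (beam_angle, range) in
# its own vertical scan plane; that plane is rotated about the vertical axis
# by the drive angle, and each point can be shifted by the robot's position.

def scan_point_to_3d(beam_angle, rng, drive_angle, robot_xy=(0.0, 0.0)):
    """Convert one 2D scanner reading into a 3D point in the world frame."""
    # Point in the 2D scan plane (u = forward within the plane, z = up).
    u = rng * math.cos(beam_angle)
    z = rng * math.sin(beam_angle)
    # Rotate the scan plane about the vertical axis by the drive angle,
    # then translate by the robot's current position.
    x = u * math.cos(drive_angle) + robot_xy[0]
    y = u * math.sin(drive_angle) + robot_xy[1]
    return (x, y, z)

# Example: a horizontal beam (beam_angle = 0) at 2 m range, with the scan
# plane turned 90 degrees, ends up along the y axis.
p = scan_point_to_3d(0.0, 2.0, math.pi / 2)
```

Because the drive angle and the robot pose change during a sweep, each point must be tagged with the values valid at its exact measurement time; this is precisely where the real-time synchronization described above matters.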
The obtained 3D point clouds are evaluated using different algorithms. In this context, algorithms are available which recognize obstacles in 3D images and reduce the gathered information about the size and orientation of the objects to a two-dimensional view. These data can then be fed to 2D SLAM algorithms, enabling us to use the considerably larger amount of information contained in 3D scans to make the 2D algorithms work more effectively. Integrating 3D perception makes it possible - for the first time - to navigate autonomously in an unstructured outdoor environment.
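A minimal sketch of this 3D-to-2D reduction, assuming a simple height-band criterion for what counts as an obstacle (the thresholds are illustrative, not the group's actual parameters):

```python
# Reduce a 3D point cloud to 2D obstacle points for a 2D SLAM front end:
# keep only points at heights the robot could collide with, then drop the
# height coordinate. Thresholds are assumed example values.

def project_obstacles(points_3d, z_min=0.05, z_max=1.8):
    """Return the (x, y) ground-plane projection of collision-relevant points.

    points_3d : iterable of (x, y, z) tuples in metres
    z_min     : discard ground returns below this height
    z_max     : discard overhanging structure above this height
    """
    return [(x, y) for (x, y, z) in points_3d if z_min <= z <= z_max]

cloud = [
    (1.0, 0.0, 0.01),   # ground return            -> discarded
    (2.0, 0.5, 0.90),   # tree trunk at body height -> kept
    (2.0, 0.5, 2.50),   # overhanging branch        -> discarded
]
obstacles = project_obstacles(cloud)
```

The 2D SLAM algorithm then sees only genuine obstacles, while ground clutter and overhanging vegetation - which would corrupt a naive 2D scan - are filtered out by the 3D information.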
Furthermore, we work on extracting characteristics within the 3D point clouds in order to examine the 3D data for certain features, thus reducing the large quantities of data generated by the 3D sensor and enabling more efficient algorithms for object detection and classification.
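One common way to reduce point-cloud size before feature extraction is voxel-grid downsampling. The sketch below shows that generic technique under assumed parameters; it is not the group's specific reduction method:

```python
from collections import defaultdict

# Voxel-grid downsampling: partition space into cubic cells ("voxels") and
# replace all points inside one cell by their centroid. The voxel edge
# length is an assumed example value.

def voxel_downsample(points, voxel=0.5):
    """Return one centroid per occupied voxel of edge length `voxel` metres."""
    cells = defaultdict(list)
    for p in points:
        key = tuple(int(c // voxel) for c in p)   # integer cell index per axis
        cells[key].append(p)
    return [
        tuple(sum(pt[i] for pt in pts) / len(pts) for i in range(3))
        for pts in cells.values()
    ]

dense = [(0.1, 0.1, 0.0), (0.2, 0.2, 0.0), (1.1, 0.1, 0.0)]
sparse = voxel_downsample(dense)   # the two nearby points collapse into one
```

The reduced cloud preserves the coarse geometry needed for object detection while cutting the number of points the downstream classifiers must process.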
Distributed Realtime and Automation Systems
Realtime-Linux: Xenomai, RTnet, RACK
Our institute's research and development is based on the Linux real-time extension Xenomai. The further development of this open-source project is actively supported; in this context, RTS has contributed the Real-Time Driver Model (RTDM) as well as numerous drivers.
The RTnet project originated at RTS from several student and diploma theses; it aims at using standard Ethernet for communication under hard real-time conditions. Shortly after publication, the development team was joined by numerous volunteers from different countries, so that progress is being made at a dynamic pace. The open architecture of RTnet offers the potential for diverse applications, ranging from the institute's mobile robots to industrial units. RTS will continue to actively support the coordination and development of this project.
All RTS service robots run an in-house developed real-time middleware based on Xenomai and RTnet. Its key component, the Robotics Application Construction Kit (RACK), has been published as open source. Other components are developed in research projects or on behalf of industry partners.
Scalable Processing Box
Increasingly complex robotics projects in research and teaching require a standardized and scalable processing platform which can serve as the basis for lectures and internships as well as for research projects. The Scalable Processing Box (SPB) is designed to offer students and researchers a flexible way to work in robotics without having to worry about the basic processing infrastructure, which they only want to use as a tool for their work. The objective is to standardize the SPB with regard to hardware and software so that, despite using processor cores of different performance and architecture, the same look and feel is always achieved. Thus, it should be possible to realize simple low-level projects for beginning students as well as complex applications for research projects (e.g. doctoral theses).