Research
Our research on autonomous systems has mainly focused on improving perception capabilities, with work in computer vision, SLAM, and multi-sensor fusion. More recently, we have explored how useful different machine learning approaches are for real robots. While supervised learning works very well in tasks such as object recognition, where large numbers of training examples are available, machine learning that integrates perception with behavior proves much more difficult. In real-world robotics applications, there are only a few training examples (there is no BehaviorNet!) and we have to deal with partially observable environments. We are therefore currently looking at alternative approaches, including Hierarchical Temporal Memory (HTM), Vector Symbolic Architectures (VSA), and Deep Symbolic Reinforcement Learning, which facilitates transfer learning. Since many of our papers and projects cover several topic areas, it was difficult to cluster them without too much redundancy. We therefore decided to tag our work by topic, so you can select the tags that interest you most.
Research Topics
Image Processing and Computer Vision
The human eye is a powerful perception tool. Large parts of our brain are dedicated to processing the massive amounts of incoming visual data. Accordingly, there is a wide range of possible applications for cameras in automated systems. A particularly interesting application is mobile robots that work side by side with humans over long periods of time.