Norbert Krüger has been employed at the University of Southern Denmark since 2006, first as an Associate Professor and, since 2008, as a full Professor (MSO). He is one of the two leaders of the Cognitive and Applied Robotics Group (CARO, caro.sdu.dk), in which currently 12 PhD students, two Assistant Professors, two Associate Professors, and eight Master's students are working. Norbert Krüger's research focuses on cognitive vision, in particular vision-based manipulation and learning. He has published 45 journal papers and more than 80 conference papers covering computer vision, robotics, neuroscience, and cognitive systems. His h-index is 24. His group has developed the C++ software CoViS (Cognitive Vision Software), which is now used by a number of groups in national as well as European projects. He is currently involved in two European and four Danish projects.
Computer vision, although still a rather young scientific discipline, has in recent decades produced impressive examples of artificial vision systems that outperform humans. However, the human visual system is still superior to any artificial vision system in visual tasks requiring generalization and reasoning (often called 'cognitive vision'), such as the extraction of visual affordances or visual tasks in the context of tool use and dexterous manipulation of unknown objects.
Two decades ago, there was a strong connection between the two communities dealing with human vision research and computer vision. This link, however, has largely been lost, and computer vision has increasingly developed into a sub-field of machine learning. In this talk, I argue that the superiority of human vision in 'cognitive vision tasks' is connected to the deep hierarchical architecture of the primate visual system.
The talk is divided into two parts: First, I will give an overview of current knowledge about the primate (and thereby the human) visual system, based primarily on neurophysiological research. This part draws on the paper (Krüger et al. 2013, IEEE PAMI) and addresses in particular computer vision and machine learning scientists.
In the second part of the talk, I will describe a three-level hierarchical cognitive robot system in which actions are learned by observing humans performing them (Krüger et al. 2014, KI). Learning takes place at the different levels of the hierarchy in rather different representations. On the sensory-motor level, the shape and appearance of objects as well as optimal action trajectories and force-torque profiles are represented. On the mid-level, a discrete visual representation based on semantic event chains (Aksoy et al. 2011) bridges towards the planning level, the highest representational level. I will describe the different learning problems on the different representational levels and their interaction.
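To give an intuition for how the three levels relate, the following is a minimal, purely illustrative sketch: a toy per-frame contact representation stands in for the sensory-motor level, transitions between contact relations form a discrete chain in the spirit of semantic event chains (Aksoy et al. 2011), and a hand-written rule table maps transitions to symbolic actions on the planning level. All names and the encoding are assumptions, not the actual CoViS or event-chain implementation.

```python
def to_event_chain(contact_sequence):
    """Mid-level: compress a continuous stream of object-object contact
    relations into a discrete chain keeping only the transitions."""
    chain = []
    for relations in contact_sequence:
        if not chain or relations != chain[-1]:
            chain.append(relations)
    return chain

def to_plan(event_chain, rules):
    """Planning level: map each discrete transition to a symbolic action
    via a rule table (here hand-written; in the real system such
    mappings would be learned)."""
    plan = []
    for before, after in zip(event_chain, event_chain[1:]):
        action = rules.get((before, after))
        if action:
            plan.append(action)
    return plan

# Toy sensory-motor stream: per-frame sets of touching object pairs.
frames = [
    frozenset(),                      # hand approaches
    frozenset(),
    frozenset({("hand", "object")}),  # grasp: hand touches object
    frozenset({("hand", "object")}),
    frozenset(),                      # release
]

chain = to_event_chain(frames)
rules = {
    (frozenset(), frozenset({("hand", "object")})): "grasp",
    (frozenset({("hand", "object")}), frozenset()): "release",
}
print(to_plan(chain, rules))  # -> ['grasp', 'release']
```

The point of the sketch is the compression: many continuous frames collapse into three discrete relational states, and only the two transitions between them carry planning-relevant meaning.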
References
E. E. Aksoy, A. Abramov, J. Dörr, K. Ning, B. Dellen and F. Wörgötter (2011). Learning the semantics of object-action relations by observation. The International Journal of Robotics Research, 30, 1229-1249.
N. Krüger, P. Janssen,
S. Kalkan, M. Lappe, A. Leonardis, J. Piater, A. J.
Rodriguez-Sanchez and L. Wiskott. Deep Hierarchies in the
Primate Visual Cortex: What Can We Learn For Computer Vision?
IEEE Transactions on Pattern Analysis and Machine
Intelligence, 35(8), 1847-1871, 2013.
Norbert Krüger, Aleš
Ude, Henrik Gordon Petersen, Bojan Nemec, Lars-Peter Ellekilde,
Thiusius Rajeeth Savarimuthu, Jimmy Alison Rytz, Kerstin
Fischer, Anders Glent Buch, Dirk Kraft, Wail Mustafa, Eren Erdal
Aksoy, Jeremie Papon, Aljaž Kramberger, Florentin Wörgötter.
Technologies for the Fast Set-Up of Automated Assembly
Processes. KI - Künstliche Intelligenz. November 2014, Volume 28, Issue 4, pp. 305-313.