Full Seminar Details
Agnese Chiatti
Knowledge Media Institute

This event took place on Wednesday 25 March 2020 at 11:30
The fast-paced advancement of the AI and Robotics fields has provided new technological tools for developing robots that can assist people with their daily tasks (i.e., service robots). To make sense of real-world, dynamic environments, service robots need not only to robustly recognise objects, but also to understand their observations and to react accordingly. Our focus is on the sensory modality of vision and on Visual Intelligence: the robots’ capability to use their vision system, reasoning components and background knowledge to make sense of their environment. Despite the recent popularity of Computer Vision methods based on Deep Neural Networks, machine Visual Intelligence is still inferior to human Visual Intelligence in many ways. Thus, there is an incentive to take inspiration from the human mind’s excellence at vision, to better pinpoint the set of capabilities and types of knowledge required for human-like Visual Intelligence. In this work, we examine the epistemic requirements of Visual Intelligence. We propose a framework that leverages the different capabilities and knowledge resources required for Visual Intelligence to improve the sensemaking of service robots.