KMi Seminars
Off the beaten track: using content-based multimedia similarity search for learning
This event took place on Friday 04 March 2011 at 11:30

Suzanne Little

Electronic media is a valuable and ever-increasing resource for information seekers and learners. So much information can be contained in a picture, explained by a diagram or demonstrated in a video clip. But how can you find what you are looking for if you don't understand it well enough to describe it? What can you do if you are faced with a mountain of multimedia learning material? Are there other ways of exploring open educational resources than sticking to the well-defined paths of text search and hyperlinks?

This talk will present recent work applying content-based multimedia similarity search to find related educational material by using images to query a collection. It will describe the use of local features in images, 'keypoints', identified using an approach called the Scale-Invariant Feature Transform (SIFT) [1], and the implementation of a nearest-neighbour-based indexing system to find visually similar images. The resulting content-based media search tool (cbms) has been applied in the context of the SocialLearn project [2] to help users find and explore connected web pages, presentations or documents. It is also the basis of the Spot&Search [3] iPhone application that can be used to explore artwork installations on the OU Walton Hall campus.

[1] http://www.cs.ubc.ca/~lowe/keypoints/
[2] http://www.sociallearn.org
[3] http://spotandsearch.kmi.open.ac.uk
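The keypoint-matching idea described above can be sketched in a few lines. The snippet below is a simplified, brute-force illustration, not the cbms implementation: it takes 128-dimensional descriptor vectors (the dimensionality SIFT produces) and matches each query descriptor to its nearest neighbour in an indexed collection, accepting a match only when it is clearly more distinctive than the second-best candidate (Lowe's ratio test). A production system would replace the exhaustive search with an approximate nearest-neighbour index.

```python
import numpy as np

def match_keypoints(query_desc, index_desc, ratio=0.8):
    """Match query descriptors against an indexed collection.

    Brute-force nearest-neighbour search with Lowe's ratio test:
    a match is kept only if the closest indexed descriptor is much
    nearer than the second closest, filtering out ambiguous matches.
    Returns a list of (query_index, collection_index) pairs.
    """
    matches = []
    for qi, q in enumerate(query_desc):
        # Euclidean distance from this query descriptor to every indexed one
        d = np.linalg.norm(index_desc - q, axis=1)
        nn = np.argsort(d)[:2]            # two nearest neighbours
        if d[nn[0]] < ratio * d[nn[1]]:   # keep only distinctive matches
            matches.append((qi, int(nn[0])))
    return matches

# Illustrative usage with synthetic descriptors: queries 0 and 1 are
# noisy copies of indexed descriptors 3 and 7, so both should match.
rng = np.random.default_rng(0)
index_desc = rng.normal(size=(50, 128))
query_desc = index_desc[[3, 7]] + 0.01 * rng.normal(size=(2, 128))
print(match_keypoints(query_desc, index_desc))
```

Images with many such mutual keypoint matches are treated as visually similar, which is what lets an image, rather than a text string, serve as the query.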

Multimedia and Information Systems
Our research is centred on the theme of Multimedia Information Retrieval, i.e., video search engines, image databases, spoken document retrieval, music retrieval, query languages and query mediation.

We focus on content-based information retrieval over a wide range of data, ranging from unstructured text and unlabelled images, through spoken documents and music, to videos. This encompasses modelling human perception of relevance and similarity, learning from user actions and the up-to-date presentation of information. We are currently building MIR, an integrated multimedia information retrieval system, as a research prototype. We aim for a system that understands the user's information need and successfully links it to the appropriate information sources, be it a report or a TV news clip. This work is guided by the vision that an automated knowledge extraction system ultimately empowers people to make efficient use of information sources without the burden of filing data into specialised databases.

Visit the MMIS website