Efficient Re-indexing of Automatically Annotated Image Collections Using Keyword Combination
This event took place on Friday 20 October 2006 at 11:30
Alexei Yavlinsky, Multimedia and Information Systems Group, Department of Computing, Imperial College London
I will present a framework for improving the image index obtained by automated image annotation. Within this framework, the technique of keyword combination is used for fast image re-indexing based on initial automated annotations. It aims to tackle the challenges of limited vocabulary size and low annotation accuracy caused by differences between training and test collections, and it is useful when these two problems are not anticipated at the time of annotation. I will show that, based on example images from the automatically annotated collection, it is often possible to find multiple-keyword queries that retrieve new image concepts not present in the training vocabulary, and that improve retrieval results for concepts that are already present. This can be done at very small computational cost and at an acceptable performance tradeoff compared to traditional annotation models. I will report results on TRECVID 2005, the Getty Image Archive, and Web image datasets, the last two of which were specifically constructed to support realistic retrieval scenarios.
Download PowerPoint presentation (9 MB ZIP file)
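The abstract above describes keyword combination only at a high level. The sketch below shows one plausible reading of it, assuming the initial annotation step yields a matrix of per-image keyword probabilities and that keyword combinations are scored by how highly they rank a few example images of the new concept. The greedy search, the mean-score combination rule, and all names (combine_keywords, scores, example_ids) are illustrative assumptions rather than details from the talk.

```python
# Minimal sketch of keyword-combination re-indexing (illustrative only;
# the scoring and search strategy here are assumptions, not the exact
# method presented in the talk).
import numpy as np

def combine_keywords(scores, example_ids, max_keywords=3):
    """Greedily pick keywords whose averaged annotation scores rank the
    example images of a new concept as highly as possible.

    scores      : (n_images, n_keywords) matrix of P(keyword | image)
                  produced by the initial automated annotation.
    example_ids : indices of images known to depict the new concept.
    Returns the chosen keyword indices and a relevance score per image.
    """
    n_images, n_keywords = scores.shape
    chosen = []
    best_rank = np.inf
    for _ in range(max_keywords):
        best_kw = None
        for kw in range(n_keywords):
            if kw in chosen:
                continue
            combined = scores[:, chosen + [kw]].mean(axis=1)
            # Mean rank of the example images under this keyword combination
            order = np.argsort(-combined)
            ranks = np.empty(n_images)
            ranks[order] = np.arange(n_images)
            mean_rank = ranks[example_ids].mean()
            if mean_rank < best_rank:
                best_rank, best_kw = mean_rank, kw
        if best_kw is None:  # no additional keyword improves the ranking
            break
        chosen.append(best_kw)
    return chosen, scores[:, chosen].mean(axis=1)

# Toy usage: 6 images, 4 vocabulary keywords; images 0 and 1 exemplify an
# unseen concept (e.g. "beach" expressed via "sand" + "sea").
rng = np.random.default_rng(0)
scores = rng.random((6, 4))
keywords, relevance = combine_keywords(scores, example_ids=[0, 1])
print("chosen keyword indices:", keywords)
print("re-indexing scores:", np.round(relevance, 2))
```

The point of the sketch is that re-indexing only requires arithmetic over the already-computed annotation scores, which is why the cost is small compared to re-running an annotation model over the collection.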
Multimedia and Information Systems is...

We focus on content-based information retrieval over a wide range of data, spanning from unstructured text and unlabelled images, through spoken documents and music, to videos. This encompasses modelling human perception of relevance and similarity, learning from user actions, and presenting information in an up-to-date form. We are currently building MIR, an integrated multimedia information retrieval system, to serve as a research prototype. We aim for a system that understands the user's information need and successfully links it to the appropriate information sources, be it a report or a TV news clip. This work is guided by the vision that an automated knowledge extraction system ultimately empowers people to make efficient use of information sources without the burden of filing data into specialised databases.
Visit the MMIS website

