The Discreet Charm of Meta
This event took place on Friday 22 October 2004 at 10:00
Dr. Frank Nack, CWI, Amsterdam, The Netherlands
In dynamic environments, such as web-based cultural heritage sites, where neither the individual user requirements nor the requested material can be predicted in advance, an automated presentation generation process is required. For that, however, the system relies on semantic, episodic, and technical representation structures that provide a full experience to the user by means of montage. Such constructivistic needs of new media require more than characterizing audio-visual information on a perceptual level using objective measurements, such as those based on image or sound processing or pattern recognition. These retrospective technologies understand media rather from the point of view of automatically indexing multimedia information.
Though machine-generated metadata is cheap to produce, it is insufficient because it is exclusively organized around the media structures' sensory surfaces, that is, the physical features of an image, video, or audio stream. There is ample evidence, however, that a great deal of the required annotation can be provided by manual labour. In this talk we investigate the notion of individual and purpose-driven audio-visual media annotation during the production process. The aim is to make use of human activity to extract the significant syntactic, semantic, and semiotic aspects of the media content as well as of the related process, which can then be transformed into formal descriptions. The resulting descriptions form a conceptual information space in which authors are semantically supported in their creative efforts to generate, manipulate, and exchange information. The approach is demonstrated with examples mainly taken from encyclopaedic spaces in domains such as the theory, history, and anthropology of film, or cultural heritage as provided by museums for the fine arts.
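The idea of turning production-time activity into formal descriptions can be made concrete with a small sketch. The record below is purely illustrative and is not the representation structure presented in the talk; all field names and example values are hypothetical, but they mirror the three description levels mentioned above (syntactic, semantic, semiotic).

from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class MediaAnnotation:
    """Illustrative annotation record for one audio-visual segment (hypothetical schema)."""
    media_uri: str                 # reference to the image, video, or audio asset
    segment: Tuple[float, float]   # (start_seconds, end_seconds) within the asset
    syntactic: Dict[str, str] = field(default_factory=dict)  # e.g. shot type, camera movement
    semantic: Dict[str, str] = field(default_factory=dict)   # e.g. depicted objects, persons, places
    semiotic: Dict[str, str] = field(default_factory=dict)   # e.g. connotation, rhetorical role
    production_step: str = ""      # which production activity produced this annotation

# Example: an annotation captured while an editor selects a clip for a montage.
clip_note = MediaAnnotation(
    media_uri="urn:example:archive/clip-042",
    segment=(12.0, 18.5),
    syntactic={"shot_type": "close-up", "camera": "static"},
    semantic={"subject": "Rembrandt self-portrait", "location": "Rijksmuseum"},
    semiotic={"rhetorical_role": "establishing authority"},
    production_step="clip selection",
)

Records of this kind, accumulated across production steps, are the sort of conceptual information space a presentation generator could later query and assemble by montage.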
Multimedia and Information Systems

We focus on content-based information retrieval over a wide range of data, spanning from unstructured text and unlabelled images over spoken documents and music to videos. This encompasses the modelling of human perception of relevance and similarity, learning from user actions, and the up-to-date presentation of information. Currently we are building a research version of an integrated multimedia information retrieval system (MIR) to be used as a research prototype. We aim for a system that understands the user's information need and successfully links it to the appropriate information sources, be it a report or a TV news clip. This work is guided by the vision that an automated knowledge extraction system ultimately empowers people to make efficient use of information sources without the burden of filing data into specialised databases.
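As a rough illustration of the content-based ranking idea behind such a system (not the group's actual MIR prototype, whose feature extractors and models are not described here), the sketch below reduces media items to feature vectors and ranks them by cosine similarity to a query vector; the feature extraction step is stubbed with random vectors.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two feature vectors (1e-12 guards against zero norms).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def rank_by_similarity(query_vec: np.ndarray, index: dict) -> list:
    # Return (item id, score) pairs sorted by descending similarity to the query.
    scored = [(item_id, cosine_similarity(query_vec, vec)) for item_id, vec in index.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Toy index: in a real retrieval system the vectors would come from text,
# image, audio, or video feature extractors rather than random numbers.
rng = np.random.default_rng(0)
index = {f"item-{i}": rng.normal(size=8) for i in range(5)}
query = rng.normal(size=8)

for item_id, score in rank_by_similarity(query, index):
    print(item_id, round(score, 3))

Cosine similarity stands in here for the learned relevance and similarity models mentioned above; the recurring pattern is mapping heterogeneous media into a common vector space before ranking.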



