Learning Conditional Random Fields from Unaligned Data for Natural Language Understanding
This event took place on Friday 28 October 2011 at 11:30
Dr. Deyu Zhou School of Computer Science and Engineering, Southeast University, China
One of the key tasks in natural language understanding is semantic parsing, which maps natural language sentences to complete formal meaning representations. Rule-based approaches are typically domain-specific and often fragile. Statistical approaches are able to accommodate the variations found in real data and hence can in principle be more robust. However, statistical approaches need fully annotated data for training the models. A learning approach to train conditional random fields from unaligned data for natural language understanding is proposed and discussed. The learning approach resembles the expectation maximization algorithm. It has two advantages: first, only abstract annotations are needed instead of full word-level annotations; second, the proposed learning framework can be easily extended to train other discriminative models, such as support vector machines, from abstract annotations. The proposed approach has been tested on the DARPA Communicator Data. Experimental results show that it outperforms the hidden vector state (HVS) model, a modified hidden Markov model also trained on abstract annotations.
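The abstract's idea can be illustrated with a toy sketch: only the set of semantic labels in a sentence (the "abstract annotation") is observed, the word-level alignment is latent, and an EM-like loop alternates between weighting every consistent label sequence by its posterior under the current CRF and taking a gradient step on the expected log-likelihood. This is an illustrative assumption, not the talk's actual algorithm; the labels, data, and feature set below are invented, and sequences are enumerated by brute force for clarity.

```python
import itertools
import math
from collections import defaultdict

# Hypothetical toy labels and data in the style of the flight domain;
# each sentence comes with only an abstract annotation: the set of
# semantic labels it contains, not which word carries which label.
LABELS = ["O", "FROM", "TO"]
DATA = [
    (["fly", "from", "boston", "to", "denver"], {"FROM", "TO"}),
    (["fly", "to", "boston"], {"TO"}),
]

def features(words, labels):
    """Sparse feature counts for a (sentence, label sequence) pair."""
    f = defaultdict(float)
    prev = "<s>"
    for w, y in zip(words, labels):
        f[("emit", w, y)] += 1.0       # emission feature
        f[("trans", prev, y)] += 1.0   # transition feature
        prev = y
    return f

def score(weights, words, labels):
    return sum(weights.get(k, 0.0) * v for k, v in features(words, labels).items())

def logsumexp(xs):
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def consistent(labels, abstract):
    """A sequence agrees with the abstract annotation if it uses
    exactly that set of semantic labels (plus the filler label O)."""
    return set(labels) - {"O"} == abstract

def prob_consistent(weights, words, abstract):
    """P(label sequence is consistent with the abstract annotation)."""
    seqs = list(itertools.product(LABELS, repeat=len(words)))
    good = [s for s in seqs if consistent(s, abstract)]
    return math.exp(
        logsumexp([score(weights, words, s) for s in good])
        - logsumexp([score(weights, words, s) for s in seqs]))

def em_step(weights, lr=0.5):
    """One EM-like update: gradient of sum log P(consistent | x),
    i.e. E[f | consistent sequences] - E[f | all sequences]."""
    grad = defaultdict(float)
    for words, abstract in DATA:
        seqs = list(itertools.product(LABELS, repeat=len(words)))
        good = [s for s in seqs if consistent(s, abstract)]
        logZ_all = logsumexp([score(weights, words, s) for s in seqs])
        logZ_good = logsumexp([score(weights, words, s) for s in good])
        for s in good:   # E-step: posterior over consistent sequences
            p = math.exp(score(weights, words, s) - logZ_good)
            for k, v in features(words, s).items():
                grad[k] += p * v
        for s in seqs:   # subtract model expectation (CRF denominator)
            p = math.exp(score(weights, words, s) - logZ_all)
            for k, v in features(words, s).items():
                grad[k] -= p * v
    for k, g in grad.items():  # M-step: gradient ascent
        weights[k] = weights.get(k, 0.0) + lr * g
    return weights

weights = {}
for _ in range(30):
    weights = em_step(weights)

print(prob_consistent(weights, *DATA[0]))
```

The brute-force enumeration stands in for the forward-backward recursions a real implementation would use; the point is only the shape of the objective, which trains a discriminative model without any word-level alignment.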
Narrative Hypermedia is...

Hypermedia is the combination of hypertext, for linking and structuring, with multimedia information.
Narrative Hypermedia is therefore concerned with how narrative forms, along with the many other diverse forms of discourse possible on the Web, can be effectively designed to communicate coherent conceptual structures, drawing inspiration from theories in narratology, semiotics, psycholinguistics and film.
Check out these Hot Narrative Hypermedia Projects:
List all Narrative Hypermedia Projects
Check out these Hot Narrative Hypermedia Technologies:
List all Narrative Hypermedia Technologies