Automatic generation of personalized tutorial feedback in e-learning
This event took place on Wednesday 22 September 2010 at 11:30
We are arguably in the midst of a transition from traditional classroom learning (c-learning) to electronic and mostly individual learning (e-learning). One of the problems we face today is that feedback given automatically by a computer is far more limited, and often less helpful, than feedback provided by a teacher. For exercise types with limited input possibilities, such as multiple-choice questions, the teacher can enter feedback for every possible wrong answer. Once we move to more open questions, such as a translation exercise, this is no longer feasible: the student can make any grammar, spelling, translation or style error, and for a number of different reasons. Current state-of-the-art solutions use language-specific parsers in combination with spellcheckers to provide corrections and feedback. These are, however, very hard to construct, and although their precision is acceptable, they often lack recall. What we plan to do is develop a system that can compare errors and reuse feedback messages from the past. To accomplish this, we make use of natural language processing (such as part-of-speech tagging and corpus linguistics) and machine learning techniques (classification, clustering, etc.). Combining linguistics, statistics, computer science and pedagogy, this is a truly interdisciplinary undertaking.
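The idea of reusing past feedback could be sketched roughly as follows: store previously seen erroneous answers together with the feedback a teacher attached to them, and when a new wrong answer arrives, look for the most similar past error and reuse its feedback. The snippet below is only an illustrative sketch, not the talk's actual system; it uses a simple string-similarity measure in place of the part-of-speech and machine-learning techniques mentioned above, and all example data is invented.

```python
from difflib import SequenceMatcher

# Hypothetical memory of past student errors and the teacher feedback
# they received (illustrative data, not from the real system).
past_errors = [
    ("She go to school", "Check subject-verb agreement: 'she' takes 'goes'."),
    ("I have seen him yesterday", "Use the simple past with 'yesterday': 'I saw him'."),
    ("He is more taller than me", "Avoid double comparatives: say 'taller'."),
]

def reuse_feedback(answer, memory, threshold=0.6):
    """Return the feedback attached to the most similar past error,
    or None if nothing in memory is close enough."""
    best_score, best_feedback = 0.0, None
    for past_answer, feedback in memory:
        # Crude surface similarity; a real system would compare
        # linguistic features (POS tags, error categories) instead.
        score = SequenceMatcher(None, answer.lower(), past_answer.lower()).ratio()
        if score > best_score:
            best_score, best_feedback = score, feedback
    return best_feedback if best_score >= threshold else None

print(reuse_feedback("She go to the school", past_errors))
```

A genuinely novel error would fall below the similarity threshold and return no feedback, which is exactly where classification and clustering of error types would be needed to generalise beyond near-duplicates.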
(Due to unforeseen circumstances we were unable to record or webcast this event, we apologise to those who were otherwise unable to attend this event in person)
We are also inviting top experts in AI and Knowledge Technologies to discuss major socio-technological topics with an audience comprising both members of the Knowledge Media Institute and the wider staff at The Open University. Unlike our seminar series, these events follow a Q&A format.