Full Seminar Details
Trung Huynh

This event took place on Wednesday 04 October 2017 at 11:30
Learning word representations with unsupervised methods has recently been explored extensively. However, the approaches studied have been limited to those based on co-occurrence statistics or on shallow, windowed bag-of-words neural networks. These methods are usually so computationally efficient that they can be trained on huge corpora containing billions of tokens. Such bag-of-words models, however, do not utilise the structural nature of language in their inferences. We hypothesise that by using structural architectures, specifically recurrent neural networks, the derived word representations capture properties learned from the preserved sequential nature of the inputs.
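As a rough illustration of the idea (not the speaker's actual model), the sketch below trains a small LSTM language model on next-token prediction and reads word representations off its embedding matrix; the recurrent hidden state sees the tokens in order, unlike a windowed bag-of-words objective. All names, layer sizes, and the random toy batch are assumptions for the example.

```python
# Minimal sketch: word representations from a recurrent language model.
import torch
import torch.nn as nn

class RNNWordRepresentations(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)   # learned word vectors
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.decoder = nn.Linear(hidden_dim, vocab_size)        # predicts the next token

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)       # (batch, seq_len, embed_dim)
        outputs, _ = self.rnn(embedded)            # hidden states preserve word order
        return self.decoder(outputs)               # logits over the vocabulary

# Toy usage: train briefly on next-token prediction, then read off the embeddings.
vocab_size = 1000
model = RNNWordRepresentations(vocab_size)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

batch = torch.randint(0, vocab_size, (8, 20))      # random token ids stand in for text
logits = model(batch[:, :-1])                      # predict each following token
loss = criterion(logits.reshape(-1, vocab_size), batch[:, 1:].reshape(-1))
loss.backward()
optimiser.step()

word_vectors = model.embedding.weight.detach()     # (vocab_size, embed_dim) representations
```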