
Supervised Sequence Labelling with Recurrent Neural Networks [electronic resource] / by Alex Graves.

By: Graves, Alex
Material type: Text
Series: Studies in Computational Intelligence ; 385
Publisher: Berlin, Heidelberg : Springer Berlin Heidelberg, 2012
Description: XIV, 146 p. online resource
Content type:
  • text
Media type:
  • computer
Carrier type:
  • online resource
ISBN:
  • 9783642247972
Subject(s):
Additional physical formats: Printed edition
DDC classification:
  • 006.3 23
LOC classification:
  • Q342
Online resources:
Contents:
Introduction -- Supervised Sequence Labelling -- Neural Networks -- Long Short-Term Memory -- A Comparison of Network Architectures -- Hidden Markov Model Hybrids -- Connectionist Temporal Classification -- Multidimensional Networks -- Hierarchical Subsampling Networks.
In: Springer eBooks
Summary: Supervised sequence labelling is a vital area of machine learning, encompassing tasks such as speech, handwriting and gesture recognition, protein secondary structure prediction and part-of-speech tagging. Recurrent neural networks are powerful sequence learning tools—robust to input noise and distortion, able to exploit long-range contextual information—that would seem ideally suited to such problems. However, their role in large-scale sequence labelling systems has so far been auxiliary. The goal of this book is to provide a complete framework for classifying and transcribing sequential data with recurrent neural networks only. Three main innovations are introduced in order to realise this goal. Firstly, the connectionist temporal classification output layer allows the framework to be trained with unsegmented target sequences, such as phoneme-level speech transcriptions; this is in contrast to previous connectionist approaches, which were dependent on error-prone prior segmentation. Secondly, multidimensional recurrent neural networks extend the framework in a natural way to data with more than one spatio-temporal dimension, such as images and videos. Thirdly, the use of hierarchical subsampling makes it feasible to apply the framework to very large or high-resolution sequences, such as raw audio or video. Experimental validation is provided by state-of-the-art results in speech and handwriting recognition.
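
The connectionist temporal classification (CTC) output layer mentioned in the summary trains a recurrent network directly against an unsegmented target sequence, with no frame-level alignment. As a purely illustrative sketch, not taken from the book, the following assumes PyTorch's nn.CTCLoss together with a small bidirectional LSTM; all sizes, feature dimensions and variable names are hypothetical.

    # Illustrative sketch only: alignment-free training with a CTC output layer.
    import torch
    import torch.nn as nn

    T, N, C = 50, 1, 28           # time steps, batch size, label classes (27 symbols + blank)
    S = 10                        # length of the unsegmented target transcription

    rnn = nn.LSTM(input_size=13, hidden_size=64, bidirectional=True, batch_first=False)
    proj = nn.Linear(2 * 64, C)   # map BLSTM features to per-frame label scores
    ctc = nn.CTCLoss(blank=0)     # label 0 is reserved for the CTC "blank"

    x = torch.randn(T, N, 13)                       # e.g. 13 acoustic features per frame
    targets = torch.randint(1, C, (N, S))           # target labels only, no segmentation
    input_lengths = torch.full((N,), T, dtype=torch.long)
    target_lengths = torch.full((N,), S, dtype=torch.long)

    h, _ = rnn(x)                                   # (T, N, 128) bidirectional features
    log_probs = proj(h).log_softmax(dim=-1)         # (T, N, C) per-frame log-probabilities

    loss = ctc(log_probs, targets, input_lengths, target_lengths)
    loss.backward()                                 # gradients of the alignment-free objective

The loss sums over all frame-level alignments consistent with the target sequence, which is what lets the network learn from phoneme- or character-level transcriptions without error-prone prior segmentation.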
Holdings
Item type: E-Book
Current library: Central Library
Status: Available
Barcode: E-48554
