Graduate Thesis or Dissertation

 

Parsing with Recurrent Neural Networks

Public Deposited

Downloadable Content

Download the PDF file
https://ir.library.oregonstate.edu/concern/graduate_thesis_or_dissertations/hm50tv954

Descriptions

Abstract
Machine learning models for natural language processing have traditionally relied on large numbers of discrete features, built up from atomic categories such as word forms and part-of-speech labels, which are considered completely distinct from each other. Recently, however, the advent of dense feature representations coupled with deep learning techniques has led to powerful new models which can automatically learn to exploit various dimensions of implicit similarity between such discrete linguistic entities. This work extends that line of research as it applies to syntactic parsing, particularly by introducing recurrent network models which can encode the entirety of a sentence in context and by proposing novel parsing systems to take advantage of such models.

Syntactic parsing is an inherently difficult problem in natural language processing because of the ambiguous and highly compositional nature of language itself. Perfect agreement is not possible even among expert human annotators. Statistical and machine learning prediction of the syntactic structure of sentences has been the subject of decades of study. Recent advances in applying deep neural models to language problems, however, have led to rapid strides in this domain, with models which are able to automatically exploit a whole new realm of hidden regularities in language. We continue this trend with feature-learning recurrent networks that model entire sentences, allowing the parser to incorporate information from the entire sentence context when making every decision. We also introduce new parsing paradigms designed explicitly to leverage this new representational power, including a state-of-the-art transition-based constituency parser, the first ever to achieve competitive results with greedy decoding.

We also introduce a straightforward dynamic oracle for the aforementioned constituency parsing system, and show that it is optimal in both label recall and precision. This is the first provably optimal dynamic oracle for a transition-based constituency parser. In addition to its optimality, our dynamic oracle is computable in amortized constant time per step, a dramatic improvement over its forerunners for arc-standard dependency parsing, which required worst-case cubic time per step. Extending the optimality proof for that dynamic oracle, we show the surprising result that the entire space of possible parser states for a sentence of length n can be reduced to O(n²) using a further simplified feature space. This simplification could have important future impact for search-based or globally optimized training methods.

Finally, we extend our parsing model still further by applying it to morphologically rich languages, using continuous embeddings over previously predicted morphological features. We achieve very competitive results across a range of languages despite no language-specific architectural or hyper-parameter tuning, including the best reported parsing results on the French Treebank.
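As a rough illustration of the decoding style described above (a sketch only, not the dissertation's actual system), the Python fragment below shows the shape of a greedy transition-based parsing loop. The stub scorer stands in for the recurrent network which, in the models summarized here, reads the entire sentence so that every action is scored with full sentential context; the function names, the shift/reduce action set, and the placeholder constituent label "X" are all illustrative assumptions.

# Illustrative sketch only: a greedy transition-based parsing loop with a
# stub scorer standing in for a recurrent network over the whole sentence.
from typing import List, Tuple

def encode_sentence(words: List[str]) -> List[Tuple[int, str]]:
    # Stand-in for a (bi)LSTM encoder: a real model would produce one
    # context-sensitive vector per word; here we just keep (index, word).
    return list(enumerate(words))

def score_action(stack: list, buffer: list) -> str:
    # Stub policy: shift while input remains, then reduce. A trained
    # network would instead score actions from features of the parser state.
    return "shift" if buffer else "reduce"

def greedy_parse(words: List[str]):
    # Greedy decoding: exactly one action per step, no beam, no backtracking.
    buffer = encode_sentence(words)
    stack: list = []
    while buffer or len(stack) > 1:
        if score_action(stack, buffer) == "shift":
            stack.append(buffer.pop(0))       # next word onto the stack
        else:
            right, left = stack.pop(), stack.pop()
            stack.append(("X", left, right))  # combine top two into a constituent
    return stack[0]

print(greedy_parse("the cat sat".split()))
# -> ('X', (0, 'the'), ('X', (1, 'cat'), (2, 'sat')))

The loop performs O(n) actions for an n-word sentence; the point of whole-sentence recurrent features, as the abstract notes, is that such a single greedy pass can remain accurate because each local decision already sees global context.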

Relations

Parents:

This work has no parents.

In Collection:

Articles