Compositional embedding models build a representation for a linguistic structure based on its component word embeddings. While recent work has combined these word embeddings with hand-crafted features for improved performance, it was restricted to a small number of features due to model complexity, thus limiting its applicability. We propose a new model that conjoins features and word embeddings while maintaining a small number of parameters by learning feature embeddings jointly with the parameters of a compositional model. The result is a method that can scale to more features and more labels, while avoiding overfitting. We demonstrate that our model attains state-of-the-art results on ACE and ERE fine-grained relation extraction.