If I had to pick seven papers from this year's ACL, these would be my choices (in the order they were presented at the conference):
- Learning the Curriculum with Bayesian Optimization for Task-Specific Word Representation Learning by Tsvetkov et al.
- Pointing the Unknown Words by Gulcehre et al.
- Word Embeddings as Metric Recovery in Semantic Spaces by Hashimoto et al. (I could not follow much of the complex mathematics they proposed, but what an excellent idea!)
- Learning Word Meta-Embeddings by Yin and Schutze; an interesting idea that improves the state of the art on several tasks such as MEN, and the method can be generalised in several ways.
- LEXSEMTM: A Semantic Dataset Based on All-words Unsupervised Sense Distribution Learning by Bennett et al.; a new automatically generated language resource (with gold annotations) plus a good overview of WSI.
- A Latent Variable Model Approach to PMI-based Word Embeddings by Arora et al.; to me, this was the best paper and one of the best-delivered talks.
- A Vector Space for Distributional Semantics for Entailment by Henderson and Popa; really interesting work.
I must add here that choosing seven papers from so many good works was not easy---the pace of CL research (and not just research, I mean real quality research) is incredible.
Ah, I should also mention the invited talk at *SEM by Alexander Koller. An enjoyable talk, but somewhat controversial! Its title was "Top-down and bottom-up views on success in semantics", but I would rather rephrase it as "distributional semantics vs. formal semantics in the absence of a constructive psychology of success", or maybe "XOR(success of distributional semantics, success of formal semantics)" (I am trying to be funny, right?!).
In fairness, Alexander held the entire audience's attention for the full 40-50 minutes. You know what I mean: it was not one of those invited talks that make you yawn! A great start to the last day of a computational-linguistics-packed, strenuous week!