Reza Ghaeini, Ph.D. Student
Oregon State University
Natural Language Understanding is a challenging domain of Natural Language Processing. One way to improve a model's language understanding is to enrich its structure, enhancing its capability to learn the latent rules of the language. Our past work has introduced several deep models in a variety of domains (e.g. event extraction, natural language inference, question answering). Such efforts yield better performance; however, due to the black-box nature of deep learning, it is difficult to tell whether the improved models indeed acquire a better understanding of language. Meanwhile, data is often plagued by meaningless or even harmful statistical biases, and deep models may achieve high performance simply by exploiting those biases.
This motivates us to study methods for 'peeking inside' black-box deep models to provide explanation and understanding of the models' behavior. Further, we introduce a novel saliency learning mechanism, which learns from ground-truth explanation signals so that the learned model not only makes the right prediction but makes it for the right reason. Our experimental results on multiple tasks and datasets demonstrate the effectiveness of the proposed method, which produces more reliable predictions while delivering better results compared to traditionally trained models.
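The core idea of saliency learning can be illustrated with a small sketch. A minimal version, under assumptions of ours (the function names, the logistic-linear model, and the exact hinge-style penalty below are illustrative choices, not the talk's precise formulation): compute the gradient of the model's score with respect to its inputs as a saliency measure, then add a loss term that penalizes negative saliency at positions annotated as important in the ground-truth explanation signal.

```python
# Illustrative sketch of a saliency-regularized loss. The model (a
# logistic-linear scorer) and the hinge-style penalty are assumptions
# for demonstration, not the exact method presented in the talk.
import math

def predict(w, x):
    # Logistic score of a linear model: sigma(w . x).
    z = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

def input_gradient(w, x):
    # d(score)/dx_i for the logistic-linear model: sigma'(z) * w_i.
    # This input gradient serves as the saliency of each input position.
    p = predict(w, x)
    return [p * (1.0 - p) * wi for wi in w]

def saliency_penalty(grad, mask):
    # Hinge penalty on positions marked important (mask[i] == 1):
    # negative saliency there is penalized, pushing the model to rely
    # on the annotated evidence -- i.e., to be right for the right reason.
    return sum(max(0.0, -g) for g, m in zip(grad, mask) if m)

def total_loss(w, x, y, mask, lam=1.0):
    # Standard cross-entropy plus the saliency term, weighted by lam.
    p = predict(w, x)
    ce = -(y * math.log(p) + (1 - y) * math.log(1.0 - p))
    return ce + lam * saliency_penalty(input_gradient(w, x), mask)
```

For example, with weights `[2.0, -1.0]` and both positions annotated as important, the second position carries negative saliency and incurs a penalty; masking it out drops the penalty to zero. In a deep model the same idea applies, with the input gradient obtained via backpropagation rather than in closed form.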
Wednesday, January 23, 2019 at 3:00pm to 4:00pm
Batcheller Hall, 250
1791 Campus Way, Corvallis, OR 97331