Explainable AI for NLP: Decoding Black Box
© 2022 by IJCTT Journal
Volume-70 Issue-7
Year of Publication : 2022
Authors : Yogendra Sisodia
DOI : 10.14445/22312803/IJCTT-V70I7P103
How to Cite?
Yogendra Sisodia, "Explainable AI for NLP: Decoding Black Box," International Journal of Computer Trends and Technology, vol. 70, no. 7, pp. 11-15, 2022. Crossref, https://doi.org/10.14445/22312803/IJCTT-V70I7P103
Abstract
Recent advancements in machine learning have sparked greater interest in previously understudied topics. As machine learning improves, experts are increasingly expected to understand and trace how algorithms arrive at their results, how models reason, and why a particular outcome was produced. It is also difficult to communicate results to end customers and to internal stakeholders such as sales and customer service without explaining the outcomes in simple language, ideally supported by visualization. In specialized domains such as law and medicine, understanding the output of a machine learning model becomes vital.
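To make the idea of explaining an individual prediction concrete, the sketch below uses the LIME library listed as reference [3] to attribute a text classifier's output to individual words. This is a minimal illustration written for this page rather than code from the paper: the toy training sentences, the TF-IDF plus logistic regression stand-in model, and the sentiment class names are all assumptions made for the example.

# A minimal LIME sketch (illustrative only; data, model, and labels are assumptions).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

# Tiny hypothetical sentiment data set: 1 = positive, 0 = negative.
texts = [
    "great product, works well",
    "terrible service, very slow",
    "excellent support team",
    "poor quality, broke quickly",
]
labels = [1, 0, 1, 0]

# Any model exposing predict_proba can be explained; TF-IDF plus logistic
# regression is just a simple stand-in for a black-box NLP model.
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(texts, labels)

# LIME perturbs the input sentence and fits a local surrogate model to rank
# which words pushed the prediction toward the "positive" class.
explainer = LimeTextExplainer(class_names=["negative", "positive"])
explanation = explainer.explain_instance(
    "great support but very slow delivery",
    pipeline.predict_proba,
    num_features=5,
)
print(explanation.as_list())  # (word, weight) pairs for the "positive" label

A comparable local explanation could also be produced with the SHAP library of reference [2]; LIME is shown here only because its text interface is especially compact.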
Keywords
Artificial Intelligence, Natural Language Processing, Explainable AI, Deep Neural Network, Transformers.
References
[1] Tom B. Brown et al., "Language Models are Few-Shot Learners," arXiv preprint arXiv:2005.14165v4, 2020.
[2] SHAP library [Online]. Available: https://github.com/slundberg/shap/
[3] LIME library [Online]. Available: https://github.com/marcotcr/lime
[4] Scott Lundberg and Su-In Lee, "A Unified Approach to Interpreting Model Predictions," arXiv preprint arXiv:1705.07874v2, 2017.
[5] Avanti Shrikumar et al., "Not Just a Black Box: Learning Important Features Through Propagating Activation Differences," arXiv preprint arXiv:1605.01713, 2016.
[6] Ashish Vaswani et al., "Attention Is All You Need," arXiv preprint arXiv:1706.03762v5, 2017.
[7] Jacob Devlin et al., "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding," arXiv preprint arXiv:1810.04805v2, 2018.
[8] NLPExplainer library [Online]. Available: https://github.com/Scholarly360/Nlpexplainer
[9] Eneko Agirre, Lluís Màrquez, and Richard Wicentowski, Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), 2007.
[10] Alexei Baevski, Sergey Edunov, Yinhan Liu, Luke Zettlemoyer, and Michael Auli, "Cloze-driven Pretraining of Self-attention Networks," arXiv preprint arXiv:1903.07785, 2019.
[11] Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning, "A Large Annotated Corpus for Learning Natural Language Inference," 2015.
[12] William Chan, Nikita Kitaev, Kelvin Guu, Mitchell Stern, and Jakob Uszkoreit, "KERMIT: Generative Insertion-Based Modeling for Sequences," arXiv preprint arXiv:1906.01604, 2019.
[13] Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy, "SpanBERT: Improving Pre-training by Representing and Predicting Spans," arXiv preprint arXiv:1907.10529, 2019.
[14] Diederik Kingma and Jimmy Ba, "Adam: A Method for Stochastic Optimization," in International Conference on Learning Representations (ICLR), 2015.
[15] Vid Kocijan, Ana-Maria Cretu, Oana-Maria Camburu, Yordan Yordanov, and Thomas Lukasiewicz, "A Surprisingly Robust Trick for the Winograd Schema Challenge," arXiv preprint arXiv:1905.06290, 2019.
[16] Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy, "RACE: Large-scale Reading Comprehension Dataset from Examinations," arXiv preprint arXiv:1704.04683, 2017.
[17] Guillaume Lample and Alexis Conneau, "Cross-lingual Language Model Pretraining," arXiv preprint arXiv:1901.07291, 2019.
[18] Hector J. Levesque, Ernest Davis, and Leora Morgenstern, "The Winograd Schema Challenge," in AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning, 2011.
[19] Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao, "Improving Multi-Task Deep Neural Networks via Knowledge Distillation for Natural Language Understanding," arXiv preprint arXiv:1904.09482, 2019.