Hongyin Luo
Postdoctoral Associate at MIT CSAIL.

32-G440
32 Vassar St.,
Cambridge, MA 02139
Hongyin is a postdoctoral associate at MIT CSAIL, working with Dr. Jim Glass in the Spoken Language Systems (SLS) group. He completed his Ph.D. at MIT EECS in May 2022 with a thesis on self-training for natural language processing.
He has worked on a range of natural language processing techniques, primarily semantic representation models that help computers better understand and generate natural language. His work spans interpretable word representation learning, deep neural networks, coreference resolution, and other NLP applications. His recent research interests include efficient and robust language modeling and computational linguistics, especially pragmatics, contextual entailment, and their applications in trustworthy language models.
Email: hyluo [at] mit [dot] edu
news
May 24, 2023 | We released the preprint of SAIL: Search Augmented Instruction Learning!
May 1, 2023 | Our paper “Entailment as Robust Self-Learners” was accepted to ACL 2023!
Apr 13, 2023 | Check out some results of our UniLC-based Twitter-checking experiments!
Apr 13, 2023 | Released the Open Language Safety Research (OpenLSR) website and Twitter-checking bot (Twitter account and blog post).
Apr 10, 2023 | Preprint “Interpretable Unified Language Checking” is out. [paper] and [code].
Mar 3, 2023 | Our “Logic Against Bias” paper was accepted by EACL 2023. Covered by MIT News, TechXplore, Nauka TV, and Freethink.
Oct 20, 2022 | Gave a talk on entailment self-training at the MIT EI Seminar.
Oct 5, 2022 | Hosted the Efficient & Robust Language Modeling seminar.
Jul 14, 2022 | Presented our work on self-training for QA at NAACL 2022!
Apr 29, 2022 | Ph.D. thesis defended!
selected publications
- Interpretable Unified Language Checking. arXiv preprint arXiv:2304.03728, 2023.
- Logic Against Bias: Textual Entailment Mitigates Stereotypical Sentence Reasoning. arXiv preprint arXiv:2303.05670, 2023.
- Cooperative Self-training of Machine Reading Comprehension. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2022.
- DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings. arXiv preprint arXiv:2204.10298, 2022.