Hongyin Luo 罗鸿胤

Postdoctoral Associate at MIT CSAIL.



32 Vassar St.,

Cambridge, MA 02139

Hongyin is a postdoctoral associate at MIT CSAIL, working with Dr. Jim Glass in the Spoken Language Systems (SLS) Group. He completed his Ph.D. at MIT EECS in May 2022 with a dissertation on self-training for natural language processing.

His research focuses on improving the efficiency, transparency, and reasoning ability of language models. His latest research has combined natural language with different formal reasoning engines, including entailment models and program interpreters. He has built small language models that outperform GPT3-175B with 1/500 of the computation, self-denoising language models that handle the noise of search engine results, and natural language embedded programs that achieve accurate reasoning without task-specific examples.

Email: hyluo [at] mit [dot] edu


Jan 10, 2024 Released SmoothGPT on GPTs - A toy for polishing AI writing
Nov 30, 2023 Hosted Google Tech Talk - DataCommons by Dr. R. V. Guha
Jul 20, 2023 Generative AI: what technology comes after ChatGPT
Jun 8, 2023 Our research on self-training is highlighted on MIT NEWS: MIT researchers make language models scalable self-learners.
Jun 7, 2023 Search engines don’t always help chatbots generate accurate answers
Jun 2, 2023 MIT unveils self-learning AI with performance improved up to 500x
Jun 1, 2023 Bigger is not always better
May 28, 2023 The Little Language Model That Could.
Apr 20, 2023 Released the Open Language Safety Research (OpenLSR) website and Twitter fact-checking bot (Twitter account and blog post). Check out some results of our UniLC-based Twitter-checking experiments!
Mar 7, 2023 A text neural network was taught to "think" even better, to rid it of racism and sexism.
Oct 5, 2022 Hosted the Efficient & Robust Language Modeling seminar
Apr 29, 2022 Ph.D. thesis defended!
Mar 3, 2022 Language models are biased. Can logic save them? Covered by MIT News, TechXplore, Nauka TV, and Freethink.

selected publications

  1. Natural Language Embedded Programs for Hybrid Language Symbolic Reasoning
    Tianhua Zhang*, Jiaxin Ge*, Hongyin Luo*, and 7 more authors
    arXiv preprint arXiv:2309.10814 2023
  2. DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models
    Yung-Sung Chuang, Yujia Xie, Hongyin Luo, and 3 more authors
    arXiv preprint arXiv:2309.03883 2023
  3. SAIL: Search-Augmented Instruction Learning
    Hongyin Luo*, Tianhua Zhang*, Yung-Sung Chuang, and 6 more authors
    EMNLP 2023
  4. Entailment as Robust Self-Learner
    Jiaxin Ge*, Hongyin Luo*, Yoon Kim, and 1 more author
    ACL 2023
  5. Listen, Think, and Understand
    Yuan Gong, Hongyin Luo, Alexander H Liu, and 2 more authors
    arXiv preprint arXiv:2305.10790 2023
  6. Interpretable Unified Language Checking
    Tianhua Zhang*, Hongyin Luo*, Yung-Sung Chuang, and 7 more authors
    ASRU 2023
  7. Logic Against Bias: Textual Entailment Mitigates Stereotypical Sentence Reasoning
    Hongyin Luo, and James Glass
    EACL 2023