Chao-Wei Huang

Ph.D., National Taiwan University.


Hi! My name is Chao-Wei Huang. I received my B.S. and Ph.D. from National Taiwan University, advised by Prof. Yun-Nung (Vivian) Chen.

My research interests include information retrieval, factuality, large language models, and speech processing. My work has been published in top-tier speech and natural language processing conferences, e.g., ACL, EMNLP, NAACL, ICASSP, Interspeech, and EACL. Driven by a commitment to fostering academic collaboration, I have co-organized the 9th Dialog System Technology Challenge (DSTC9) and the 1st TamingLLM workshop. Previously, I worked as a Research Scientist Intern at Amazon Alexa AI ('20, '21) and Meta AI Research ('22, '23).

news

Sep 20, 2024 Excited to share that 4 papers (2 first-authored) have been accepted to EMNLP 2024! 🎉
Sep 11, 2024 PhDone! I have received my Ph.D. degree from National Taiwan University!
Aug 28, 2024 I’m giving an oral presentation of our paper “Investigating Decoder-only Large Language Models for Speech-to-text Translation” at Interspeech 2024 on Kos Island, Greece. 🇬🇷

selected publications

  1. EMNLP 2024
    FactAlign: Long-form Factuality Alignment of Large Language Models
    Chao-Wei Huang, and Yun-Nung Chen
    In Findings of the Association for Computational Linguistics: EMNLP 2024, 2024
  2. EMNLP 2024
    PairDistill: Pairwise Relevance Distillation for Dense Passage Retrieval
    Chao-Wei Huang, and Yun-Nung Chen
    In Proceedings of The 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP 2024), 2024
  3. Interspeech 2024
    Investigating Decoder-only Large Language Models for Speech-to-text Translation
    Chao-Wei Huang, Hui Lu, Hongyu Gong, and 4 more authors
    In Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, 2024
  4. EACL 2024
    Unsupervised Multilingual Dense Retrieval via Generative Pseudo Labeling
    Chao-Wei Huang, Tsu-Yuan Hsu, Chen-An Li, and 2 more authors
    In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2024), 2024
  5. EMNLP 2024
    Two Tales of Persona in LLMs: A Survey of Role-Playing and Personalization
    Yu-Min Tseng, Yu-Chao Huang, Teng-Yun Hsiao, and 4 more authors
    In Findings of the Association for Computational Linguistics: EMNLP 2024, 2024
  6. EMNLP 2024
    Editing the Mind of Giants: An In-Depth Exploration of Pitfalls of Knowledge Editing in Large Language Models
    Cheng-Hsun Hsueh, Paul Kuo-Ming Huang, Tzu-Han Lin, and 4 more authors
    In Findings of the Association for Computational Linguistics: EMNLP 2024, 2024
  7. arXiv
    InstUPR: Instruction-based Unsupervised Passage Reranking with Large Language Models
    Chao-Wei Huang, and Yun-Nung Chen
    arXiv preprint arXiv:2403.16435, 2024
  8. SIGDIAL 2023
    CONVERSER: Few-shot Conversational Dense Retrieval with Synthetic Data Generation
    Chao-Wei Huang, Chen-Yu Hsu, Tsu-Yuan Hsu, and 2 more authors
    In Proceedings of the 24th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL 2023), 2023
  9. ClinicalNLP 2022
    PLM-ICD: Automatic ICD Coding with Pretrained Language Models
    Chao-Wei Huang, Shang-Chi Tsai, and Yun-Nung Chen
    In Proceedings of the 4th Clinical Natural Language Processing Workshop, 2022
  10. ICASSP 2020
    Learning ASR-Robust Contextualized Embeddings for Spoken Language Understanding
    Chao-Wei Huang, and Yun-Nung Chen
    In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020
  11. ACL 2020
    Learning Spoken Language Representations with Neural Lattice Language Modeling
    Chao-Wei Huang, and Yun-Nung Chen
    In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020
  12. ASRU 2019
    Adapting Pretrained Transformer to Lattices for Spoken Language Understanding
    Chao-Wei Huang, and Yun-Nung Chen
    In 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 2019