I am a Research Scientist at LG AI Research. I received my PhD from the Computer Science Department at the University of Michigan, where I worked on topics including representation learning, learning from limited supervision, and language grounding. More recently, I have been interested in enhancing the planning, reasoning, and agent capabilities of language models.
Talks
- May 2024: Guiding Language Models to be Better Agents (Frontiers of AI in Business and Society @ UIC)
- Oct 2023: Task Planning with Large Language Models (University of Michigan AI Seminar)
- Aug 2019: Zero-Shot Entity Linking by Reading Entity Descriptions
- Feb 2019: Ann Arbor Deep Learning Event
Selected Publications
Scalable Video-to-Dataset Generation for Cross-Platform Mobile Agents.
Yunseok Jang*, Yeda Song*, Sungryull Sohn, Lajanugen Logeswaran, Tiange Luo, Dong-Ki Kim, Kyunghoon Bae, Honglak Lee.
CVPR 2025
AutoGuide: Automated Generation and Selection of State-Aware Guidelines for Large Language Model Agents [paper]
Yao Fu, Dong-Ki Kim, Jaekyeom Kim, Sungryull Sohn, Lajanugen Logeswaran, Kyunghoon Bae, Honglak Lee.
NeurIPS 2024
Auto-Intent: Automated Intent Discovery and Self-Exploration for Large Language Model Web Agents [paper]
Jaekyeom Kim, Dong-Ki Kim, Lajanugen Logeswaran, Sungryull Sohn, Honglak Lee.
EMNLP Findings 2024
Understanding the Capabilities and Limitations of Large Language Models for Cultural Commonsense [paper]
Siqi Shen, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, Soujanya Poria, Rada Mihalcea.
NAACL 2024 (Social Impact Award)
Code Models are Zero-shot Precondition Reasoners [paper]
Lajanugen Logeswaran, Sungryull Sohn, Yiwei Lyu, Anthony Zhe Liu, Dong-Ki Kim, Dongsub Shim, Moontae Lee, Honglak Lee.
NAACL 2024 (Also at NeurIPS FMDM Workshop 2023)
Unsupervised Task Graph Generation from Instructional Video Transcripts [paper]
Lajanugen Logeswaran, Sungryull Sohn, Yunseok Jang, Moontae Lee, Honglak Lee.
Findings of ACL 2023 (Also at ACL WNU Workshop 2023)
Knowledge Unlearning for Mitigating Privacy Risks in Language Models [paper]
Joel Jang, Dongkeun Yoon, Sohee Yang, Sungmin Cha, Moontae Lee, Lajanugen Logeswaran, Minjoon Seo.
ACL 2023
Exploring the Benefits of Training Expert Language Models over Instruction Tuning [paper]
Joel Jang, Seungone Kim, Seonghyeon Ye, Doyoung Kim, Lajanugen Logeswaran, Moontae Lee, Kyungjae Lee, Minjoon Seo.
ICML 2023
Few-shot Subgoal Planning with Language Models [paper]
Lajanugen Logeswaran, Violet Fu, Moontae Lee, Honglak Lee.
NAACL 2022 (Also at ACL CSRR Workshop 2022)
Zero-Shot Entity Linking by Reading Entity Descriptions [paper]
Lajanugen Logeswaran, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Jacob Devlin, Honglak Lee.
ACL 2019 (Nominated for best paper)
Content Preserving Text Generation with Attribute Controls [paper]
Lajanugen Logeswaran, Honglak Lee, Samy Bengio.
NIPS 2018
An Efficient Framework for Learning Sentence Representations [paper]
Lajanugen Logeswaran, Honglak Lee.
ICLR 2018
Professional Experience
- Research Scientist, LG AI Research (Ann Arbor), Jul 2021 - Present
- Research Intern, Facebook AI Research (New York), May - Aug 2019
- Research Intern, Google Research (Seattle), May 2018 - Jan 2019
- Research Intern, Google Brain (Mountain View), Feb - Jun 2017
Awards & Honors
- IEEEXtreme 24-hour Programming Competition - 24th place (2013)
- INexus International Robot Competition - 3rd place (2012)
- Bronze medal at the 50th International Mathematical Olympiad (2009)
- Gold medal at Sri Lankan Mathematics Olympiad (2007)