Research Interests
At Alpha Lab, we are broadly interested in developing trustworthy foundation models and AI agents, with a particular focus on large language models, diffusion models, and agentic systems. Our goal is to understand and improve the capabilities, limitations, and emergent properties of next-generation general AI models, and to deploy them responsibly in real-world applications. Much of our work lies at the intersection of personalization and recommender systems, LLM safety and alignment, reasoning and planning, and data-centric learning on graphs and sequences. My Google Scholar profile is available here.
Our research currently focuses on the following topics:
- Foundation Models for Personalization: Large language models and diffusion models for recommendation and personalization, including generative recommenders, language-model-based collaborative filtering, and diffusion-based preference modeling.
- LLM-powered Agents: Agentic LLMs for information retrieval, web interaction, tool use, and long-term user modeling, with applications in recommender systems, tutoring, and interactive AI services.
- Trustworthy and Safe LLMs: Alignment, refusal steering, fine-grained safety evaluation, and robustness against jailbreak attacks for large language models and vision-language models.
- Reasoning and Planning in Large Reasoning Models: Understanding and enhancing the reasoning, planning, and abstraction capabilities of LLMs and large reasoning models, especially in multi-step decision-making tasks.
- Causality, Robustness, and Generalization: Causal discovery, invariant learning, and robust representation learning for graphs and recommenders, with the aim of building models that generalize reliably across domains, distributions, and deployment environments.
- Applications in Education, Science, and Multimodal AI: AI for education (e.g., intelligent tutoring and smart campuses), scientific discovery (e.g., chemistry and biology), and multimodal understanding with vision-language models and protein–text models.