S307, New Bund Campus, NYU Shanghai
Tuesday, October 14, 2025, 11:45–12:45
Join our PhD and Undergraduate Student Research Spotlight Series. This series provides a platform for students to share their work and connect with the academic community!
About the Speaker: Kaiyue Feng is a senior majoring in Computer Science. His research centers on reasoning in large language models (LLMs), with applications in scientific problem solving, autonomous research agents, and multimodal retrieval-based question answering. Under the supervision of Professor Chen Zhao, he is advancing the frontiers of LLM research, aiming to enhance their reasoning capabilities and real-world applicability.
Abstract: Large Language Models (LLMs) have rapidly advanced in recent years, showing remarkable progress on tasks that require structured reasoning, such as mathematics and problem solving. Early approaches often relied on pattern recognition or memorization, but the introduction of techniques like chain-of-thought prompting and, more recently, o1-style “slow thinking” models has pushed the field toward more deliberate multi-step reasoning. Despite these advances, fundamental challenges remain: LLMs frequently struggle to maintain logical consistency, handle domain-specific knowledge, and carry out extended reasoning chains without error. In this talk, we will trace the development of reasoning in LLMs, highlight the limitations that persist even in today’s most capable systems, and then dive into the real-world challenges of applying reasoning-focused models to scientific and interdisciplinary domains.
