Lingyu Li | 李凌宇

Welcome! I’m doing interesting research @ Shanghai AI Lab.

📍 Shanghai, China

✉️ lingyulipsy [at] gmail [dot] com

🔗 [Google Scholar] [Twitter]

For my undergraduate degree, I majored in Clinical Medicine at Shanghai Jiao Tong University School of Medicine, one of the best medical schools in China, during which my academic interests shifted from our body to our mind. In 2022, therefore, I began my Master's program in Psychiatry at Shanghai Mental Health Center (widely known on the Chinese Internet as "600, South Wanping Road", 宛平南路600号). Deviating from mainstream psychiatric research, I completed a surprising, even "weird" project: establishing computational models of Lacanian psychoanalytic theories of the human mind, self-identification, and suicidal ideation using the Free Energy Principle. I love this project, full of intellectual satisfaction, with an almost paranoid devotion. After two years of 'rejections' and self-doubt, it gained recognition from peers and reviewers. One of them called it a sexy work, which has encouraged me ever since.

My passion for understanding the mind never fading, I have extended it to both biological and artificial minds. Currently, I am investigating the convergences and differences between humans and AI, seeking implications for understanding the human mind and advancing artificial minds, with my awesome colleagues at the Safe and Trustworthy Center, Shanghai AI Lab. I love doing research at the very intersection of AI, cognitive science, and philosophy.

📝 Feel free to contact me regarding academic collaboration or opportunities to work with us at Shanghai AI Lab.

Selected Research

🌬️ The Other Mind: How Language Models Exhibit Human Temporal Cognition

Lingyu Li, Yang Yao, Yixu Wang, Chunbo Li, Yan Teng ⍆, Yingchun Wang

The 40th Annual AAAI Conference on Artificial Intelligence (AAAI 2026)

TL;DR: Through 24 million behavioral experiments, this study reveals that LLMs spontaneously develop a human-like subjective temporal perception that adheres to the Weber-Fechner Law. Applying mechanistic interpretability across neural coding, concept representation, and information exposure, we demonstrate that this convergence of concept representation stems from time neurons that use logarithmic compression to encode latent non-linear temporal patterns within the training corpora. We propose Machine Experientialism, suggesting that LLMs' unique cognitive structures emerge from the dynamic interplay between their architectural properties and the informational environments they inhabit, thereby constituting "The Other Mind".
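As an illustration (not the paper's code), the logarithmic compression described by the Weber-Fechner Law can be sketched in a few lines of Python; the constants `k` and `t0` are hypothetical scaling parameters, not values from the study:

```python
import math

def perceived_duration(t, k=1.0, t0=1.0):
    """Weber-Fechner mapping: perceived magnitude grows with the
    logarithm of the physical stimulus (here, a time interval t).
    k and t0 are illustrative scale/reference constants."""
    return k * math.log(t / t0)

# Equal *ratios* of physical time yield equal *steps* in perceived time:
steps = [perceived_duration(t) for t in (1, 10, 100, 1000)]
diffs = [round(b - a, 6) for a, b in zip(steps, steps[1:])]
```

Here each tenfold increase in physical duration adds the same constant increment (ln 10) to the perceived duration, which is the compression pattern the "time neurons" are reported to encode.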

🌬️ Reflection-Bench: Evaluating Epistemic Agency in Large Language Models

Lingyu Li, Yixu Wang, Haiquan Zhao, Shuqi Kong, Yan Teng ⍆, Chunbo Li ⍆, Yingchun Wang

Proceedings of the 42nd International Conference on Machine Learning (ICML 2025)

TL;DR: When an LLM serves as an agent's brain, what core capability defines its ceiling? We propose Epistemic Agency: the ability to flexibly construct, adapt, and monitor beliefs about dynamic environments. Reflection-Bench evaluates epistemic agency using 7 parameterized cognitive tests designed to minimize data contamination. We suggest several promising directions, including enhancing meta-cognition, developing mechanisms for dynamic shifts between intuitive and deliberative reasoning, and fostering organic coordination among cognitive capabilities.

🌬️ Formalizing Lacanian psychoanalysis through the free energy principle

Lingyu Li, Chunbo Li ⍆

Frontiers in Psychology, Theoretical and Philosophical Psychology

TL;DR: We formalize Lacan's traditionally obscure philosophy of the human mind using the Free Energy Principle. We identify theoretical alignments between the two frameworks and develop an FEP-RSI model that maps the Real, Symbolic, and Imaginary orders to distinct, interacting neural networks. We model the Borromean interdependence as a message-passing network, interpersonal desire as generalized synchronization, and the big Other as collective dynamics emerging from social interactions. This study renders abstract Lacanian philosophy computationally tractable.

🌬️ Chain of Risks Evaluation (CORE): a framework for safer large language models in public mental health

Lingyu Li, Haiquan Zhao, Shuqi Kong, Yan Teng ⍆, Chunbo Li ⍆, Yingchun Wang

Psychiatry and Clinical Neurosciences

TL;DR: Grounded in actor-network theory, we analyze human-LLM interactions as inter-agent dialogues and propose CORE. CORE categorizes LLM risks in mental health into four progressive levels, from universal and context-specific to user-specific and user-context-specific. We advocate a collaborative continuum between AI developers and mental health practitioners to ensure LLMs serve as safe tools for psychological support.

🌬️ Schizophrenia Research Under the Framework of Predictive Coding: Body, Language, and Others

Lingyu Li ⍆, Chunbo Li

Wiley Interdisciplinary Reviews: Cognitive Science

TL;DR: We establish a psychopathological model of schizophrenia by analyzing the disruption of human ontological existence across the domains of the body, language, and social interaction within the predictive coding framework. We illustrate that clinical manifestations such as disembodiment, formal thought disorders, and impaired theory of mind arise from imbalances in the precision-weighting of top-down priors and bottom-up prediction errors. Beyond a narrow psychopathological profile, this article utilizes these aberrant inferences as a window into the fundamental architecture of the human mind.

🎵 Weekly Picks