
People often turn to AI systems for help with emotionally complex situations, such as difficult relationships, personal uncertainty, or decisions that feel stuck. But today’s large language models typically respond with quick advice or generic suggestions rather than helping users understand why they feel stuck in the first place; they rarely uncover the assumptions, perspectives, and internal logic shaping a user’s experience. This project explores how AI can support reflective reasoning rather than simple answer-giving. We introduce AristotelianChat, a system that guides users through a structured process inspired by Aristotelian character friendship and Mezirow’s theory of Transformative Learning. Instead of offering solutions immediately, the system first helps users clarify their experience, articulate their own ideas, and then examine the biases and assumptions behind those ideas. Through interpretive summaries, user-driven ideation, and AI-generated perspective challenges, the system acts as a kind of “critical friend” that encourages deeper understanding. Early findings from prototyping suggest that this reflective workflow can help users surface hidden assumptions, broaden their thinking, and experience meaningful perspective shifts that conventional chatbots rarely produce. By reframing LLMs as collaborators in self-understanding rather than answer engines, this project opens new directions for human–AI interaction in emotional reasoning and personal growth.

Figure 1: The user enters the experience for which they want to create a concept expression

Figure 2: The green buttons query ChatGPT, gather the resulting ideas, and organize them by approach

Figure 3: The user can expand and edit the tree as they see fit, listing relevant Yelp locations to get a practical sense of what a specific idea entails

Figure 4: The user can click the yellow button to display their full concept expression
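
The ideation step shown in Figure 2 can be illustrated with a minimal sketch: the user-entered experience is sent to the OpenAI chat API, and the returned ideas are grouped by approach for the tree view. This is not the project’s actual implementation; the model name, prompt wording, JSON schema, and the `Idea` dataclass are illustrative assumptions.

```python
# Minimal sketch (assumed, not AristotelianChat's actual code) of the Figure 2 step:
# query an LLM for ideas about a user-entered experience and group them by approach.
import json
from dataclasses import dataclass

from openai import OpenAI  # pip install openai


@dataclass
class Idea:
    approach: str
    text: str


def generate_ideas(experience: str, model: str = "gpt-4o-mini") -> dict[str, list[Idea]]:
    """Ask the LLM for ideas about the experience, grouped by approach (hypothetical helper)."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    prompt = (
        "A user wants to create a concept expression for this experience:\n"
        f"{experience}\n\n"
        "Suggest concrete ideas, grouped by the approach they take. "
        'Respond as JSON: {"approaches": [{"name": str, "ideas": [str, ...]}]}'
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    data = json.loads(response.choices[0].message.content)

    # Organize the flat response into an approach -> ideas mapping for the tree view.
    tree: dict[str, list[Idea]] = {}
    for group in data.get("approaches", []):
        name = group.get("name", "Other")
        tree.setdefault(name, []).extend(
            Idea(approach=name, text=t) for t in group.get("ideas", [])
        )
    return tree


if __name__ == "__main__":
    ideas_by_approach = generate_ideas("hosting a quiet birthday dinner for a close friend")
    for approach, ideas in ideas_by_approach.items():
        print(approach)
        for idea in ideas:
            print("  -", idea.text)
```

In the prototype, each approach would become a branch of the editable tree in Figure 3, with individual ideas as leaves the user can expand, edit, or ground in concrete Yelp locations.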