We engineer AI systems that reason with the mind, not just about it—bringing psychological theory into large language models, modeling personal traits to personalize interactions, and measuring thinking quality in the wild.
To transform how organizations and societies decide by creating cognition-aware AI that aligns teams, scales responsibly, and fosters collective choices that are both harmonious and trustworthy.
Partner With Us
Here's what we discovered: AI fails not because it lacks intelligence, but because it doesn't understand how humans actually think and feel. We're building something different.
Most AI treats every user the same—like a perfectly rational decision-maker who just needs more information. But real people aren't like that. We catastrophize. We get stuck in all-or-nothing thinking. We make decisions based on values that seem irrational but make perfect sense to us.
So we're teaching AI to recognize these patterns. When someone says "this project will be a disaster," our system doesn't just offer generic reassurance. It recognizes catastrophizing, understands the underlying fear, and helps reframe the situation in a way that actually resonates—because it knows what this specific person values and how they prefer to process difficult situations.
This isn't just therapy-speak. We've built measurable systems that track real outcomes: how often teams reach better decisions, how much cognitive load decreases in conversations, how trust builds between people who communicate differently. Because if AI is going to live in our most important conversations—in boardrooms, classrooms, and moments of personal struggle—it needs to understand not just what we're saying, but how we're thinking.
We start where others don't: inside human cognition itself. Our systems recognize catastrophizing, all-or-nothing thinking, and other cognitive distortions in real-time conversations, then offer reframings grounded in Acceptance & Commitment Therapy.
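To make the idea concrete, here is a minimal sketch of the detect-then-reframe loop. It is illustrative only: the cue lists, distortion labels, and reframing wording are hypothetical stand-ins for the LLM-based classifiers and ACT-grounded prompts used in the actual system.

```python
# Illustrative sketch only: a minimal distortion screen followed by an
# ACT-style reframe. Cue lists, labels, and wording are placeholders,
# not the production models or prompts.
from dataclasses import dataclass

DISTORTION_CUES = {
    "catastrophizing": ["disaster", "ruined", "never recover", "everything will fail"],
    "all_or_nothing": ["always", "never", "completely", "totally useless"],
}

ACT_REFRAMES = {
    "catastrophizing": (
        "Notice the thought 'this will be a disaster' as a prediction, not a fact. "
        "What is one small step that still moves toward what matters to you?"
    ),
    "all_or_nothing": (
        "This thought frames the situation as all-or-nothing. "
        "What partial outcome would still be workable and consistent with your values?"
    ),
}

@dataclass
class Screening:
    label: str | None
    reframe: str | None

def screen_utterance(text: str) -> Screening:
    """Return the first matched distortion label and an ACT-style reframe."""
    lowered = text.lower()
    for label, cues in DISTORTION_CUES.items():
        if any(cue in lowered for cue in cues):
            return Screening(label=label, reframe=ACT_REFRAMES[label])
    return Screening(label=None, reframe=None)

if __name__ == "__main__":
    print(screen_utterance("This project will be a disaster."))
```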
Every person thinks differently. Our AI learns persistent cognitive traits—direct vs. reflective, detail-oriented vs. big-picture—and adapts its communication style accordingly. The same copilot becomes a different partner for each user.
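A minimal sketch of how a persistent trait profile could steer communication style, assuming traits are stored as simple scores in [0, 1]. The trait names, thresholds, and instruction wording below are illustrative assumptions, not our production schema.

```python
# Minimal sketch: translate a learned trait profile into style guidance
# that is prepended to the copilot's prompt. Names and thresholds are
# illustrative only.
from dataclasses import dataclass

@dataclass
class TraitProfile:
    directness: float    # 0 = reflective, 1 = direct
    detail_focus: float  # 0 = big-picture, 1 = detail-oriented

def style_instructions(profile: TraitProfile) -> str:
    """Turn persistent traits into communication-style guidance."""
    parts = []
    if profile.directness >= 0.5:
        parts.append("Lead with the recommendation, then give a brief justification.")
    else:
        parts.append("Walk through the reasoning first and invite reflection before concluding.")
    if profile.detail_focus >= 0.5:
        parts.append("Include concrete numbers, steps, and edge cases.")
    else:
        parts.append("Summarize at the level of goals and trade-offs; avoid minutiae.")
    return " ".join(parts)

# The same copilot becomes a different partner for each profile.
print(style_instructions(TraitProfile(directness=0.9, detail_focus=0.2)))
print(style_instructions(TraitProfile(directness=0.2, detail_focus=0.8)))
```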
Beyond simple Q&A, we build AI agents that understand context, remember relationships, and take initiative. They don't just respond—they anticipate needs, surface hidden assumptions, and guide teams through complex decision-making processes.
We measure what actually matters in production: Decision Quality Uplift in boardrooms, Team Harmony Index in cross-functional projects, Distortion Reduction Rate in coaching sessions. Our application-first philosophy means every metric connects to real business outcomes.
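As one concrete illustration, here is a plausible way such a metric could be computed, using Distortion Reduction Rate as the example: the relative drop in flagged distortions per utterance between the early and late portions of an engagement. The definition below is an assumption for the sketch, not the exact production formula.

```python
# One plausible formulation of a Distortion Reduction Rate, for
# illustration only: relative drop in flagged distortions per utterance
# between the early and late parts of a coaching engagement.
def distortion_reduction_rate(early_flags: int, early_utterances: int,
                              late_flags: int, late_utterances: int) -> float:
    early_rate = early_flags / max(early_utterances, 1)
    late_rate = late_flags / max(late_utterances, 1)
    if early_rate == 0:
        return 0.0
    return (early_rate - late_rate) / early_rate

# Example: 12 flags in the first 100 utterances vs. 4 in the last 100 -> ~0.67
print(round(distortion_reduction_rate(12, 100, 4, 100), 2))
```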
Cognition-aware AI is not simply a set of tools; it is a research philosophy. Where much of AI research aims to make models faster, larger, or more accurate, our philosophy is to make them more humanly aligned. We treat cognition itself—distortions, traits, reasoning styles—not as noise to be ignored but as an infrastructure layer for the next generation of AI.
This philosophy shapes every application we pursue. In education and coaching, cognition-aware systems act not as answer engines but as companions in reasoning. By drawing on frameworks like Acceptance & Commitment Therapy (ACT), they help learners and professionals surface distorted thinking, reframe unhelpful patterns, and sustain motivation.
In HR and organizational design, the challenge is collective. Teams often lose clarity when communication styles clash, when bias creeps in unnoticed, or when hidden assumptions block progress. Cognition-aware copilots detect these patterns and provide timely signals: highlighting when a conversation is caught in all-or-nothing framing, or when contributions are being systematically undervalued.
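As a sketch of what one such signal could look like, the example below flags contributors whose points are rarely taken up relative to how often they speak. The turn and uptake statistics, thresholds, and speaker names are assumptions made for the illustration, not the production logic.

```python
# Illustrative team-level signal: flag speakers who contribute often but
# whose points are rarely picked up by the group. Inputs and thresholds
# are hypothetical.
from collections import Counter

def undervalued_contributors(turns_by_speaker: Counter, uptakes_by_speaker: Counter,
                             min_turns: int = 5, uptake_threshold: float = 0.1) -> list[str]:
    """Return speakers with enough turns whose uptake rate falls below the threshold."""
    flagged = []
    for speaker, turns in turns_by_speaker.items():
        if turns < min_turns:
            continue
        uptake_rate = uptakes_by_speaker.get(speaker, 0) / turns
        if uptake_rate < uptake_threshold:
            flagged.append(speaker)
    return flagged

turns = Counter({"Alice": 14, "Bob": 12, "Chen": 9})
uptakes = Counter({"Alice": 5, "Bob": 4, "Chen": 0})
print(undervalued_contributors(turns, uptakes))  # ['Chen']
```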
In enterprise environments, cognition-aware AI determines whether copilots remain curiosities or become indispensable infrastructure. Standard copilots often fail because they feel generic, opaque, or psychologically tone-deaf. By contrast, cognition-aware copilots embed reasoning frameworks, adapt guidance to personal traits, and explain their interventions in human-legible terms.
Across these domains, the principle remains consistent: cognition-aware AI makes individuals think more clearly, teams work with fewer breakdowns, and enterprises adopt AI with confidence. What distinguishes this research is not only its applications, but the philosophy that drives them: AI must align with the structure of human cognition if it is to achieve both scientific rigor and social adoption.
Our case studies are not just demonstrations of technology; they are validations of a philosophy. Each collaboration shows how cognition-aware AI, when embedded into real systems, changes not only performance metrics but also trust, adoption, and the human dynamics around decision-making.
Across all these domains—finance, sales, manufacturing, SaaS—the same lesson emerges. AI adoption fails when systems ignore cognition: when they feel like black boxes, add friction, or undermine trust. It succeeds when cognition-aware principles are applied: explainability, trait-awareness, reframing, and goal-driven metrics. The numbers tell part of the story—weeks cut to days, costs reduced by millions, adoption tripled—but the deeper impact is that people felt aligned with the AI they used. That alignment is the essence of cognition-aware design, and it is what makes these case studies more than technical wins.
Our work is peer-reviewed and published at leading venues in natural language processing and affective computing. Recent highlights include:
Hajime Hotta, Huu-Loi Le, Manh-Cuong Phan, Minh-Tien Nguyen
EMNLP 2025, System Demonstrations (accepted)
Huu-Loi Le, Manh-Cuong Phan, Hajime Hotta, Minh-Tien Nguyen
MMAC @ ACII 2025 (accepted)
Manh-Cuong Phan, Thi-Ngoc-Phuong Nguyen, Huu-Loi Le, Huy-The Vu, Hajime Hotta, Minh-Tien Nguyen
PACLIC 39, 2025 (accepted)
Viet-Tung Do, Xuan-Quang Nguyen, Van-Khanh Hoang, Duy-Hung Nguyen, Shahab Sabahi, Chening Yang, Hajime Hotta, Minh-Tien Nguyen, Hung Le
PAKDD 2025, pp. 91–102
Access to unique datasets, experimental platforms, and joint publications that connect cognitive science and AI engineering.
Proven track record of translating cognition-aware research into production with measurable gains in adoption and speed to market.
Bridge between cutting-edge research and defensible product strategy. Differentiate on trust, usability, and long-term adoption.
Research-informed insights for technical diligence, AI defensibility analysis, and board-level strategy development.
We are building a research program that is both scientifically rigorous and practically indispensable. In every case, collaboration means advancing a new paradigm where cognition is treated as infrastructure for AI.
For research collaborations, enterprise projects, or advisory opportunities:
Let's design cognition-aware AI that transforms how humans and organizations think, decide, and grow.
Book a 30-min Consultation