Cognition-Aware
AI Experience

Establishing "AI That Gets Used" in organizations
through the 8C Framework

The value of generative AI isn't determined by model performance alone; it's determined by whether the AI is used consistently in daily operations. In reality, however, proof-of-concept (PoC) implementations often succeed while production deployments fail to gain traction and fall short of expected results.

Why PoC Success Doesn't Lead to Daily Adoption

The demo in the conference room was impressive, yet months later the dashboard shows no growth in usage. This scenario is far from uncommon, and in many cases the following human friction factors accumulate:

Anxiety

When people fear data leaks or accidental operations, they hesitate to take the first step. Even if the system is secure by design, without an "experience that feels safe", it won't be used.

Example: Easy undo after accidental sending / Data storage location always visible on screen.

Unclear

When it's unclear what to do or how to start, the first successful experience doesn't emerge. Screens and messages must guide specific actions.

Example: Empty state shows "Try this template first" / Error messages show next steps briefly.

Not Personal

Even when a tool is understood to be convenient, it won't stick unless it connects to personal work goals and daily routines.

Example: Integration with personal goals and routines, visible progress.

Hard to Share

When usage methods and results are difficult to pass on, learning doesn't spread within departments. Without reuse mechanisms, success tends to remain isolated.

Example: One-click sharing of results, template saving and publishing.

These can be addressed comprehensively from a Cognition-Aware perspective. The next chapter explains how to organize these into specific design elements called 8C across three layers.

Organizing 8C Across Three Layers

Foundation / Individual Experience / Social Dynamics

Benefits of Organizing 8C Across Three Layers

Organizing 8C across three layers enables a progressive improvement approach. The "PoC succeeds but production doesn't grow" problem many organizations face with AI adoption is caused by friction rooted in human cognition and emotions rather than technical issues.

The greatest benefit of the three-layer structure is the ability to design a natural development process: Foundation → Individual Experience → Social Dynamics. If the foundation is weak, individual experiences won't take root; if individuals remain isolated, learning won't spread. Conversely, when the foundation is solid and individual usage becomes routine, sharing and reuse naturally increase, fostering organizational knowledge.

Moreover, each layer has a clear correspondence with KPIs. Foundation most strongly influences Activation (first success), Individual Experience influences Retention (continuation), and Social Dynamics influences Expansion (horizontal spread). This correspondence makes it concrete where an improvement will move the numbers, which is useful for both investment decisions and field-level fixes.

Comfort (Safe to Try)

Foundation

Why it matters: The main reason many people avoid AI tools is anxiety rooted in fear of failure and uncertainty about whether their actions are safe. This anxiety is the biggest barrier to first-time use.

What the measures achieve: Mechanisms for recovering from failure, clear display of permissions and boundaries, and easy access to explanations create an environment where people can experiment safely. This significantly improves first-time usage rates (Adoption) and also lifts Activation (first success) rates.

Specific examples: Always-visible undo functionality, clear data storage location, one-click access to help, confirmation dialogs before important operations

Credible (Trustworthy by Design)

Foundation

Why it matters: In corporate environments, concerns about where data goes and who can access it are among the biggest obstacles to AI adoption. Reading the terms of service alone doesn't create a sense of security.

What the measures achieve: Trust is built by letting users verify data storage locations, sharing scope, and audit logs directly on screen, and by letting them choose their own privacy settings. This improves enterprise adoption rates and leads to continued usage.

Specific examples: Real-time display of data storage location, self-inquiry function for audit logs, privacy level options, display of AI output rationale

Clarity (Clear Navigation)

Foundation

Why it matters: Not knowing what to do or how to start is the biggest frustration point in first-time use. Empty screens and complex menus quickly drain user motivation.

What the measures achieve: Clear guidance in empty states, sample-data walkthroughs, prompts for the next action, and recovery procedures for errors create the first success experience. This significantly improves Activation rates and is the first step toward continued usage.

Specific examples: "Try this template in 3 minutes" button, automatic insertion of sample data, progress bar display, error recovery procedures

Compassion (Contextual and Empathetic)

Individual

Why it matters: The same information lands very differently depending on the user's situation and emotional state. Even a technically correct answer loses much of its effect when the timing or phrasing doesn't fit.

What the measures achieve: Understanding the user's history and context, then responding with an appropriate tone and order of explanation, significantly improves receptivity. This raises continued usage (Retention) and increases user satisfaction.

Specific examples: Explanation mode switching (detailed/summary only), reuse of previous settings, paraphrasing considering psychological load, suggestions based on situation

Commitment (Daily Use Reasons)

Individual

Why it matters: Even when an AI tool is understood to be convenient, it remains an "occasionally used tool" unless it connects to personal work and goals. Its true value isn't realized until it is integrated into daily routines.

What the measures achieve: Systems that link to personal goals and work routines and make progress visible establish AI as a "daily companion." This significantly improves continued usage (Retention) and delivers real productivity gains.

Specific examples: Integration settings with personal goals, progress dashboard, reminder functionality, "today's step" suggestions, visualization of results

Cheer (Small Joys)

Individual

Why it matters: Even when workplace AI use aims at efficiency, comfort and a small sense of achievement are needed to sustain it. Monotonous, mechanical experiences make long-term continuation difficult.

What the measures achieve: Subtle achievements, thoughtful feedback while waiting, and visualization of learning turn usage from a "burden" into a pleasure. This improves continued usage (Retention) and forms natural usage habits.

Specific examples: Completion animations, progress messages, display of learning history, consecutive usage day records, suggestions for what can be done next

Connected (Easy to Share Results)

Social

Why it matters: If individual success experiences don't reach other members, the overall AI utilization level of the organization won't improve. When sharing barriers are high, good usage methods and results get buried.

What the measures achieve: One-click sharing, collaborative editing environments, and templating mechanisms let individual results accumulate as organizational assets. This drives horizontal expansion (Expansion) and raises productivity across the entire organization.

Specific examples: One-click sharing of results, collaborative editing functionality, permission settings for shared links, template registration functionality, provision of public spaces

Collective (Learning Accumulates and Standardizes)

Social

Why it matters: If individual innovations and success cases remain scattered, they won't accumulate as organizational knowledge. Without mechanisms for shared knowledge to be standardized and continuously reused, the same failures are repeated.

What the measures achieve: Community operations such as template review and updates, curating and distributing best practices, and regular sharing sessions turn individual learning into organizational standards. This accelerates horizontal expansion (Expansion) and raises the organization's overall AI utilization level.

Specific examples: Template review flow, best practice publication, regular sharing sessions, Q&A functionality, update history records

The three layers are most effective when built up in the order of Foundation → Individual Experience → Social Dynamics. If the foundation is weak, individual experiences won't take root; if individuals are isolated, learning won't spread.

KPI Design and Causal Thinking

Making it measurable for operations

Why KPI Design is Important

AI adoption success cannot be judged by enthusiasm or atmosphere. Define the metrics (KPIs) first, so that the field and management progress while looking at the same numbers. Many organizations stop at qualitative evaluations like "we tried it" during the PoC phase, but production operation requires quantitative improvement.

The advantage of the 8C framework is that it can clearly define which KPI each C affects. This allows concrete identification of "what to improve" when numbers are poor, and concentration of limited resources on the most effective places. Also, since the effectiveness of improvement measures can be verified numerically, continuous optimization becomes possible.

Furthermore, KPI design also functions as evidence for investment decisions. By accurately measuring AI adoption ROI and reflecting it in next investments or governance updates, the overall AI utilization level of the organization can be raised.

KPI Flow

AI adoption success is achieved by passing through four stages in order: Adoption → Activation → Retention → Expansion. By setting appropriate metrics at each stage and continuously improving, organizations can establish "AI that gets used."

Adoption (First Use)

Percentage of first-time users

Example: 60 of 100 people use it
→ Adoption 60%

Activation (First Success)

First-time task completion rate

Example: 52 of 80 tasks completed
→ Activation 65%

Retention (Continuation)

Percentage of continuing users

Example: 35% still active after 30 days
→ Retention 35%

Expansion (Horizontal Spread)

Spread of sharing and reuse

Example: 25% collaborative editing rate,
with template reuse rate rising
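The funnel arithmetic above can be sketched in a few lines of Python. This is an illustrative sketch only: the counts are the hypothetical figures from the examples, and the `rate` helper is not part of the 8C framework.

```python
def rate(numerator: int, denominator: int) -> float:
    """Return a percentage, guarding against an empty denominator."""
    return round(100 * numerator / denominator, 1) if denominator else 0.0

# Hypothetical counts taken from the examples above.
invited_users = 100      # people given access
first_time_users = 60    # people who used the tool at least once
first_tasks = 80         # first tasks attempted
first_tasks_done = 52    # first tasks completed
active_after_30d = 21    # of the 60 adopters, still active after 30 days

adoption = rate(first_time_users, invited_users)      # 60.0
activation = rate(first_tasks_done, first_tasks)      # 65.0
retention = rate(active_after_30d, first_time_users)  # 35.0

print(f"Adoption {adoption}% -> Activation {activation}% -> Retention {retention}%")
```

Keeping each stage as a simple ratio makes it easy to recompute the funnel weekly from raw event counts rather than tracking the percentages by hand.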

Relationship between 8C and KPIs

Foundation (Comfort/Credible/Clarity) most strongly influences Activation. The more anxiety is relieved in the first 10 minutes and the more experience there is of progressing without confusion, the higher the first-time success rate.

Individual Experience (Compassion/Commitment/Cheer) boosts Retention. The more it becomes personal, has small joys, and can be continued comfortably, the more it takes root.

Social Dynamics (Connected/Collective) supports Expansion. The easier results are to pass on, the more templates accumulate, and the more learning is standardized, the more it naturally spreads horizontally.

Dashboard Operation Basics

Check Activation and Retention weekly; check Expansion monthly

Don't stop at labeling metrics "good" or "bad"; hypothesize which C is the bottleneck and connect that hypothesis to the next improvement

Annotate weeks containing change points (releases, training, rule changes) so their relationship to the numbers can be reviewed later
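The "hypothesize which C is holding things back" step can be made mechanical by comparing each stage KPI against a target and pointing at the layer that most strongly drives it. A minimal sketch, assuming the layer-to-KPI mapping described in this document; the thresholds for Activation and Retention echo the 90-day roadmap targets, while the Expansion threshold is a placeholder assumption:

```python
# Map each KPI stage to the 8C layer that most strongly drives it,
# following the layer-to-KPI correspondence in this framework.
STAGE_TO_LAYER = {
    "activation": "Foundation (Comfort / Credible / Clarity)",
    "retention": "Individual Experience (Compassion / Commitment / Cheer)",
    "expansion": "Social Dynamics (Connected / Collective)",
}

# Illustrative targets, not prescriptive values.
TARGETS = {"activation": 60.0, "retention": 35.0, "expansion": 25.0}

def bottlenecks(measured: dict[str, float]) -> list[str]:
    """Return the layers to investigate for each KPI below target."""
    return [
        f"{stage}: {measured[stage]}% < {TARGETS[stage]}% -> review {STAGE_TO_LAYER[stage]}"
        for stage in TARGETS
        if measured.get(stage, 0.0) < TARGETS[stage]
    ]

# Here only retention misses its target, pointing at the Individual Experience layer.
print(bottlenecks({"activation": 65.0, "retention": 28.0, "expansion": 30.0}))
```

The output is a hypothesis to investigate, not a verdict; combine it with the annotated change points above before acting.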

90-Day Roadmap

From PoC-only to "Daily Operations"

Phase A: Pilot (0-30 days)

Objective: Stabilize first success (Activation) for target tasks

Limit scope to "frequent tasks with standardized deliverables"

Prioritize Foundation (recovery from errors, visualization of data boundaries)

Monitor first-completion rate, error rate, and help navigation daily, fine-tuning copy and flow as issues surface

Select a few field champions and run short cycles of improvement suggestion → immediate fix

Completion target: Activation 60-70%, error-recovery success rate of 80% or higher

Phase B: Scale (31-60 days)

Objective: Connect first success to continuation (Retention)

Strengthen Individual Experience (Compassion, Commitment, Cheer)

Check D7 and D30 retention rates and usage frequency weekly; focus fixes on high-friction scenarios

Turn successful procedures and prompts into templates and publish them within the team

Completion target: D30 Retention 35-45%, with template-based creation about 20% of total output

Phase C: Standardization (61-90 days)

Objective: Horizontal expansion (Expansion) and quality improvement

Institutionalize Social Dynamics (Connected, Collective)

Check collaborative editing rate, template reuse rate, and new inflow from other departments monthly

Calculate Risk-Adjusted Value and reflect it in the next round of investment and governance updates

Completion target: collaborative editing rate around 25%, a steady body of published templates, and continuously increasing reuse

Contact

For those who want to learn more about Cognition-Aware AI Experience or are considering implementing the 8C framework, please feel free to contact us.

A scientific approach to establishing "AI that gets used" in organizations.

Contact Us