
Self-Model API: From Zero to Personalized in 30 Minutes

A practical developer guide to integrating the Clarity Self-Model API. Create a user model, observe interactions, and personalize responses in under an hour.

Robert Ta's Self-Model · CEO & Co-Founder · 5 min read

TL;DR

  • The Clarity Self-Model API lets you create, observe, and query user models with three API calls; no ML infrastructure required
  • This guide walks through a working integration in 30 minutes: create a model, feed observations, and get personalized inferences
  • By the end, you will have a self-model that improves with every user interaction and persists across sessions

The Self-Model API enables per-user personalization with three API calls: create a model, observe interactions, and query beliefs. No historical data, ML infrastructure, or data science team is required to start seeing personalized responses. This guide walks through the full integration from zero to personalized output in 30 minutes, including observation design patterns and next-level strategies for cross-context models.

  • 3 API calls to go from zero to personalized
  • 30 minutes average time to first personalized response
  • 0 ML models you need to train yourself

Step 1: Create a Self-Model

Every user in your system gets a self-model. Think of it as a structured profile that evolves: not a static preferences object, but a living representation of what the user believes, knows, and prefers.

step-1-create-model.ts

// Install: npm i @heyclarity/sdk
import Clarity from '@heyclarity/sdk';

// Your API key from the dashboard
const clarity = new Clarity({ apiKey: process.env.CLARITY_API_KEY });

// Create a self-model for a new user (one model per user)
const model = await clarity.createSelfModel({
  externalUserId: 'user_abc123',    // your user ID
  context: 'product-analytics-app', // your app context
});

console.log(model.id);             // 'sm_7f3a...' (self-model ID for future calls)
console.log(model.alignmentScore); // 0.5 (neutral start; evolves with observations)

The self-model starts with a neutral alignment score of 0.5. It has no beliefs yet; it knows nothing about the user. That is about to change.

What is the alignment score? It is a 0-1 score that measures how well the model understands the user. A score of 0.5 means “no understanding yet.” As you feed observations, the score increases as the model builds coherent beliefs about the user. A score above 0.8 typically means the model can reliably predict user preferences across contexts.
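If you want to act on this score in your own code, a minimal sketch follows. The thresholds (0.5 neutral, 0.8 reliable) come from the description above; the tier names and the function itself are illustrative, not part of the Clarity API.

```typescript
// Interpret an alignment score (0-1) using the thresholds described above.
// Tier names are hypothetical, not part of the SDK.
type AlignmentTier = 'cold-start' | 'learning' | 'reliable';

function alignmentTier(score: number): AlignmentTier {
  if (score <= 0.5) return 'cold-start'; // no understanding yet
  if (score <= 0.8) return 'learning';   // beliefs forming, predictions still uncertain
  return 'reliable';                     // can reliably predict preferences across contexts
}

console.log(alignmentTier(0.5));  // 'cold-start'
console.log(alignmentTier(0.85)); // 'reliable'
```

A gate like this is useful for deciding when to switch personalization on, rather than personalizing from a model that knows nothing yet.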

The full loop at a glance:

  • Create a Self-Model: one API call creates a per-user model that starts with a neutral 0.5 alignment score and zero beliefs.
  • Observe Interactions: feed high-signal interactions (choices, corrections, questions) as observations. Each one builds the belief structure.
  • Query for Personalization: fetch the self-model’s beliefs and inject them into your LLM prompt. The model handles inference, confidence scoring, and temporal evolution.
  • Result: responses adapt to user expertise, preferences, and goals, and improve automatically with every interaction.

Step 2: Observe Interactions

Observations are how the self-model learns. Every meaningful interaction the user has with your product is an observation. You do not need to send every click; focus on interactions that reveal intent, preferences, or expertise.

step-2-observe.ts

// User chose a specific dashboard layout (preference signal)
await clarity.observe(model.id, {
  type: 'preference',
  content: 'User selected compact table view over chart view',
  context: 'dashboard-layout',
});

// User asked a question revealing expertise level
await clarity.observe(model.id, {
  type: 'interaction',                                 // conversation signal
  content: 'User asked about p95 latency percentiles', // reveals domain knowledge
  context: 'support-chat',
});

// User edited an AI-generated summary (correction signal, high value)
await clarity.observe(model.id, {
  type: 'correction',                                    // user corrected the system
  content: 'Changed verbose summary to 3 bullet points', // strong preference signal
  context: 'report-generation',
});

Three observations and the self-model already has material to work with. It can now infer: this user prefers data density over visualization, has technical expertise in performance monitoring, and wants concise output formats.

Which interactions should you observe? Focus on high-signal moments:

Choices

When users pick between options (layout, format, feature). The rejected alternatives are as informative as the selection.

Corrections

When users edit or reject AI-generated content. The diff between original and edit is your highest-value signal.

Questions

What users ask reveals their mental model. The vocabulary they use signals expertise level and domain context.

Skips

What users consistently ignore reveals what they do not value. Absence of engagement is a strong preference signal.

You do not need to instrument everything on day one. Start with 3-5 high-signal interaction points and expand as you validate the model’s accuracy.
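One way to keep observation quality high is to whitelist the event kinds you forward. This is a sketch of app-side filtering; the event shape and kind names are my own assumptions, not part of the SDK.

```typescript
// Hypothetical app-side event shape; only high-signal kinds become observations.
interface AppEvent {
  kind: 'choice' | 'correction' | 'question' | 'skip' | 'click' | 'scroll';
  detail: string;
}

// The four high-signal categories described above.
const HIGH_SIGNAL = new Set(['choice', 'correction', 'question', 'skip']);

function toObservations(events: AppEvent[]): AppEvent[] {
  // Drop low-signal noise (clicks, scrolls) before calling clarity.observe
  return events.filter(e => HIGH_SIGNAL.has(e.kind));
}

const kept = toObservations([
  { kind: 'click', detail: 'nav' },
  { kind: 'correction', detail: 'shortened summary' },
]);
console.log(kept.length); // 1 (only the correction survives)
```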

Step 3: Query for Personalization

Now the payoff. When you need to personalize something (a response, a default setting, a content recommendation), query the self-model.

step-3-personalize.ts

// Get the current self-model state, including beliefs
const model = await clarity.getSelfModel('user_abc123'); // by your user ID

// Access structured beliefs (what the model has learned)
const beliefs = model.beliefs;
// => [
//   { statement: 'Prefers data tables over charts', confidence: 0.82 },                // from layout choice
//   { statement: 'Has technical performance monitoring expertise', confidence: 0.75 }, // from question
//   { statement: 'Values concise output', confidence: 0.88 },                          // from correction
// ]

// Use beliefs to personalize your LLM prompt (the integration point)
const systemPrompt = `User preferences: ${beliefs.map(b => b.statement).join('; ')}`;

That is the core loop: create, observe, query. Three API calls. The self-model handles all the complexity of belief inference, confidence scoring, and temporal evolution.

The Full Integration Pattern

Let us put it all together in a realistic scenario. You have an AI-powered analytics product. Users ask questions about their data, and you use an LLM to generate insights. Here is how self-models make those insights personalized.

Without Self-Model

  • ×Every user gets the same verbose explanation format
  • ×Technical and non-technical users see identical output
  • ×New users and power users get same onboarding
  • ×Personalization requires manual configuration by each user

With Self-Model (30 min integration)

  • Concise users get bullet points, detail-oriented users get narrative
  • Technical users get raw data, non-technical get plain English
  • Onboarding adapts based on demonstrated expertise
  • Personalization improves automatically with every interaction

The integration adds roughly 10 lines of code to your existing LLM pipeline. You are already constructing a system prompt; now you include the user’s self-model beliefs in that prompt. The LLM does the rest: it naturally adapts its response style based on the context you provide.
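Those extra lines can look something like the following. Sorting by confidence and capping the number of injected beliefs are my own choices, not SDK behavior.

```typescript
interface Belief { statement: string; confidence: number; }

// Append the strongest beliefs to an existing system prompt.
function withBeliefs(basePrompt: string, beliefs: Belief[], max = 5): string {
  const top = [...beliefs]
    .sort((a, b) => b.confidence - a.confidence) // highest confidence first
    .slice(0, max)
    .map(b => b.statement)
    .join('; ');
  return top ? `${basePrompt}\nUser preferences: ${top}` : basePrompt;
}

const prompt = withBeliefs('You are an analytics assistant.', [
  { statement: 'Prefers data tables over charts', confidence: 0.82 },
  { statement: 'Values concise output', confidence: 0.88 },
]);
// => 'You are an analytics assistant.\nUser preferences: Values concise output; Prefers data tables over charts'
```

Returning the base prompt unchanged when there are no beliefs keeps the cold-start case safe: a brand-new user simply gets your defaults.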

Observation Design Patterns

After the initial integration, the most impactful thing you can do is improve the quality of your observations. Here are patterns that work well.

The Correction Pattern: When a user edits an AI-generated output, capture both the original and the edit. The diff between them is the highest-value signal in your entire product. It tells you exactly how the user’s expectations differ from your defaults.

The A/B Choice Pattern: When you show users options (layout choices, feature toggles, format selectors), observe the choice and the alternatives they rejected. A user who picks “CSV” when “PDF” and “JSON” are also available is telling you something specific about their workflow.

The Question Pattern: Natural language questions from users reveal their mental model. “How do I filter by p95 latency?” tells you they think in percentile terms. “Why is my app slow?” tells you they are outcome-oriented and may not know performance metrics vocabulary.

The Absence Pattern: Track features that are available but never used. If a user has had access to the advanced query builder for 60 days and has never opened it, that is a belief signal: either they do not know about it, do not need it, or find it intimidating.
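For the Correction Pattern, the simplest implementation is to build the observation payload from both versions. This sketch assumes, as in the Step 2 examples, that `clarity.observe` accepts a free-form `content` string; the helper itself is hypothetical.

```typescript
// Build a correction observation carrying both the original output and the edit.
// The returned shape mirrors the observe() payloads shown in Step 2.
function correctionObservation(original: string, edited: string, context: string) {
  return {
    type: 'correction' as const,
    content: `Original: ${original} | Edited: ${edited}`,
    context,
  };
}

const obs = correctionObservation(
  'Four-paragraph narrative summary',
  'Three bullet points',
  'report-generation',
);
// obs.content now carries the before/after pair, the diff-worthy signal,
// ready to pass to clarity.observe(model.id, obs)
```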


Beyond the Quickstart

Once the basic loop is working, here are the next integration levels.

Level 2: Cross-Context Observations

Feed observations from multiple parts of your product: support chats, feature usage, settings changes, content interactions. Each context adds depth to the self-model.

Level 3: Alignment Score Monitoring

Track the alignment score over time per user. A declining score may indicate the user is changing (new role, new goals). A plateauing score means the model has converged.
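A simple way to flag the "declining" and "plateau" cases is to classify a per-user score history. The window and the epsilon threshold below are arbitrary assumptions; tune them to your observation volume.

```typescript
type Trend = 'improving' | 'declining' | 'plateau';

// Classify the recent trend of an alignment-score series.
function scoreTrend(scores: number[], epsilon = 0.02): Trend {
  if (scores.length < 2) return 'plateau';
  const delta = scores[scores.length - 1] - scores[0];
  if (delta > epsilon) return 'improving';
  if (delta < -epsilon) return 'declining'; // user may be changing (new role, new goals)
  return 'plateau';                         // model has converged
}

console.log(scoreTrend([0.5, 0.62, 0.71])); // 'improving'
console.log(scoreTrend([0.8, 0.74, 0.69])); // 'declining'
```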

Level 4: Belief-Driven Product Decisions

Use aggregate belief data (anonymized) to inform product roadmap. If 60% of users’ models contain “prefers concise output,” maybe your default should be concise.
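Aggregation can be as simple as counting how often a belief statement appears across anonymized models. This sketch assumes you have exported a belief list per user; matching on exact statement strings is a simplification.

```typescript
interface Belief { statement: string; confidence: number; }

// Fraction of users whose model contains a given belief statement.
function beliefPrevalence(models: Belief[][], statement: string): number {
  if (models.length === 0) return 0;
  const withBelief = models.filter(beliefs =>
    beliefs.some(b => b.statement === statement),
  ).length;
  return withBelief / models.length;
}

const share = beliefPrevalence(
  [
    [{ statement: 'Prefers concise output', confidence: 0.9 }],
    [{ statement: 'Prefers narrative detail', confidence: 0.7 }],
  ],
  'Prefers concise output',
);
console.log(share); // 0.5
```

If that fraction crosses a threshold you care about (the 60% in the example above), it is a product-default conversation, not just a personalization one.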

Trade-offs and Limitations

Cold start is real but short. A brand-new self-model knows nothing. The first 5-10 observations are the most critical. Design your observation points to capture high-signal interactions early in the user journey.

Observation quality matters more than quantity. Sending every mouse move as an observation will not produce better models. It will produce noise. Focus on the 5-10 interaction types that reveal genuine preferences and expertise.

Latency adds up. Each API call adds 50-100ms of latency. For real-time applications, consider caching the self-model locally and refreshing periodically rather than querying on every request.

Models can be wrong. A user who exported CSVs three times might prefer CSVs, or they might have been following a tutorial. Confidence scores reflect this uncertainty, but surfacing a “Your Preferences” page where users can correct their model significantly improves accuracy.

What to Do Next

  1. Get your API key: Sign up at the Clarity dashboard and create your first project. It takes under 2 minutes.
  2. Instrument your highest-value interaction: Pick the single interaction in your product where personalization would have the most impact. Add the three-call integration loop (create, observe, query) to that interaction.
  3. Measure the difference: Run an A/B test (same feature, with and without self-model personalization). Start in the API playground to prototype before integrating into production.
