Claritree
DESCRIPTION
An early iteration of an AI-powered decision clarity app for people who feel emotionally overwhelmed. Not an advice engine: a thinking tool. You speak or type what's on your mind. A living tree grows. Branches represent directions you could take. The app gives structure. You make the decision.
View Claritree on Playground
THE GAP
Existing tools either give you the answer (ChatGPT) or a blank canvas (Notes).
Neither helps you think. Neither preserves your agency.
DELIVERABLES
Build a spatial decision tool that structures thinking without directing it. The app surfaces the shape of your thinking. You navigate it yourself.
How It Started
Vague Notes Idea
During one of my shower-thought sessions, I wondered whether something could help you think when you are emotionally overwhelmed: like an emotionally intelligent friend who listens actively and helps you process instead of giving unsolicited advice.


Idea iteration and development with Claude Sonnet 4.6
Through back-and-forth iteration with Claude, I was able to come up with a unique product: a user-friendly system with a competitive edge over generic AI experiences. Anyone can ask ChatGPT or Claude. But how does someone currently in overwhelm get past the heavy cognitive-load question: "Where do I start?"
AI models can give generic answers, but what if someone could answer it themselves, with the assistance of AI, inside a unique digital experience?
DECISION ENVIRONMENT
No advice constraint
The app cannot give recommendations. The moment it nudges, it removes agency. Every design decision had to preserve the user's ownership of the outcome.
Primary user: emotionally overwhelmed
Low executive function in the moment. The user who needs this most can least afford cognitive overhead. The UX has to work at 11pm in distress, not in optimal conditions.
AI as structure, not answer
The AI predicts likely branches from user input. It frames the decision space. But framing is already influence. The design had to acknowledge and constrain that.
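To make the "structure, not answer" constraint concrete, here is a minimal sketch of how the prototype's pre-scripted branches could be replaced with a live prediction call. The model id, prompt wording, and helper name `build_branch_request` are my assumptions, not the product's implementation; only the shape of the Anthropic Messages API payload is real.

```python
def build_branch_request(user_dump: str, n_branches: int = 4) -> dict:
    """Build a Messages API payload that asks for candidate branches,
    framed as neutral directions, never as recommendations."""
    return {
        "model": "claude-sonnet-4-5",  # assumed model id
        "max_tokens": 400,
        "system": (
            "Given a person's unstructured worry, list possible directions "
            "they could take, as neutral labels. Do not rank or recommend. "
            f"Return a JSON array of {n_branches} short strings."
        ),
        "messages": [{"role": "user", "content": user_dump}],
    }

# Live call (requires the anthropic SDK and ANTHROPIC_API_KEY):
#   client = anthropic.Anthropic()
#   reply = client.messages.create(**build_branch_request(dump))
```

Keeping the system prompt observational ("list possible directions") rather than evaluative is where the no-advice constraint would live in code.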
Insights That Shaped the Product
These are tensions that emerged through iterative product thinking and competitive analysis.
Productivity-app aesthetics (streaks, dashboards, progress bars) would actively harm trust with someone in distress. The product needed to feel like exhaling, not optimizing.
People in decision paralysis already have the information; they just need to hold the problem spatially so they can see what they actually think.
AI models like ChatGPT, even when prompted not to advise, still nudge. They are trained to resolve, not to hold space.
Constraints are therapeutic design. Unlimited options amplify overwhelm. Design constraints such as a 60-second voice cap force signal over noise.
Key decisions
The tree is the product, not the UI
Why
Early versions treated the tree as a visualization layer on top of a decision tool. The shift was recognizing the tree had to be the primary experience: the thing users are emotionally invested in, not a status indicator.
When the tree is unique per user, grows with real use, and has branches that flourish or go bare based on engagement, it becomes a living record of how someone thinks: a retention mechanic, an identity system, and a differentiator simultaneously.
The Implication
No dashboards. No metrics. No streaks. The only feedback mechanism is the tree itself: its fullness, which branches have leaves. The design had to trust that the metaphor carries enough meaning without annotation.


Intentional friction over infinite options
Why
The first instinct was unlimited branch regeneration. The right call was the opposite. Someone endlessly regenerating branches is avoiding the decision, not making it.
The acorn token system limits regenerations. One tree at a time limits parallel processing. None of these feel punitive: they feel like the app understanding what you actually need.
Trade-off accepted
There could be initial user frustration with usage limits, especially for someone who is emotionally overwhelmed in the moment. But it sets a precedent of limited usage, prompting users to be more intentional with their answers and choices.
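The friction mechanics above can be sketched as a small piece of state. The class name `AcornWallet`, the starting balance, and the return-value convention are all hypothetical; the source only specifies that regenerations are limited and one tree is active at a time.

```python
class AcornWallet:
    """Hypothetical sketch of the acorn token limit on branch
    regeneration and the one-active-tree rule."""

    def __init__(self, acorns: int = 3):  # starting balance is an assumption
        self.acorns = acorns
        self.active_tree: str | None = None

    def start_tree(self, tree_id: str) -> bool:
        # One tree at a time: a new tree requires closing the current one.
        if self.active_tree is not None:
            return False
        self.active_tree = tree_id
        return True

    def regenerate_branches(self) -> bool:
        # Each regeneration spends one acorn; when none remain, the app
        # nudges the user toward deciding rather than re-rolling options.
        if self.acorns == 0:
            return False
        self.acorns -= 1
        return True
```

The design choice worth noticing: the limit lives in the data model, not in a warning dialog, so the constraint reads as part of the world rather than as a paywall.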
Dialogue over interface
Why
Between each branch selection, a dialogue bubble appears. It asks one contextual question, sometimes one exchange, sometimes two. It types out at a natural speaking pace. It feels like a quiet voice, not a form field.
The constraint was strict: questions only. Reflections only. Never a recommendation, never a reframe that signals a preferred answer.
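The questions-only constraint can be enforced in two layers: the prompt forbids advice, and a post-check rejects any reply that is not a question. Everything here is a sketch; the prompt wording, the marker list, and the function name `is_valid_reflection` are illustrative assumptions, not the shipped guardrail.

```python
# Assumed system prompt for the dialogue bubble between branch selections.
SYSTEM_PROMPT = (
    "You are a quiet, reflective companion. Ask exactly one short, "
    "contextual question about the user's last branch choice. Never "
    "recommend, never reframe toward a preferred answer."
)

# Crude, illustrative markers of advice leaking through.
ADVICE_MARKERS = ("you should", "i recommend", "the best option", "try to")

def is_valid_reflection(reply: str) -> bool:
    text = reply.strip().lower()
    # Must end as a question, and must not smuggle in a recommendation.
    return text.endswith("?") and not any(m in text for m in ADVICE_MARKERS)
```

A failed check would trigger a regeneration rather than showing the user anything directive.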

Pixel art as emotional register
Why
The visual direction had to do real work. Smooth modern SaaS UI would read as clinical. Dark-mode journaling aesthetics would feel heavy. The brief was: the app should feel like exhaling.
Pixel art with golden-hour palette and ambient piano hits the register: warm, patient, alive. Not productivity, not therapy. Something with its own emotional category.
Next steps
Several areas were intentionally deferred — either because they require live AI or because they warrant real user testing before committing to a direction.
Connect to live AI model via Anthropic API. Branch prediction, dialogue generation, and pattern surfacing all need real API calls. The pre-scripted prototype validates the concept. Live AI validates the product.
Test the "no advice" constraint with real users in distress. Whether the app feels supportive without being directive cannot be answered by a prototype. It needs real emotional context and real sessions.
Design the orchard pattern insights screen. Currently mocked as a concept. The hard design problem: how to surface patterns without interpreting them. Staying observational, not analytical.
Safety layer for harmful territory. The app flags dangerous input and stops generating branches when a session enters crisis territory. This needs real clinical input to design responsibly.
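A minimal sketch of the gate described above, assuming a simple phrase match as a stand-in for whatever a clinically informed design would use. The phrase list and function name are placeholders only; as noted, the real gate needs clinical input.

```python
# Illustrative phrases only; a real classifier must be clinically designed.
CRISIS_PHRASES = ("hurt myself", "end it all", "no reason to live")

def session_gate(utterance: str) -> str:
    """Stop branch generation when a session enters crisis territory."""
    text = utterance.lower()
    if any(p in text for p in CRISIS_PHRASES):
        return "halt"      # stop generating branches, surface support resources
    return "continue"      # normal flow: predict branches as usual
```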