Apple Chi
A stress-aware system that turns biometric data into mindful insights
Role:
Brand Designer, Motion Designer, Visual Designer
Duration:
3 Months, 2024
Category:
Visual Design, Motion Design
Tools:
Figma, Photoshop, After Effects, Cinema 4D, Blender, Redshift
Summary
Apple Chi is a conceptual design study that imagines how Apple’s ecosystem could evolve toward emotional intelligence. It proposes a system that focuses on the well-being of wearers by collecting stressor data, recognizing its patterns, and gently communicating back through adaptive feedback. Rather than simply tracking health, Apple Chi envisions a world where technology senses, interprets, and supports the mind-body balance in everyday life.
Design Thinking
The concept began with the question: Can technology sense stress as intuitively as we feel it?
Through a human-centered design lens, I explored how emotion and data intersect, framing stress not as a medical symptom but as a moment of imbalance that can be visualized, reflected upon, and gently realigned. By combining observation, scenario mapping, and speculative prototyping, the design process reimagines Apple’s existing ecosystem as a responsive environment that listens and learns. The goal was to ensure that every interaction, from data visualization to haptic prompts, reinforces calmness, awareness, and empathy rather than adding cognitive load.
To express this mindset, the design process follows three guiding principles:
Sense
When users can’t speak or must hide their actions, AROW allows silent activation through discreet gestures or hardware triggers. The interface stays dark, sending live location data and alerts invisibly.
Reflect
If sudden impact or loss of movement is detected, AROW automatically initiates a safety check or dispatches help with the last known coordinates.
Adapt
For users who can’t communicate verbally, AROW uses tap patterns, icons, or visual prompts to send specific alerts — ensuring accessibility for all.
Goal
To explore how Apple Chi could be integrated into the iOS Health app, turning physiological and environmental data into visual, emotional feedback that helps users recognize their stress patterns and make proactive choices toward wellness. The project examines the intersection of data visualization, ambient intelligence, and behavioral design, and how these can promote calm and self-awareness.
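Since Apple Chi is a conceptual study, the translation from physiological data to emotional feedback can only be sketched. The snippet below is a minimal illustration, assuming heart-rate variability (HRV) as the stress proxy; the thresholds, baseline ratio, and state names are my own assumptions, not Apple's algorithm.

```python
def stress_state(hrv_ms: float, resting_hrv_ms: float) -> str:
    """Map an HRV reading against the wearer's baseline to a coarse
    feedback state. Lower HRV relative to baseline is treated as a
    rough proxy for elevated stress (an illustrative heuristic only).
    """
    ratio = hrv_ms / resting_hrv_ms
    if ratio >= 0.9:
        return "balanced"   # calm visual, no prompt
    if ratio >= 0.7:
        return "drifting"   # soft color shift, gentle cue
    return "strained"       # breathing prompt, haptic nudge

# A reading of 35 ms against a 60 ms baseline reads as strained
print(stress_state(35, 60))  # -> strained
```

In a real system the baseline would adapt over time and combine several signals; here a single ratio stands in for that pipeline.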
AROW’s ecosystem
AROW’s ecosystem is built on three layers of response, allowing the system to function intuitively across edge cases, whether the user can’t speak, move, or even unlock their device. By removing unnecessary decision points, AROW transforms complex emergency protocols into an adaptive, intelligent flow that feels instant and human.

User Layer
Immediate Action
A single-tap or silent trigger activates the system without cognitive effort. This layer is optimized for speed and minimal visibility, guiding users through one clear action instead of multiple decisions.

Intelligence Layer
Context Awareness
Once triggered, AI interprets sensor data, movement, sound, and location to assess risk. It differentiates between accidental inputs and real distress, and routes the alert to the right channel: trusted contacts, nearby responders, or local authorities.

Assurance Layer
Feedback & Trust
Every response loop provides confirmation through discreet cues, a vibration, color pulse, or visual signal, to reassure the user that help is on the way. After the event, AROW delivers transparent logs showing what was shared and with whom.
The Trigger
A crucial design question emerged while shaping AROW’s system architecture—what actually triggers it, and how? In an emergency, users can’t rely on complex gestures or visual interfaces. The trigger had to be instinctive, tactile, and accessible in any condition. We explored physical gestures and hardware interactions that could activate the system without visual focus—such as a long hold on the side or camera button, or a subtle shake motion when the phone is in a pocket. Each option was tested for speed, discretion, and reliability across iOS and Android devices, ensuring that the action feels natural under pressure and technologically feasible within real hardware constraints.
Refining the Gesture Logic
After exploring multiple activation gestures, we focused on two that balanced speed, safety, and technical feasibility: a long-press of the side or camera button, and a triple-shake motion for in-pocket use. These gestures leveraged existing smartphone hardware while remaining subtle and accessible with one hand. The challenge was to design haptic feedback and timing that felt deliberate—not accidental—under stress. Each millisecond of vibration and delay was tuned to help users feel confident that AROW understood their intent.
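The long-press logic described above reduces to a small timing state machine. This is a sketch under stated assumptions: the 3-second hold comes from the testing section below, but the class and method names are mine, not a real iOS or Android API.

```python
class LongPressTrigger:
    """Hold-to-activate logic with false-trigger prevention:
    activation fires only if the button stays held past the
    deliberate-intent threshold (3 s, per the prototype tests).
    """
    HOLD_SECONDS = 3.0

    def __init__(self) -> None:
        self.pressed_at: float | None = None

    def press(self, t: float) -> None:
        """Record the moment the button goes down."""
        self.pressed_at = t

    def release(self, t: float) -> bool:
        """Return True only if the hold lasted long enough."""
        if self.pressed_at is None:
            return False
        held = t - self.pressed_at
        self.pressed_at = None       # reset for the next attempt
        return held >= self.HOLD_SECONDS
```

A brief accidental press (say, 1.2 s) releases without firing, while a deliberate 3.5 s hold activates; the haptic feedback described above would pulse during the hold to signal progress toward that threshold.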

User Testing & Timing Validation
To validate the system, we ran a series of gesture and timing tests with participants simulating high-stress conditions—walking at night, carrying items, or keeping the phone in a pocket. We measured success rate, reaction time, and perceived confidence in both gestures. Photos and prototypes captured each iteration, from early timer calibrations to haptic feedback mapping. These sessions helped fine-tune the hold duration and feedback rhythm, striking the right balance between responsiveness and false-trigger prevention.

Long-Press Testing
Testing the 3-second long-press trigger for speed and comfort under one-handed use. The goal was to make activation feel deliberate, not accidental.

Shake Motion Prototype
Simulating the triple-shake gesture while the phone is in-pocket. We measured consistency and false-trigger prevention across multiple users.
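The triple-shake prototype can be sketched as peak counting over an accelerometer stream. Everything numeric here is an assumption for illustration (a 2.5 g threshold, a 1.5 s window, 20 Hz sampling), not measured values from the tests.

```python
def detect_triple_shake(magnitudes, threshold=2.5, window=1.5, dt=0.05):
    """Fire when three shake peaks land within a short time window.

    `magnitudes` is a stream of acceleration magnitudes in g,
    sampled every `dt` seconds. Counting rising edges (rather than
    every sample above threshold) prevents one long jolt from
    registering as multiple shakes, a false-trigger guard.
    """
    peaks: list[float] = []
    above = False
    for i, g in enumerate(magnitudes):
        t = i * dt
        if g >= threshold and not above:   # rising edge = one shake
            above = True
            peaks = [p for p in peaks if t - p <= window] + [t]
            if len(peaks) >= 3:
                return True
        elif g < threshold:
            above = False
    return False
```

Bumps spaced farther apart than the window fall out of the peak list, which is how walking or a dropped phone avoids triggering the system.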

Haptic Feedback Calibration
Mapping vibration timing and intensity to confirm that users can recognize activation without looking at the screen.
Storyboard

Service blueprint
Once triggered, AROW activates a chain of human and system interactions built for speed and trust. Sensors capture motion, sound, and location data; AI interprets the context; and encrypted alerts reach trusted contacts or responders, all within seconds. The blueprint shows how these layers work together using today’s smartphone technology, from background sensors and machine learning to secure APIs and haptic feedback systems, all designed within existing mobile capabilities.
Wait!
During development, one question reshaped the system: What if the AI gets it wrong? A false alarm could waste critical resources, or worse, put someone in danger. That realization led to restoring human control as an essential layer of safety. AROW’s AI now acts first to detect, but confirms through human context: trusted contacts, user gestures, or manual override. It’s a balance between automation and empathy, ensuring every response is both intelligent and accountable.
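The detect-then-confirm safeguard can be expressed as a small decision function. This is a hedged sketch of the flow described above; the state names, grace-window idea, and low-confidence path are my own framing of the text, not a specified protocol.

```python
def resolve_alert(ai_confident: bool, user_cancelled: bool,
                  countdown_expired: bool) -> str:
    """Resolve an AI-detected alert with a human in the loop.

    The AI acts first to detect, but dispatch waits on human
    context: a manual override always wins, low AI confidence
    defers to a trusted contact, and help is only dispatched
    once a visible cancel countdown has run out.
    """
    if user_cancelled:
        return "dismissed"            # manual override always wins
    if not ai_confident:
        return "ask_trusted_contact"  # confirm via a human, not dispatch
    if countdown_expired:
        return "dispatch_help"        # no cancel within the grace window
    return "counting_down"            # user can still cancel
```

Checking the override before anything else is the whole point: no level of AI confidence can outrank an explicit human "I'm fine."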
Designing for Panic, Not Perfection
AROW’s interface was designed for moments of fear and urgency. Every interaction had to work under pressure—when users can’t think, see clearly, or speak. The flow was reduced to a single action, with large targets, silent gestures, and instant haptic feedback confirming safety.
Testing Human Instinct
Prototype tests simulated real stress conditions, dim light, trembling hands, and limited mobility, to study instinctive behavior. Every run was timed, because timing is everything. Results showed users reacted faster to tactile cues than visual ones, leading to refinements that made the system feel natural and reliable even in chaos.
Trust and Transparency
Because AROW operates in moments of fear, every automated action had to feel accountable. Users can see what data is shared, with whom, and when. Clear feedback loops and privacy logs were built to ensure that trust isn’t just implied—it’s designed into the system.
Visual Design — Clarity Under Pressure
The visual system was intentionally minimal—built to communicate faster than words. Large, high-contrast shapes guide focus, while color and motion signal urgency or safety. Typography is bold and legible under low light, and animations are restrained, designed only to orient or reassure. Every element serves one purpose: to make the interface readable, instinctive, and calm when everything else isn’t.
Conclusion
AROW pushed me to design beyond interfaces, to think about safety, trust, and accountability as user experiences. It taught me that in moments of fear, good design isn’t decorative, it’s decisive. Every choice, from AI logic to color contrast, became a study in empathy and restraint.
This project deepened my belief that technology should extend human capability, not replace it. By blending automation with human judgment, AROW shows how UX can turn complexity into confidence, and systems into lifelines.