⚡ Case Study 01

Listing Doctor

Designing a zero-to-one application to track, diagnose, and resolve product listing errors across Amazon, Walmart, and eBay marketplaces.

UX Research · Product Design · Design System · Usability Testing · E-commerce
Role
Lead UX Designer
Timeline
5 Months · 2023
Team
2 Designers · 4 Eng
Platform
Web App · SaaS
The Problem

Sellers on multi-channel marketplaces lose significant revenue to undetected listing errors: suppressed products, missing attributes, and policy violations that are impossible to monitor manually across thousands of SKUs.

The Solution

A centralized dashboard aggregating listing health data from Amazon, Walmart, and eBay APIs, surfacing critical errors with severity scoring, and guiding sellers through step-by-step resolution workflows.
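To make the aggregation concrete, here is a minimal TypeScript sketch of how listing-health data from the three marketplace APIs might be normalized into one model. The type names, fields, and adapter shape are illustrative assumptions, not the shipped schema.

```typescript
// Illustrative sketch of normalizing listing-health data from the three marketplace
// APIs into a single model. Type names, fields, and the adapter shape are assumptions.
type Marketplace = "amazon" | "walmart" | "ebay";
type Severity = "critical" | "warning" | "info";

interface ListingIssue {
  sku: string;
  marketplace: Marketplace;
  rawCode: string;    // marketplace-specific code, e.g. "8572"
  severity: Severity; // unified scoring across marketplaces
  detectedAt: Date;
}

// Each marketplace gets a thin adapter that maps its API payload onto the shared model.
interface MarketplaceAdapter {
  marketplace: Marketplace;
  fetchIssues(sellerId: string): Promise<ListingIssue[]>;
}

// The dashboard aggregates every adapter into one triage feed.
async function aggregateIssues(
  adapters: MarketplaceAdapter[],
  sellerId: string
): Promise<ListingIssue[]> {
  const perMarketplace = await Promise.all(adapters.map((a) => a.fetchIssues(sellerId)));
  return perMarketplace.flat();
}
```

Keeping marketplace-specific quirks inside per-marketplace adapters is one way to let the triage and resolution flows stay identical no matter where an error originated.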

01

Discovery & Research

I conducted 12 contextual inquiry sessions with e-commerce operations managers. The goal was to understand the current workaround landscape: what tools they used, where errors fell through the cracks, and the emotional cost of a suppressed listing.

Research Process
😰
Manual monitoring is the norm
9 of 12 participants used spreadsheets or separate marketplace dashboards; no unified view existed.
⏱️
Detection lag averages 4.2 days
On average, sellers didn't notice a suppressed listing for 4+ days, each day representing direct lost revenue.
🤯
Error messages are cryptic
Marketplace error codes are written for developers, not merchants. "8572: INVALID_ATTRIBUTE_VALUE" means nothing to a seller.
🔁
68% of errors are repeats
Sellers were solving the same issues over and over because existing tools kept no historical error memory.
"I found out my top-selling product was suppressed on Amazon because I noticed a dip in sales β€” not because any tool told me."
— Operations Manager, Mid-market Seller
02

Design Process

The design went through 4 major iterations, each validated with a different cohort of sellers. I used a mix of low-fidelity concept testing, task-based usability testing, and desirability studies.

🗺️
Journey Mapping & How Might We
Mapped the current-state error discovery journey. Identified 6 key failure points. Generated 40+ HMW statements, narrowed to 8 priority opportunity areas.
✏️
Concept Sketching & IA
Sketched 3 competing IA approaches: flat list, marketplace-grouped, and severity-first. Tested concept preference with 6 participants; severity-first won decisively.
🖥️
Wireframe Usability Testing (R1 & R2)
Built mid-fi wireframes in Figma, ran 2 rounds of task-based usability tests. Success rates moved from 62% → 89% between rounds.
🎨
High-Fidelity Design & Design System
Built a component library of 80+ components with error state variants, severity indicators, and multi-marketplace theming.
03

Key Design Decisions

Before: Errors were listed chronologically, latest first, regardless of severity. Merchants missed critical suppression events buried under minor warnings.
After: A severity-first triage queue. Critical errors always surface at the top of a 3-tier system: Critical / Warning / Info. This reduced time-to-action on suppressed SKUs by ~70%.
Before: Raw API error codes were displayed (e.g. "ERROR_8572"), requiring sellers to Google every code to understand it.
After: Human-readable error translation: "Your title exceeds Amazon's 200-char limit. Current: 214. Here's how to fix it." A one-click edit shortcut sits inline (see the sketch below).
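A minimal sketch of those two decisions in TypeScript, assuming a simplified error model; the sort rule, the "8572" code, and the merchant-facing copy are illustrative, not the production implementation.

```typescript
// Severity-first triage and code-to-copy translation, sketched with a simplified model.
// The ordering rule, code "8572", and copy are illustrative assumptions.
type Severity = "critical" | "warning" | "info";
interface ListingError { sku: string; rawCode: string; severity: Severity; detectedAt: Date }

const severityRank: Record<Severity, number> = { critical: 0, warning: 1, info: 2 };

// Critical always outranks warning/info; newest first within each tier.
function triageOrder(errors: ListingError[]): ListingError[] {
  return [...errors].sort(
    (a, b) =>
      severityRank[a.severity] - severityRank[b.severity] ||
      b.detectedAt.getTime() - a.detectedAt.getTime()
  );
}

// Translate a raw marketplace code into merchant language instead of surfacing "ERROR_8572".
function translate(rawCode: string, titleLength?: number): string {
  if (rawCode === "8572") {
    const current = titleLength ? ` Current: ${titleLength}.` : "";
    return `Your title exceeds Amazon's 200-character limit.${current} Shorten it to restore the listing.`;
  }
  return `This listing has an issue (${rawCode}). Open the details panel for steps to fix it.`;
}
```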
04

Outcomes

89% · Task Completion Rate
4.2d → 6hr · Detection Lag Reduced
4.7/5 · Usability Rating
🔬 Case Study 02

SaaS Contract Analyzer

Augmenting the financial auditing process by designing human-centered interfaces for ML-powered contract analysis and anomaly detection.

AI/ML UX · B2B SaaS · Fintech · Prototyping
Role
Senior UX Designer
Timeline
4 Months · 2023
Team
1 Designer · ML Team
Platform
Web · Enterprise
The Challenge

Financial auditors review hundreds of SaaS contracts manually, checking for auto-renewal clauses, price escalators, and compliance flags. The ML model could identify these patterns, but the outputs were impenetrable for non-technical auditors.

My Contribution

Bridge the gap between ML confidence scores and human judgment. Design an interface where auditors could review, override, and learn from model outputs, turning AI into a trusted collaborator, not a black box.

01

The AI Trust Problem

The core UX challenge wasn't a flow or navigation problem; it was a trust calibration problem. Auditors were either over-trusting the model (rubber-stamping) or completely ignoring it. Neither extreme was useful.

"When the model flags something at 72% confidence, what does that mean? Do I trust it or not? I need to understand why it thinks that."
— Senior Financial Auditor, Research Participant
🧠
Explainability is non-negotiable
Auditors needed to see the specific clause that triggered a flag, not just a confidence score. Evidence-first design became the core principle.
✍️
Override must not feel like failure
A frictionless override flow captures the auditor's reasoning and feeds it back into model retraining, turning disagreement into improvement.
📄
Side-by-side context is critical
The document viewer and analysis panel had to be co-visible at all times. Auditors can't trust a flag without seeing the exact text that triggered it.
⚑
Batch workflows matter
Power users processed 20+ contracts per session. Keyboard shortcuts, bulk approval, and similar-contracts grouping were essential at scale.
02

Design Decisions

Before: The confidence score was shown as a raw percentage (e.g. 78%), with no context about what it meant or what evidence drove it.
After: Evidence-anchored flags. Every flag highlights the exact contract clause that triggered it, and confidence is shown as High / Medium / Review bands tied to clear thresholds.
Before: Overriding required navigating to a separate "disputes" section, adding friction that discouraged auditors from correcting wrong outputs.
After: Inline override with reasoning capture. A single keystroke surfaces an override panel with reason codes, feeding directly into model retraining (a minimal sketch follows below).
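A sketch of the confidence-band mapping and override capture in TypeScript; the thresholds, reason codes, and field names are assumptions for illustration, not the deployed values.

```typescript
// Sketch of confidence bands and override capture. Thresholds, reason codes,
// and field names are illustrative assumptions, not the deployed values.
type ConfidenceBand = "High" | "Medium" | "Review";

function toBand(score: number): ConfidenceBand {
  if (score >= 0.9) return "High";
  if (score >= 0.7) return "Medium";
  return "Review"; // low-confidence flags always ask for explicit human judgment
}

// Every flag carries the evidence span so the auditor can see why it was raised.
interface ContractFlag {
  clauseText: string;     // the exact clause that triggered the flag
  clauseLocation: string; // e.g. "Section 4.2, p. 7"
  band: ConfidenceBand;
}

// An override records the auditor's reasoning and is queued for model retraining.
interface Override {
  flag: ContractFlag;
  reasonCode: "not_a_renewal_clause" | "wrong_party" | "already_compliant" | "other";
  note?: string;
}
```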
03

Outcomes

3.4× · Faster Review
91% · Auditor Trust Score
+12% · Model Accuracy
🏭 Case Study 03

Smithsonian Archives of American Art

Reimagining search and discovery for one of the world's largest archives of visual art documentation, serving scholars, researchers, and the curious public.

Information Architecture · Search UX · Accessibility · Cultural UX
Role
UX Designer (Lead)
Timeline
6 Months · 2022
Team
2 Designers · Archivists
Platform
Web · Public Institution
Context

The Smithsonian's Archives of American Art holds over 20 million items: letters, sketchbooks, photos, and oral histories. The existing search experience made this treasure largely undiscoverable.

My Role

Lead the end-to-end redesign of the search and browse experience. Balance the needs of academic researchers, who need precision, with those of casual visitors, who want serendipitous discovery.

01

The Dual User Challenge

The most complex challenge was designing for two radically different mental models on the same interface. A PhD art historian has completely different needs than a high school student browsing for reference material.

🎓
Researchers need precision tools
Faceted search with 12+ filter dimensions, Boolean operators, date range sliders, and the ability to save and share search queries.
🌿
Visitors need serendipity
Visual browse modes, "Related works" connections, artist timelines, thematic collections. Discovery over search. The archive as a museum, not a database.
♿
Accessibility is an ethical imperative
Public cultural institutions have a WCAG AA obligation. Every design decision was tested against screen readers, contrast ratios, and keyboard navigation.
📜
Metadata quality is uneven
Some items have rich metadata; others are a scan with a two-word description. The UI needed to gracefully handle both extremes.
02

Key Design Solution

We introduced a "Search Mode" toggle: a subtle but powerful interaction that transformed the interface from an academic search tool to a visual browse experience without creating two separate sites.

Before: A single experience tried to serve both audiences. Advanced filters cluttered casual browsing; removing them frustrated researchers.
After: Progressive disclosure with mode-switching. The default "Explore" mode is visual and spacious; "Research" mode reveals all 12 filter dimensions. Both share the same URL structure for citation.
Before: Items with sparse metadata showed as near-empty cards that felt broken and looked low-quality next to richer items.
After: Metadata-adaptive card templates. Cards resize based on the metadata available: sparse items get a more visual treatment, while rich items show structured data grids (both decisions are sketched below).
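A small TypeScript sketch of the two solutions above: a shareable mode toggle encoded in the URL, and metadata-adaptive card selection. The parameter names and the richness threshold are assumptions for illustration.

```typescript
// Sketch of a shareable mode toggle encoded in the URL and metadata-adaptive card
// templates. Parameter names and the richness threshold are illustrative assumptions.
type SearchMode = "explore" | "research";

// Both modes share one URL structure, so citations and saved searches stay stable.
function buildSearchUrl(
  query: string,
  mode: SearchMode,
  filters: Record<string, string> = {}
): string {
  const params = new URLSearchParams({ q: query, mode, ...filters });
  return `/search?${params.toString()}`;
}

interface ArchiveItem {
  title: string;
  imageUrl?: string;
  description?: string;
  fields: Record<string, string>; // creator, date, medium, collection, ...
}

// Choose a card template based on how much metadata an item actually has.
function cardTemplate(item: ArchiveItem): "visual" | "structured" {
  const richness = Object.keys(item.fields).length + (item.description ? 2 : 0);
  return richness >= 5 ? "structured" : "visual";
}
```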
03

Outcomes

+47% · Search Success Rate
WCAG AA · Accessibility Rating
2.8× · Time on Archive
✈ Case Study 04

Wingman

Integrating a Generative AI chatbot into a travel companion iOS app: designing conversational UX that feels genuinely human, contextual, and delightful.

Conversational UI · iOS Design · Gen AI · Prompt Design
Role
Lead Product Designer
Timeline
4 Months · 2023
Team
2 Designers · AI Eng
Platform
iOS · Mobile
The Challenge

Wingman had a powerful AI travel engine. But the chat interface felt cold and robotic, like texting a FAQ bot. Users disengaged after 1–2 messages. The AI's capability was invisible behind a poor conversational experience.

The Goal

Design a conversational experience where the AI feels like a knowledgeable local friend (proactive, contextual, with personality) while staying honest about its limitations.

01

Designing for Conversation

Conversational UI design sits at the intersection of UX writing, interaction design, and prompt engineering. I led discovery with 15 frequent travellers across Vancouver, Toronto, and Delhi to understand how people naturally ask for travel advice.

🎙️
Voice & Personality Definition
Defined the AI's personality: knowledgeable but not condescending, enthusiastic but not exhausting, honest about uncertainty. Created a voice guide with 20+ do/don't example exchanges.
🗺️
Conversation Flow Mapping
Mapped 8 core conversation journeys and identified the 12 most common "dead end" patterns that needed graceful recovery design.
💬
Prompt Scaffolding Design
Worked directly with AI engineering to design conversation-starter suggestions, contextual prompts, and response format guidelines that make outputs visually parseable within iOS (one possible response schema is sketched below).
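As an illustration of what "visually parseable" output can mean in practice, here is one possible structured response schema in TypeScript; the field names and shape are assumptions, not the format Wingman ships.

```typescript
// One possible structured response schema, so the iOS client can render rich cards
// and a hedged-uncertainty treatment instead of a wall of text. Fields are assumptions.
interface PlaceCard {
  name: string;
  neighborhood?: string;
  mapsQuery: string;   // used for "open in Maps"
  saveable: boolean;   // whether it can be added to the itinerary
}

interface AssistantResponse {
  text: string;          // short conversational body
  places?: PlaceCard[];  // rendered as interactive cards below the text
  uncertain?: boolean;   // triggers the distinct visual treatment for hedged answers
  followUps?: string[];  // suggested next questions
}
```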
02

Key Innovations

🎯
Contextual Conversation Starters
When a user opens the app at 7pm in Tokyo, the AI surfaces "Good evening! Looking for dinner recommendations nearby?" rather than a blank chat input (a minimal sketch follows these cards).
🃏
Rich Response Cards
AI responses that include places render as interactive cards, not walls of text. Tap to open in Maps, save to itinerary, or share with travel partners.
🤍
Honest Uncertainty Design
A distinct visual treatment for hedged responses ("I'm not 100% certain; verify on Google"). Users trusted the AI more because it acknowledged its limits.
🔁
Trip Memory & Continuity
"How was that ramen place I suggested yesterday?" feels magical and creates genuine emotional engagement with the product across sessions.
"I forgot I was talking to an AI. It felt like messaging a friend who happened to know everything about Tokyo."
— Beta Tester, UX Feedback Session · Vancouver
03

Outcomes

4.8× · Chat Session Length
78% · D7 Retention Rate
4.6★ · App Store Rating