Listing Doctor
Designing a zero-to-one application to track, diagnose, and resolve product listing errors across Amazon, Walmart, and eBay marketplaces.
Sellers on multi-channel marketplaces lose significant revenue to undetected listing errors: suppressed products, missing attributes, and policy violations that are impossible to monitor manually across thousands of SKUs.
A centralized dashboard aggregating listing health data from Amazon, Walmart, and eBay APIs, surfacing critical errors with severity scoring, and guiding sellers through step-by-step resolution workflows.
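The severity scoring described above could be sketched roughly as follows. The error types, weights, and revenue scaling here are hypothetical illustrations, not the product's actual rules.

```python
# Hypothetical sketch of severity scoring for aggregated listing errors.
# Weights and the revenue scaling factor are illustrative assumptions.
from dataclasses import dataclass

# Assumed base weights per error type (not from the case study)
ERROR_WEIGHTS = {
    "suppressed": 100,       # listing invisible to buyers: most severe
    "policy_violation": 80,  # risk of account-level penalties
    "missing_attribute": 30,
}

@dataclass
class ListingError:
    sku: str
    marketplace: str        # "amazon", "walmart", or "ebay"
    error_type: str
    monthly_revenue: float  # revenue at risk for this SKU

def severity(err: ListingError) -> float:
    """Combine the error-type weight with revenue at risk into a sortable score."""
    base = ERROR_WEIGHTS.get(err.error_type, 10)
    # Scale by revenue so high-earning SKUs surface first in the dashboard.
    return base * (1 + err.monthly_revenue / 1000)

errors = [
    ListingError("SKU-1", "amazon", "suppressed", 5000.0),
    ListingError("SKU-2", "ebay", "missing_attribute", 200.0),
]
ranked = sorted(errors, key=severity, reverse=True)
print([e.sku for e in ranked])  # suppressed high-revenue SKU ranks first
```

A single ranked list like this is what lets one dashboard triage errors from three marketplaces with different native error taxonomies.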
Discovery & Research
I conducted 12 contextual inquiry sessions with e-commerce operations managers. The goal was to understand the current workaround landscape: what tools they used, where errors fell through the cracks, and the emotional cost of a suppressed listing.
"I found out my top-selling product was suppressed on Amazon because I noticed a dip in sales β not because any tool told me."— Operations Manager, Mid-market Seller
Design Process
The design went through 4 major iterations, each validated with a different cohort of sellers. I used a mix of low-fidelity concept testing, task-based usability testing, and desirability studies.
Key Design Decisions
Outcomes
SaaS Contract Analyzer
Augmenting the financial auditing process by designing human-centered interfaces for ML-powered contract analysis and anomaly detection.
Financial auditors review hundreds of SaaS contracts manually, checking for auto-renewal clauses, price escalators, and compliance flags. The ML model could identify these patterns, but the outputs were impenetrable for non-technical auditors.
Bridge the gap between ML confidence scores and human judgment. Design an interface where auditors could review, override, and learn from model outputs, turning AI into a trusted collaborator rather than a black box.
The AI Trust Problem
The core UX challenge wasn't a flow or navigation problem; it was a trust calibration problem. Auditors were either over-trusting the model (rubber-stamping) or completely ignoring it. Neither extreme was useful.
"When the model flags something at 72% confidence, what does that mean? Do I trust it or not? I need to understand why it thinks that."— Senior Financial Auditor, Research Participant
Design Decisions
Outcomes
Smithsonian Archives of American Art
Reimagining search and discovery for one of the world's largest archives of visual art documentation, serving scholars, researchers, and the curious public.
The Smithsonian's Archives of American Art holds over 20 million items: letters, sketchbooks, photos, and oral histories. The existing search experience made this treasure largely undiscoverable.
Lead the end-to-end redesign of the search and browse experience. Balance the needs of academic researchers who need precision with casual visitors who want serendipitous discovery.
The Dual User Challenge
The most complex challenge was designing for two radically different mental models on the same interface. A PhD art historian has completely different needs than a high school student browsing for reference material.
Key Design Solution
We introduced a "Search Mode" toggle β a subtle but powerful interaction that transformed the interface from an academic search tool to a visual browse experience without creating two separate sites.
Outcomes
Wingman
Integrating a Generative AI chatbot into a travel companion iOS app, designing conversational UX that feels genuinely human, contextual, and delightful.
Wingman had a powerful AI travel engine. But the chat interface felt cold and robotic, like texting a FAQ bot. Users disengaged after one or two messages. The AI's capability was invisible behind a poor conversational experience.
Design a conversational experience where the AI feels like a knowledgeable local friend: proactive, contextual, and with personality, while staying honest about its limitations.
Designing for Conversation
Conversational UI design sits at the intersection of UX writing, interaction design, and prompt engineering. I led discovery with 15 frequent travellers across Vancouver, Toronto, and Delhi to understand how people naturally ask for travel advice.
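Where UX writing meets prompt engineering, the persona and traveller context typically get assembled into a system prompt. The wording and fields below are a hypothetical illustration, not Wingman's actual prompt.

```python
# Hypothetical sketch: assemble a persona-driven system prompt from
# traveller context. Persona wording and context fields are illustrative.

def build_system_prompt(city: str, interests: list[str], time_of_day: str) -> str:
    """Compose a 'knowledgeable local friend' persona with live context."""
    persona = (
        "You are a friendly local who knows the city well. "
        "Be warm and specific, and say plainly when you are unsure."
    )
    context = (
        f"The traveller is in {city}, it is {time_of_day}, "
        f"and they are interested in {', '.join(interests)}."
    )
    return f"{persona}\n{context}"

prompt = build_system_prompt("Tokyo", ["street food", "vintage shops"], "evening")
print(prompt)
```

Note that the honesty instruction is written into the persona itself; this is one way a prompt can encode the "staying honest about its limitations" requirement alongside the friendly voice.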
Key Innovations
"I forgot I was talking to an AI. It felt like messaging a friend who happened to know everything about Tokyo."— Beta Tester, UX Feedback Session · Vancouver