Dopplr

Dopplr was a fashion tech startup building a web-based 3D virtual try-on experience for ecommerce brands. The idea was simple: a shopper lands on a product page, opens Dopplr as a plugin, creates a personalized avatar based on their measurements, and sees exactly how a garment fits before buying.

I was the only designer on the team. There was no design system to inherit, no existing flows to reference, and no prior version to improve. My job was to figure out what this product looked like, how it worked, and how to make something technically complex feel approachable to an average online shopper.

Services

UI/UX Design

User Research

Motion Design

Industries

Fashion Technology, E-commerce

Tech Stack

Figma

Protopie

Jitter

Year

2022-24

Dopplr Homepage

The Problem Space

Virtual try-on wasn't a new idea in 2022, but most existing tools fell into one of two traps. Either they were too technical, surfacing mesh data, body measurements, and calibration controls that meant nothing to a casual shopper, or they were too shallow, offering a basic overlay that gave users no real confidence in how something would actually fit.

Dopplr's bet was that fit confidence required a real avatar. Not an approximation, not a size chart reworded as a visual, but an actual representation of the user's body that garments could be draped on and evaluated. The challenge was making that level of detail feel simple enough that someone shopping for a jacket on a Tuesday afternoon wouldn't bounce the moment it loaded.

The constraints were clear from the start. This was a browser-based web plugin, not a mobile app. It had to integrate into any ecommerce product page without friction. Performance mattered. Clarity mattered. And the 3D canvas had to stay the focus, not the interface around it.

Defining the Product

Before any screens were designed, the product needed a structure. Working closely with the CEO and CTO, I mapped out the core user journey from the moment someone opened the plugin to the point where they had enough fit information to make a purchase decision.

That journey broke down into three stages. First, avatar creation, where users input their measurements and the system generated a 3D representation of their body. Second, the try-on experience itself, where garments were draped on the avatar and users could rotate, zoom, and inspect. Third, fit insight, where a heat map overlay surfaced tension and pressure points across the garment to show where it was tight, comfortable, or loose.

Each stage had its own design challenge. Together they formed the core product.

Avatar Creation

Getting users to input body measurements accurately was harder than it sounds. People don't know their measurements off the top of their heads, and asking for too many inputs upfront causes drop-off. Asking for too few produces an avatar that doesn't feel personal enough to trust.

The design had to find the minimum set of inputs that could generate a believable, useful avatar without making the process feel like a form. I explored several approaches, including guided measurement flows with illustrations, size-based shortcuts for users who didn't want to measure, and progressive input where the avatar updated in real time as values were entered. The goal was always to get the user to a point where they looked at the avatar and thought "that's roughly me" as quickly as possible. That moment of recognition was what made the rest of the experience worth engaging with.

The Try-On Interface

Once the avatar existed, the interface had to get out of its way.

Early explorations had persistent control panels, visible toolbars, and labelled buttons for every action. In practice these competed directly with the 3D canvas. Users were reading the interface instead of looking at the garment, which was the opposite of what the experience needed to do.

The direction we settled on was contextual controls. Nothing was permanently visible except the canvas and the garment. Controls surfaced when users interacted, hovered, or needed to take a specific action. Labels were reduced. Motion did the work that text had been doing.

This required close collaboration with the CTO and developers to understand what was technically feasible inside a browser plugin without killing performance. Some ideas got cut. Others got simplified. The final interface was quieter than any of the early explorations and more usable for it.

Dopplr Product Prototype

Fit Visualization

The heat map was the most technically interesting surface to design and the one with the most ways to go wrong.

The data it needed to show was genuinely useful: where the garment was tight, where it had room, where it might cause discomfort over time. But a persistent color overlay on top of a 3D garment is visually aggressive. It competes with the fabric texture, flattens the depth of the render, and if the color scale isn't carefully considered, it reads as alarming rather than informative.

The decision was to make it optional. Off by default, one tap to activate, easy to dismiss. The color scale was tuned to be readable across different skin tones and garment colors, which required several rounds of testing across different combinations. The overlay needed to feel like additional information, not an alarm.

Motion as Feedback

3D assets take time to load. That's unavoidable in a browser environment, especially for a plugin that had to work across different ecommerce sites with varying performance conditions.

Rather than masking this with a generic spinner, I designed loading states that reflected the product's visual language and gave users a sense of what was coming. The motion wasn't decorative; it was communicative. It told users the system was working, what it was doing, and roughly how long it would take.

This thinking carried into micro-interactions across the product as well. State changes were animated. Transitions had direction and purpose. The goal was a product that felt responsive and alive without ever pulling attention away from the 3D canvas.

Dopplr Product Prototype

Reflection

What this project gave me that nothing else has is the experience of being the only designer at the table from day zero. Every decision about how the product worked, what it prioritized, and how it talked to users came through me. There was no senior designer to check with, no existing patterns to follow, and no room to wait for more information before moving forward.

Designing a 3D experience for the browser in 2022 also pushed me into territory that standard UI work doesn't touch. Performance constraints shaped interface decisions. Technical limitations changed what was possible. Working directly with the CTO meant learning to speak in terms of feasibility, not just desirability.

If I were doing this again, I'd push harder on user testing earlier. Most of our validation came through internal demos and CEO and CTO feedback, which was useful but limited. Getting the avatar creation flow in front of real shoppers sooner would have surfaced friction points we only caught late. That's the thing I'd change.

©2026 – Piyush Kulkarni
