
2023

Redesigning Incarnate

Reality to 3D
in a single scan 

Team: 2 Product Managers, 1 External Designer, 2 iOS Developers, 4 QA Engineers, 2 CS Team Members, and Brand Design members

Incarnate is an enterprise-grade product-scanning iOS app, built on NeRF technology, that converts physical objects into high-quality 3D assets.

It helps businesses enhance their product catalogs with immersive 3D and AR experiences, ultimately boosting user engagement and driving sales.

My Role: Design Lead

Concept Development, Stakeholder Communication, UI Collaboration, Customer Interviews, Usability Testing, Test Questionnaire Design, PM Collaboration, Developer Handoff Collaboration, User Research

Key Goals

The primary focus of this project was to:

  • Ensure easy access and login.

  • Enable smooth object scanning with session time under 3 minutes.

  • Reduce the capture failure rate to under 2%.

  • Provide clear end-result visualisation.

  • Capture the object in its entirety.

Design Approach

We aimed to increase the ease of scanning by adding a tutorial with tips and tricks, reducing each capture session to under 3 minutes.

We minimised capture failures by introducing visual prompts and feedforwards and by reducing the learning curve.

Additionally, we shortened the time users took to discover their final outputs by making them available within easy reach inside the app.

The Outcome

Insights showed improved scan efficiency, with average scan time dropping from 7 to 3 minutes by the third attempt.

90% of users were able to find their outputs easily, and emotional responses during scanning were positive due to effective feedforwards and prompts.

2D designers perceived the highest value, aligning well with their workflows.

User Personas

User Flows

1. User Journey

This user flow outlines the journey for three user segments, highlighting key navigation paths and decisions.

First-Time Users:
They are guided through a tutorial to reduce overwhelm from the app’s multi-step process. 

Second-Time Users:
Users can choose to revisit the tutorial or jump into the main functionality, catering to those who need a refresher.

Recurring Users:
They are taken straight to the core experience, with the tutorial available via the sidebar, removing unnecessary friction.
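
The visit-count routing above can be sketched as a simple check (a minimal Python sketch; the function and screen names are illustrative, not the app's actual code):

```python
def entry_route(visit_count: int) -> str:
    """Pick the entry screen based on how many times the user has opened the app."""
    if visit_count <= 1:
        return "tutorial"           # first-time users: guided tutorial
    if visit_count == 2:
        return "tutorial_or_scan"   # second visit: offer a refresher or skip ahead
    return "scan"                   # recurring users: straight to capture
```

Recurring users still reach the tutorial via the sidebar, so the route above only controls the default entry point.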

2. Tutorial User Flow

The tutorial user flow walks users through a simulated object capture process, easing the learning curve for newcomers.

It covers key steps like mat placement, object alignment, size verification, and recording capture rings—providing hands-on guidance that boosts confidence and improves first-time success.

3. User Flow for Enterprise Clients

The enterprise user flow bridges web and mobile for seamless cataloging and capture:

  • Catalog Ingestion: Clients upload product lists via CSV on the Apollo web platform, creating an inventory.

  • Mobile Capture: The synced object list appears in the mobile app, allowing flexible capture based on availability.

  • Web Management: Captured data syncs back to Apollo for status tracking, 3D review, and quality checks.
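
Under these assumptions (a CSV with `sku` and `name` columns; the Apollo platform's actual API is not shown), the catalog-ingestion step can be sketched in Python:

```python
import csv
import io

def ingest_catalog(csv_text: str) -> list[dict]:
    """Turn an uploaded product-list CSV into an inventory of capture tasks.

    Each row becomes an object awaiting mobile capture; its status would be
    updated as captures sync back from the app. Column names are assumed.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    return [
        {"sku": row["sku"], "name": row["name"], "status": "awaiting_capture"}
        for row in reader
    ]

# Example upload: two products awaiting mobile capture.
inventory = ingest_catalog("sku,name\nSHOE-01,Trail Runner\nSHOE-02,City Loafer\n")
```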


Moodboards
and Quick sketches


The Solution

Let me walk you through the key challenges in the previous designs and how I approached designing effective solutions for them.


Easy Access and Login

Existing Problems
  • Low Visibility of Login Options: Sign-in buttons are not prominent enough.

  • Limited Login Methods: No email or guest access options.

  • Weak Call to Action: Lacks a clear prompt guiding users to log in.

  • Unclear Carousel Navigation: Dots suggest a carousel but lack explanation. (Which is login and which is signup?)

  • Missing Trust Elements: No security or privacy reassurance for users.

Log In Redesign

To improve the login experience, the sign-in buttons should be more prominent, ensuring they stand out as primary actions. Offering additional login options, such as email or guest access, can enhance accessibility.

Strengthening the call-to-action with clearer messaging will guide users more effectively. Lastly, including a brief privacy and security reassurance will help build user trust and encourage seamless onboarding.

Both SSO options are highlighted and have consistent design.

Users are made aware of the privacy policy, which builds trust.


Supporting text makes it clear that login is required.

An email option is added for users who don't have Apple or Google accounts.

Onboarding of a User

First-time users are guided through a tutorial to simplify the app’s multi-step process.

This helps prevent confusion, boosts confidence, and improves retention from the very first interaction.


Smooth Object Scanning

Problems with
Floor Detection
  • Unclear Instructions: Users may not understand the mat's purpose or placement.

  • Low Readability: Instruction text blends into the background.

  • Weak Marker Visibility: Yellow markers are faint and don’t clearly define placement.

  • Distracting Logout Button: Positioned prominently, risking accidental taps.

  • No Feedback for Incorrect Placement: Lacks guidance if the mat is missing or misaligned.

Proposed Solution

Increased contrast and used a more prominent placement.


Log out was added to sidebar options.

Used a semi-transparent overlay with guiding animation to indicate the mat's ideal position.

The mat uses a pulsing animation in its dotted pattern to guide users during placement. Once the floor is detected, the animation stops, providing clear visual feedback without requiring extra effort. This helps users by reducing confusion, minimizing effort, and ensuring they only need to align their camera correctly.


A forward arrow appears upon floor detection, clearly indicating the next step.

Problems with
Box Resizing

In this step, users are required to resize the bounding box to fit their object. They do this by selecting a specific face of the box and then using a slider to increase or decrease its size. Once one face is adjusted, they must physically move around the object to select and resize the next face.

  • Unclear Face Selection: Users may struggle to identify which face is currently selected.

  • Tedious and Time-Consuming: Adjusting each face separately, combined with the need to walk around the object, slows the process down significantly.

  • Mismatch in Speed of Adjustment: The bounding box edges may not move in sync with the slider, causing lag or overshooting, which is frustrating.

  • UI Placement Issues: The slider and navigation buttons are positioned in a way that makes one-handed operation difficult, especially on larger screens.

Proposed Solution

Used a semi-transparent overlay with guiding animation to indicate box resizing.

Giving clear instructions with proper readability.

We realized that pinching is the most common action for users, so we introduced pinching and sliding gestures for resizing instead of relying on a slider.


The selected face of the box is highlighted, and 3D arrows are added on each face to indicate drag directions. The box now updates instantly and smoothly as users adjust its size, eliminating lag and ensuring precise control.


An arrow to the next step appears once the user has completed at least one full round around the object. This ensures users have adequately adjusted the box.

Problems with
Ring Scanning

In this step, users record a series of capture rings around the object. They follow each ring with the camera while recording, capturing the object from every angle, and move to the next ring once one is complete.

  • Unclear Progress Tracking: There is no indicator showing the number of rings, progress, or completion status.

  • No Distance Feedback: Users get no indication when they are too close to or too far from the ring.

  • Unclear Vibration Instruction: The instruction "Take a step for each vibration" is confusing, since users have not yet experienced the vibration.

  • Ambiguous Recording State: The record button is the only indicator of scanning, and users don't know when to stop recording.

  • Ring Visibility Issues: Depending on the background, lighting, and ring transparency, the rings could be difficult to see.


Proposed Solution

Gave instructions indicating which ring the user is currently scanning and the total number of rings they need to complete.

A GIF demonstrates the 360° coverage of the rings and guides users through the scanning process with clear instructions.


Instructions tell users that they need to be at an optimal distance to scan the ring, prompting them to move closer or farther as needed and helping them with correct placement.

We also hid the record button until an optimal distance was reached.
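
The distance gating can be sketched as follows (a minimal Python sketch; the thresholds and prompt strings are illustrative assumptions, not the app's real values):

```python
def distance_prompt(distance_m: float,
                    near_m: float = 0.4, far_m: float = 0.8) -> tuple[str, bool]:
    """Return the on-screen prompt and whether the Record button is shown.

    The Record button only appears inside the optimal distance band, so users
    cannot start recording from a position that would yield faulty images.
    Thresholds here are placeholders, not the app's actual tuning.
    """
    if distance_m < near_m:
        return ("Move farther from the ring", False)
    if distance_m > far_m:
        return ("Move closer to the ring", False)
    return ("Hold steady and record", True)
```

Tying the prompt and the button visibility to the same check keeps the feedback and the affordance consistent: the button can never appear while a corrective prompt is showing.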


A clear outline was added to the ring so it works against different types of backgrounds, since the transparency couldn't be removed entirely.


Once the user was at the optimal distance, the "Record" button became available, avoiding faulty images.

Once the user completed a ring, they were asked to move down to capture the next ring.

End Result
Visualisation

Existing Problems
  • External Link: The process happens in a browser instead of within the app, disrupting the user flow.

  • Unclear Object Naming: The generated model name is random and not user-friendly.

  • Unstructured Stages: The conversion steps do not feel distinct, making progress unclear.

  • Lost Link Issue: Users must manually save the link, adding extra effort.

  • Lack of Progress Feedback: No estimated time or clear indicators of completion.

  • No Completion Confirmation: Users are not notified when the model is ready.

  • Limited User Guidance: No clear next steps after model generation (e.g., view, edit, or download).

Proposed Solution

Different tenants are shown here; users can switch between them to work with multiple clients while keeping their work organised.

The empty state, shown when the user hasn't taken any captures yet.


A section to view all the captured objects was added to the sidebar instead of a separate external link.


After completing the scanning process, the user is prompted to either assign a custom name to the capture or accept the system-generated name, making it easier to identify later.


A column view allows users to visually recognize each scanned object at a glance, making identification more intuitive.


A list view provides a quick overview of the total number of scanned objects along with their current status in the mesh generation pipeline, enabling efficient progress tracking.

App in Action

#LearnAndEvolve 1

Learning:
We noticed that for clients with large product catalogs, constant step-by-step guidance during scanning slowed down their workflow. While guidance ensures accuracy, experienced users needed a faster way to scan multiple items efficiently.

 

Evolving:

To improve efficiency, we introduced a choice between Guided and Freeform scanning. Guided mode provides step-by-step instructions, while Freeform mode allows experienced users to scan at their own pace without interruptions.

Additionally, users can set their preferred scanning mode from the sidebar, ensuring a consistent experience without selecting it every time. This flexibility helps streamline the process while catering to different user needs.



#LearnAndEvolve 2

Learning:
During the scanning process, we realized that capturing only the top of an object was not sufficient for many clients. Businesses with shoe catalogs, for example, needed bottom capture to allow users to visualize the sole. Without this feature, the 3D models lacked crucial details, limiting their usefulness.

 

Evolving:
To address this, we introduced a flexible bottom capture option. Users can choose to enable or disable bottom scanning from settings. If enabled, they can capture the bottom after scanning the top; if not needed, they can also skip it after the top capture.

Additionally, users who consistently require only top scanning can turn off bottom capture entirely from settings, ensuring a seamless experience without unnecessary prompts. This approach balances efficiency with customization to meet diverse client needs.
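
The resulting decision logic can be sketched as (a minimal Python sketch; the step names are illustrative, not the app's actual code):

```python
def step_after_top_scan(bottom_capture_enabled: bool, user_skipped: bool) -> str:
    """Decide what follows the top scan, combining the persistent sidebar
    setting with the user's in-flow skip choice."""
    if bottom_capture_enabled and not user_skipped:
        return "bottom_capture"
    return "generate_model"   # setting off, or skipped in the flow
```

Keeping the persistent setting and the per-scan skip as independent inputs is what lets top-only users avoid the prompt entirely while occasional users still get the choice.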


If "Bottom Capture" is switched on in the sidebar, the user gets an option to add bottom capture to an object.


A brief confirmation asks the user whether to continue with bottom capture or skip it.

Key Metrics: Measured Impact

We captured key metrics from internal dogfooding post-launch to assess user reactions and uncover early experiences, supplemented by usability testing with our existing clients.

User personas distribution


2x faster average time per scan

Average scan time dropped with each attempt, showing that while users felt overwhelmed at first, feedforwards and tutorials helped them learn quickly.

70% of users found bounding box adjustment easy

Ease of bounding box adjustment ranged between 3.0–4.0 across professions, with photographers finding it the easiest. Photographers likely found it easier due to their hands-on experience with framing in physical spaces, unlike others who work primarily on screens.

80% of users found prompts very effective during scanning

Prompts like “Move closer,” “Move farther,” and “Move down slowly” helped guide users through what could’ve been an overwhelming scanning process. Clear instructions on how to scan and which ring was active reduced confusion.

92% of users were able to find their scan outputs easily

Most users found their scanned outputs easily; 7% were unable to locate them and 16% faced some friction, likely due to limited sidebar exploration and the absence of push notifications.

80% of users rated the app’s ease of use 4 or 5, with 5 being easiest

Most users rated the app’s ease of use as 4/5, though some found navigation confusing and the scanning process challenging due to the physical movement involved.

Over 75% of users gave a positive emotional score

Emotional scores for design and layout were positive across all groups, with photographers and technologists responding most favorably, while category managers and developers gave lower scores, pointing to a need for better clarity tailored to less visually inclined users.

App Value to User and their Organisation

2D designers showed the highest perceived value, indicating strong alignment with their workflows. In contrast, photographers, despite finding the app easy to use, showed the lowest perceived value, possibly due to evolving product quality and their access to full studio setups for controlled results.

Some Assumptions
and Hypotheses

Hypothesis

We believe that integrating 3D scans into e-commerce pages will drive higher purchase rates, ultimately leading to increased revenue for stores.

Assumptions

  • Users prioritize high-quality scans while minimizing the time required for the process.

  • Our customers are willing and capable of investing time in scanning their products effectively.

