
About Onto Innovation

Onto Innovation is a leading semiconductor solutions company that develops advanced process control, metrology, and yield optimization software.

https://ontoinnovation.com/

Discover Fleet Software

A web-based analytics platform to monitor tool health and improve chip yield across fabs.

My Contribution

As the Lead Designer on a 3-member design team, I conducted user research, created concepts, built prototypes, and developed a comprehensive visual design system. I collaborated with the UX Director and Product Manager on design strategy, the product roadmap, and feature mapping.


Company & Product

Onto Innovation
Discover Fleet


Duration

12 Months

Responsibilities

UX Research, UX Design, UI Design, Prototyping

Here's a 1 min TL;DR version.

What did I do?

Led the end-to-end design of Fleet, simplifying complex defect analysis into a guided system for engineers. The challenge was to automate the manual workflow engineers had followed for years and to reduce time spent on routine tasks.

Why was it done?

Manual data collection and Excel-based analysis often led to delays and costly wafer losses for semiconductor manufacturers. Onto set out to bridge this gap by introducing Fleet, a data-driven analytics platform designed to help leading clients like Micron and Intel optimize tool performance, reduce variability, and prevent yield losses.

What impact did it have?

Drove faster, data-driven decisions and reduced wafer losses, strengthening client confidence and accelerating Fleet software adoption across major semiconductor accounts like Micron and Intel.

Before

[Image: the manual, Excel-based workflow]

After

[Image: Fleet Dashboard - By Uptime view]

Impact Metrics

  • Defect Resolution Time: reduced from months to 2-4 days

  • Tool Health Efficiency: increased by 30%

  • Pre-Sales Conversion: improved by 25%

The story behind Fleet...


1. The Problem 

For years, tool health inspection lived inside endless Excel sheets that were copied, merged, and managed by hand. Months were lost to manual tracking, and each delay made repairs, installations, and fixes painfully slow. What was meant to ensure precision instead created chaos.

  • 47% of time lost to parameter and error management across tools

  • 520+ hours of engineering time wasted annually

  • 61% of project delays attributed to data complexity

  • $1.2M+ lost annually due to delayed or incorrect defect detection

2. Research & Insights

Who We Spoke To

Fleet’s users span the full spectrum of the semiconductor manufacturing ecosystem:

  • Field Service Engineers (FSEs) – On-site tool health and maintenance

  • Manufacturing Engineers (MFG ENG) – Responsible for uptime and installation quality

  • Data Scientists & Domain Experts – Build predictive models and validate algorithms

  • End Customers (Micron, Intel, TSMC) – Monitor tool fleets across fabs to reduce downtime

Research inputs came from:

  • Field observations & user interviews — captured real logs and shift patterns

  • Workshops — Design Thinking sessions

  • Failure-mode analysis — traced multiple cases where missing parameter correlation delayed fixes by days

  • Data forensics — reviewed 25 GB+ of FDC logs and QC monitor data per site

  • Artifact studies — Excel macros, Tableau dashboards, and JMP models used by engineers pre-Fleet

  • Pilot deployments — Alpha and Beta phases validated real-world usage and latency limits

Insights

3. Prioritization - From Design Workshop to Decisions

Synthesizing field insights into a focused, high-impact MVP roadmap.

a. Research Clusters & Pain Points

Key user challenges grouped into four clusters: Visibility, Autonomy, Predictive Intelligence, and Knowledge Sharing. These clusters became the foundation for Fleet’s MVP prioritization.

  • Fragmented data across fabs

  • Manual merges from Autotest, FDC, and Metrology

  • Delays, errors, and low trust in dashboards

  • Four clusters: Visibility, Autonomy, Predictive Intelligence, Knowledge Sharing

The workshop unified product, design, and engineering around a single principle: clarity before prediction.

Dashboard first: solved the fragmented visibility issue and established a single health model (Hx).

Query Builder next: empowered engineers with self-serve RCA, freeing analysts for higher-value modeling.

This phased strategy de-risked delivery and built adoption momentum early.
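To make the single health model (Hx) idea concrete, here is a minimal sketch of how a composite health index could be computed from uptime, error rate, and throughput. The weights, thresholds, and signal names are illustrative assumptions, not Fleet's actual model.

```python
# Illustrative sketch only: a composite tool-health index (Hx) built from
# normalized uptime, error frequency, and wafer throughput. The weights and
# thresholds are assumptions for explanation, not Fleet's production model.
from dataclasses import dataclass

@dataclass
class ToolSnapshot:
    uptime_pct: float       # 0-100, e.g. derived from FDC/Autotest logs
    errors_per_day: float   # recent error frequency
    wafers_per_hour: float  # throughput

def health_index(t: ToolSnapshot,
                 max_errors: float = 20.0,
                 target_wph: float = 60.0) -> float:
    """Return a 0-100 Hx score; higher means healthier."""
    uptime_score = min(max(t.uptime_pct, 0.0), 100.0)
    error_score = max(0.0, 100.0 * (1.0 - t.errors_per_day / max_errors))
    throughput_score = min(100.0, 100.0 * t.wafers_per_hour / target_wph)
    # Weighted blend; in practice the weights would be tuned with domain experts.
    return round(0.5 * uptime_score + 0.3 * error_score + 0.2 * throughput_score, 1)

print(health_index(ToolSnapshot(uptime_pct=92.0, errors_per_day=4, wafers_per_hour=48)))
# -> 0.5*92 + 0.3*80 + 0.2*80 = 86.0
```

A single number like this is what lets one dashboard tile summarize a tool, with the underlying signals still available on drill-down.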

4. Goals and Key Features

After the design workshop, we aligned on a clear mission:
Fleet should simplify how semiconductor engineers understand and act on tool data, turning fragmented logs into actionable intelligence.

Product Goals

  • Unify disconnected Autotest, FDC, and Metrology data into a single fleet view.

  • Build trust through explainable health scoring and clear traceability.

  • Enable real-time insights instead of manual, reactive diagnosis.

  • Create a design system flexible enough to scale across 11+ tools and 300+ subsystems.

Design Goals

  • Simplify visual hierarchy for engineers dealing with data-dense dashboards.

  • Balance depth and clarity: surface anomalies fast without hiding technical detail.

  • Make complex correlations feel natural with intuitive flows.

Business Outcomes

Fleet wasn’t just solving usability pain points; it was redefining Onto Innovation’s analytics capability across its global customer base.

Target Metrics

  • 25–50% faster defect resolution through contextual tool views

  • 15–30% reduction in downtime, saving millions in wafer yield

  • 1% productivity gain per fab, driven by automated correlation and reports

MVP Focus

To meet both user and business goals, we focused on delivering an MVP built around two high-impact features:

AI Fleet Dashboard

A unified, explainable overview of tool health and anomalies.

AI Query Builder

A simplified interface to correlate thousands of parameters without manual scripting.
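To illustrate what "correlating thousands of parameters without manual scripting" might look like under the hood, the sketch below models a query as structured data that UI controls can assemble. The dataset names, parameters, and operators are hypothetical stand-ins, not Fleet's actual schema.

```python
# Hypothetical sketch: representing a Query Builder query as structured data
# so engineers compose it with UI controls instead of writing scripts.
# Dataset names, parameters, and operators are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Filter:
    dataset: str     # e.g. "Autotest", "FDC", "Metrology"
    parameter: str   # e.g. "ChamberPressure.Mean"
    operator: str    # e.g. ">", "<", "between"
    value: float

@dataclass
class Query:
    tools: List[str]
    date_range: Tuple[str, str]
    filters: List[Filter] = field(default_factory=list)
    group_by: str = "subsystem"

    def describe(self) -> str:
        """Human-readable summary shown back to the user before running."""
        conditions = " AND ".join(
            f"{f.dataset}.{f.parameter} {f.operator} {f.value}" for f in self.filters)
        return (f"Tools {', '.join(self.tools)} from {self.date_range[0]} to "
                f"{self.date_range[1]} where {conditions}, grouped by {self.group_by}")

q = Query(tools=["Tool-07", "Tool-12"],
          date_range=("2024-01-01", "2024-03-31"),
          filters=[Filter("FDC", "ChamberPressure.Mean", ">", 1.8)])
print(q.describe())
```

Keeping the query as data rather than code is also what makes auto-suggestions and AI-assisted parameter selection possible later.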

5. Exploration & Ideation

Early brainstorming sessions with product managers and engineers defined the user flows for query execution and dashboard visualization, with a focus on simplifying parameter selection and tool health interpretation. I audited existing Onto products alongside other data analytics platforms to study how data could be displayed easily and effectively. During early ideation, I discussed how AI could streamline complex workflows by automating parameter selection in the query builder and forecasting tool health anomalies in dashboards. These concepts later evolved into Fleet’s predictive intelligence layer.


Concept Foundation

a. Concept 1 — Query Builder (Exploration of Structure)

I focused on an initial concept of modular filter blocks (Autotest, FDC, Metrology). Each dataset had independent logic controls: flexible, but cluttered for daily users.

[Image: Queries Editor - selected fields]

Learning: Users preferred consolidated filters with auto-suggestions over manual rule creation, and equipment selection was cumbersome.

b. Concept 2 — Parameter Selector (Simplifying High-Volume Outputs)

I explored different dropdown models for handling 1000+ parameters, from infinite scrolls to categorized chips and “recently used” tags. I iterated on the equipment selection and simplified the data channel parameter selection based on feedback from the team and users.

[Image: Queries Editor - default view]

Learning: Tag-based search with smart grouping (Mean, SE, SR, etc.) was the most discoverable option and reduced selection time by ~40%.
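As a rough illustration of that smart grouping, the snippet below buckets raw parameter names by their statistic suffix so the selector can render categorized chips; the naming convention and parameter names are assumptions, not Onto's actual data model.

```python
# Rough sketch of the "smart grouping" idea: group flat parameter names by
# statistic suffix (Mean, SE, SR, ...) so the selector can render categorized
# chips instead of one endless dropdown. The "<signal>.<stat>" naming
# convention is an assumption for illustration.
from collections import defaultdict

def group_parameters(names):
    groups = defaultdict(list)
    for name in names:
        if "." in name:
            signal, stat = name.rsplit(".", 1)
        else:
            signal, stat = name, "Other"
        groups[stat].append(signal)
    return dict(groups)

params = ["ChamberPressure.Mean", "ChamberPressure.SE",
          "GasFlow.Mean", "GasFlow.SR", "StageTemp.Mean"]
print(group_parameters(params))
# {'Mean': ['ChamberPressure', 'GasFlow', 'StageTemp'],
#  'SE': ['ChamberPressure'], 'SR': ['GasFlow']}
```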

c. Concept 3 — Dashboard Exploration

I explored data representations of tool health using tile-based hierarchies, balancing subsystem visibility (Tool → Subsystem → Parameters) with overall fleet trends.

[Images: Dashboard concepts - fleet, subsystem, and parameter-level views]

Learning: Gradual drill-down views (Tool → Subsystem → Parameter) improved mental-model clarity for engineers but increased cognitive load during concept testing.

I conducted concept testing with 5 users and the internal team to validate the direction and gather feedback, then iterated on the insights toward the final designs. These explorations shaped the final Fleet Dashboard and Query Builder MVP, aligning usability with Onto’s engineering workflows and AI-readiness roadmap.

6. Final Designs - Discover Fleet Dashboard: From Overview to Root Cause

A real-time command center enabling engineers to monitor 50+ tools, detect anomalies, and drill down into subsystem-level issues in seconds.

Real-time health index across 50+ selected tools.

[Image: Fleet Dashboard - By Uptime view]

Single-click drill-down for subsystem insights.

Subsystems with projected error risk are subtly pre-highlighted (based on AI forecasts).

AI-Generated Error Summary: summarizes and ranks recurring error patterns using frequency and severity weighting, and uses LLM logic to group similar failure types and recommend root-cause categories.

Adaptive performance indicators (AI-assisted): KPIs update dynamically using anomaly detection and trend forecasting models.

Dynamic color-coded health map: visual correlation of uptime, errors, and wafer throughput powered by AI trend recognition.
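To make the frequency-plus-severity ranking behind the AI-Generated Error Summary concrete, here is a minimal sketch of how recurring errors could be scored and ordered. The severity weights and error codes are illustrative assumptions, and the LLM-based grouping step is not shown.

```python
# Minimal sketch: ranking recurring error patterns by frequency x severity.
# Severity weights and error codes are illustrative assumptions, not Fleet's
# actual taxonomy; the LLM-based grouping of similar failures is not shown.
from collections import Counter

SEVERITY = {"critical": 3.0, "major": 2.0, "minor": 1.0}  # assumed weights

def rank_errors(events):
    """events: list of (error_code, severity) tuples pulled from tool logs."""
    counts = Counter(code for code, _ in events)
    worst = {}
    for code, sev in events:
        worst[code] = max(worst.get(code, 0.0), SEVERITY.get(sev, 1.0))
    scored = {code: counts[code] * worst[code] for code in counts}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

events = [("E-201", "major"), ("E-201", "major"), ("E-315", "critical"),
          ("E-114", "minor"), ("E-201", "minor")]
print(rank_errors(events))  # highest-impact error patterns first
```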

Drill-Down Journey (Three-Step Visual Flow)

Step 1 - Tool Tile Selection

Select a tool tile to investigate a performance drop.

Step 2 - Error Panel Opens

The drill-down reveals error patterns and timestamps, helping engineers correlate downtime with critical events.

Step 3 - Advanced Analysis View

Engineers can switch to correlation and yield views to identify systemic causes.

[Images: Fleet Dashboard - By Uptime and By Wafers views]

Pareto Chart: Prioritizes top recurring failure categories

An AI layer introduces predictive anomaly detection and automated recommendations.

Quantifies productivity vs. defect trend

XY Chart: Maps parameter dependencies (e.g., purity vs. humidity)

[Image: X-Y Chart]

Mosaic Plot: visualizes relationships between categorical parameters.

7. Testing & Feedback


I conducted moderated usability sessions with 8 field service engineers and 2 product managers to validate the solution’s effectiveness and mental model. The response was 80% positive, though there is room for further development; more advancements can follow in the near future with continuous LLM model training.

8. Learnings

This project was a once-in-a-lifetime opportunity for me, and I feel very lucky to have been a part of it. It felt like I was playing connect-the-dots, but I had to draw the dots myself in order to connect them.

My Awesome Team!
