Turning aircraft engineering into an AI system

I led product discovery & definition, workflow design, and UX/UI for a major airline’s AI maintenance engineering platform. Shipped two modules that trained an AI to assist with aircraft-level analyses, contributing to a projected ~$20M in-year savings target.

Duration

3 months

Role

Product Designer

A maintenance program that hardly questions itself

Uprise Airline manages hundreds of inspections and checks across its fleet.

For years, the aviation industry’s default response to addressing or preventing aircraft failures was to add more tasks. With a slow regulatory approval process for any change, the industry favored piling on safety checks rather than analyzing the underlying issue. There was no tool to make an informed case for prioritizing the right tasks.

I admit that we have a bias toward action and pick a solution before the problem is well understood. That makes it difficult for Maintenance Programs to execute on multiple conflicting workcards.

Director of Reliability Engineering

Getting from “this task might be ineffective” to a program change meant aligning three departments across different data, recurring meetings, and manual document creation for federal review.

Key departments in Aircraft Maintenance and Safety

I know which systems are causing problems. But I can’t escalate problems and solutions fast enough to act before they become bigger issues.

Key responsibilities

Lead engineering projects including part redesigns and structural modifications. Own the business case for program changes. Primary point of contact when a regulatory review package goes to the board.

Pain points and needs

Impact estimates are non-standardized and treated as late-stage formalities. Difficult to investigate chronic or tail-specific problems. Hard to connect financial impact to a failure mode proactively.

Measurement of success

  • Mechanical Reliability Index (MRI) trends
  • Cost of operational events
  • Aircraft-on-ground frequency
  • Engineering project ROI

Fig 1. Key departments and pain points

We worked with 20+ engineers across the departments to define requirements for an AI-assisted failure mode analysis and monitoring workflow. The new process could reduce unnecessary checks, surface real problems faster, and cut delays and cancellations.

Scaling the engineering decision to a machine-learning workflow

Discovery revealed a process that worked but couldn’t scale. The existing analysis flow was manual and delayed, and context easily got lost between departments.

Our mission was to enable collaborative analysis and capture observations as model training data. The analysis flow split into two paths based on journey stage and owner.

Types of Fleet Analysis Modules

Rank underperforming tasks, investigate the worst ones, and produce the approval package

Measurement of success

  • Task removal or interval extension rate
  • Time to complete a review cycle
  • Approval package completion rate
  • Cost saved per approved task change

Ownership

Maintenance Programs (Primary)

Fleet Engineering (Secondary)

Fig 2. Two analysis modules enabled in the tool

Future State Failure Investigation Workflow

  • Support root cause investigation with historical data and failure modes
  • Act as an expert for the fleet during failure analysis in Working Group sessions
  • Lead the failure investigation process; cross-reference failure mode paretos, findings, aircraft data
  • Validate issues from existing tasks — execution feasibility, chronic patterns
  • Align on potential solutions in Working Group sessions alongside Reliability and Fleet

Fig 3. Redesigned failure investigation stages with a description of each stage, compared against the original workflow and information architecture.

Tying failure insights to maintenance tasks

Though analysis was the most critical stage of the workflow, it was not the centerpiece of the UX. The centerpiece was the interaction for reviewing log entries, confirming or updating the AI-suggested failure classification, and improving the model’s ability to classify the next time.

Task Details

Pressure Regulation Health Check

Applied to

A320-000

Function Code

0

Task Code

0001

Workcard

0000-0000

Air ENG (2) Bleed Abnorm.PR, Air ENG (2) Bleed Not Clsd.

Sample Logs (5)

| Defect description | Date | Failure Mode | Functional Failure |
| --- | --- | --- | --- |
| #2 engine bleed-air valve — no indication | 05/30/2023 | Engine #2 bleed-air valve stuck open | Engine #2 bleed-air valve stuck open |
| #2 engine bleed-air temperature abnormal | 05/30/2023 | Bleed-air temperature regulator leaking | Bleed-air temperature regulator leaking |
| #1 engine bleed-air temperature abnormal | 05/30/2023 | Temperature regulator not controlling the fan-air valve | Temperature regulator not controlling the fan-air valve |
| #2 engine bleed-air pressure fluctuating | 05/30/2023 | Bleed-air temperature regulator leaking | Bleed-air temperature regulator leaking |
| #1 engine pre-cooler outlet temperature high | 05/30/2023 | Fan-air valve fails to open | Fan-air valve fails to open |

Fig 4. The task detail view after the AI model has populated the failure mode and functional failure columns. Engineers confirm correct suggestions and fix wrong ones. Every correction feeds back into the model.

Assessing task and failure performance

Tasks and failure classifications, the two entry points for the analysis workflow, were each paired with a dedicated view to help engineers determine whether a task or failure type was worth investigating.

All tasks › Health Check › Pressure Regulation

Health Check — Pressure Regulation

Reference task ID 15-2020, last reviewed 2 months ago

  • TEI: 30 (Low effectiveness)
  • MRI: 60% (avg yield for task)
  • Log rate: 70% (70% sample pool)
  • Flight cycles: 88% (75% sample pool)

Non-routine findings by category (bar chart): Bleed air, Hydraulics, Landing gear, Flight ctrl, APU, Air cond.

Logs (4)

| Defect | Date | Yield | Failure Mode |
| --- | --- | --- | --- |
| #2 Engine Surge Shutdown (Prev.) | 06/25/2023 | 9.5% | Air-Bleed Reroute Open |
| FM Cocked subject Ctrl Mtr C… | 06/08/2023 | 3.2% | TCT leakage |
| #2 engine came back and need… | 06/16/2023 | 34% | Temperature Control (TCT)… |
| APU/Airpack Nearby Refurbish… | 05/28/2023 | 56% | APU bleed check-valve fa… |

Fig 5. Task and failure detail views allow quick assessment of whether cross-departmental analysis is needed
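The assessment view above exists to rank tasks so engineers spend investigation time on the worst performers. The product’s actual TEI metric is not public, so the sketch below assumes a deliberately simple proxy: the average yield of non-routine findings per sampled log. Task names and numbers are illustrative.

```python
def effectiveness_score(yields: list[float]) -> float:
    """Toy effectiveness proxy: mean finding yield (%) across sampled logs.

    A low score means the task rarely finds anything, making it a
    candidate for removal or interval extension.
    """
    return sum(yields) / len(yields) if yields else 0.0

# Yield per sampled log, keyed by task (illustrative data).
tasks = {
    "Pressure Regulation Health Check": [9.5, 3.2, 34.0, 56.0],
    "Gas Compression Check": [0.4, 0.0, 1.1],
}

# Lowest-scoring tasks surface first for cross-departmental review.
ranked = sorted(tasks, key=lambda t: effectiveness_score(tasks[t]))
```

With this proxy the rarely-yielding compression check sorts to the top of the review queue, which mirrors how the tool surfaced the removal candidate described in the outcomes below.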

From the first task reviewed to millions in identified savings

A few weeks into the launch, analysis that previously took months of manual log review began producing actionable findings in days. We saw three significant impact cases from the tool:

  1. $620K savings from compression check removal

    An 11-year-old gas compression check was consuming 6K man-hours annually and surfaced as a candidate for removal. The task was reviewed and removed without impact on passenger safety or delays, producing $620K in in-year labor savings.

  2. $3.3M savings from yield optimization

    Through aircraft-level analyses, a $3.3M yield optimization opportunity was identified across 64 aircraft for 2025, including task removals and execution interval extensions.

  3. 80% accuracy rate and evolving

    The AI classification model that engineers helped label reached ~80% accuracy across 600K+ log entries by launch and continues to improve as more labels are validated and updated in the tool.

What I’d carry into my next projects

  • Show uncertainty honestly. “The model flagged this as 62% likely” is more useful than a green check.
  • Tie every disagreement to a measurable artifact (a tag change, an approved package). Feedback only compounds if someone can see it shift next week’s output.

© Jeewoon Lee

Client name and information have been redacted for confidentiality.