Supplier Quality Engineer · Medical Device · Relocating to Charlotte / Raleigh, NC

Derek
Klein.

Seven-plus years in medical device quality. Class II and III. FDA and ISO 13485. I've worked as a quality engineer on both sides — driving corrective actions as the OEM, and managing customer quality requirements as the QE for component suppliers. That dual experience is what makes me effective on day one.

7+
Yrs Med Device QE
2+
Yrs as Component Supplier QE
II & III
Device Classes
Location
Relocating to Charlotte / Raleigh, NC
Education
B.S. Mechanical Engineering · NDSU

Why Derek for SQE
YOU NEED
BOTH SIDES.

Most SQEs have only seen it from one direction. I've managed suppliers as the OEM — and answered to customers as the supplier. Here's what that looks like in practice.

🏭
I Go On-Site.
Emails don't fix measurement systems. Showing up does.
A supplier couldn't pass OQ/PQ. Their dimensions were reading lower than ours — consistently. Before calling it a process problem, I called it what it was: a measurement system problem. I traveled on-site, identified the issue in their vision system setup and fixturing, and ran a joint MSA across both facilities to confirm it.

The gap wasn't in their process. It was in how they were measuring. We aligned the methods, they tuned the tooling, and they passed.
Result
Supplier OQ/PQ completed. Measurement gap eliminated. Qualification unblocked.
⚖️
Not Every NC Is a SCAR.
Risk-based decisions. Documented. Defensible.
Escalation decisions aren't a committee vote. I evaluate each nonconformance against component criticality, failure type, and supplier history — and I make the call: SNO (supplier notification only) or full SCAR (supplier corrective action request), whichever risk-based escalation path the situation warrants. Every decision is risk-justified and documented.
Result
Escalation decisions that held up through notified body audits. No findings. No surprises.
🔁
I've Worked Both Sides.
2+ years on the other side of the SCAR.
At TE Connectivity and Cretex Medical, I wasn't issuing SCARs. I was responding to them. I managed incoming complaints from OEM customers, drove corrective actions under direct customer scrutiny, and owned the quality relationship end to end as the component supplier.

When I evaluate a supplier now, I already understand how they operate — because I've worked in that role.
Result
2+ years as the QE for component suppliers to OEM customers. I build better supplier relationships because I understand what it's like to be on the receiving end.
🔗
Two Fires. One Engineer.
Customer waiting. Supplier causing it. Both managed at once.
Customer reporting coating failures. Root cause traced upstream to our coating supplier — inadequate coverage on complex geometry. Two relationships, both urgent, neither willing to wait.

I kept the customer informed throughout while driving root cause and corrective action at the supplier simultaneously. Containment at all three sites. Rework executed. Process updated. Complaint closed.
Result
Complaint closed. Supplier process corrected. No repeat occurrence.
📋
Concept to Production.
I don't hand off. I own the loop.
Quality lead for a Class II sterilization case from concept through production release. Every step owned.
Result
Product successfully taken from concept through to full production release. Quality requirements met at every stage. Customer complaints resolved with documented root cause and corrective action.
50 Open NCRs. No System.
Six weeks later: 25. Here's how.
Tasks were falling through the cracks. No ownership. No routing. No visibility. I built a Power Apps NCR tool with a colleague — automated assignment by role, real-time notifications, live management dashboard.

It didn't just organize the backlog. It surfaced repeat failures that had been invisible in disconnected records. That's what made the reduction stick.
Result
50 open NCRs to 25 in six weeks. System still in use after I left.

Career History
WHERE I'VE
BEEN.
Dec 2024 – Present
Quality Engineer II
Olympus Surgical Technologies America · Minnesota · Current
Key Result
Successfully qualified an extrusion supplier through on-site MSA alignment and validation support — resolving a dimensional nonconformance that had been blocking OQ/PQ completion
Oct 2023 – Dec 2024
Quality & Reliability Engineer
TE Connectivity · Minnesota
Key Result
Co-developed a Power Apps NCR management tool that cut open nonconformances from 50 to 25 in six weeks by automating task routing and surfacing repeat failure patterns
Sept 2022 – Oct 2023
Quality Engineer II
Cretex Medical · RMS Surgical · Minnesota
Key Result
Owned full NPI quality lifecycle from concept through production release for a Class II sterilization case, including successful resolution of a complex three-party coating defect
June 2020 – Sept 2022
Quality Engineer II
Baxter International, Inc. · Minnesota
Key Result
Led site-wide visual inspection standardization — developed consistent criteria and training across production lines, improving defect detection reliability
March 2019 – June 2020
Quality Engineer I
Abbott Laboratories · Minnesota
Key Result
Developed and executed validation protocols for Class III devices across NPD and sustaining environments — establishing the validation foundation carried throughout my career
Aug – Dec 2018
Validation Engineering Intern
Aldevron · North Dakota
Key Result
Completed IQ/OQ validation protocols for cold storage and equipment systems — first hands-on experience in regulated validation

What I Bring
THE TOOLS
I USE.
Supplier Quality
SCAR Management · Supplier Site Visits · PPAP · FAI · Control Plans · Joint MSA · Risk-Based Escalation (SNO / SCAR) · Customer Complaints
Quality Systems
CAPA · NCR Management · Risk Assessment · PFMEA · Audit Preparation · Audit Leadership · KPI Tracking
Validation
Process Validation · Test Method Validation · IQ / OQ / PQ · MSA · Minitab · Process Capability
Root Cause Tools
5 Whys · Fishbone · Fault Tree Analysis · A3 Methodology · Power Apps
Regulatory
FDA 21 CFR Part 820 · ISO 13485:2016 · ISO 14971:2019 · ISO 9001:2015 · EU MDR IIa/IIb · TÜV SÜD Notified Body · Class II Devices · Class III Devices

My Approach · Methodology
CASE
STUDIES.

Effective supplier quality isn't just about managing problems when they happen — it's about bringing the regulatory depth, technical rigor, and structured thinking that prevents them from recurring. The following examples illustrate how I approach common supplier quality challenges — the methodology, tools, and decision-making I bring to an organization.

Case Study 01 · Supplier Development & Validation
How I Approach Supplier Measurement System Qualification
Hypothetical Scenario — Supplier OQ/PQ Qualification
↗ Supplier Site Visits ↗ Risk-Based SCAR Decisions ↗ Full NPI Supplier Quality Loop

A supplier is blocked from completing OQ/PQ qualification because their internal dimensional measurements don't align with your incoming inspection results. The gap is consistent — not random. A SCAR would be initiated, but before demanding process changes, the investigation needs to answer one critical question: is this a process problem or a measurement system problem? Those require completely different corrective actions, and getting it wrong wastes everyone's time.

When the data points to a measurement system issue, the most effective path is an on-site visit — and when justified, I'll push for it. You cannot fully diagnose a measurement system problem remotely. That said, I recognize it's not always feasible, so I'd start by requesting detailed measurement system information from the supplier — fixture documentation, gauge type, measurement program details, and operator instructions — to narrow the likely causes before committing to a site visit.

On-site, I evaluate the full measurement setup — fixturing, part orientation, gauge type, operator technique, and environmental conditions — looking for anything that could introduce variability between facilities. For vision-based measurement systems specifically, I look at lighting configuration (ring light vs. backlight), edge detection thresholds, and whether the measurement program was validated for this specific part geometry. These are common sources of inter-facility gaps that are invisible until you're standing in front of the machine.

Once I've identified potential sources of variation, I design and run a joint MSA study across both facilities — typically a Crossed Gage R&R structure — to quantify repeatability and reproducibility separately. This tells me whether the gap is driven by the instrument, the operator, or the setup, and directs the corrective action precisely.

Gage R&R Study Structure
Study Type: Crossed Gage R&R — separates repeatability from reproducibility
Operators: Minimum 3 — to detect operator-to-operator variation
Parts: Representative sample spanning the process range
Replicates: Minimum 2 per operator per part — required to quantify repeatability
Acceptance Criterion: %GRR evaluated against industry-standard thresholds for the characteristic risk level
Method: Tolerance-based %GRR — evaluates capability relative to the engineering tolerance, not part variation
Analysis Tool: Minitab — Gage R&R (Crossed)

I run the study before and after any corrections — so I can quantify the improvement, not just assert it. If reproducibility is the dominant error source, the fix is fixture and setup standardization. If repeatability is the issue, the gauge itself needs attention. The data tells you where to go.
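To make the tolerance-based %GRR concrete, here is a minimal sketch of the calculation from a study's variance components. The numbers are illustrative only, not from any real study:

```python
import math

def percent_grr_tolerance(var_repeatability, var_reproducibility, tolerance):
    # Combined gage variance = repeatability (equipment) + reproducibility (operator)
    sigma_grr = math.sqrt(var_repeatability + var_reproducibility)
    # Tolerance-based %GRR: the 6-sigma gage spread as a percentage of the tolerance
    return 100.0 * (6.0 * sigma_grr) / tolerance

# Illustrative variance components and tolerance (made up for the example)
pct = percent_grr_tolerance(var_repeatability=0.0004,
                            var_reproducibility=0.0001,
                            tolerance=0.5)
# Typical guideline: under 10% acceptable, 10-30% conditional, over 30% unacceptable
```

With these example inputs the result lands in the conditional band, which is exactly the case where the repeatability/reproducibility split tells you whether to fix the gauge or the setup.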

Once the measurement system is validated and aligned between facilities, the supplier can run meaningful qualification studies. Sample sizes are determined statistically — using a confidence/reliability-based approach where sample size is driven by the required confidence that a defined proportion of the population meets specification. This is more defensible than an arbitrary fixed number.
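As an illustration of the confidence/reliability approach, the zero-failure (success-run) sample size follows directly from the binomial. This is a sketch; the C/R pairs below are examples of risk tiering, not prescribed values:

```python
import math

def success_run_sample_size(confidence, reliability):
    # Smallest n such that reliability**n <= (1 - confidence):
    # if n consecutive samples conform, we claim `reliability` at `confidence`.
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

# Illustrative risk tiers; the actual C/R pairing is set by characteristic criticality
n_high = success_run_sample_size(0.95, 0.95)  # e.g. a higher-criticality characteristic
n_low = success_run_sample_size(0.90, 0.90)   # e.g. a lower-criticality characteristic
```

The point of the statistical basis is exactly what the scenario argues: the sample size traces to a stated confidence and reliability, which is defensible in an audit where an arbitrary fixed number is not.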

Capability Analysis — Critical Characteristics
Sampling Basis: Confidence/reliability statistical method — risk-tiered by characteristic criticality
Capability Index: Ppk — actual process performance, two-sided for bilateral specifications
Why Ppk, not Cpk: Ppk measures what the process actually does — Cpk assumes centering potential
Normality Check: Anderson-Darling test in Minitab before capability analysis
OQ Purpose: Demonstrate capability under controlled conditions
PQ Purpose: Confirm sustained capability under normal production conditions
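For the capability index itself, a minimal sketch of a two-sided Ppk on illustrative data (the measurements and specification limits are made up for the example):

```python
import statistics

def ppk(measurements, lsl, usl):
    # Ppk uses the overall (long-term) standard deviation: actual performance,
    # not the within-subgroup potential that Cpk assumes.
    mu = statistics.mean(measurements)
    sigma = statistics.stdev(measurements)  # sample standard deviation
    return min((usl - mu) / (3 * sigma), (mu - lsl) / (3 * sigma))

# Illustrative dimensional data against a hypothetical 9.90-10.10 specification
sample = [9.98, 10.01, 10.02, 9.99, 10.00, 10.01, 9.99, 10.00]
index = ppk(sample, lsl=9.90, usl=10.10)
```

Taking the minimum of the two one-sided ratios is what makes the index two-sided: a well-centered process scores on spread alone, while an off-center process is penalized on its worse side.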
What This Demonstrates
I go on-site — I don't manage supplier measurement problems remotely
I diagnose before I act — MSA first, process changes only after the measurement system is confirmed valid
I use the right statistical tools — Gage R&R, Ppk, confidence/reliability sampling — and I know why each one is appropriate
I can run a supplier through a full OQ/PQ qualification cycle and defend every decision in an audit
Case Study 02 · Supplier Corrective Action
How I Approach a Supplier-Caused Customer Complaint
Hypothetical Scenario — Three-Party Quality Problem
↗ Three-Party Problem Solving ↗ Risk-Based SCAR Decisions ↗ The Supplier Perspective

A customer is reporting defects on components you supplied. The root cause traces back to your upstream supplier — not your process. A SCAR gets initiated, but the investigation needs to run in parallel with containment — not after it. You are now the bridge between two parties who both need something from you immediately: a customer who needs containment and answers, and a supplier who needs to be driven to root cause and corrective action. The defining skill here is managing both simultaneously without letting either relationship deteriorate.

The customer gets contacted immediately — before I have answers. I acknowledge the complaint, communicate what containment actions are being taken right now, and set a realistic timeline for root cause. Customers in regulated industries don't expect perfection. They expect transparency, responsiveness, and a clear corrective action path. Keeping them informed throughout the investigation is what preserves the relationship — not just the final resolution.

Containment is executed in parallel with the investigation. That means placing holds at the customer site, at our site, and at the supplier, and performing physical sorts at all three locations to establish the scope of affected product. Where an on-site sort is justified and feasible, I'll push for it — direct involvement in containment at the customer or supplier builds more confidence than a remote instruction.

I use a structured, documented approach — not gut feel. A fishbone diagram across all six cause categories (Method, Machine, Material, Man, Measurement, Environment) ensures no branch is skipped before narrowing focus. Once the most likely cause categories are identified, a 5 Why analysis drills to the systemic root cause — not just the proximate failure.

The investigation always includes a review of the supplier's inspection method. In my experience, detection gaps are as common a root cause as process failures — a supplier's process may have always produced the defect, but an inadequate inspection method prevented it from being caught. Both need to be addressed in the corrective action.

Investigation Framework
Cause Mapping: Fishbone — all 6M categories evaluated before narrowing
Root Cause Method: 5 Why — drills to systemic cause, not proximate failure
Risk Evaluation: PFMEA review — assess whether existing controls were adequate for this failure mode
Detection Gap Check: Supplier inspection method reviewed for validation, criteria clarity, and detection aids
Containment: Physical sort at customer, our site, and supplier — simultaneously, not sequentially
Corrective Action: Addresses both the process failure and the detection gap

A complaint isn't closed when the rework is done. It's closed when the corrective action has been verified effective, the PFMEA has been updated to reflect the corrected controls, and the customer has formally accepted the response. I track all of this to closure — not just the immediate fix.

What This Demonstrates
I manage customer relationships proactively — communication starts before I have answers, not after
I contain first, investigate in parallel — not sequentially
I use structured root cause tools — fishbone and 5 Why — to find systemic causes, not just proximate failures
I always check the detection gap — process fixes without inspection fixes leave the door open for recurrence
I close to verified effectiveness — not just completed rework
Case Study 03 · Quality System Development
Power Apps NCR Management System — 50 Open NCRs to 25
Quality System Development · Medical Device Manufacturing
↗ Built the System That Fixed It ↗ The Supplier Perspective

When I joined the site, we had 50 open nonconformances distributed across multiple product lines and customers, with no structured ownership, routing, or tracking system. NCR tasks were assigned manually via email, follow-up was inconsistent, and management had no real-time visibility into open items or their age. Repeat failure patterns were invisible because there was no way to correlate NCRs across disconnected records. The problem wasn't individual NCRs — it was the absence of a system to manage them.

Microsoft Power Apps · Automated Task Routing · Real-Time Notifications · Management Dashboard · Pareto Analysis · Failure Mode Trending · Root Cause Analysis

The first design decision was how to classify NCRs to enable meaningful analysis. We structured the system around two primary classification dimensions:

01
Impact Type — Process vs. Product: Each NCR was tagged as either a process nonconformance (deviation from procedure, method, or system) or a product nonconformance (out-of-spec material, dimensional failure, cosmetic defect). This distinction drove different investigation pathways and ownership routing — process NCRs routed to engineering/process owners, product NCRs routed to quality and manufacturing.
02
Segregation by Customer and Production Line: Each NCR was tagged to the specific customer and production line it originated from. This enabled Pareto analysis by customer and line — surfacing which combinations were generating the highest volume and enabling targeted corrective action rather than site-wide solutions for localized problems.
Beyond classification, the system tracked lifecycle dates on every NCR:
Initiation Timeline
Date NCR was opened vs. date the nonconformance occurred — tracked to identify reporting lag and ensure timely initiation
Containment Date
Date containment action was completed — measured against target timeframe to ensure affected product was controlled before investigation continued
Disposition Date
Date nonconforming product was formally dispositioned — tracked to prevent product sitting without a decision
Investigation Date
Date root cause investigation was completed — measured against target to ensure investigations were not delayed after containment
Closure Date
Final closure date once corrective actions were verified effective — the primary aging metric on the management dashboard
Open NCR Age
Days open per NCR — color-coded on the dashboard to surface overdue items requiring escalation or management attention

Once NCRs were flowing through the structured system, the classification data enabled trending that had been impossible before. Pareto charts were generated linking failure types to specific products, production lines, and customers — identifying the 20% of failure modes driving 80% of volume. This shifted the corrective action strategy from reactive (fix each NCR individually) to targeted (address the systemic failure modes generating the most NCRs).

Graphs were built to track failure type frequency by production line and customer over time — allowing the team to see whether corrective actions were having a measurable effect on recurrence rates. Repeat failures that previously went unrecognized became visible as patterns in the data, enabling structured root cause work on the highest-impact failure modes.
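The Pareto step described above can be sketched in a few lines. The failure-mode tags below are hypothetical, standing in for the system's classification fields:

```python
from collections import Counter

# Hypothetical NCR failure-mode tags, standing in for real classification data
ncr_failure_modes = (["dimensional"] * 14 + ["cosmetic"] * 8 +
                     ["documentation"] * 2 + ["packaging"] * 1)

def pareto(counts):
    # Sort failure modes by frequency and accumulate each mode's share of total
    total = sum(counts.values())
    running = 0
    rows = []
    for mode, n in counts.most_common():
        running += n
        rows.append((mode, n, round(100 * running / total, 1)))
    return rows

table = pareto(Counter(ncr_failure_modes))
```

Even on toy numbers the shape is the point: the top one or two failure modes carry most of the cumulative percentage, which is what justifies targeting systemic corrective action there rather than working each NCR in isolation.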

Outcomes
Open NCR count reduced from 50 to 25 within six weeks of system deployment
Automated task routing eliminated manual email assignment — ownership was clear at every stage
Fewer escalations — real-time age tracking and notifications surfaced at-risk NCRs before they required management intervention
Management gained live dashboard visibility into open NCR count, age, owner, and status across product lines
Pareto analysis identified highest-frequency failure modes — enabling targeted root cause work rather than isolated fixes
System adopted as standard practice and remained in active use after my departure

Let's Talk
YOU NEED A QE
WHO GETS IT.

I know what it takes from both sides of the relationship. Seeking SQE roles in medical device or life sciences — relocating to the Charlotte / Raleigh, NC area and available now.

✉ d.klein1196@gmail.com 📞 763-276-5418