Seven-plus years in medical device quality. Class II and III. FDA and ISO 13485. I've worked as a quality engineer on both sides — driving corrective actions as the OEM, and managing customer quality requirements as the QE for component suppliers. That dual experience is what makes me effective on day one.
Most SQEs have only seen it from one direction. I've managed suppliers as the OEM — and answered to customers as the supplier. Here's what that looks like in practice.
Effective supplier quality isn't just about managing problems when they happen — it's about bringing the regulatory depth, technical rigor, and structured thinking that prevents them from recurring. The following examples illustrate how I approach common supplier quality challenges — the methodology, tools, and decision-making I bring to an organization.
A supplier is blocked from completing OQ/PQ qualification because their internal dimensional measurements don't align with your incoming inspection results. The gap is consistent, not random. A SCAR gets initiated, but before demanding process changes, the investigation needs to answer a critical question: is this a process problem or a measurement system problem? Those require completely different corrective actions, and getting it wrong wastes everyone's time.
When the data points to a measurement system issue, the most effective path is an on-site visit — and when justified, I'll push for it. You cannot fully diagnose a measurement system problem remotely. That said, I recognize it's not always feasible, so I'd start by requesting detailed measurement system information from the supplier — fixture documentation, gauge type, measurement program details, and operator instructions — to narrow the likely causes before committing to a site visit.
On-site, I evaluate the full measurement setup — fixturing, part orientation, gauge type, operator technique, and environmental conditions — looking for anything that could introduce variability between facilities. For vision-based measurement systems specifically, I look at lighting configuration (ring light vs. backlight), edge detection thresholds, and whether the measurement program was validated for this specific part geometry. These are common sources of inter-facility gaps that are invisible until you're standing in front of the machine.
Once I've identified potential sources of variation, I design and run a joint MSA study across both facilities — typically a Crossed Gage R&R structure — to quantify repeatability and reproducibility separately. This tells me whether the gap is driven by the instrument, the operator, or the setup, and directs the corrective action precisely.
I run the study before and after any corrections — so I can quantify the improvement, not just assert it. If reproducibility is the dominant error source, the fix is fixture and setup standardization. If repeatability is the issue, the gauge itself needs attention. The data tells you where to go.
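To make that decomposition concrete, here is a minimal sketch of the crossed Gage R&R variance analysis, assuming long-format study data with hypothetical columns `part`, `operator`, and `value`; treating each facility as a level of the "operator" factor is what turns this into an inter-facility reproducibility study. The function name and column names are my illustration, not a specific tool's API.

```python
# A minimal sketch of a crossed Gage R&R variance decomposition (AIAG-style),
# assuming long-format data with hypothetical columns: part, operator, value.
import statsmodels.api as sm
from statsmodels.formula.api import ols

def gage_rr(df, n_parts, n_operators, n_replicates):
    """Estimate variance components from a two-way crossed ANOVA with interaction."""
    model = ols("value ~ C(part) + C(operator) + C(part):C(operator)",
                data=df).fit()
    aov = sm.stats.anova_lm(model, typ=2)
    ms = aov["sum_sq"] / aov["df"]                      # mean squares

    repeatability = ms["Residual"]                      # equipment variation (EV)
    interaction   = max((ms["C(part):C(operator)"] - ms["Residual"])
                        / n_replicates, 0.0)            # clamp negative estimates to 0
    operator      = max((ms["C(operator)"] - ms["C(part):C(operator)"])
                        / (n_parts * n_replicates), 0.0)
    part          = max((ms["C(part)"] - ms["C(part):C(operator)"])
                        / (n_operators * n_replicates), 0.0)

    reproducibility = operator + interaction            # appraiser variation (AV)
    grr = repeatability + reproducibility
    total = grr + part
    return {"%EV":  100 * (repeatability   / total) ** 0.5,
            "%AV":  100 * (reproducibility / total) ** 0.5,
            "%GRR": 100 * (grr             / total) ** 0.5}
```

Read against the logic above: a dominant %AV points to fixturing and setup; a dominant %EV points to the gauge. As a general rule of thumb, %GRR under 10% is acceptable and over 30% means the measurement system can't support the qualification at all.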
Once the measurement system is validated and aligned between facilities, the supplier can run meaningful qualification studies. Sample sizes are determined statistically, using a confidence/reliability-based approach: the sample size is driven by the required confidence that a defined proportion of the population meets specification. This is more defensible than an arbitrary fixed number.
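As an illustration of that approach, here is the zero-failure (success-run) form of the calculation, one common confidence/reliability method; binomial plans that allow failures extend the same idea.

```python
# A minimal sketch of the confidence/reliability sample-size calculation
# (zero-failure "success-run" form): smallest n with R**n <= 1 - C,
# i.e. n >= ln(1 - C) / ln(R).
import math

def success_run_n(confidence: float, reliability: float) -> int:
    """n conforming samples, zero failures, demonstrate the stated
    reliability at the stated confidence."""
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

# e.g. 95% confidence that 95% of the population conforms -> n = 59
print(success_run_n(0.95, 0.95))   # 59
```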
A customer is reporting defects on components you supplied. The root cause traces back to your upstream supplier — not your process. A SCAR gets initiated, but the investigation needs to run in parallel with containment — not after it. You are now the bridge between two parties who both need something from you immediately: a customer who needs containment and answers, and a supplier who needs to be driven to root cause and corrective action. The defining skill here is managing both simultaneously without letting either relationship deteriorate.
The customer gets contacted immediately — before I have answers. I acknowledge the complaint, communicate what containment actions are being taken right now, and set a realistic timeline for root cause. Customers in regulated industries don't expect perfection. They expect transparency, responsiveness, and a clear corrective action path. Keeping them informed throughout the investigation is what preserves the relationship — not just the final resolution.
Containment is executed in parallel with the investigation. That means placing holds at the customer site, at our site, and at the supplier, and performing physical sorts at all three locations to establish the scope of affected product. Where an on-site sort is justified and feasible, I'll push for it — direct involvement in containment at the customer or supplier builds more confidence than a remote instruction.
I use a structured, documented approach — not gut feel. A fishbone diagram across all six cause categories (Method, Machine, Material, Man, Measurement, Environment) ensures no branch is skipped before narrowing focus. Once the most likely cause categories are identified, a 5 Why analysis drills to the systemic root cause — not just the proximate failure.
The investigation always includes a review of the supplier's inspection method. In my experience, detection gaps are as common a root cause as process failures — a supplier's process may have always produced the defect, but an inadequate inspection method prevented it from being caught. Both need to be addressed in the corrective action.
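To make the distinction between proximate failure and systemic root cause concrete, here is a hypothetical 5 Why chain for a detection-gap escape like the one described above; every line is invented for illustration.

```python
# A hypothetical 5 Why chain (all content invented for illustration),
# drilling from the proximate failure to the systemic root cause:
five_whys = [
    "Defective components reached the customer",                  # proximate failure
    "Why? Final inspection did not detect the defect",
    "Why? The inspection method did not cover the affected feature",
    "Why? The method was never updated after a process change",
    "Why? No procedure links process changes to inspection method review",
]                                                                 # systemic root cause
```

Note that the chain ends at the system, not the operator; a corrective action aimed at the last "why" prevents the whole class of escapes, not just this one.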
A complaint isn't closed when the rework is done. It's closed when the corrective action has been verified effective, the PFMEA has been updated to reflect the corrected controls, and the customer has formally accepted the response. I track all of this to closure — not just the immediate fix.
When I joined the site, we had 50 open nonconformances distributed across multiple product lines and customers, with no structured ownership, routing, or tracking system. NCR tasks were assigned manually via email, follow-up was inconsistent, and management had no real-time visibility into open items or their age. Repeat failure patterns were invisible because there was no way to correlate NCRs across disconnected records. The problem wasn't individual NCRs; it was the absence of a system to manage them.
The first design decision was how to classify NCRs to enable meaningful analysis. We structured the system around two primary classification dimensions: the failure type (what went wrong) and the source (the product, production line, and customer the nonconformance traced to). Every NCR coded on both dimensions becomes a data point that can be correlated across records.
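As a sketch of what that looks like as a record structure (field names are hypothetical; the actual system was built on the site's existing tools):

```python
# A sketch of the NCR record implied by the two classification dimensions.
# Field names are hypothetical, not the actual system's schema.
from dataclasses import dataclass
from datetime import date

@dataclass
class NCR:
    ncr_id: str
    opened: date
    failure_type: str      # dimension 1: what went wrong
    product: str           # dimension 2: where it occurred...
    production_line: str   # ...on which line...
    customer: str          # ...for which customer
    owner: str             # explicit ownership replaces email-based routing
    status: str            # e.g. open / contained / CA verified / closed

    @property
    def age_days(self) -> int:
        """Age in days, giving the real-time aging visibility management lacked."""
        return (date.today() - self.opened).days
```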
Once NCRs were flowing through the structured system, the classification data enabled trending that had been impossible before. Pareto charts were generated linking failure types to specific products, production lines, and customers — identifying the 20% of failure modes driving 80% of volume. This shifted the corrective action strategy from reactive (fix each NCR individually) to targeted (address the systemic failure modes generating the most NCRs).
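A minimal sketch of that Pareto cut, assuming the NCR log is loaded into a pandas DataFrame with the hypothetical fields sketched above:

```python
# A minimal sketch of the Pareto analysis over the NCR log.
import pandas as pd

def pareto(ncr_log: pd.DataFrame, by: str = "failure_type") -> pd.DataFrame:
    """Rank categories by NCR count and flag the vital few driving
    roughly 80% of cumulative volume."""
    out = ncr_log[by].value_counts().rename("count").to_frame()
    out["cum_pct"] = 100 * out["count"].cumsum() / out["count"].sum()
    out["vital_few"] = out["cum_pct"] <= 80
    return out

# The same call with by="product_line" or by="customer" reproduces the
# line-level and customer-level views described above.
```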
Graphs were built to track failure type frequency by production line and customer over time — allowing the team to see whether corrective actions were having a measurable effect on recurrence rates. Repeat failures that previously went unrecognized became visible as patterns in the data, enabling structured root cause work on the highest-impact failure modes.
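A sketch of that recurrence trend, assuming the record's `opened` field is a datetime column; a corrective action that works shows up as a sustained drop in that failure type's monthly count, not just a quiet week.

```python
# A sketch of the recurrence trend, assuming an "opened" datetime column.
import pandas as pd

def monthly_trend(ncr_log: pd.DataFrame) -> pd.DataFrame:
    """NCR counts per failure type per month: rows are months,
    columns are failure types."""
    return (ncr_log
            .groupby([pd.Grouper(key="opened", freq="MS"), "failure_type"])
            .size()
            .unstack(fill_value=0))
```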
I know what it takes from both sides of the relationship. Seeking SQE roles in medical device or life sciences in the Charlotte / Raleigh, NC area. Available now and relocating to the area.