Completed chips coming in from the foundry are subject to a battery of tests. For those destined for critical systems in automobiles, these tests are particularly thorough and can add 5 to 10 percent to the cost of a chip. But do you really need to do every single test?
Engineers at NXP have developed a machine-learning algorithm that learns the patterns of test results and figures out the subset of tests that are really needed and those that they could safely do without. The NXP engineers described the approach at the IEEE International Test Conference in San Diego last week.
NXP makes a wide variety of chips with complex circuitry and advanced chip-making technology, including inverters for EV motors, audio chips for consumer electronics, and key-fob transponders to secure your car. These chips are tested with different signals at different voltages and at different temperatures in a test process called continue-on-fail. In that process, chips are tested in groups and are all subjected to the full battery, even if some parts fail some of the tests along the way.
“We have to ensure stringent quality requirements in the field, so we have to do a lot of testing,” says Mehul Shroff, an NXP Fellow who led the research. But with much of the actual manufacturing and packaging of chips outsourced to other companies, testing is one of the few knobs most chip companies can turn to control costs. “What we were trying to do here is come up with a way to reduce test cost in a way that was statistically rigorous and gave us good results without compromising field quality.”
A Test Recommender System
Shroff says the problem has certain similarities to the machine learning-based recommender systems used in e-commerce. “We took the concept from the retail world, where a data analyst can look at receipts and see what items people are buying together,” he says. “Instead of a transaction receipt, we have a unique part identifier, and instead of the items that a consumer would purchase, we have a list of failing tests.”
The NXP algorithm then discovered which tests fail together. Of course, what’s at stake in whether a buyer of bread will also want butter is quite different from whether a test of an automotive part at a particular temperature means other tests don’t need to be done. “We need to have 100 percent or near 100 percent certainty,” Shroff says. “We operate in a different space with respect to statistical rigor compared to the retail world, but it’s borrowing the same concept.”
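The market-basket analogy Shroff describes can be sketched as a simple association pass over continue-on-fail records: for each pair of tests, check how often a failure of one implies a failure of the other. This is an illustrative toy, not NXP’s actual algorithm; the test names and data below are made up.

```python
from collections import defaultdict

# Hypothetical continue-on-fail records: the set of failing test IDs
# observed for each part (illustrative data only).
fail_records = [
    {"t_volt_low", "t_volt_high", "t_temp_hot"},
    {"t_volt_low", "t_volt_high"},
    {"t_temp_hot", "t_leakage"},
    {"t_volt_low", "t_volt_high", "t_leakage"},
    set(),  # a part that passed every test
]

def implied_tests(records, confidence=1.0):
    """Find ordered pairs (a, b) where a part failing test a also failed
    test b with at least the given confidence.

    If failing a (nearly) always means failing b, then b adds little
    information and becomes a candidate for removal, pending the kind of
    engineering review Shroff insists on.
    """
    fail_counts = defaultdict(int)   # how often each test fails
    pair_counts = defaultdict(int)   # how often two tests fail together
    for failed in records:
        for a in failed:
            fail_counts[a] += 1
            for b in failed:
                if a != b:
                    pair_counts[(a, b)] += 1
    return {
        (a, b): pair_counts[(a, b)] / fail_counts[a]
        for (a, b) in pair_counts
        if pair_counts[(a, b)] / fail_counts[a] >= confidence
    }

rules = implied_tests(fail_records)
```

With the toy data, the low- and high-voltage tests always fail together, so each implies the other at full confidence, while the thermal and leakage tests co-fail only sometimes and survive the cut. A production version would demand the near-100-percent certainty Shroff mentions, plus the engineering sanity check before any test is actually dropped.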
As rigorous as the results are, Shroff says that they shouldn’t be relied upon on their own. You have to “make sure it makes sense from an engineering perspective and that you can understand it in technical terms,” he says. “Only then, remove the test.”
Shroff and his colleagues analyzed data obtained from testing seven microcontrollers and applications processors built using advanced chipmaking processes. Depending on which chip was involved, they were subject to between 41 and 164 tests, and the algorithm was able to recommend removing 42 to 74 percent of those tests. Extending the analysis to data from other types of chips led to an even wider range of opportunities to trim testing.
The algorithm is a pilot project for now, and the NXP team is looking to expand it to a broader set of parts, reduce the computational overhead, and make it easier to use.