Making predictive analog/RF alternate test strategy independent of training set size
Abstract
This paper presents an alternate test implementation based on model redundancy that achieves lower prediction errors than a classical implementation, even when training is performed over a small set of devices. The idea is to build several different regression models for each specification during the training phase, and then to verify the consistency of their predictions during the production testing phase. In case of divergent predictions, devices are removed from the alternate test tier and directed to a second tier where further testing may be applied. The approach is illustrated on a real case study that employs production test data from an RF power amplifier. Results show that, in contrast to the classical implementation, where prediction accuracy degrades as the training set size is reduced, the proposed approach preserves prediction accuracy independently of the training set size, while only a very small number of devices are directed to the second tier of the test flow.
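The redundancy idea described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the data are synthetic, and ordinary least squares and ridge regression stand in for whatever redundant model families the paper uses. Two regression models are trained to predict the same specification from low-cost alternate-test measurements; at test time, a device is accepted in the first tier only if the two predictions agree within a tolerance, and is otherwise routed to the second tier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: low-cost alternate-test measurements X
# and a measured specification y for a small number of devices.
n_train = 30
X = rng.normal(size=(n_train, 3))
true_w = np.array([1.5, -2.0, 0.7])          # hypothetical ground truth
y = X @ true_w + 0.05 * rng.normal(size=n_train)

# Append an intercept column.
X1 = np.c_[X, np.ones(n_train)]

# Redundant model 1: ordinary least squares.
w_ols, *_ = np.linalg.lstsq(X1, y, rcond=None)

# Redundant model 2: ridge regression, a deliberately different
# regression family trained on the same data.
lam = 1.0
w_ridge = np.linalg.solve(X1.T @ X1 + lam * np.eye(4), X1.T @ y)

def two_tier_test(x, tol=0.2):
    """Predict the specification for one device.

    Returns (prediction, tier): tier 1 if the redundant models agree
    within tol (alternate test accepted), tier 2 otherwise (device
    sent on for further conventional testing, no prediction issued).
    """
    xa = np.append(x, 1.0)
    p1, p2 = xa @ w_ols, xa @ w_ridge
    if abs(p1 - p2) <= tol:
        return 0.5 * (p1 + p2), 1
    return None, 2

# Production phase: route one incoming device.
pred, tier = two_tier_test(rng.normal(size=3))
```

The tolerance `tol` controls the trade-off the abstract alludes to: a tighter tolerance improves the accuracy of tier-1 predictions at the cost of sending more devices to the second tier.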