Projects

Description of funded and associated projects.

Project descriptions

The following projects are currently being conducted within the Graduate School. Click on a project's title to learn more about the project and the investigators involved.

Information about open positions in the Graduate School can be found under Open Positions below.

Open Positions

Supervisors:

Prof. Dr. Jens Anders
Institut für Intelligente Sensorik und
Theoretische Elektrotechnik

Pfaffenwaldring 47

D-70569 Stuttgart

Prof. Dr. Ilia Polian
Institut für Technische Informatik
Abt. Hardwareorientierte Informatik

Pfaffenwaldring 47

D-70569 Stuttgart

Co-supervisor: Dr. Matthias Sauer, Advantest

Test quality, defined as the absence of test escapes (defective circuits that have passed post-manufacturing test), is the ultimate target of testing. Customers apply system-level test (SLT) to circuits that have already been tested post-fabrication and reportedly identify test escapes. The objective of this project is to understand the nature of such hard-to-detect failures. Establishing a better understanding of SLT and making it more effective and efficient could drastically improve the economics of circuit design and manufacturing.

Several theories exist about the types of failures that cause SLT-unique fails, i.e., fails that are missed by post-manufacturing tests:

  1. Complex defect mechanisms with subtle parametric or intermittent manifestations that are not adequately covered by standard fault models. The test coverage of such defects can be improved by the use of more advanced defect-oriented models.
  2. Systematic ATPG coverage holes: Insufficient coverage of structures such as clock domain boundaries, asynchronous or analog interfaces, clock distribution networks and other sources of unknown values. Standard automatic test pattern generation (ATPG) tools tend to classify faults in such structures as “untestable” even though they can manifest themselves during normal operation of the device.
  3. Marginal defects exposed only during system-level interactions: Subtle defects, in particular related to timing, in “uncore logic” of complex multicore systems on chip (SoCs), i.e., logic that is part of the SoC but does not belong to a core.

The specific objectives of the project are as follows:

  • To establish a theoretical and systematic understanding of SLT-unique fails, identifying specific mechanisms leading to such fails and their manifestation conditions (e.g., hot-spots).
  • To create an experimentation environment where SLT-unique fails can be reproduced and practically investigated.
  • To explore solutions that prevent or address SLT-unique fails. These can include guidelines for “clean” circuit designs that do not give rise to coverage holes (e.g., use of well-defined asynchronous protocols for clock domain crossings); design for testability (DFT) methods that address specific weaknesses known to the designer or DFT engineer; extended ATPG methods that can detect defects missed by regular ATPG, or methods to create effective and efficient SLTs that specifically target SLT-unique failure mechanisms.
Task 2: Evaluation and Experimentation

The project is structured into three tasks according to the three above-mentioned scientific objectives. The more theoretical Tasks 1 and 3 deal with SLT-unique fails and solutions to counteract them, respectively. Task 2 will establish a complete evaluation and experimentation flow that can be used for practical demonstration of SLT-unique fails and studies of applicable solutions. Figure 1 summarizes the planned project structure and the interaction of its theoretical (red) and practical (blue) parts. An SLT evaluation platform will be created, and SLT-unique fail conditions from Task 1 will be incorporated into this platform. Based on the outcome of experiments, solutions for the SLT-unique fails from Task 3 (e.g., addition of special DFT logic) will be incorporated into the SLT evaluation platform, thus closing the loop.

Supervisors:

Prof. Dr. Dirk Pflüger
Institut für Parallele und
Verteilte Systeme

Universitätsstraße 38

D-70569 Stuttgart

Co-supervisor: Jochen Rivoir, Advantest

In post-silicon validation, devices under test are examined to determine tuning parameters and to detect critical conditions. This project targets the generation of test cases for semiconductor test. The goal is to identify a representative set of test cases in a high-dimensional space. We have to deal with up to a few hundred input parameters (conditions and tuning parameters) and several output parameters. Depending on the context, one refers to parameters, variables, or dimensions.

Conventionally, test cases are either generated based on expert knowledge (Shmoo tests) or by randomized techniques such as Monte Carlo (MC) methods. The latter explore the whole parameter space uniformly and randomly, possibly restricting the parameter space to valid parameter combinations. Shmoo tests, in contrast, are informed explorations of the feature space. They start from a stable operation mode that is provided by an expert, and explore the surrounding input space. However, it is difficult even for experts to navigate and explore a high-dimensional parameter space.

In general, a full exploration of the input space is infeasible, even if each dimension can be assumed to be discrete and to contain only a few values. This is due to the curse of dimensionality: the exponential dependency of the overall number of combinations on the number of parameters. Given that we have tens or even hundreds of parameters, any grid-based strategy is bound to fail.
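To make this blow-up concrete, here is a tiny Python snippet; the discretization to five values per parameter is an assumption purely for illustration:

    # Illustrative only: assume each parameter is discretized to 5 values.
    values_per_dim = 5
    for num_params in (3, 10, 50, 100):
        grid_points = values_per_dim ** num_params
        print(f"{num_params:3d} parameters -> {grid_points:.3e} grid combinations")

Already at 100 parameters, the grid contains roughly 8 x 10^69 combinations, far more tests than could ever be applied to a device.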

As the device under test has to undergo a (costly) test for each parameter combination, the aim is to reduce the number of test samples, and thus the overall time for post-silicon testing, in an adaptive, self-learning way. We therefore aim to examine adaptive sampling techniques that are used in the context of simulations and to transfer them to post-silicon validation.

We aim to establish an exploration-reduction-truncation cycle to generate meaningful test sets. With that, we aim to combine random sampling strategies, statistical analysis, and a truncation of the exploration space for the next iteration. The resulting methodology will lead to more meaningful test cases, and the information from the truncation of the feature space can be used to estimate the contribution of input parameters with respect to output/target variables. A rough sketch of such a cycle is given below.

The project is closely related to the projects P6 "Deep learning based variable selection for post-silicon validation" and P2 "Visual Analytics for Post-Silicon Validation". P2 will use the results and the gathered knowledge about the input-output relationship to visualize and possibly guide the generation of further test cases in interaction with human experts. P6 will benefit from meaningful test sets generated in this project to employ deep learning methods that learn variable selection from test data, while our project can benefit from information about learned sets of input variables.
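The following Python sketch illustrates such an exploration-reduction-truncation cycle under strong simplifications: the measurement function is a cheap invented stand-in for one costly post-silicon test, and the reduction simply keeps the samples with the most extreme outputs, which is only one of many conceivable statistical criteria.

    import numpy as np

    def explore_reduce_truncate(measure, bounds, rng,
                                n_samples=200, n_iters=4, keep=0.25):
        # measure: maps one parameter vector to a scalar output
        #          (stand-in for one costly post-silicon test).
        # bounds:  (d, 2) array of per-parameter [low, high] ranges.
        bounds = np.asarray(bounds, dtype=float)
        for _ in range(n_iters):
            # Exploration: uniform random sampling inside the current box.
            lo, hi = bounds[:, 0], bounds[:, 1]
            x = lo + (hi - lo) * rng.random((n_samples, len(bounds)))
            y = np.array([measure(xi) for xi in x])
            # Reduction: retain the samples with the most extreme outputs.
            idx = np.argsort(y)[-int(keep * n_samples):]
            # Truncation: shrink the search box around the retained samples.
            bounds = np.stack([x[idx].min(axis=0), x[idx].max(axis=0)], axis=1)
        return bounds

    # Toy usage: only parameters 0 and 3 influence the output.
    rng = np.random.default_rng(0)
    f = lambda p: -(p[0] - 0.7) ** 2 - (p[3] - 0.2) ** 2
    print(explore_reduce_truncate(f, [[0.0, 1.0]] * 5, rng))

After a few iterations, the box collapses around the relevant parameters (here 0 and 3) while the irrelevant dimensions remain wide; the amount of shrinkage per dimension is exactly the kind of information that can be used to estimate parameter contributions.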

Supervisors:

Prof. Dr. Ralf Küsters
Institut für Informationssicherheit

Universitätsstraße 38

D-70569 Stuttgart

Co-supervisor: Dr. Matthias Sauer, Advantest

Abstract

Semiconductor testing plays an important role in the semiconductor manufacturing process. The tests not only ensure the quality of individual chips; the data obtained during the tests is also used to improve the manufacturing process itself. Manufacturers often use third-party services to perform the tests and evaluate the test data, as this requires specialized expertise. Since the test data as well as the models and methods used to evaluate it, such as machine learning models, are typically highly sensitive trade secrets, manufacturers are reluctant to share their test data with third-party evaluation services, and the services, in turn, do not want to reveal their evaluation models and methods.

The idea of the project is to use, further develop, and adapt secure cryptographic protocols to protect these digital assets in a globalized and distributed semiconductor manufacturing flow. A toy example of one basic building block is sketched below.
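To give a flavor of what such protocols build on, the following Python sketch shows additive secret sharing: the manufacturer splits each measurement into random-looking shares for two non-colluding evaluation servers, which can still compute an aggregate. Evaluating actual machine-learning models this way requires full secure multi-party computation with considerably more machinery (e.g., secure multiplication); the scenario below is an illustration, not the project's protocol.

    import secrets

    P = 2**61 - 1  # prime modulus; all arithmetic is done mod P

    def share(value):
        # Split an integer into two additive shares mod P.
        r = secrets.randbelow(P)
        return r, (value - r) % P

    def reconstruct(s0, s1):
        return (s0 + s1) % P

    # The manufacturer secret-shares three test measurements between two
    # non-colluding servers; each server sees only random-looking shares.
    measurements = [42, 17, 99]
    shares = [share(m) for m in measurements]

    # Each server sums its own shares locally; only the aggregate (a plain
    # sum, standing in for a simple statistic) is ever reconstructed.
    server0_sum = sum(s0 for s0, _ in shares) % P
    server1_sum = sum(s1 for _, s1 in shares) % P
    assert reconstruct(server0_sum, server1_sum) == sum(measurements) % P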

Supervisors:

Prof. Dr. Steffen Becker
Institut für Softwaretechnologie
Abt. Zuverlässige Softwaresysteme

Universitätsstraße 38

D-70569 Stuttgart

Prof. Dr. Ilia Polian
Institut für Technische Informatik
Abt. Hardwareorientierte Informatik

Pfaffenwaldring 47

D-70569 Stuttgart

Prof. Dr. Stefan Wagner
Institut für Softwaretechnologie
Abt. Software Engineering

Universitätsstraße 38

D-70569 Stuttgart

Co-supervisor: Dr. Matthias Sauer, Advantest

Description of the problem tackled

Modern Systems-on-Chip (SoCs) are extremely powerful but also highly complex products. They incorporate heterogeneous components, such as multiple processor cores, on-chip memories, application-specific digital circuitry, or input-output interfaces. System-Level Test (SLT), where actual application software is run on the circuit, has emerged as an important additional test method for such SoCs. SLT can be run by the circuit manufacturer in the final stage of production, or by the buyer of the circuit (e.g., an automotive Tier-1 supplier who will integrate the circuit into a product) as part of incoming quality control. SLT can also be used during the post-silicon characterization phase, where a circuit’s extra-functional properties are measured on a population of several hundred or thousand “first-silicon” circuits.

To facilitate test and characterization, many SoCs include an infrastructure of sensors and monitors that collect data about the actual values of parameters and make it available to the test program and to the automatic test equipment. We refer to this infrastructure as Design-For-X or DFX, where “X” stands for “testability”, “resilience”, “yield”, “diagnosis”, and other related terms. DFX infrastructure is also used to provide self-awareness to an SoC when it is deployed in the field after it has passed the test.

The proposed project focuses on the generation of SLT programs with desired characteristics. Its main goal is to provide automated methods to produce SLT programs with desired extra-functional properties, such as power consumption, based on model-driven performance stress test generation from high-level software architecture models. It will also make first steps towards coverage metrics for SLT, leveraging the latest results from the field of integration testing. The project will incorporate self-awareness of the SoC under test, achieved through on-chip sensors organized in a DFX infrastructure and coordinated and assisted by the ATE during test application. For example, an SLT program may aim to achieve a certain power consumption and, to this end, apply stress tests while monitoring a current sensor, stopping once the desired value has been reached. A minimal sketch of such a sensor-guided loop is given below.
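The following Python sketch illustrates this control loop. The ToySoC class and its methods are invented stand-ins for the DUT and its DFX/sensor interface, not a real ATE or DFX API.

    import random

    class ToySoC:
        # Toy stand-in for a DUT with an on-chip current sensor.
        def __init__(self):
            self.current = 0.0

        def run_stress_block(self, intensity):
            # Toy model: drawn current grows with stress intensity, plus noise.
            self.current = 0.1 * intensity + random.uniform(0.0, 0.05)

        def read_current_sensor(self):
            return self.current

    def ramp_to_target_power(soc, target_amps, max_steps=100):
        # Apply increasing stress until the sensed current reaches the target.
        for intensity in range(1, max_steps + 1):
            soc.run_stress_block(intensity)
            if soc.read_current_sensor() >= target_amps:
                return intensity   # desired operating point reached
        return None                # target not reachable within the budget

    print(ramp_to_target_power(ToySoC(), target_amps=1.5))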

The project focuses on three methodical challenges:

Parametric property management: capture and control the parametric, or extra-functional (timing and power), behavior of an SoC and its components at the software level. We plan to provide compact models of software-level artifacts (C-level instructions or basic blocks) and to consider the SLT program as a composition of such software-level artifacts associated with parametric properties (a toy sketch of this view follows after the three challenges).

Coverage metrics for SLT: SLT program generation at the SoC level bears similarity to black-box integration testing of complex systems such as automobiles. Such systems consist of a large number of components that are highly complex on their own and cannot possibly be modeled in full detail. We will conduct initial evaluations of “graybox” models with partial information, and of coverage concepts recently introduced and applied in the context of black-box integration testing.

Harnessing and supporting the device’s self-awareness: To achieve good control of extra-functional properties, we will leverage the self-awareness features which many of today’s SoCs provide, namely on-chip sensors and monitors located in different SoC components (red circles in the figure), accessible through the SoC’s DFX infrastructure.
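As a toy illustration of the first challenge, an SLT program can be modeled as a composition of software-level artifacts annotated with parametric properties. All names and numbers below are invented for the example; real compact models would be calibrated against measurements.

    from dataclasses import dataclass

    @dataclass
    class BasicBlock:
        # Compact model of one software-level artifact.
        name: str
        cycles: int          # timing estimate for one execution
        avg_power_mw: float  # average power while the block executes

    def compose(blocks):
        # Estimate parametric properties of an SLT program built from blocks.
        total_cycles = sum(b.cycles for b in blocks)
        # Time-weighted average power of the composed program.
        avg_power = sum(b.cycles * b.avg_power_mw for b in blocks) / total_cycles
        return total_cycles, avg_power

    program = [BasicBlock("mem_stress", 1200, 340.0),
               BasicBlock("alu_loop", 800, 510.0),
               BasicBlock("io_burst", 400, 220.0)]
    print(compose(program))  # -> (2400, ~376.7 mW average)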

Supervisor:

Prof. Dr.-Ing. Bin Yang
Institute of Signal Processing
and System Theory

Pfaffenwaldring 47

D-70569 Stuttgart

Co-supervisors: Jochen Rivoir and Raphael Latty, Advantest

Post-silicon validation deals with the testing of devices under test (DUT) in order to find and correct design bugs before mass production. To do this, up to several hundred input variables or features are recorded. They characterize the input stimuli to the DUT, various tuning parameters, and environmental conditions. At the same time, some target variables are calculated from the responses of the DUT. By studying the relationship between the input and target variables, design bugs and unexpected effects have to be detected, localized, and mitigated. Today this is still done manually by experienced engineers. However, several hundred input variables are too many for visualization and manual inspection. Since a single target variable is typically related to only a few input variables, the selection of relevant input variables for a specific target variable becomes a crucial problem.

Numerous traditional methods have been developed for the task of feature or variable selection, e.g., wrapper, filter, and embedded methods. They have been successful in some applications and failed in others. The goal of this project is to take a fresh look at the old problem of variable selection from the new perspective of deep learning. Deep learning in this context does not mean a deep and large neural network as a black box for everything. Rather, we mean the growing number of recent and successful ideas and architectures developed for deep learning, some of which have a strong relationship to the problem of variable selection. We aim to adapt these ideas to variable selection and to develop new approaches which will hopefully outperform the traditional methods. Of course, a combination of the traditional methods and new approaches is also highly desirable.

Two first ideas of deep learning based variable selection to be studied are:

  • Attention-based variable selection. An attention network is used to compute an attention vector containing weights to select or deselect the individual input variables. Here, the selection is treated as a binary classification problem (select or deselect), and the corresponding weights denote the probabilities of selection. The weighted input vector is the input for a second evaluation network, which evaluates its ability to predict the desired target variable.
  • Concrete autoencoder (AE). The AE is a well-known architecture for learning a nonlinear hidden lower-dimensional representation of a given input; the technique is a nonlinear extension of conventional principal component analysis (PCA). The bridge between AEs and variable selection is the use of a Concrete distribution, a continuous relaxation of discrete random variables. With it, the continuous-valued weight matrix of the encoder can be trained by normal backpropagation. During inference, each column of the weight matrix approaches a one-hot vector in the zero-temperature limit and thus selects a single feature (see the sketch after this list).
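To make the Concrete-distribution mechanism tangible, here is a minimal numpy sketch of the forward pass of such a selection layer. The shapes, names, and row orientation of the weight matrix are chosen for readability; a real concrete AE would train the logits with backpropagation in an autodiff framework and anneal the temperature during training.

    import numpy as np

    def concrete_select(x, logits, temperature, rng):
        # x:      (d,) vector of candidate input variables
        # logits: (k, d) trainable selection logits, one row per selected slot
        u = rng.uniform(1e-9, 1.0, size=logits.shape)   # Gumbel noise source
        z = (logits - np.log(-np.log(u))) / temperature
        z -= z.max(axis=1, keepdims=True)               # stable softmax
        weights = np.exp(z)
        weights /= weights.sum(axis=1, keepdims=True)
        return weights @ x                              # (k,) relaxed selection

    rng = np.random.default_rng(0)
    d, k = 10, 3
    logits, x = rng.normal(size=(k, d)), rng.normal(size=d)
    for t in (1.0, 0.1, 0.01):  # lower temperature -> rows approach one-hot
        print(t, concrete_select(x, logits, t, rng))

As the temperature approaches zero, each row's softmax concentrates on a single input variable, which is exactly the selection behavior described above.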

This project is closely related to two other projects: P2 “Visual analytics for post-silicon validation” and P3 “Self-learning test case generation for post-silicon validation”. P2 uses the results from this project to visualize the relationship between the target variable and the selected input variables. P3 employs self-learning methods to generate more test cases which will improve the variable selection. On the other hand, the results of variable selection will provide an improved understanding of the input-target-relationship to guide the self-learning test case generation.

Supervisor:

Prof. Dr.-Ing. Ingmar Kallfass
Institute of Robust Power
Semiconductor Systems

Pfaffenwaldring 47

D-70569 Stuttgart

Co-supervisor: Dr. Christian Volmer, Advantest

This project addresses the research area “Advanced design methodologies for testing next generation and beyond RF devices” by proposing a miniaturized and multi-functional frequency extension into the high millimeter-wave frequency range for RF testing. One goal of this project is to cover the frequency range from 20 to 86 GHz, i.e., from K-band to E-band, to enable testing for applications such as the 24 GHz ISM band, important frequency ranges like 77-79 GHz for automotive radar, and 81-86 GHz fixed wireless point-to-point links, using high-speed semiconductor technologies (see Figure 1).

Figure 1: Frequency range of the INWAVE project

The transceiver module is designed to be coupled to a 20 GHz RF base card from Advantest. The project covers the most challenging components of an entire transceiver chain including RF up-conversion, RF multi-pole switching, RF adaptive power amplification, RF filtering and LO multiplication over the entire frequency range (see Figure 2).

Figure 2: Signal processing and transceiver chain

In order to meet the challenging requirements of a wide frequency range, high linearity, and high output power, and at the same time achieve a high level of system integration, it will probably become necessary to design and combine different parts of the system in different semiconductor technologies, e.g., RF-CMOS, GaN, InP, or GaAs.

Therefore, this project is divided into two main phases. In the first project phase, a thorough investigation of all commercially available technologies will be carried out. After that, a line-up will be created to allow a detailed simulation of each sub-block. Two approaches will be considered: a hybrid version that selects the most suitable semiconductor technology for each element of the design, and a compact system-on-chip (SoC) version based on a state-of-the-art CMOS process (e.g., 22nm FD-SOI). After this first phase, a decision about the target technology will be made together with Advantest to initiate the second phase of the project.

At the beginning of the second phase, the circuit design and layout, together with a full-3D simulation of the whole system, will be performed. In order to optimize the design, three fabrication cycles are planned. Depending on the chosen packaging technology, the assembly will also be designed. Finally, the project concludes with the testing and measurement of the developed system.

Supervisors:

Prof. Dr. Steffen Becker
Institute of Software Technology
Reliable Software Systems

Universitätsstraße 38

D-70569 Stuttgart

Dr.-Ing. André van Hoorn
Institute of Software Technology
Reliable Software Systems Research Group

Universitätsstraße 38

D-70569 Stuttgart

Prof. Dr. Stefan Wagner
Institute of Software Technology
Software Engineering

Universitätsstraße 38

D-70569 Stuttgart

Co-supervisor: Martin Heinrich, Advantest

Software plays a vital role in large-scale hardware testing. Test programs allow hardware testers to deal with the complexity of modern chips and enable them to automate the tests. A tester operating system and development environment (TOSDE) connects the customer test programs to the test system with the device under test (DUT). The TOSDE is complex, high-data-volume software for which it is extremely challenging to provide and assure the requested level of correctness, robustness, and performance. This is because the ecosystem is manifold and only partially under the control of the automatic test equipment (ATE) platform developers. For instance, typical test system hardware handles millions of instructions per second (MIPS) and runs embedded software that communicates with the tester operating system. Customers define and/or generate test programs. As a result of growing chip complexity (Moore's law), test program complexity and the resulting data volumes are growing exponentially. This leads to performance-related questions about the TOSDE, including data transfer rates to local disks and network drives. Moreover, the customer devices and the respective test programs are not available to the TOSDE team for intellectual-property reasons. It is therefore vital that the software-intensive TOSDE is of high quality to support the effectiveness and efficiency of the hardware testers. In particular, it needs to be correct, robust to a wide variety of uses by hardware test programs, and efficient enough to minimize test time.

The objective of the project is to develop and evaluate a novel approach to analyzing and optimizing software test suites for correctness, robustness, and performance. The particular focus is support for testing software with high and exponentially growing data volumes in a context in which unknown code (test programs) will run on top of this software, a setting that has not been considered by previous approaches. Generating tests using fuzzing seems like a promising approach to tackle the vast space of possible test programs (a toy sketch is given below). Yet, to overcome the problems discussed above, we need to find a novel combination of tailored techniques from functional testing (e.g., coverage analysis, fuzzing, mutation testing), non-functional testing (e.g., operational-profile-based scalability testing), and model-based performance analysis (e.g., anti-pattern detection, what-if analysis) in order to decide what is interesting and important to test, so that the TOSDE delivers enough performance to handle the high data volume while the test execution time is kept at feasible levels.
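As an illustration of generation-based fuzzing in this setting, consider the following Python sketch. All opcodes and the seeded defect are invented; they merely stand in for a TOSDE test-program interface.

    import random

    def random_test_program(rng, max_ops=20):
        # Generate a random toy "test program" as a list of (opcode, arg) pairs.
        ops = ["load_pattern", "set_voltage", "run_burst", "read_result"]
        return [(rng.choice(ops), rng.randint(0, 1_000_000))
                for _ in range(rng.randint(1, max_ops))]

    def run_on_tosde_stub(program):
        # Toy system under test with a deliberately planted robustness bug.
        buffered = 0
        for op, arg in program:
            if op == "run_burst":
                buffered += arg
            if buffered > 1_500_000:  # planted defect: overflow path
                raise RuntimeError("buffer overflow")
        return buffered

    rng = random.Random(42)
    for i in range(1000):
        try:
            run_on_tosde_stub(random_test_program(rng))
        except RuntimeError as exc:
            print(f"fuzzer found a failing program after {i + 1} runs: {exc}")
            break

Coverage feedback, mutation of interesting inputs, and operational profiles would guide a real fuzzer far more effectively than this purely random generator.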

The project will be structured into the following research areas:

  • Area 1 (Test Suite Analysis) will investigate and propose methods to analyze the effectiveness of test suites in the context of complex, high data-volume software such as TOSDEs.
  • Area 2 (Test Suite Optimization) will investigate and propose methods to optimize test suites regarding the trade-off between effectiveness and test execution time.
  • Area 3 (Test Case Generation) will utilize the effectiveness analysis results from area 1 to automatically generate test cases to achieve desired levels of the identified effectiveness metrics.
  • Area 4 (Combination of Manual and Generated Test Cases) will provide the methods to support the adequate combination of manually created and automatically generated test cases. The goal is to provide methods to optimize the interplay by analyzing and optimizing these hybrid test suites, building on the methods from areas 1-3.
  • Area 5 (Validation and Evaluation) comprises all activities conducted to assess the developed methods. The experimental evaluation will be conducted by using publicly available open-source systems as well as by applying them to an industry-leading TOSDE software.

Contact

Prof. Dr. rer. nat. Dirk Pflüger

Institute for Parallel and Distributed Systems (IPVS)