# Device Testing

## Table of Contents

- [Device Testing](#device-testing)
- [Table of Contents](#table-of-contents)
- [Introduction](#introduction)
- [Purpose and Verification Methodology](#purpose-and-verification-methodology)

## Introduction

This directory contains the complete infrastructure for performing **device-level verification** of the IHP SG13G2 PDK.

It provides a structured environment with Makefile targets, configuration files, and automation scripts that enable consistent and reproducible testing of model cards.

The goal is to ensure that all supported device types (MOS, HBT, and PNP) are correctly validated against their reference specifications using standardized simulation flows.

Two testing frameworks are supported:

- **models_verifier**: A custom verification engine designed for regression-style testing and detailed reporting (default).
- **pytest**: A widely used Python testing framework for integration with broader verification environments.

---
## Purpose and Verification Methodology

The purpose of this testing framework is to **validate and qualify the accuracy of the IHP SG13G2 device models** by directly comparing **measured silicon data** from the fabrication process with **simulated results** generated using ngspice.

This comparison ensures that:

- The **SPICE model cards** used for circuit design accurately represent the real physical devices.
- Any **deviation between measurement and simulation** is within acceptable process variation limits.
- The **model behavior across process corners** (Fast, Slow, and Typical) stays consistent with the expected fabrication spread.

### Verification Approach

For each device type (MOS, HBT, and PNP):

1. **Measured data** from the foundry is parsed, cleaned, and normalized into a standardized format.
2. **Simulation data** is generated using ngspice, based on automatically created netlists derived from templates.
3. The measured and simulated results are **merged and compared point-by-point**, evaluating electrical quantities such as current or voltage over multiple bias conditions (see the sketch below).
4. **Statistical summaries** are produced to quantify deviations and to detect potential model inaccuracies or corner mismatches.
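
A minimal sketch of steps 3 and 4, assuming pandas DataFrames with illustrative column names (`v_sweep`, `i_meas`, `i_sim` are placeholders, not the framework's actual schema):

```python
import pandas as pd

def compare_point_by_point(measured: pd.DataFrame, simulated: pd.DataFrame) -> pd.DataFrame:
    """Step 3: align measured and simulated sweeps on bias points, compute relative error."""
    # Column names (v_sweep, i_meas, i_sim) are illustrative only.
    merged = measured.merge(simulated, on="v_sweep")
    merged["rel_err_pct"] = 100.0 * (merged["i_sim"] - merged["i_meas"]).abs() / merged["i_meas"].abs()
    return merged

def summarize(merged: pd.DataFrame) -> dict:
    """Step 4: statistical summary quantifying the deviation."""
    err = merged["rel_err_pct"]
    return {"points": len(err), "mean_err_pct": err.mean(), "max_err_pct": err.max()}
```
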
### Corner-Based Validation
This stage validates the SG13G2 model cards by comparing **measured silicon data** from the fab and the **Typical (TT)** simulation results against the performance envelope defined by the **Fast (FF)** and **Slow (SS)** corners. In short: **both the measured data and the Typical simulation must lie inside the FF/SS envelope**, and measured data is then compared to the Typical simulation to quantify model accuracy.
#### Validation Methodology

1. **Simulate all three corners**

   For each device and each test bias condition, generate simulation results for FF, TT, and SS.

2. **Build the FF/SS envelope**

   For every x-axis sweep point (e.g., Vgs, Vce), compute the envelope boundaries from the FF and SS curves (FF = upper bound, SS = lower bound). The envelope is the allowed region of physical variation.

3. **Apply tolerance margins**

   The FF/SS bounds are expanded by a **relative tolerance margin** to account for process variation and measurement uncertainty. This is usually based on a **3σ coverage**.

4. **Envelope containment checks (FF/SS)**

   - **Measured-in-envelope:** Verify every measured data point falls within the FF/SS envelope (including tolerance).
   - **Typical-in-envelope:** Verify the TT simulation curve also lies within the same envelope.

   Both checks must pass for the model and measured data to be considered consistent with the declared corner spread (see the code sketch after this list).

5. **Interpretation rules (examples)**

   - **TT inside envelope & measured inside envelope & small measured/TT error:** model validated (expected).
   - **TT inside envelope but measured outside envelope:** likely process or measurement outlier — investigate wafer/measurement data.
   - **Measured inside envelope but TT outside envelope:** model cornering issue — model tuning required to bring TT inside the FF/SS envelope.
   - **Both TT and measured outside envelope:** serious mismatch — re-evaluate model and process assumptions.
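
As referenced above, the containment checks in steps 2 to 4 reduce to simple array comparisons. A minimal NumPy sketch, assuming all four curves are sampled at the same sweep points (names and tolerance handling are illustrative, not the framework's actual code):

```python
import numpy as np

def envelope_check(ff, ss, tt, meas, tol_pct=0.27):
    """Check that the TT and measured curves lie inside the
    tolerance-expanded FF/SS envelope."""
    ff, ss, tt, meas = map(np.asarray, (ff, ss, tt, meas))
    upper = np.maximum(ff, ss)   # FF is typically the upper bound...
    lower = np.minimum(ff, ss)   # ...and SS the lower; max/min also covers sign flips
    margin = tol_pct / 100.0     # relative tolerance, e.g. 0.27 -> 0.27%
    upper = upper + np.abs(upper) * margin
    lower = lower - np.abs(lower) * margin
    tt_inside = bool(np.all((tt >= lower) & (tt <= upper)))
    meas_inside = bool(np.all((meas >= lower) & (meas <= upper)))
    return tt_inside, meas_inside
```

A result of `(True, False)` here corresponds, for example, to the "measured outside envelope" interpretation rule above.
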
```
┃ ┗ 📁 final_reports    Aggregated Markdown and CSV reports summarizing
┃                       overall verification metrics and pass/fail statistics.
```
### Folder Contents Explained

This structure helps you easily trace every stage of data processing — from raw measurements to final summarized reports.

---

#### 1. `run_data/`

Contains **all intermediate data generated during the run**, including:

- **Circuit netlists** used for ngspice simulation.
- **Simulation logs** capturing ngspice outputs and potential warnings.
- **Raw CSV results** generated directly from each simulation sweep before merging.

This directory acts as a complete record of the simulation process — useful for debugging or re-running individual sweeps.

---

#### 2. `clean_measured_data/`

This directory stores **processed measurement data** extracted from the input MDM (Measured Data Model) files.
The goal is to provide a clear and uniform format compatible with the verification scripts.

Each file corresponds to one test type (e.g., `dc_idvg.csv`, `dc_idvd.csv`, etc.) and contains columns as shown in the following example for the **nmos_lv** device:

| Column | Description |
|--------|-------------|
| block_id | Unique block identifier within the dataset. |
| block_index | Sub-index or measurement instance. |
| input_data | Raw data section name from the MDM file. |
| input_vars / output_vars | Variables used for biasing and measured outputs. |
| TEMP | Measurement temperature (°C). |
| W, L | Device width and length. |
| AD, AS, PD, PS | Diffusion area and perimeter parameters. |
| NF, M | Number of fingers and device multiplicity. |
| vg | Gate voltage. |
| sweep_var | The swept bias variable (e.g., Vd, Vg, or Vb). |

Other devices (e.g., **pmos_lv**, **npn13g2**, etc.) follow a **similar structure** — only the bias variable names and measured outputs differ slightly depending on the device type and test configuration.
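
For quick inspection, the cleaned CSVs can be loaded directly with pandas. The path below is illustrative, assuming the `models_results/<device>/clean_measured_data/` layout implied by the report paths shown later:

```python
import pandas as pd

# Illustrative path, following the models_results/<device>/ layout.
df = pd.read_csv("models_results/nmos_lv/clean_measured_data/dc_idvg.csv")

# Each (block_id, block_index) pair is one measurement instance swept over sweep_var.
for (block_id, block_index), block in df.groupby(["block_id", "block_index"]):
    print(block_id, block_index, block["TEMP"].iloc[0], len(block), "points")
```
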
---

#### 3. `combined_results/`

This directory contains **merged datasets** that align the measured and simulated results for direct comparison.
Each file represents one test type (e.g., `dc_idvg.csv`) and includes both measured and simulated data across all corners.

| Column | Description |
|--------|-------------|
| block_id, block_index | Same identifiers as in measured data. |
| `failed_results.csv` | Only entries that failed tolerance criteria. |
| `final_summary.md` | A human-readable Markdown summary with statistics and metrics for all tests. |

These reports are the final outcome of the model validation process, summarizing how closely the simulation models align with measured fab data across all process corners.

##### Example Output Logs — nmos_lv

```bash
2025-10-06 00:43:36 [INFO] Summary report saved to: models_results/nmos_lv/final_reports/results_summary.csv
2025-10-06 00:43:36 [INFO] Detailed failure report saved to: models_results/nmos_lv/final_reports/failed_results.csv
2025-10-06 00:43:36 [INFO] Summary written to: models_results/nmos_lv/final_reports/final_summary.md
```

---
This directory contains all **YAML configuration files** that define the behavior of each device test in the SG13G2 model verification framework.

Each configuration file provides the necessary parameters for:

- Defining which **device** to test (e.g., `sg13_lv_nmos`).
- Linking to the corresponding **measured data** from the fab (MDM format).
- Specifying the **model card** paths.
- Setting up **simulation parameters** (bias sweeps, corners, etc.).
- Declaring **validation targets** and **tolerance thresholds**.

---

## Directory Structure

```
📁 configs
┣ 📁 mos/
┃ ┣ 📜 sg13_lv_nmos.yaml
┃ ┣ 📜 sg13_lv_pmos.yaml
┃ ┣ 📜 sg13_hv_nmos.yaml
┃ ┗ 📜 sg13_hv_pmos.yaml
┣ 📁 hbt/
┃ ┣ 📜 sg13g2_npn13g2.yaml
┃ ┣ 📜 sg13g2_npn13g2l.yaml
┃ ┗ 📜 sg13g2_npn13g2v.yaml
┣ 📁 pnp/
┃ ┗ 📜 sg13_pnp_mpa.yaml
┗ 📜 TEMPLATE.yaml
```

Each folder corresponds to a **device category**, and each YAML file inside represents a **specific device configuration**.
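
Since these are plain YAML files, they can also be inspected or loaded outside the framework. A minimal PyYAML sketch (the fallback values below are illustrative, not the framework's actual defaults):

```python
import yaml

with open("configs/mos/sg13_lv_nmos.yaml") as f:
    cfg = yaml.safe_load(f)

# Keys documented in the next section; fallback values are illustrative only.
threshold_percent_oob = cfg.get("threshold_percent_oob", 0.5)
corner_tolerance_percent = cfg.get("corner_tolerance_percent", 0.27)
clip_curr = cfg.get("clip_curr", 1e-11)
max_workers = cfg.get("max_workers", 8)
```
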

---

## Validation Configuration Overview

This section describes the key parameters controlling the validation and tolerance behavior during model verification.

---

### 1. Thresholds and Out-of-Bound Control

| Variable | Description | Example |
|----------|-------------|---------|
| **threshold_percent_oob** | Maximum allowed percentage (%) of points that can fall **outside** the target envelope before a sweep is considered failed. | `0.5` |

**Example:**
If `threshold_percent_oob = 0.5`, then a sweep will pass only if at least **99.5% of its data points** lie within the expected envelope.
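
In code terms, the pass criterion amounts to the following (a sketch; names are illustrative):

```python
def sweep_passes(points_outside: int, total_points: int, threshold_percent_oob: float) -> bool:
    # threshold_percent_oob = 0.5 -> at least 99.5% of points must stay in bounds
    return 100.0 * points_outside / total_points <= threshold_percent_oob
```
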

---

### 2. Corner Envelope Tolerance

| Variable | Description | Example |
|----------|-------------|---------|
| **corner_tolerance_percent** | Defines the **relative tolerance margin (%)** applied to the FF and SS simulation bounds to account for process variation and measurement uncertainty. | `0.27` |

**Example:**
If `corner_tolerance_percent = 0.27`, the FF and SS bounds will be expanded by ±0.27% around their nominal values.
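
Numerically, with illustrative FF/SS current values (positive currents assumed):

```python
tol = 0.27 / 100.0                       # corner_tolerance_percent as a fraction
ff_bound, ss_bound = 1.00e-3, 0.60e-3    # illustrative FF/SS currents (A)
upper = ff_bound * (1 + tol)             # 1.0027e-3 A
lower = ss_bound * (1 - tol)             # 0.59838e-3 A
```
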

---

### 3. Current Clipping

| Variable | Description | Example |
|----------|-------------|---------|
| **clip_curr** | Minimum current value (in Amperes) used to exclude extremely small or noisy data points during comparison. | `1e-11` |

**Example:**
Currents below 1e-11 A are ignored during validation.
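
Equivalently, as a simple mask (values are illustrative):

```python
import numpy as np

clip_curr = 1e-11                          # A
i_meas = np.array([5e-13, 2e-11, 1e-6])    # illustrative measured currents (A)
valid = np.abs(i_meas) >= clip_curr        # exclude sub-clip / noisy points
print(i_meas[valid])                       # [2.e-11 1.e-06]
```
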

---

### 4. Parallel Execution

| Variable | Description | Example |
|----------|-------------|---------|
| **max_workers** | Number of parallel threads or workers used for validation tasks. | `8` |

This helps speed up processing across multiple sweeps or corners.
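
The framework's internals are not reproduced here, but the effect of `max_workers` corresponds to the standard thread-pool pattern (a sketch; `validate_sweep` and the sweep names are hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

def validate_sweep(sweep_name: str) -> bool:
    # Hypothetical per-sweep validation stub.
    return True

sweeps = ["dc_idvg", "dc_idvd"]  # illustrative sweep names

with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(validate_sweep, sweeps))
```
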
83
+
84
+
---
85
+
86
+
## Notes
87
+
88
+
- All file paths are **relative to the device testing root directory (`devices/`)**.
89
+
- If a specific parameter is not defined, a **default** will be applied automatically by the verification framework.
90
+
- For detailed information on the verification logic and output reports, refer to the [main Device Testing README](../README.md).