
Commit eb96241

Completing updates for devices models tests
Signed-off-by: FaragElsayed2 <[email protected]>
1 parent 4434fce commit eb96241

27 files changed: +2137 -1312 lines

Lines changed: 6 additions & 0 deletions

@@ -0,0 +1,6 @@
+
+************
+set ngbehavior=hs
+set num_threads=1
+set ng_nomodcheck
+************

ihp-sg13g2/libs.tech/ngspice/testing/devices/README.md

Lines changed: 186 additions & 60 deletions
@@ -1,21 +1,14 @@
# Device Testing

## Table of Contents

- [Device Testing](#device-testing)
  - [Table of Contents](#table-of-contents)
  - [Introduction](#introduction)
  - [Purpose and Verification Methodology](#purpose-and-verification-methodology)
    - [Verification Approach](#verification-approach)
    - [Corner-Based Validation](#corner-based-validation)
      - [Validation methodology](#validation-methodology)
  - [Folder Structure](#folder-structure)
  - [Prerequisites](#prerequisites)
    - [Building OSDI Models](#building-osdi-models)
@@ -30,7 +23,65 @@ Two testing frameworks are supported:
  - [Examples](#examples)
  - [Output Results](#output-results)
    - [Output Folder Structure](#output-folder-structure)
      - [`1. run_data/`](#1-run_data)
      - [`2. clean_measured_data/`](#2-clean_measured_data)
      - [`3. combined_results/`](#3-combined_results)
      - [`4. final_reports/`](#4-final_reports)
        - [Example Output logs — nmos\_lv](#example-output-logs--nmos_lv)

---

## Introduction

This directory contains the complete infrastructure for performing **device-level verification** of the IHP SG13G2 PDK.

It provides a structured environment with Makefile targets, configuration files, and automation scripts that enable consistent and reproducible testing of model cards.

The goal is to ensure that all supported device types (MOS, HBT, and PNP) are correctly validated against their reference specifications using standardized simulation flows.

---

## Purpose and Verification Methodology

The purpose of this testing framework is to **validate and qualify the accuracy of the IHP SG13G2 device models** by directly comparing **measured silicon data** from the fabrication process with **simulated results** generated using ngspice.

This comparison ensures that:
- The **SPICE model cards** used for circuit design accurately represent the real physical devices.
- Any **deviation between measurement and simulation** stays within acceptable process variation limits.
- The **model behavior across process corners** (Fast, Slow, and Typical) stays consistent with the expected fabrication spread.

### Verification Approach

For each device type (MOS, HBT, and PNP):

1. **Measured data** from the foundry is parsed, cleaned, and normalized into a standardized format.
2. **Simulation data** is generated using ngspice, based on automatically created netlists derived from templates.
3. The measured and simulated results are **merged and compared point by point**, evaluating electrical quantities such as current or voltage over multiple bias conditions.
4. **Statistical summaries** are produced to quantify deviations and to detect potential model inaccuracies or corner mismatches.

### Corner-Based Validation

This stage validates the SG13G2 model cards by comparing **measured silicon data** from the fab and the **Typical (TT)** simulation results against the performance envelope defined by the **Fast (FF)** and **Slow (SS)** corners. In short: **both the measured data and the Typical simulation must lie inside the FF/SS envelope**, and the measured data is then compared to the Typical simulation to quantify model accuracy.

#### Validation methodology

1. **Simulate all three corners**
   For each device and each test bias condition, generate simulation results for FF, TT, and SS.

2. **Build the FF/SS envelope**
   For every x-axis sweep point (e.g., Vgs, Vce), compute the envelope boundaries from the FF and SS curves (FF = upper bound, SS = lower bound). The envelope is the allowed region of physical variation.

3. **Apply statistical corner tolerance (σ adjustment)**
   The FF/SS bounds are expanded by a **relative tolerance margin** to account for process variation and measurement uncertainty. This margin is usually chosen for **3σ coverage**.

4. **Envelope containment checks (FF/SS)**
   - **Measured-in-envelope:** Verify that every measured data point falls within the FF/SS envelope (including tolerance).
   - **Typical-in-envelope:** Verify that the TT simulation curve also lies within the same envelope.

   Both checks must pass for the model and the measured data to be considered consistent with the declared corner spread (a sketch of this check follows the list below).

5. **Interpretation rules (examples)**
   - **TT inside envelope & measured inside envelope & small measured/TT error:** model validated (expected).
   - **TT inside envelope but measured outside envelope:** likely a process or measurement outlier — investigate the wafer/measurement data.
   - **Measured inside envelope but TT outside envelope:** model cornering issue — model tuning is required to bring TT inside the FF/SS envelope.
   - **Both TT and measured outside envelope:** serious mismatch — re-evaluate the model and process assumptions.
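
The envelope construction and containment checks (steps 2–4) can be sketched compactly in Python. This is a minimal illustration using NumPy, assuming the FF/TT/SS and measured values are already aligned on the same sweep points; the function and parameter names are illustrative, not the framework's actual API.

```python
import numpy as np

def envelope_check(ff, ss, tt, meas,
                   corner_tolerance_percent=0.27,
                   threshold_percent_oob=0.5):
    """Return (measured_in_envelope, typical_in_envelope) for one sweep.

    Illustrative sketch only: arrays must be aligned on the same sweep points.
    """
    tol = corner_tolerance_percent / 100.0
    hi = np.maximum(ff, ss)          # point-wise upper corner curve
    lo = np.minimum(ff, ss)          # point-wise lower corner curve
    upper = hi + np.abs(hi) * tol    # widen bounds by the relative margin
    lower = lo - np.abs(lo) * tol

    def percent_oob(curve):
        oob = (curve < lower) | (curve > upper)
        return 100.0 * np.count_nonzero(oob) / curve.size

    return (percent_oob(meas) <= threshold_percent_oob,
            percent_oob(tt) <= threshold_percent_oob)
```

Taking the point-wise max/min keeps the envelope well defined even where the FF and SS curves cross, e.g. for signed currents.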

---

@@ -89,50 +140,53 @@ pip install --break-system-packages -r requirements.txt

## Usage

All device tests are controlled through **YAML configuration files** located in the [configs directory](configs/).
Each configuration file defines:
- Which device to test
- Model and measured data sources
- Simulation setup (sweeps, biasing, corners)
- Validation targets and pass/fail thresholds

📘 **For detailed configuration format, template usage, and examples**, see the
[Configuration Files README](configs/README.md).

---

### Running Tests with Makefile

The provided **Makefile** is the main entry point for running device tests.
It supports both **group-level** (e.g., all MOS or all HBT) and **device-level** (single-device) execution.

#### Test Runners

- **Default runner**: `models_verifier` (recommended)
- **Alternative runner**: `pytest` (append `RUNNER=pytest` to the command)

#### Device Configurations

Each Makefile target corresponds to one of the YAML configuration files under the [configs directory](configs/).
For example:
- `make test-nmos_lv` → uses `configs/mos/sg13_lv_nmos.yaml`
- `make test-npn13g2` → uses `configs/hbt/sg13g2_npn13g2.yaml`

The Makefile automatically detects and passes the corresponding configuration file to the selected test runner.

---

#### Available Targets

##### Group Targets
- `make test-all` → Run **all device tests**
- `make test-mos` → Run all MOS devices (LV + HV)
- `make test-hbt` → Run all HBT devices
- `make test-pnp` → Run all PNP devices

##### Device Targets
Each device can be run individually:
- MOS: `test-nmos_lv`, `test-pmos_lv`, `test-nmos_hv`, `test-pmos_hv`
- HBT: `test-npn13g2`, `test-npn13g2l`, `test-npn13g2v`
- PNP: `test-pnp_mpa`

---

@@ -180,30 +234,102 @@ Each device gets its own dedicated subdirectory containing simulation inputs, in

```
📁 models_results
 ┣ 📁 <device_name>/
 ┃ ┣ 📁 run_data              Intermediate data generated during simulation runs —
 ┃ ┃                          includes circuit files, logs, and raw CSV outputs from ngspice.
 ┃ ┣ 📁 netlists              Generated ngspice netlists (for debugging or inspection).
 ┃ ┣ 📁 clean_measured_data   Extracted and cleaned measured data from the input MDM files.
 ┃ ┣ 📁 combined_results      Fully merged results combining simulated and measured datasets.
 ┃ ┗ 📁 final_reports         Aggregated Markdown and CSV reports summarizing
 ┃                            overall verification metrics and pass/fail statistics.
```

This structure helps you easily trace every stage of data processing — from raw measurements to final summarized reports.

---

#### `1. run_data/`

Contains **all intermediate data generated during the run**, including:
- **Circuit netlists** used for ngspice simulation.
- **Simulation logs** capturing ngspice outputs and potential warnings.
- **Raw CSV results** generated directly from each simulation sweep, before merging.

This directory acts as a complete record of the simulation process — useful for debugging or re-running individual sweeps.

---

#### `2. clean_measured_data/`

This directory stores **processed measurement data** extracted from the input MDM (Measured Data Model) files.
The goal is to provide a clear and uniform format compatible with the verification scripts.

Each file corresponds to one test type (e.g., `dc_idvg.csv` or `dc_idvd.csv`) and contains columns as shown in the following example for the **nmos_lv** device:

| Column | Description |
|--------|-------------|
| block_id | Unique block identifier within the dataset. |
| block_index | Sub-index or measurement instance. |
| input_data | Raw data section name from the MDM file. |
| input_vars / output_vars | Variables used for biasing and measured outputs. |
| TEMP | Measurement temperature (°C). |
| W, L | Device width and length. |
| AD, AS, PD, PS | Diffusion area and perimeter parameters. |
| NF, M | Number of fingers and device multiplicity. |
| vg | Gate voltage. |
| sweep_var | The swept bias variable (e.g., Vd, Vg, or Vb). |

Other devices (e.g., **pmos_lv**, **npn13g2**) follow a **similar structure** — only the bias variable names and measured outputs differ slightly, depending on the device type and test configuration.
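
These CSVs can be inspected directly, for instance with pandas (an illustrative sketch; the path follows the output layout shown above, and one `(block_id, block_index)` group corresponds to one measured sweep):

```python
import pandas as pd

df = pd.read_csv("models_results/nmos_lv/clean_measured_data/dc_idvg.csv")

# Each (block_id, block_index) group is one measured sweep.
for (block_id, block_index), sweep in df.groupby(["block_id", "block_index"]):
    print(block_id, block_index, len(sweep), "points")
```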

---

#### `3. combined_results/`

This directory contains **merged datasets** that align the measured and simulated results for direct comparison.
Each file represents one test type (e.g., `dc_idvg.csv`) and includes both measured and simulated data across all corners.

| Column | Description |
|--------|-------------|
| block_id, block_index | Same identifiers as in the measured data. |
| input_data, input_vars, output_vars | Source information. |
| temp, w, l, ad, as, pd, ps, nf, m | Device geometry and setup. |
| vg, vd, vb, vs, sweep_var | Applied bias conditions. |
| ib_meas, id_meas, ig_meas, is_meas | Measured terminal currents. |
| ib_sim_mos_tt, id_sim_mos_tt, ig_sim_mos_tt, is_sim_mos_tt | Simulated data at the **Typical** corner. |
| ib_sim_mos_ss, id_sim_mos_ss, ig_sim_mos_ss, is_sim_mos_ss | Simulated data at the **Slow** corner. |
| ib_sim_mos_ff, id_sim_mos_ff, ig_sim_mos_ff, is_sim_mos_ff | Simulated data at the **Fast** corner. |

Each file name (e.g., `dc_idvg.csv`) corresponds to the **test master type** — the same flag defined in the MDM file (e.g., `DC_IDVG`, `DC_IDVD`).

This dataset is used for all later comparisons, tolerance analysis, and summary generation.
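
Because measured and simulated values share one row per bias point, point-by-point comparisons reduce to simple column arithmetic. A minimal sketch, assuming the column names from the table above:

```python
import pandas as pd

df = pd.read_csv("models_results/nmos_lv/combined_results/dc_idvg.csv")

# Relative error of the Typical-corner simulation vs. measurement,
# skipping near-zero measured currents to avoid dividing by noise.
valid = df["id_meas"].abs() > 1e-11
rel_err = ((df.loc[valid, "id_sim_mos_tt"] - df.loc[valid, "id_meas"]).abs()
           / df.loc[valid, "id_meas"].abs())
print(rel_err.describe())
```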

---

#### `4. final_reports/`

This folder holds all **aggregated outputs and summaries** from the validation:

| File | Description |
|------|-------------|
| `full_results.csv` | The complete dataset combining measured and all simulated results. |
| `results_summary.csv` | Aggregated statistics showing pass/fail counts and out-of-bounds (OOB) rates. |
| `failed_results.csv` | Only the entries that failed the tolerance criteria. |
| `final_summary.md` | A human-readable Markdown summary with statistics and metrics for all tests. |

These reports are the final outcome of the model validation process, summarizing how closely the simulation models align with measured fab data across all process corners.

##### Example Output logs — nmos_lv

```bash
2025-10-06 00:43:36 [INFO] Summary report saved to: models_results/nmos_lv/final_reports/results_summary.csv
2025-10-06 00:43:36 [INFO] Detailed failure report saved to: models_results/nmos_lv/final_reports/failed_results.csv
2025-10-06 00:43:36 [INFO] Summary written to: models_results/nmos_lv/final_reports/final_summary.md

========== RANGE-CHECK SUMMARY ==========
Target     Sweeps   Pass   Fail   TotPts   FailPts   Fail%Cases   Fail%Pts
---------------------------------------------------------------------------
Measured     7704   6920    784   249960     13612        10.18       5.45
Typical      7704   7663     41   249960       248         0.53       0.10
=========================================
```

These summaries provide a concise view of how the model performs across all targets and conditions.
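
The percentage columns follow directly from the counts; for the Measured row above:

```python
fail_percent_cases = 100 * 784 / 7704       # ≈ 10.18 → failing sweeps / total sweeps
fail_percent_points = 100 * 13612 / 249960  # ≈ 5.45  → failing points / total points
```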

ihp-sg13g2/libs.tech/ngspice/testing/devices/configs/README.md

Lines changed: 90 additions & 0 deletions

@@ -0,0 +1,90 @@
# Configuration Files for Device Testing

This directory contains all **YAML configuration files** that define the behavior of each device test in the SG13G2 model verification framework.

Each configuration file provides the necessary parameters for:
- Defining which **device** to test (e.g., `sg13_lv_nmos`).
- Linking to the corresponding **measured data** from the fab (MDM format).
- Specifying the **model card** paths.
- Setting up **simulation parameters** (bias sweeps, corners, etc.).
- Declaring **validation targets** and **tolerance thresholds**.
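
A test runner might consume one of these files roughly as follows (illustrative sketch using PyYAML; only the parameters documented below are assumed):

```python
import yaml

with open("configs/mos/sg13_lv_nmos.yaml") as f:
    cfg = yaml.safe_load(f)

# Undefined parameters fall back to framework defaults (see Notes below).
max_workers = cfg.get("max_workers", 8)
clip_curr = cfg.get("clip_curr", 1e-11)
```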

---

## Directory Structure

```
📁 configs
 ┣ 📁 mos/
 ┃ ┣ 📜 sg13_lv_nmos.yaml
 ┃ ┣ 📜 sg13_lv_pmos.yaml
 ┃ ┣ 📜 sg13_hv_nmos.yaml
 ┃ ┗ 📜 sg13_hv_pmos.yaml
 ┣ 📁 hbt/
 ┃ ┣ 📜 sg13g2_npn13g2.yaml
 ┃ ┣ 📜 sg13g2_npn13g2l.yaml
 ┃ ┗ 📜 sg13g2_npn13g2v.yaml
 ┣ 📁 pnp/
 ┃ ┗ 📜 sg13_pnp_mpa.yaml
 ┗ 📜 TEMPLATE.yaml
```

Each folder corresponds to a **device category**, and each YAML file inside represents a **specific device configuration**.

---

## Validation Configuration Overview

This section describes the key parameters that control the validation and tolerance behavior during model verification.

---

### 1. Thresholds and Out-of-Bounds Control

| Variable | Description | Example |
|----------|-------------|---------|
| **threshold_percent_oob** | Maximum allowed percentage (%) of points that can fall **outside** the target envelope before a sweep is considered failed. | `0.5` |

**Example:**
If `threshold_percent_oob = 0.5`, a sweep passes only if at least **99.5% of its data points** lie within the expected envelope.
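
Expressed as code (an illustrative check, not the framework's API):

```python
def sweep_passes(oob_points: int, total_points: int,
                 threshold_percent_oob: float = 0.5) -> bool:
    """A sweep fails once its out-of-bounds share exceeds the threshold."""
    return 100.0 * oob_points / total_points <= threshold_percent_oob

print(sweep_passes(4, 1000))  # True:  0.4% out of bounds
print(sweep_passes(6, 1000))  # False: 0.6% out of bounds
```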

---

### 2. Corner Envelope Tolerance

| Variable | Description | Example |
|----------|-------------|---------|
| **corner_tolerance_percent** | The **relative tolerance margin (%)** applied to the FF and SS simulation bounds to account for process variation and measurement uncertainty. | `0.27` |

**Example:**
If `corner_tolerance_percent = 0.27`, the FF and SS bounds are expanded by ±0.27% around their nominal values.
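
Numerically, the expansion is a simple scaling of each bound (an illustrative sketch with made-up FF/SS values; sign handling for negative currents is omitted):

```python
corner_tolerance_percent = 0.27
ff_value = 1.00e-3  # hypothetical FF current at one sweep point, in A
ss_value = 0.80e-3  # hypothetical SS current at the same point

upper = ff_value * (1 + corner_tolerance_percent / 100)  # 1.0027e-3 A
lower = ss_value * (1 - corner_tolerance_percent / 100)  # 7.9784e-4 A
```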

---

### 3. Current Clipping

| Variable | Description | Example |
|----------|-------------|---------|
| **clip_curr** | Minimum current value (in amperes) used to exclude extremely small or noisy data points during comparison. | `1e-11` |

**Example:**
Currents below 1e-11 A are ignored during validation.
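
With pandas this is a one-line filter (illustrative; the column name follows the combined-results format described in the main README):

```python
import pandas as pd

clip_curr = 1e-11  # A
df = pd.DataFrame({"id_meas": [2.0e-5, 3.0e-12, -4.0e-8]})  # toy data

# Keep only points whose measured current magnitude is at or above the clip level.
df_valid = df[df["id_meas"].abs() >= clip_curr]  # drops the 3.0e-12 A row
```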

---

### 4. Parallel Execution

| Variable | Description | Example |
|----------|-------------|---------|
| **max_workers** | Number of parallel threads or workers used for validation tasks. | `8` |

This helps speed up processing across multiple sweeps or corners.
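
For example (illustrative sketch; `validate_sweep` stands in for whatever per-sweep work the framework performs):

```python
from concurrent.futures import ThreadPoolExecutor

def validate_sweep(sweep_id: int) -> bool:
    # Stand-in for one sweep's envelope and tolerance checks.
    return True

max_workers = 8
with ThreadPoolExecutor(max_workers=max_workers) as pool:
    results = list(pool.map(validate_sweep, range(100)))
print(sum(results), "of", len(results), "sweeps passed")
```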

---

## Notes

- All file paths are **relative to the device testing root directory (`devices/`)**.
- If a specific parameter is not defined, a **default** is applied automatically by the verification framework.
- For detailed information on the verification logic and output reports, refer to the [main Device Testing README](../README.md).
