
Commit 27b6a5a

initial commit

0 parents, commit 27b6a5a
24 files changed, +2726 -0 lines

.github/badges/.blank

Whitespace-only changes.

.github/badges/coverage.svg

Lines changed: 50 additions & 0 deletions

.github/badges/tests.svg

Lines changed: 50 additions & 0 deletions

.github/workflows/coverage.yml

Lines changed: 36 additions & 0 deletions
name: Coverage

on:
  push:
    branches: [ "**" ]

jobs:
  compute_coverage:
    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v4
    - name: Set up Python
      uses: actions/setup-python@v3
      with:
        python-version: "3.10"
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        if [ -f extra_requirements.txt ]; then pip install -r extra_requirements.txt; fi
    - name: Compute coverage
      run: |
        python -m pytest --cov=sar_sampling --cov-report=term-missing --local-badge-output-dir=${{ github.workspace }}/.github/badges > coverage.txt
    - name: Push changes
      run: |
        git config user.name "github-actions"
        git config user.email "[email protected]"
        if git diff --quiet; then
          git status
          echo "No files to update"
        else
          git add -u
          git commit -m "chore: update coverage data"
          git push
        fi

.github/workflows/python-tests.yml

Lines changed: 40 additions & 0 deletions
# This workflow will install Python dependencies, run tests and lint with a variety of Python versions
# For more information see: https://docs.github.com/en/actions/automating-builds-and-tests/building-and-testing-python

name: Test

on:
  push:
    branches: [ "main" ]
  pull_request:
    branches: [ "main" ]

jobs:
  run_tests:

    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        python-version: ["3.10", "3.11", "3.12", "3.13"]

    steps:
    - uses: actions/checkout@v4
    - name: Set up Python ${{ matrix.python-version }}
      uses: actions/setup-python@v3
      with:
        python-version: ${{ matrix.python-version }}
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        python -m pip install flake8 pytest
        if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
    - name: Lint with flake8
      run: |
        # stop the build if there are Python syntax errors or undefined names
        flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
        # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
        flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
    - name: Test with pytest
      run: |
        python -m pytest --import-mode=prepend tests

.gitignore

Lines changed: 5 additions & 0 deletions
**/__pycache__/
**/*.pyc
.venv
.coverage
*.egg-info/

README.md

Lines changed: 197 additions & 0 deletions
[![Pytest](.github/badges/tests.svg) ![Coverage](.github/badges/coverage.svg)](./coverage.txt)

# SAR Sampling

A Python package for generating structured samples for SAR (Specific Absorption Rate) testing scenarios using Latin Hypercube sampling with domain-specific constraints and mappings.

## Overview

SAR Sampling creates realistic test scenarios for SAR testing by combining Latin Hypercube sampling with domain-specific knowledge about antenna configurations, frequency/radius mappings, and modulation schemes. The package processes input tables containing antenna data and generates structured samples that preserve the statistical properties of the original data while providing comprehensive coverage of the test space.

## Features

- **Latin Hypercube Sampling**: Implements several LHS methods, including classic, centered, maximin, and centermaximin
- **Domain-Specific Constraints**: Handles SAR-specific requirements for antenna configurations, frequencies, distances, and modulation schemes
- **Voronoi Diagram Integration**: Uses spatial distribution analysis for frequency-radius mappings
- **Flexible Input Formats**: Supports both pandas DataFrames and CSV files
- **Configurable Sampling**: Customizable domains, weights, and sampling methods

## Installation

### Prerequisites

The package requires Python 3.10+ and the following dependencies:

- numpy
- pandas
- scipy
- shapely

### Install from source

```bash
git clone <repository-url>
pip install <project-root-dir>
```

## Quick Start

### Input Data Format

The package expects input data in a specific format. Here's an example:

```csv
antenna,frequency,2mm,5mm,10mm,21mm,M1,M2,M3,M4
D750,750,,,15.4,21.5,1,1,1,0
D835,835,4.6,,15.0,20.8,1,1,0,0
D900,900,4.7,,14.6,20.4,1,1,0,0
D1450,1450,5.4,9.4,,18.3,1,1,0,0
```

**Column requirements** (see the sketch after this list):

- `antenna`: Unique identifier for each antenna configuration
- `frequency`: Frequency value in MHz
- Distance columns: must end with 'mm' (e.g., '2mm', '5mm', '10mm')
- Modulation columns: must start with 'M' (e.g., 'M1', 'M2', 'M3')
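Because distance and modulation columns are recognized purely by these naming conventions, an input table can be checked before it is handed to the sampler. The following is a minimal, illustrative sketch: the `classify_columns` helper is not part of the package and relies only on the rules listed above.

```python
from io import StringIO
import pandas as pd

EXAMPLE_CSV = """\
antenna,frequency,2mm,5mm,10mm,21mm,M1,M2,M3,M4
D835,835,4.6,,15.0,20.8,1,1,0,0
D900,900,4.7,,14.6,20.4,1,1,0,0
"""

def classify_columns(df: pd.DataFrame) -> dict:
    """Partition columns by the documented naming rules (illustrative helper)."""
    distance = [c for c in df.columns if c.endswith('mm')]
    modulation = [c for c in df.columns if c.startswith('M')]
    other = [c for c in df.columns if c not in distance + modulation]
    return {'distance': distance, 'modulation': modulation, 'other': other}

df = pd.read_csv(StringIO(EXAMPLE_CSV))
print(classify_columns(df))
# {'distance': ['2mm', '5mm', '10mm', '21mm'],
#  'modulation': ['M1', 'M2', 'M3', 'M4'],
#  'other': ['antenna', 'frequency']}
```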
### Basic Run

```python
from sar_sampling import SarSampler
import pandas as pd

# Create input data directly, or load it from a CSV file
input_data = {
    'antenna': ['D750', 'D835', 'D900', 'D1450'],
    'frequency': [750, 835, 900, 1450],
    '10mm': [15.4, 15.0, 14.6, None],
    '21mm': [21.5, 20.8, 20.4, 18.3],
    'M1': [1, 1, 1, 1],
    'M2': [1, 1, 1, 1],
}
input_df = pd.DataFrame(input_data)

# Initialize sampler
sampler = SarSampler(input_df)

# Generate samples
sample = sampler(n_samples=100, method='maximin', seed=42)

# Access results
print(sample)

# Save to file
sample.to_csv('sample.csv')
```

## Sampling Methods

The package supports several Latin Hypercube sampling methods (compared briefly in the sketch after this list):

- **'classic'**: Standard Latin Hypercube sampling
- **'center'/'c'**: Centered Latin Hypercube sampling
- **'maximin'/'m'**: Maximin distance optimization
- **'centermaximin'/'cm'**: Centered with maximin optimization
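The method name is accepted by `SarSampler`, `LhSampler`, and the lower-level `lhs` function described under Core Components below. As a rough illustration of what the space-filling variants buy you, the sketch below assumes only the `lhs` signature shown later in this README and that it returns an array with one row per sample; the minimum pairwise distance is one simple way to compare how well the points spread out.

```python
from scipy.spatial.distance import pdist
from sar_sampling import lhs

# Compare two methods on the same design size. Assumption: lhs returns an
# array of shape (size, dim); a larger minimum pairwise distance indicates
# a more space-filling design.
for method in ['classic', 'maximin']:
    samples = lhs(dim=2, size=50, method=method, seed=42)
    print(f"{method:>8}: min pairwise distance = {pdist(samples).min():.4f}")
```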
## Core Components

### SarSampler

The main class for SAR-specific sampling:

```python
from sar_sampling import SarSampler

# Initialize with input table
sampler = SarSampler(input_table, config={})

# Generate samples
samples = sampler(n_samples=100, method='maximin', seed=42)
```

**Parameters:**

- `input_table`: DataFrame or CSV path containing antenna configurations
- `config`: Optional configuration overrides

**Required input columns:**

- `antenna`: Antenna identifiers
- `frequency`: Frequency values
- Distance columns ending with 'mm' (e.g., '10mm', '21mm')
- Modulation columns starting with 'M' (e.g., 'M1', 'M2')

### LhSampler

General-purpose Latin Hypercube sampler:

```python
from sar_sampling import LhSampler

# Define domains
domains = {
    'a': [0, 10],   # Continuous uniform on [0, 10]
    'b': [0, 20],   # Continuous uniform on [0, 20]
    'c': [1, 2, 3]  # Discrete values
}

# Initialize sampler
sampler = LhSampler(domains)

# Generate samples
samples = sampler(100, method='maximin')
```

### lhs Function

Direct Latin Hypercube sampling function:

```python
from sar_sampling import lhs

# Generate 100 samples in 3 dimensions
samples = lhs(dim=3, size=100, method='maximin', seed=42)
```

### Sample Class

Container for structured data with variable classification:

```python
from sar_sampling import Sample

# Create a sample with variable classification
sample = Sample(df, xvar=['x', 'y'], zvar='output')

# Access data
x_data = sample.xdata()  # Independent variables
z_data = sample.zdata()  # Dependent variables
```

## Examples

### Basic Usage

See `examples/example.py` for a complete working example:

```python
from sar_sampling import SarSampler
import pandas as pd

# Load sample data
input_df = pd.read_csv('data/input_table.csv')

# Create sampler
sampler = SarSampler(input_df)

# Generate samples
samples = sampler(1000, method='maximin', seed=123)

# Analyze results
print(f"Generated {samples.size()} samples")
print(f"X range: {samples.data['x'].min():.1f} to {samples.data['x'].max():.1f}")
print(f"Frequency range: {samples.data['frequency'].min():.0f} to {samples.data['frequency'].max():.0f}")

# Save results
samples.to_csv('output_samples.csv')
```

compute_coverage.sh

Lines changed: 1 addition & 0 deletions
python -m pytest --cov=sar_sampling ${@}
