
Allow no quantization during QATConfig convert #2694


Open

andrewor14 wants to merge 1 commit into main from qat-config-convert-no-quant

Conversation

andrewor14
Contributor

@andrewor14 andrewor14 commented Aug 5, 2025

Summary: This commit adds back the functionality to swap `FakeQuantized*` modules back to the corresponding `torch.nn.*` modules without performing post-training quantization:

```
QATConfig(base_config=None, step="convert")
```

This has the exact same functionality as this deprecated config:

```
FromIntXQuantizationAwareTrainingConfig()
```

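To make the equivalence concrete, here is a minimal sketch. The import paths, the toy model, and the `Int8DynamicActivationInt4WeightConfig` base config are illustrative assumptions and are not taken from this PR:

```
import torch
from torchao.quantization import quantize_, Int8DynamicActivationInt4WeightConfig
from torchao.quantization.qat import QATConfig

# Toy model and base PTQ config, chosen only for illustration.
model = torch.nn.Sequential(torch.nn.Linear(256, 256))
base_ptq_config = Int8DynamicActivationInt4WeightConfig(group_size=32)

# Prepare: plain linear layers are swapped for FakeQuantized* QAT wrappers.
quantize_(model, QATConfig(base_ptq_config, step="prepare"))
assert type(model[0]) is not torch.nn.Linear

# Convert with no base config: FakeQuantized* modules are swapped back to
# plain torch.nn.* modules and no post-training quantization is applied,
# matching the deprecated FromIntXQuantizationAwareTrainingConfig().
quantize_(model, QATConfig(base_config=None, step="convert"))
assert type(model[0]) is torch.nn.Linear
```
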
This functionality is added back since it may be useful to users who wish to save QAT-trained checkpoints from models containing only `torch.nn.*` modules (not `FakeQuantized*` modules), e.g. when training and inference need to happen on different machines:

```
quantize_(model, QATConfig(base_ptq_config, step="prepare"))
train(model)
quantize_(model, QATConfig(step="convert"))
torch.save(model.state_dict(), "my_checkpoint.pt")

# On a different machine
model.load_state_dict(torch.load("my_checkpoint.pt"))
quantize_(model, base_ptq_config)
```
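
Fleshing out the second half of that example: because the converted model contains only plain `torch.nn.*` modules, the saved checkpoint is an ordinary float state dict, so the inference machine can load it into a vanilla model before applying the real PTQ config. A sketch under the same assumptions as above (the toy architecture and the base config are placeholders):

```
import torch
from torchao.quantization import quantize_, Int8DynamicActivationInt4WeightConfig

# Rebuild the same architecture that was trained on the other machine.
inference_model = torch.nn.Sequential(torch.nn.Linear(256, 256))

# The checkpoint contains only ordinary float tensors, so it loads cleanly
# into the plain model.
state_dict = torch.load("my_checkpoint.pt", weights_only=True)
inference_model.load_state_dict(state_dict)

# Apply post-training quantization on the inference machine.
quantize_(inference_model, Int8DynamicActivationInt4WeightConfig(group_size=32))
```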

Test Plan:

```
python test/quantization/test_qat.py -k qat_config_init
python test/quantization/test_qat.py -k qat_api_convert_no_quantization
```

@andrewor14 andrewor14 requested a review from jerryzh168 August 5, 2025 19:00

pytorch-bot bot commented Aug 5, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/2694

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

✅ No Failures

As of commit 53af27a with merge base 418593c:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla bot added the CLA Signed label (this label is managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed) Aug 5, 2025
@andrewor14 andrewor14 force-pushed the qat-config-convert-no-quant branch from b1f35a1 to 9fbcffe August 5, 2025 19:01
@andrewor14 andrewor14 requested a review from drisspg August 5, 2025 19:01
@andrewor14 andrewor14 force-pushed the qat-config-convert-no-quant branch from 9fbcffe to 53af27a August 5, 2025 19:02
@andrewor14 andrewor14 added the topic: improvement label (use this tag if this PR is an improvement that doesn't fit into any of the other categories) Aug 5, 2025