fix: https://github.com/NVIDIA/Model-Optimizer/issues/981#983

Merged
AAnoosheh merged 4 commits into main from chenhany/fix_issue_981 on Mar 18, 2026
Conversation

@ChenhanYu
Collaborator

@ChenhanYu ChenhanYu commented Mar 5, 2026

What does this PR do?

Type of change: Bug fix

An issue was reported in #981 where calling str(v) on some TransformerConfig fields raises a TypeError. The fix is to catch this error and fall back to repr(type(v)) instead.

Usage

# Add a code snippet demonstrating how to use this

Testing

Before your PR is "Ready for review"

Make sure you read and follow Contributor guidelines and your commits are signed (git commit -s -S).

Make sure you read and follow the Security Best Practices (e.g. avoiding hardcoded trust_remote_code=True, using torch.load(..., weights_only=True), avoiding pickle, etc.).

  • Is this change backward compatible?: ✅ / ❌ / N/A
  • If you copied code from any other source, did you follow IP policy in CONTRIBUTING.md?: ✅ / ❌ / N/A
  • Did you write any new necessary tests?: ✅ / ❌ / N/A
  • Did you update Changelog?: ✅ / ❌ / N/A

Additional Information

Summary by CodeRabbit

  • Bug Fixes

    • Improved checkpoint loading stability by handling unusual configuration values more gracefully; such values no longer cause failures and are skipped with a warning instead.
    • Reduced risk of crashes during configuration processing when encountering non-standard or unsupported objects.
  • Chores

    • Checkpoints no longer include saved run configuration or tool-version metadata, yielding smaller, simpler checkpoint files.

Signed-off-by: Chenhan Yu <chenhany@nvidia.com>
@ChenhanYu ChenhanYu self-assigned this Mar 5, 2026
@ChenhanYu ChenhanYu requested a review from a team as a code owner March 5, 2026 19:17
@ChenhanYu ChenhanYu requested a review from kevalmorabia97 March 5, 2026 19:17
@coderabbitai
Copy link
Contributor

coderabbitai bot commented Mar 5, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: cdc78d82-1d7e-40ce-956b-b0f82de5ca4e

📥 Commits

Reviewing files that changed from the base of the PR and between 6e567e6 and 68f6c20.

📒 Files selected for processing (1)
  • modelopt/torch/opt/plugins/mcore_dist_checkpointing.py
💤 Files with no reviewable changes (1)
  • modelopt/torch/opt/plugins/mcore_dist_checkpointing.py

📝 Walkthrough

Removed YAML-based run config handling and related helper logic from the Megatron-Core distributed checkpointing plugin; the file no longer writes a run_config YAML or imports YAML.

Changes

Cohort / File(s): Dist checkpoint cleanup — modelopt/torch/opt/plugins/mcore_dist_checkpointing.py
Summary: Removed the YAML import and the run-config creation/writing in save_sharded_modelopt_state; eliminated the internal helper _parse_transformer_config and the DROP_SUBSTRINGS filtering logic; removed the nvidia_modelopt_version entry from the saved config.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 2

❌ Failed checks (2 warnings)

  • Title check — ⚠️ Warning: the PR title is a direct link to a GitHub issue and does not describe the actual change. Resolution: use a descriptive title such as "Remove YAML-based config saving in mcore checkpointing" or "Fix TypeError in transformer config handling".
  • Docstring Coverage — ⚠️ Warning: docstring coverage is 50.00%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.
✅ Passed checks (2 passed)
  • Description Check — ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Security Anti-Patterns — ✅ Passed: no security anti-patterns found (torch.load weights_only, numpy.load allow_pickle, trust_remote_code, eval/exec, or nosec comments).


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `modelopt/torch/opt/plugins/mcore_dist_checkpointing.py`:
- Around lines 151-157: Remove the stray pre-try conversion so the fallback can run: delete the initial `config[k] = str(v)` that precedes the try block, leaving only the guarded sequence that attempts `config[k] = str(v)` and, on `AttributeError`/`TypeError`, falls back to `config[k] = repr(type(v))`.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 782435f7-d420-4661-8805-3bf2d93883dd

📥 Commits

Reviewing files that changed from the base of the PR and between 31f0783 and 6ab53db.

📒 Files selected for processing (1)
  • modelopt/torch/opt/plugins/mcore_dist_checkpointing.py

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Signed-off-by: Keval Morabia <28916987+kevalmorabia97@users.noreply.github.com>
@codecov

codecov bot commented Mar 5, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 70.30%. Comparing base (edde087) to head (68f6c20).
⚠️ Report is 59 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #983      +/-   ##
==========================================
- Coverage   72.10%   70.30%   -1.80%     
==========================================
  Files         209      227      +18     
  Lines       23629    25860    +2231     
==========================================
+ Hits        17037    18182    +1145     
- Misses       6592     7678    +1086     

☔ View full report in Codecov by Sentry.

Signed-off-by: Chenhan Yu <chenhany@nvidia.com>
@AAnoosheh
Contributor

@ChenhanYu This can (accidentally) fix NVIDIA-NeMo/Megatron-Bridge#1983 as well

Signed-off-by: Asha Anoosheh <aanoosheh@nvidia.com>
@AAnoosheh AAnoosheh enabled auto-merge (squash) March 18, 2026 22:03
@AAnoosheh AAnoosheh disabled auto-merge March 18, 2026 22:03
@AAnoosheh AAnoosheh enabled auto-merge (squash) March 18, 2026 22:03
@AAnoosheh AAnoosheh assigned AAnoosheh and unassigned ChenhanYu Mar 18, 2026
@AAnoosheh AAnoosheh merged commit 52cfa4e into main Mar 18, 2026
58 of 62 checks passed
@AAnoosheh AAnoosheh deleted the chenhany/fix_issue_981 branch March 18, 2026 22:30