
Conversation

@jemeza-codegen
Contributor

Motivation

The LLM response sometimes gets truncated, preventing it from calling the create file tool.

Content

We now check why the LLM stopped producing tokens. If the stop reason is "max_tokens_reached", we return an error to the LLM.
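
For illustration, a minimal sketch of this kind of stop-reason check, assuming an Anthropic-style response object that exposes a `stop_reason` field; the names `check_stop_reason` and `LLMTruncationError` are hypothetical, not the PR's actual code.

```python
class LLMTruncationError(Exception):
    """Raised when the model stopped because it hit its output-token limit."""


def check_stop_reason(response) -> None:
    # Anthropic's API reports "max_tokens" as the stop reason for truncated
    # output; the PR text quotes "max_tokens_reached", so match both here.
    if response.stop_reason in ("max_tokens", "max_tokens_reached"):
        raise LLMTruncationError(
            "LLM output was truncated before the tool call could complete; "
            "surface this error back to the LLM so it can retry."
        )
```

The caller would catch `LLMTruncationError` and feed the message back to the model as a tool-error result, rather than silently accepting the truncated output.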

Testing

Please check the following before marking your PR as ready for review

  • I have added tests for my changes
  • I have updated the documentation or added new documentation as needed

@jemeza-codegen jemeza-codegen requested review from a team and codegen-team as code owners March 19, 2025 01:38
@jemeza-codegen jemeza-codegen enabled auto-merge (squash) March 19, 2025 21:12
@jemeza-codegen jemeza-codegen disabled auto-merge March 19, 2025 21:20
@jemeza-codegen jemeza-codegen enabled auto-merge (squash) March 19, 2025 22:18
@jemeza-codegen jemeza-codegen merged commit 3a3231f into develop March 19, 2025
17 of 18 checks passed
@jemeza-codegen jemeza-codegen deleted the jmeza-handle-max-token-output-stop branch March 19, 2025 22:25
@github-actions
Contributor

🎉 This PR is included in version 0.52.11 🎉

The release is available on GitHub release

Your semantic-release bot 📦🚀

Zeeeepa added a commit to Zeeeepa/codegen that referenced this pull request Apr 17, 2025
Zeeeepa added a commit to Zeeeepa/codegen that referenced this pull request Apr 23, 2025
Original commit by jemeza-codegen: fix!: LLM truncation error catch (codegen-sh#906)

