fix(opencode): add input limit for compaction #8465
base: dev
Conversation
Hey! Your PR title needs updating. Please update it to start with one of the allowed prefixes. See CONTRIBUTING.md for details.
The following comment was made by an LLM; it may be inaccurate: No duplicate PRs found.
Pull request overview
This PR attempts to address the ChatGPT Pro plan's lower context limit, compared to the API limit, by adding an `input` field to the limit configuration for the gpt-5.2-codex model.
Changes:
- Added `input: 272000` to the limit object for the gpt-5.2-codex model configuration
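For orientation, a minimal sketch of what that limit object could look like; only the `input` value comes from this PR, and the other fields and numbers are placeholders rather than actual gpt-5.2-codex limits:

```ts
// Sketch of a per-model limit object; field names mirror the diff below,
// but the context/output numbers are placeholders, not values from this PR.
const limit = {
  context: 400_000, // total context window (placeholder)
  output: 128_000,  // maximum output tokens (placeholder)
  input: 272_000,   // new: lower input cap reported for the ChatGPT Pro plan
}
```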
  const count = input.tokens.input + input.tokens.cache.read + input.tokens.output
  const output = Math.min(input.model.limit.output, SessionPrompt.OUTPUT_TOKEN_MAX) || SessionPrompt.OUTPUT_TOKEN_MAX
- const usable = context - output
+ const usable = input.model.limit.input || context - output
If an input limit is provided, it always becomes the usable limit; it can be less than context - output.
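If the goal were instead to keep the usable budget from ever exceeding context - output even when an input limit is configured, one option (a sketch against the names in the diff above, not code from this PR) would be:

```ts
// Cap the usable budget at both the configured input limit (when present)
// and context - output; `input`, `context`, and `output` are assumed to be
// in scope as in the diff above.
const usable = Math.min(input.model.limit.input ?? Infinity, context - output)
```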
Auto-compaction now triggers based on the optional input limit. There is a separate issue (I believe with all models) where the next outgoing message can overrun the context/input limits. In that scenario, the error should be caught, auto-compaction should run on the thread history, and the latest message should then be retried. Perhaps integrated into …
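A rough sketch of that catch-compact-retry idea; every name below is hypothetical, invented for illustration, and not part of opencode's actual API:

```ts
// Hypothetical catch-compact-retry wrapper; the parameters stand in for
// whatever the real send/compact/error-detection hooks would be.
async function sendWithAutoCompaction<T>(
  send: () => Promise<T>,                // sends the latest outgoing message
  compact: () => Promise<void>,          // compacts/summarizes thread history
  isOverflow: (err: unknown) => boolean, // detects a context/input-limit error
): Promise<T> {
  try {
    return await send()
  } catch (err) {
    if (!isOverflow(err)) throw err
    await compact()     // free up room by compacting prior history
    return await send() // retry the latest message once
  }
}
```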
  delete provider.models[modelId]
  }
}
See anomalyco/models.dev#646; this is now upstream, including limit.input.
What does this PR do?
- Triggers compaction based on the optional `input` limit from models.dev
How did you verify your code works?
- Ran `bun dev` to confirm compaction triggers