🤖 feat: include model + thinking in bash tool env #1118
Open
ThomasK33 wants to merge 1 commit into `main` from `attribution-rrw8`
+72 −5
Conversation
Change-Id: If4e0bd35b86bc63b35f15e39e9353c76ee259545
Signed-off-by: Thomas Kosiewski <tk@coder.com>
github-merge-queue bot pushed a commit that referenced this pull request on Dec 12, 2025
Expose model + thinking level for PR attribution via bash env vars.

- `getMuxEnv(...)` now supports optional `{ modelString, thinkingLevel }`
- `AIService` threads the current model + effective thinking level into tool env as:
  - `MUX_MODEL_STRING`
  - `MUX_THINKING_LEVEL`
- `docs/AGENTS.md` updated to require the new attribution footer format.

Validation:

- `bun test src/node/runtime/initHook.test.ts`
- `make static-check`
---
<details>
<summary>📋 Implementation Plan</summary>
# Plan: Extend GitHub PR attribution with model + thinking level
## Problem
Today, PR bodies often end with a simple attribution footer like:
- `_Generated with \`mux\`_`
You want that attribution to also include:
1. The **model** used to generate the changes.
2. The **thinking level** used.
## Goals
- Provide a **standard, copy/pasteable attribution footer format** that
includes model + thinking level.
- Make the **exact model + thinking level available to the agent at
runtime** so the footer can be accurate (no guessing).
- Keep changes minimal; avoid introducing a full GitHub integration
feature unless required.
## Non-goals (for this iteration)
- Automatically creating PRs from inside mux via a dedicated “Create PR”
tool/UI.
- Retroactively editing existing PRs server-side via GitHub Actions.
## Recommended output format
Make the footer a single line (easy to scan) plus optional
machine-readable comment (easy to parse later):
```md
---
_Generated with `mux` • Model: `<modelString>` • Thinking: `<thinkingLevel>`_
<!-- mux-attribution: model=<modelString> thinking=<thinkingLevel> -->
```
Notes:
- Use the **full mux `modelString`** (e.g. `openai:gpt-5.2-pro`) to
avoid ambiguity.
- `thinkingLevel` should be mux’s unified values:
`off|low|medium|high|xhigh`.
- No PR URL/number is included in the PR body attribution (it’s already
on the PR).
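To show why the hidden comment is "easy to parse later", here is a minimal sketch of a consumer. The regex and helper name are illustrative and not part of mux:

```typescript
// Hypothetical helper (not in mux) that recovers model + thinking level
// from the machine-readable attribution comment in a PR body.
const ATTRIBUTION_RE = /<!-- mux-attribution: model=(\S+) thinking=(\S+) -->/;

function parseAttribution(body: string): { model: string; thinking: string } | null {
  const m = body.match(ATTRIBUTION_RE);
  return m ? { model: m[1], thinking: m[2] } : null;
}

const body = [
  "Some PR description.",
  "---",
  "_Generated with `mux` • Model: `openai:gpt-5.2-pro` • Thinking: `high`_",
  "<!-- mux-attribution: model=openai:gpt-5.2-pro thinking=high -->",
].join("\n");

console.log(parseAttribution(body)); // { model: "openai:gpt-5.2-pro", thinking: "high" }
```

Keeping the comment on its own line (as in the template above) keeps the regex trivial.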
## Approach options
### Approach A (recommended): Expose model/thinking context + update
instructions (low risk)
**Net LoC estimate (product code only): ~40–90 LoC**
Implement:
- Surface `modelString` + `thinkingLevel` in the **bash tool
environment** as `MUX_MODEL_STRING` and `MUX_THINKING_LEVEL` so `gh pr
...` workflows can reference them.
- Update repo `AGENTS.md` guidance to require the richer attribution
footer.
- **Do not** inject `thinkingLevel` into the system prompt (to avoid
prompt-cache misses when switching e.g. `high` → `medium`).
Why this is enough:
- mux doesn’t currently own PR creation; agents typically run `gh` via
the bash tool.
- Env vars are available exactly where PR bodies are usually authored
(shell workflows) without changing the LLM prompt/caching behavior.
### Approach B: Add a helper CLI/script for “create PR with mux
attribution”
**Net LoC estimate (product code only): ~0 LoC** (mostly script/docs),
or **~200–400 LoC** if built into mux CLI/tools.
Implement a script (e.g. `scripts/gh_pr_create_with_mux_attribution.ts`)
that:
1. Creates the PR via `gh pr create ...`.
2. Ensures the PR body includes the attribution footer using
`$MUX_MODEL_STRING` / `$MUX_THINKING_LEVEL`.
This makes “add the correct footer” a single command, but is more
opinionated and adds surface area.
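The footer-appending step of Approach B could be sketched as follows. The helper name and the idempotence check are assumptions; the `gh pr create` invocation itself is only indicated in a comment:

```typescript
// Illustrative only: builds a PR body with the mux attribution footer appended.
// A real script would pass the result to `gh pr create --body ...`.
function withMuxAttribution(body: string, model: string, thinking: string): string {
  // Skip if a footer is already present so re-runs stay idempotent.
  if (body.includes("<!-- mux-attribution:")) return body;
  return [
    body.trimEnd(),
    "",
    "---",
    `_Generated with \`mux\` • Model: \`${model}\` • Thinking: \`${thinking}\`_`,
    `<!-- mux-attribution: model=${model} thinking=${thinking} -->`,
  ].join("\n");
}

const out = withMuxAttribution("Fixes the flaky test.", "openai:gpt-5.2-pro", "high");
console.log(out);
```

The idempotence guard is what makes "create or update" safe to run twice against the same PR.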
## Detailed implementation plan (Approach A)
### 1) Preserve prompt caching behavior
- **Do not** add per-request dynamic metadata (especially
`thinkingLevel`) into the system prompt.
- Prompt caching (e.g. Anthropic prompt caching) keys off the input;
changing the system prompt when switching `high` → `medium` would reduce
cache hits.
- Therefore, this change intentionally avoids modifying
`buildSystemMessage(...)`.
### 2) Add env vars for bash-based PR flows
- Extend `getMuxEnv(...)` to optionally accept `{ modelString,
thinkingLevel }`.
- Populate:
- `MUX_MODEL_STRING=<modelString>`
- `MUX_THINKING_LEVEL=<thinkingLevel||off>`
Files:
- `src/node/runtime/initHook.ts` (extend `getMuxEnv` signature +
implementation)
- `src/node/services/aiService.ts` (pass model/thinking into `getMuxEnv`
for tool calls)
Notes:
- Keep existing callers intact by making the new argument optional.
- It’s OK that init hooks (which aren’t tied to a model invocation)
won’t always set these vars.
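The extension described above might look roughly like this. This is a sketch, not the real implementation: the actual signature and the existing env vars live in `src/node/runtime/initHook.ts`:

```typescript
type ThinkingLevel = "off" | "low" | "medium" | "high" | "xhigh";

interface MuxEnvOptions {
  modelString?: string;
  thinkingLevel?: ThinkingLevel;
}

// Sketch only: existing callers keep working because the argument is optional.
function getMuxEnv(opts: MuxEnvOptions = {}): Record<string, string> {
  const env: Record<string, string> = {
    // ...existing mux env vars are preserved here unchanged...
  };
  if (opts.modelString !== undefined) {
    env.MUX_MODEL_STRING = opts.modelString;
    // Default to "off" so the variable is always defined alongside the model.
    env.MUX_THINKING_LEVEL = opts.thinkingLevel ?? "off";
  }
  return env;
}
```

Callers without a model invocation (e.g. init hooks) simply omit the options and get the old behavior.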
### 3) Update AGENTS.md guidance to require the richer footer
- Replace the existing instruction about `_Generated with \`mux\`_` with
the new required template.
- Add a short note:
- model/thinking values should be sourced from `$MUX_MODEL_STRING` and
`$MUX_THINKING_LEVEL` in bash (preferred; doesn’t affect prompt
caching).
Files:
- `AGENTS.md`
- `docs/AGENTS.md` (keep in sync if both are published)
### 4) Tests
Add/update unit tests so this doesn’t regress:
- Add/extend a `getMuxEnv` unit test (either new or in an existing
runtime test file)
- `getMuxEnv` includes the new env vars when options are provided.
- Existing env vars remain unchanged.
## Validation steps (manual)
- In a bash tool call (or any `gh` workflow), run:
- `echo "$MUX_MODEL_STRING"`
- `echo "$MUX_THINKING_LEVEL"`
- Create/update a PR body footer and confirm it renders as expected.
- Switch thinking level (e.g. `high` → `medium`) and confirm prompt
caching behavior is unchanged (since the system prompt isn’t modified by
this feature).
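The manual steps above can be combined into one bash-tool snippet. The variable values here are hard-coded stand-ins; in a real tool call mux would export them:

```shell
# Stand-ins for the values mux would export into the bash tool environment.
MUX_MODEL_STRING="openai:gpt-5.2-pro"
MUX_THINKING_LEVEL="high"

# Render the footer exactly as it should appear in the PR body.
printf '_Generated with `mux` • Model: `%s` • Thinking: `%s`_\n' \
  "$MUX_MODEL_STRING" "$MUX_THINKING_LEVEL"
```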
## Decisions (confirmed)
- Footer includes **only** model + thinking level (no PR URL/number).
- Footer label uses `Thinking:` (aligns with mux naming) and records the
mux `thinkingLevel` value (`off|low|medium|high|xhigh`).
- Include the hidden HTML comment for machine parsing.
- Use the **current** modelString/thinkingLevel at PR creation/update
time (no aggregation across sessions).
</details>
---
_Generated with `mux` • Model: `openai:gpt-5.2` • Thinking: `high`_
<!-- mux-attribution: model=openai:gpt-5.2 thinking=high -->
Signed-off-by: Thomas Kosiewski <tk@coder.com>