Conversation

@ThomasK33
Member

Expose model + thinking level for PR attribution via bash env vars.

  • getMuxEnv(...) now supports optional { modelString, thinkingLevel }
  • AIService threads the current model + effective thinking level into tool env as:
    • MUX_MODEL_STRING
    • MUX_THINKING_LEVEL
  • docs/AGENTS.md updated to require the new attribution footer format.

Validation:

  • bun test src/node/runtime/initHook.test.ts
  • make static-check

📋 Implementation Plan

Plan: Extend GitHub PR attribution with model + thinking level

Problem

Today, PR bodies often end with a simple attribution footer like:

  • _Generated with `mux`_

You want that attribution to also include:

  1. The model used to generate the changes.
  2. The thinking level used.

Goals

  • Provide a standard, copy/pasteable attribution footer format that includes model + thinking level.
  • Make the exact model + thinking level available to the agent at runtime so the footer can be accurate (no guessing).
  • Keep changes minimal; avoid introducing a full GitHub integration feature unless required.

Non-goals (for this iteration)

  • Automatically creating PRs from inside mux via a dedicated “Create PR” tool/UI.
  • Retroactively editing existing PRs server-side via GitHub Actions.

Recommended output format

Make the footer a single line (easy to scan) plus optional machine-readable comment (easy to parse later):

---
_Generated with `mux` • Model: `<modelString>` • Thinking: `<thinkingLevel>`_
<!-- mux-attribution: model=<modelString> thinking=<thinkingLevel> -->

Notes:

  • Use the full mux modelString (e.g. openai:gpt-5.2-pro) to avoid ambiguity.
  • thinkingLevel should be mux’s unified values: off|low|medium|high|xhigh.
  • No PR URL/number is included in the PR body attribution (it’s already on the PR).
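
The template above can be assembled programmatically. Here is a minimal TypeScript sketch; `buildAttributionFooter` is an illustrative name, not a function in the codebase:

```typescript
// Minimal sketch: builds the recommended footer from a mux modelString and
// thinkingLevel. Nothing here is tied to the actual mux implementation.
function buildAttributionFooter(modelString: string, thinkingLevel: string): string {
  const visible = `_Generated with \`mux\` • Model: \`${modelString}\` • Thinking: \`${thinkingLevel}\`_`;
  const hidden = `<!-- mux-attribution: model=${modelString} thinking=${thinkingLevel} -->`;
  return ["---", visible, hidden].join("\n");
}

console.log(buildAttributionFooter("openai:gpt-5.2-pro", "high"));
```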

Approach options

Approach A (recommended): Expose model/thinking context + update instructions (low risk)

Net LoC estimate (product code only): ~40–90 LoC

Implement:

  • Surface modelString + thinkingLevel in the bash tool environment as MUX_MODEL_STRING and MUX_THINKING_LEVEL so gh pr ... workflows can reference them.
  • Update repo AGENTS.md guidance to require the richer attribution footer.
  • Do not inject thinkingLevel into the system prompt (to avoid prompt-cache misses when switching e.g. high → medium).

Why this is enough:

  • mux doesn’t currently own PR creation; agents typically run gh via the bash tool.
  • Env vars are available exactly where PR bodies are usually authored (shell workflows) without changing the LLM prompt/caching behavior.

Approach B: Add a helper CLI/script for “create PR with mux attribution”

Net LoC estimate (product code only): ~0 LoC (mostly script/docs), or ~200–400 LoC if built into mux CLI/tools.

Implement a script (e.g. scripts/gh_pr_create_with_mux_attribution.ts) that:

  1. Creates the PR via gh pr create ....
  2. Ensures the PR body includes the attribution footer using $MUX_MODEL_STRING / $MUX_THINKING_LEVEL.

This makes “add the correct footer” a single command, but is more opinionated and adds surface area.
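
If Approach B were pursued, the script could look roughly like this. It is a sketch, not the actual script: `attributionFooter` and `createPrWithAttribution` are hypothetical names, while the `gh pr create` flags shown are standard ones.

```typescript
import { execFileSync } from "node:child_process";

// Builds the attribution footer from the env vars Approach A exposes.
function attributionFooter(env: Record<string, string | undefined> = process.env): string {
  const model = env.MUX_MODEL_STRING ?? "unknown";
  const thinking = env.MUX_THINKING_LEVEL ?? "off";
  return [
    "---",
    `_Generated with \`mux\` • Model: \`${model}\` • Thinking: \`${thinking}\`_`,
    `<!-- mux-attribution: model=${model} thinking=${thinking} -->`,
  ].join("\n");
}

// Creates the PR with the footer appended to the body.
function createPrWithAttribution(title: string, body: string): void {
  execFileSync(
    "gh",
    ["pr", "create", "--title", title, "--body", `${body}\n\n${attributionFooter()}`],
    { stdio: "inherit" },
  );
}
```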

Detailed implementation plan (Approach A)

1) Preserve prompt caching behavior

  • Do not add per-request dynamic metadata (especially thinkingLevel) into the system prompt.
    • Prompt caching (e.g. Anthropic prompt caching) keys off the input; changing the system prompt when switching high → medium would reduce cache hits.
  • Therefore, this change intentionally avoids modifying buildSystemMessage(...).

2) Add env vars for bash-based PR flows

  • Extend getMuxEnv(...) to optionally accept { modelString, thinkingLevel }.
  • Populate:
    • MUX_MODEL_STRING=<modelString>
    • MUX_THINKING_LEVEL=<thinkingLevel||off>

Files:

  • src/node/runtime/initHook.ts (extend getMuxEnv signature + implementation)
  • src/node/services/aiService.ts (pass model/thinking into getMuxEnv for tool calls)

Notes:

  • Keep existing callers intact by making the new argument optional.
  • It’s OK that init hooks (which aren’t tied to a model invocation) won’t always set these vars.
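
A sketch of the optional-argument shape (the real getMuxEnv in src/node/runtime/initHook.ts populates additional base vars and may differ in detail):

```typescript
interface MuxEnvOptions {
  modelString?: string;
  thinkingLevel?: string; // mux's unified values: off|low|medium|high|xhigh
}

// Existing callers that pass no options keep their current behavior.
function getMuxEnv(options?: MuxEnvOptions): Record<string, string> {
  const env: Record<string, string> = {
    // ...existing mux env vars stay exactly as before...
  };
  if (options?.modelString !== undefined) {
    env.MUX_MODEL_STRING = options.modelString;
    // <thinkingLevel||off>: fall back to "off" when no level is set.
    env.MUX_THINKING_LEVEL = options.thinkingLevel || "off";
  }
  return env;
}
```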

3) Update AGENTS.md guidance to require the richer footer

  • Replace the existing instruction about _Generated with `mux`_ with the new required template.
  • Add a short note:
    • model/thinking values should be sourced from $MUX_MODEL_STRING and $MUX_THINKING_LEVEL in bash (preferred; doesn’t affect prompt caching).

Files:

  • AGENTS.md
  • docs/AGENTS.md (keep in sync if both are published)

4) Tests

Add/update unit tests so this doesn’t regress:

  • Add/extend a getMuxEnv unit test (either new or in an existing runtime test file)
    • getMuxEnv includes the new env vars when options are provided.
    • Existing env vars remain unchanged.

Validation steps (manual)

  • In a bash tool call (or any gh workflow), run:
    • echo "$MUX_MODEL_STRING"
    • echo "$MUX_THINKING_LEVEL"
  • Create/update a PR body footer and confirm it renders as expected.
  • Switch thinking level (e.g. high → medium) and confirm prompt caching behavior is unchanged (since the system prompt isn’t modified by this feature).

Decisions (confirmed)

  • Footer includes only model + thinking level (no PR URL/number).
  • Footer label uses Thinking: (aligns with mux naming) and records the mux thinkingLevel value (off|low|medium|high|xhigh).
  • Include the hidden HTML comment for machine parsing.
  • Use the current modelString/thinkingLevel at PR creation/update time (no aggregation across sessions).

_Generated with `mux` • Model: `openai:gpt-5.2` • Thinking: `high`_
<!-- mux-attribution: model=openai:gpt-5.2 thinking=high -->

Change-Id: If4e0bd35b86bc63b35f15e39e9353c76ee259545
Signed-off-by: Thomas Kosiewski <tk@coder.com>
@ThomasK33 added this pull request to the merge queue Dec 12, 2025
github-merge-queue bot pushed a commit that referenced this pull request Dec 12, 2025
Expose model + thinking level for PR attribution via bash env vars.

@github-merge-queue bot removed this pull request from the merge queue due to failed status checks Dec 12, 2025