feat(tui): add experimental next-prompt suggestion #20309
Generate an ephemeral user-style next step suggestion after assistant responses and let users accept it with Right Arrow in the prompt. Keep suggestions out of message history and support NO_SUGGESTION refusal.
Force-pushed from 8c18db8 to eb05287.
Can we get this in beta? I wanted this for so long...
```
You are generating a suggested next user message for the current conversation.

Goal:
- Suggest a useful next step that keeps momentum.

Rules:
- Output exactly one line.
- Write as the user speaking to the assistant (for example: "Can you...", "Help me...", "Let's...").
- Match the user's tone and language; keep it natural and human.
- Prefer a concrete action over a broad question.
- If the conversation is vague or small-talk, steer toward a practical starter request.
- If there is no meaningful or appropriate next step to suggest, output exactly: NO_SUGGESTION
- Avoid corporate or robotic phrasing.
- Avoid asking multiple discovery questions in one sentence.
- Do not include quotes, labels, markdown, or explanations.

Examples:
- Greeting context -> "Can you scan this repo and suggest the best first task to tackle?"
- Bug-fix context -> "Can you reproduce this bug and propose the smallest safe fix?"
- Feature context -> "Let's implement this incrementally; start with the MVP version first."
- Conversation is complete -> "NO_SUGGESTION"
```
Suggested replacement prompt:

```
Suggest the next message the user would naturally type.
The test: Would they think "I was just about to type that"?

Rules:
- Write in the user's voice, not the assistant's ("run the tests", not "Let me run the tests")
- Be specific — "run the tests" beats "continue"
- Match the user's tone and language
- If the conversation is new or vague, steer toward a concrete starter rather than staying silent
- Output NO_SUGGESTION if the task is complete or no natural next step exists

Never suggest:
- Evaluative responses ("looks good", "that worked")
- Questions — predict actions, not curiosity
- New ideas the user didn't ask about
- Multiple asks in one line

Format: 2–12 words for in-flow tasks; one full sentence for cold-start or vague contexts. No quotes, labels, or explanation.
```
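Both prompt variants rely on the caller treating the literal `NO_SUGGESTION` line as a refusal and never surfacing it to the user. A minimal sketch of that handling, with a hypothetical `parseSuggestion` helper (illustrative, not this PR's actual code):

```typescript
// Interpret the model's one-line output under the prompt's contract.
// Returns null when the model refused or produced nothing usable.
function parseSuggestion(raw: string): string | null {
  // The prompt demands exactly one line; keep only the first, trimmed.
  const line = raw.trim().split("\n")[0].trim()
  if (line === "" || line === "NO_SUGGESTION") return null // explicit refusal
  // Strip stray wrapping quotes the model may add despite the rules.
  return line.replace(/^["']|["']$/g, "")
}

console.log(parseSuggestion("NO_SUGGESTION")) // null
console.log(parseSuggestion('"run the tests"')) // run the tests
```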
clean room inspiration from tried & tested.
alternatively, add the ability to change this prompt via settings/agents? i think this could be a separate "prediction" built-in agent that can have a system prompt override
```typescript
if (!model) return

const msgs = yield* Effect.promise(() => MessageV2.filterCompacted(MessageV2.stream(input.sessionID)))
const history = msgs.slice(-8)
```
your approach gets the last 8 messages and sends them as a separate query, which will get a 0% cache hit.

if we instead follow Claude Code's approach and send the entire chat history (including the tools, toolChoice, maxOutputTokens, system prompt, basically everything), appending the prompt-prediction instruction as a user prompt at the end, we get a FULL cache hit.

This gives a super cheap request with really good context for the agent to make a smart prediction.
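A sketch of the cache-friendly shape described above: reuse the exact messages of the main chat request and append the prediction instruction as one extra user message, so the shared prefix hits the provider's prompt cache. The `Message` type and `buildPredictionRequest` helper are illustrative assumptions, not opencode's API:

```typescript
type Message = { role: "system" | "user" | "assistant"; content: string }

const PREDICT_INSTRUCTION =
  "Suggest the next message the user would naturally type. " +
  "Output NO_SUGGESTION if no natural next step exists."

// Build the prediction request from the exact messages of the main chat
// request -- an identical prefix lets the cached prompt tokens be reused.
function buildPredictionRequest(chatMessages: Message[]): Message[] {
  return [...chatMessages, { role: "user", content: PREDICT_INSTRUCTION }]
}

const chat: Message[] = [
  { role: "system", content: "You are a coding assistant." },
  { role: "user", content: "Fix the failing test." },
  { role: "assistant", content: "Done; the test passes now." },
]

const prediction = buildPredictionRequest(chat)
// The original chat array is untouched; only the appended suffix differs.
console.log(prediction.length) // 4
console.log(prediction[3].content.startsWith("Suggest")) // true
```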
```typescript
const suggestion = line.length > 240 ? line.slice(0, 237) + "..." : line
if ((yield* status.get(input.sessionID)).type !== "idle") return
yield* status.set(input.sessionID, { type: "idle", suggestion })
```
firing an idle hook for an auxiliary generation is undesirable.
lots of plugins listen for the session.idle hook to notify completion (beep sound, push notification), so this would be conflated with real idleness
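One way to avoid the conflation, sketched here with an illustrative event bus and event names (not opencode's actual plugin API): publish the suggestion on a dedicated event type so `session.idle` subscribers never fire for auxiliary generations.

```typescript
// Discriminated union keeps real idleness and suggestions distinct.
type SessionEvent =
  | { type: "session.idle"; sessionID: string }
  | { type: "session.suggestion"; sessionID: string; suggestion: string }

class Bus {
  private handlers: Array<(e: SessionEvent) => void> = []
  subscribe(h: (e: SessionEvent) => void) { this.handlers.push(h) }
  publish(e: SessionEvent) { for (const h of this.handlers) h(e) }
}

const bus = new Bus()
const notified: string[] = []
// A notification plugin that only cares about real idleness:
bus.subscribe((e) => { if (e.type === "session.idle") notified.push(e.sessionID) })

// The auxiliary generation publishes its own event type...
bus.publish({ type: "session.suggestion", sessionID: "s1", suggestion: "run the tests" })
// ...and only a genuine idle transition reaches the plugin.
bus.publish({ type: "session.idle", sessionID: "s1" })
console.log(notified.length) // 1
```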
Can its context awareness be integrated? For example, it might be helpful if it could automatically suggest review recommendations during code reviews. (This could be implemented via a plugin, but I'd like to see this kind of functionality exposed.)