
feat(tui): add experimental next-prompt suggestion#20309

Open
R44VC0RP wants to merge 2 commits into dev from opencode-suggest

Conversation

@R44VC0RP
Collaborator

Generate an ephemeral user-style next step suggestion after assistant responses and let users accept it with Right Arrow in the prompt. Keep suggestions out of message history and support NO_SUGGESTION refusal.

@avarayr

avarayr commented Apr 1, 2026

Can we get this in beta? I've wanted this for so long...

Comment on lines +1 to +21
You are generating a suggested next user message for the current conversation.

Goal:
- Suggest a useful next step that keeps momentum.

Rules:
- Output exactly one line.
- Write as the user speaking to the assistant (for example: "Can you...", "Help me...", "Let's...").
- Match the user's tone and language; keep it natural and human.
- Prefer a concrete action over a broad question.
- If the conversation is vague or small-talk, steer toward a practical starter request.
- If there is no meaningful or appropriate next step to suggest, output exactly: NO_SUGGESTION
- Avoid corporate or robotic phrasing.
- Avoid asking multiple discovery questions in one sentence.
- Do not include quotes, labels, markdown, or explanations.

Examples:
- Greeting context -> "Can you scan this repo and suggest the best first task to tackle?"
- Bug-fix context -> "Can you reproduce this bug and propose the smallest safe fix?"
- Feature context -> "Let's implement this incrementally; start with the MVP version first."
- Conversation is complete -> "NO_SUGGESTION"
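The contract above (exactly one line, or the literal `NO_SUGGESTION` sentinel) is straightforward to enforce on the consumer side. A minimal sketch of how the TUI might sanitize the model output; `parseSuggestion` is a hypothetical helper for illustration, not code from this PR:

```typescript
// Hypothetical sanitizer for the prompt's output contract:
// keep only the first non-empty line, strip wrapping quotes,
// and treat the NO_SUGGESTION sentinel as "nothing to show".
function parseSuggestion(raw: string): string | null {
  const line = raw
    .split("\n")
    .map((l) => l.trim())
    .find((l) => l.length > 0)
  if (!line || line === "NO_SUGGESTION") return null
  // Models sometimes quote the suggestion despite instructions.
  return line.replace(/^["']|["']$/g, "")
}
```

Returning `null` for both the sentinel and empty output lets the caller skip rendering with a single check.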

@avarayr avarayr Apr 1, 2026


Suggested change
You are generating a suggested next user message for the current conversation.
Goal:
- Suggest a useful next step that keeps momentum.
Rules:
- Output exactly one line.
- Write as the user speaking to the assistant (for example: "Can you...", "Help me...", "Let's...").
- Match the user's tone and language; keep it natural and human.
- Prefer a concrete action over a broad question.
- If the conversation is vague or small-talk, steer toward a practical starter request.
- If there is no meaningful or appropriate next step to suggest, output exactly: NO_SUGGESTION
- Avoid corporate or robotic phrasing.
- Avoid asking multiple discovery questions in one sentence.
- Do not include quotes, labels, markdown, or explanations.
Examples:
- Greeting context -> "Can you scan this repo and suggest the best first task to tackle?"
- Bug-fix context -> "Can you reproduce this bug and propose the smallest safe fix?"
- Feature context -> "Let's implement this incrementally; start with the MVP version first."
- Conversation is complete -> "NO_SUGGESTION"
Suggest the next message the user would naturally type.
The test: Would they think "I was just about to type that"?
Rules:
- Write in the user's voice, not the assistant's ("run the tests", not "Let me run the tests")
- Be specific — "run the tests" beats "continue"
- Match the user's tone and language
- If the conversation is new or vague, steer toward a concrete starter rather than staying silent
- Output NO_SUGGESTION if the task is complete or no natural next step exists
Never suggest:
- Evaluative responses ("looks good", "that worked")
- Questions — predict actions, not curiosity
- New ideas the user didn't ask about
- Multiple asks in one line
Format: 2–12 words for in-flow tasks; one full sentence for cold-start or vague contexts. No quotes, labels, or explanation.

Clean-room inspiration from tried-and-tested prompts.

Alternatively, add the ability to change this prompt via settings/agents? I think this could be a separate built-in "prediction" agent that can have a system prompt override.

if (!model) return

const msgs = yield* Effect.promise(() => MessageV2.filterCompacted(MessageV2.stream(input.sessionID)))
const history = msgs.slice(-8)

@avarayr avarayr Apr 1, 2026


Your approach takes the last 8 messages and sends them as a separate query, which will get a 0% cache hit.

If we could follow Claude Code's approach and send the entire chat history (including the tools, toolChoice, maxOutputTokens, system prompt, basically everything), and append the prompt-prediction instruction as a user message at the end, we would get a FULL cache hit.

This gives a super cheap request with really good context for the agent to make a smart prediction.
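For provider prompt caching to hit, the new request has to share a byte-identical prefix with the main request. A minimal sketch of the shape being proposed, using an assumed `Msg` type rather than opencode's actual message types:

```typescript
// Assumed message shape for illustration (not opencode's MessageV2).
type Msg = { role: "system" | "user" | "assistant"; content: string }

const PREDICT_INSTRUCTION =
  "Suggest the next message the user would naturally type, or output NO_SUGGESTION."

// Reuse the session's messages verbatim as the prefix so the provider's
// prompt cache matches the main request; the only uncached tokens are
// the trailing prediction instruction.
function buildPredictionRequest(history: Msg[]): Msg[] {
  return [...history, { role: "user", content: PREDICT_INSTRUCTION }]
}
```

Contrast with slicing the last 8 messages: that prefix differs from the main request, so every prediction call pays full input-token price.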


const suggestion = line.length > 240 ? line.slice(0, 237) + "..." : line
if ((yield* status.get(input.sessionID)).type !== "idle") return
yield* status.set(input.sessionID, { type: "idle", suggestion })


Firing an idle hook for an auxiliary generation is undesirable.
Lots of plugins listen for the session.idle hook to notify on completion (beep sound, push notification); this would be conflated.
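One way to avoid that conflation would be to carry the suggestion in its own status variant instead of re-emitting idle. A sketch with assumed type names (the PR's actual status API may differ):

```typescript
// Assumed discriminated union: "suggestion" is a distinct variant, so
// plugins that match type === "idle" never fire for auxiliary generations.
type SessionStatus =
  | { type: "busy" }
  | { type: "idle" }
  | { type: "suggestion"; suggestion: string }

// Plugins that beep / push-notify should match only the real idle transition.
function isIdleNotification(s: SessionStatus): boolean {
  return s.type === "idle"
}
```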

@jensenojs
Contributor

Could context awareness be integrated? For example, it might be helpful if it could automatically suggest review recommendations during code reviews. (This could be implemented via a plugin, but I'd like to see this kind of functionality exposed.)

