# Signals
Signals are hints. A thumbs-down doesn’t mean “this session failed.” It means “start looking here.” Kelet uses signals to focus its analysis — without them, it has no starting point.
More signals = more accurate root cause analysis.
## Two categories

**User signals** come from people interacting with your agent:

- Explicit: thumbs up/down via `VoteFeedback`, free-text feedback
- Implicit: edits to AI-generated content tracked via `useFeedbackState`, coded behavioral hooks (retry, dismiss, abandon, escalate)
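Behavioral hooks reduce to small payloads. Here is a minimal sketch of mapping the four implicit behaviors to signals; the payload shape, the helper name, and the choice to score these behaviors as negative evidence are assumptions for illustration, not Kelet's actual API:

```typescript
// Hypothetical helper -- not part of the Kelet SDK.
type ImplicitAction = "retry" | "dismiss" | "abandon" | "escalate";

function implicitSignal(action: ImplicitAction) {
  return {
    kind: "event" as const,
    source: "human" as const,
    // Trigger names follow the source-action convention.
    trigger_name: `user-${action}`,
    // Assumption: treat all four behaviors as negative evidence.
    score: 0.0,
  };
}

// e.g. the user hit "retry" on a response:
const signal = implicitSignal("retry");
```

Wire a call like this into whatever UI event already detects the behavior (a retry button handler, a dismiss callback, and so on).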
**Synthetic signals** are platform-run evaluators Kelet executes on every new session:
- LLM-as-judge (semantic quality, tone, goal completion)
- Code evaluators (latency, turn count, tool usage patterns)
- Webhooks (custom evaluation logic)
No app code required for synthetic signals. Configure them in the console.
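If you use the webhook option, the custom logic lives on your side. A minimal sketch of a code-style evaluator, assuming a session payload with turn count and latency (the `Session` shape and the scoring heuristic are assumptions, not a Kelet contract):

```typescript
// Assumed session shape delivered to the webhook.
type Session = { turns: number; latencyMs: number };

// Return a 0.0-1.0 score, matching the signal score scale.
function evaluate(session: Session): number {
  // Penalize sessions that take too many turns...
  const turnScore =
    session.turns <= 6 ? 1.0 : Math.max(0, 1 - (session.turns - 6) * 0.1);
  // ...or respond too slowly.
  const latencyScore =
    session.latencyMs <= 2000
      ? 1.0
      : Math.max(0, 1 - (session.latencyMs - 2000) / 8000);
  // Score on the worst dimension.
  return Math.min(turnScore, latencyScore);
}
```

Your webhook endpoint would run a function like this and return the score in its response body.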
## Signal anatomy

| Field | Description |
|---|---|
| `kind` | `feedback`, `edit`, `event`, `metric`, or `arbitrary` |
| `source` | `human` (user) or `synthetic` (platform evaluator) |
| `trigger_name` | What caused this signal. Use source-action format: `user-vote`, `user-edit`, `user-retry` |
| `score` | 0.0–1.0. For votes: 1.0 = positive, 0.0 = negative |
| `value` | Text: feedback content, diff, reasoning |
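The anatomy above can be expressed as a type. The field names and allowed values come from the table; the TypeScript shape itself is a sketch, not an official SDK type:

```typescript
// Sketch of a signal, field-for-field from the anatomy table.
type Signal = {
  kind: "feedback" | "edit" | "event" | "metric" | "arbitrary";
  source: "human" | "synthetic";
  trigger_name: string; // source-action format, e.g. "user-vote"
  score?: number;       // 0.0-1.0; for votes, 1.0 = positive, 0.0 = negative
  value?: string;       // feedback text, a diff, or evaluator reasoning
};

// A thumbs-down with free-text feedback:
const thumbsDown: Signal = {
  kind: "feedback",
  source: "human",
  trigger_name: "user-vote",
  score: 0.0,
  value: "The summary missed the key deadline.",
};
```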
## Cold start

On day one, you have no user feedback. Activate synthetic evaluators immediately — they give Kelet enough signal to start finding patterns before real users provide feedback.
## Finding signals for your agent

You don’t have to audit your codebase manually. The AI coding skill reads your agent code, understands its failure modes, and proposes both synthetic evaluators and human feedback hooks tailored to what your agent does. See the quickstart.
## Synthetic signals

For a full guide on creating and managing evaluators, see Synthetic Signals.