Signals

Signals are hints. A thumbs-down doesn’t mean “this session failed.” It means “start looking here.” Kelet uses signals to focus its analysis — without them, it has no starting point.

More signals = more accurate root cause analysis.

User signals — from people interacting with your agent:

  • Explicit: thumbs up/down via VoteFeedback, free-text feedback
  • Implicit: edits to AI-generated content tracked via useFeedbackState, coded behavioral hooks (retry, dismiss, abandon, escalate)
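The implicit behavioral hooks above can be sketched as a small mapping from user behavior to a signal payload. This is illustrative only: the payload shape is an assumption, not Kelet's actual SDK type, and only `user-vote`, `user-edit`, and `user-retry` trigger names appear on this page — the others simply extend the same source-action convention.

```typescript
// Hypothetical mapping from implicit user behaviors to signal payloads.
// Trigger names follow the source-action convention documented on this page.
const BEHAVIOR_TRIGGERS = {
  retry: "user-retry",
  dismiss: "user-dismiss",
  abandon: "user-abandon",
  escalate: "user-escalate",
} as const;

function behaviorToSignal(behavior: keyof typeof BEHAVIOR_TRIGGERS) {
  return {
    kind: "event" as const,
    source: "human" as const,
    trigger_name: BEHAVIOR_TRIGGERS[behavior],
    // Implicit behaviors are hints, not hard failures, so no score is
    // attached here — they tell Kelet where to start looking.
  };
}
```

A coded hook would call something like `behaviorToSignal("retry")` when the user regenerates a response, and send the result alongside the session ID.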

Synthetic signals — platform-run evaluators Kelet executes on every new session:

  • LLM-as-judge (semantic quality, tone, goal completion)
  • Code evaluators (latency, turn count, tool usage patterns)
  • Webhooks (custom evaluation logic)

No app code required for synthetic signals. Configure them in the console.
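For the code-evaluator and webhook categories, the core is a function that maps session data to a score. The session shape and weighting below are assumptions for illustration — the actual contract is whatever you configure in the console — but the output respects the 0.0–1.0 score range signals use.

```typescript
// Sketch of custom evaluation logic a webhook endpoint might run.
// The SessionSummary shape is an assumption, not Kelet's actual payload.
interface SessionSummary {
  latencyMs: number; // end-to-end response latency
  turnCount: number; // messages exchanged in the session
}

// A simple code-style evaluator: penalize slow, long-winded sessions.
// Returns a score in the 0.0–1.0 range expected by the signal schema.
function evaluateSession(session: SessionSummary): number {
  const latencyScore =
    session.latencyMs <= 2000 ? 1.0 : 2000 / session.latencyMs;
  const turnScore = session.turnCount <= 10 ? 1.0 : 10 / session.turnCount;
  // Weight latency and verbosity equally; clamp for safety.
  return Math.max(0, Math.min(1, (latencyScore + turnScore) / 2));
}
```

A webhook evaluator would wrap this in an HTTP handler that receives session data from Kelet and responds with the score.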

| Field | Description |
| --- | --- |
| `kind` | `feedback`, `edit`, `event`, `metric`, or `arbitrary` |
| `source` | `human` (user) or `synthetic` (platform evaluator) |
| `trigger_name` | What caused this signal. Use source-action format: `user-vote`, `user-edit`, `user-retry` |
| `score` | 0.0–1.0. For votes: 1.0 = positive, 0.0 = negative |
| `value` | Text: feedback content, diff, reasoning |
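These fields can be expressed as a type, which is a convenient way to validate signals before sending them. The field names and allowed values come from this page; the exact wire format is an assumption.

```typescript
// Type-level sketch of the signal fields above (wire format assumed).
type SignalKind = "feedback" | "edit" | "event" | "metric" | "arbitrary";
type SignalSource = "human" | "synthetic";

interface Signal {
  kind: SignalKind;
  source: SignalSource;
  trigger_name: string; // source-action format, e.g. "user-vote"
  score: number; // 0.0–1.0; for votes, 1.0 = positive, 0.0 = negative
  value?: string; // feedback text, diff, or evaluator reasoning
}

// Example: a negative thumbs-down vote with free-text feedback.
const negativeVote: Signal = {
  kind: "feedback",
  source: "human",
  trigger_name: "user-vote",
  score: 0.0,
  value: "Answer ignored my question",
};
```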

On day one, you have no user feedback. Activate synthetic evaluators immediately — they give Kelet enough signal to start finding patterns before real users provide feedback.

You don’t have to audit your codebase manually. The AI coding skill reads your agent code, understands its failure modes, and proposes both synthetic evaluators and human feedback hooks tailored to what your agent does. See the quickstart.

For a full guide on creating and managing evaluators, see Synthetic Signals.