Trust FAQ

HOW THE SYSTEM WORKS

The short version: ClawClones mixes direct repository facts, AI-generated decision support, and public community signals. These answers explain where the lines between those lanes are drawn.

Measured

Repo metadata, releases, commit activity, and structured public-source inputs.

AI-owned

Summaries, tradeoffs, compare verdicts, and recommendation framing.

Community

Reddit and public-web discussion used as supporting evidence, not ground truth.

How much of ClawClones is measured data versus AI-written analysis?

Each clone profile blends three lanes: measured data from repos and public feeds, AI-generated interpretation for summaries and recommendation blocks, and community-derived signals from places like Reddit and public search coverage. We try to keep that split visible instead of pretending every sentence is equally objective.
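
A minimal sketch of that three-lane split as a data shape, in TypeScript. The names (SignalLane, ProfileClaim, the example claims) are illustrative, not ClawClones' actual schema.

type SignalLane = "measured" | "ai_owned" | "community";

interface ProfileClaim {
  text: string;      // the sentence or block shown on the profile
  lane: SignalLane;  // which lane produced it
  sources: string[]; // URLs or repo paths backing the claim
}

// One hypothetical claim per lane on a single clone profile.
const exampleClaims: ProfileClaim[] = [
  { text: "Last release 14 days ago", lane: "measured", sources: ["github:releases"] },
  { text: "Best fit for self-hosted teams", lane: "ai_owned", sources: ["summary-model"] },
  { text: "Threads report smooth upgrades", lane: "community", sources: ["reddit"] },
];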

What counts as measured data?

Measured data is the part we can directly observe. That includes GitHub metadata, release activity, stars, commit history, language/runtime clues, and structured inputs pulled from public sources. These are the inputs the decision-support layer builds on top of.
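
To make "directly observe" concrete, here is a minimal sketch of pulling a few measured fields from GitHub's public REST API. The stargazers_count, language, and pushed_at fields are real API fields; the function itself is illustrative, and a production crawler would also authenticate to avoid GitHub's unauthenticated rate limits.

async function fetchRepoFacts(owner: string, repo: string) {
  const res = await fetch(`https://api.github.com/repos/${owner}/${repo}`);
  if (!res.ok) throw new Error(`GitHub API returned ${res.status}`);
  const data = await res.json();
  return {
    stars: data.stargazers_count, // star count
    language: data.language,      // primary language clue
    lastPush: data.pushed_at,     // recent commit activity
  };
}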

What does the AI actually write?

The AI writes the parts that require synthesis rather than raw retrieval: clone summaries, "Why choose this over OpenClaw?" blocks, compare verdicts, tradeoffs, best-fit guidance, and confidence summaries. Those sections are generated from the evidence we have, then reviewed and refreshed on a defined cadence.
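
One way to picture the boundary: every AI-owned block can carry a pointer back to the evidence it was generated from. A hypothetical shape (field names are ours, not ClawClones' internals):

interface GeneratedBlock {
  kind: "summary" | "compare_verdict" | "tradeoffs" | "best_fit";
  body: string;          // model-written prose
  evidenceIds: string[]; // the measured and community inputs the model saw
  generatedAt: string;   // ISO timestamp, used by the review cadence
}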

How often do profiles and recommendations refresh?

We use a mix of refresh, review, and rewrite cycles. Homepage entry points are typically reviewed daily, clone profiles are reviewed weekly, compare verdicts are checked more aggressively, and methodology pages move more slowly. The goal is to update quickly when the evidence changes, not to rewrite copy just for the sake of churn.
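
The cadence above, restated as a config map. The daily and weekly intervals come straight from the prose; the compare-verdict and methodology intervals are assumed placeholders, since the FAQ only says "more aggressively" and "more slowly".

const reviewCadence: Record<string, { everyDays: number }> = {
  homepage: { everyDays: 1 },       // reviewed daily
  cloneProfile: { everyDays: 7 },   // reviewed weekly
  compareVerdict: { everyDays: 3 }, // "more aggressively" (assumed value)
  methodology: { everyDays: 30 },   // "more slowly" (assumed value)
};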

What causes an immediate refresh outside the normal cadence?

Security incidents, major releases, architecture changes, big momentum shifts, confidence drops, or a meaningful OpenClaw baseline change can all trigger a faster pass. We also treat credible community feedback about factual misses as a reason to revisit a profile.
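
The same triggers as a type, with a trivial predicate: any one of them is enough to jump the queue. Event names are hypothetical labels for the conditions above.

type RefreshTrigger =
  | "security_incident"
  | "major_release"
  | "architecture_change"
  | "momentum_shift"
  | "confidence_drop"
  | "openclaw_baseline_change"
  | "credible_factual_report";

function needsImmediateRefresh(events: RefreshTrigger[]): boolean {
  return events.length > 0; // any single trigger forces a faster pass
}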

What does evidence confidence mean?

Evidence confidence is a quick signal for how strong the evidence behind an AI-heavy claim looks. High confidence usually means the conclusion is supported by multiple direct signals. Lower confidence often means the repo is young, documentation is thin, sources conflict, or too much has to be inferred. Low-confidence content should be read as a directional nudge, not a hard verdict.
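
A toy scoring function that captures the shape of that rubric: direct signals push confidence up, known weaknesses pull it down. The thresholds and penalty labels are illustrative, not the production formula.

type Confidence = "high" | "medium" | "low";

function evidenceConfidence(directSignals: number, penalties: string[]): Confidence {
  // penalties might include "young_repo", "thin_docs", "conflicting_sources"
  const score = directSignals - penalties.length;
  if (score >= 3) return "high";
  if (score >= 1) return "medium";
  return "low";
}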

Why do some fields show unknown, mixed, or limited-evidence states?

Because "unknown" is more honest than filler. If a project does not clearly publish something like deployment posture, collaboration model, or privacy behavior, we prefer an explicit "unknown" or "mixed" state over polished speculation.
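
In practice that means modeling "unknown" as a first-class state rather than an empty string. A hypothetical shape:

type EvidenceState<T> =
  | { state: "known"; value: T }
  | { state: "mixed"; values: T[] } // sources disagree
  | { state: "limited"; value: T }  // thin backing evidence
  | { state: "unknown" };           // nothing clearly published

const deploymentPosture: EvidenceState<string> = { state: "unknown" };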

How are the 5 points of the Security Radar evaluated?

The Security Radar uses a 1-10 scale where 10 represents maximum safety/protection and 1 represents high risk; a minimal scoring sketch follows the list below.
  • Sandboxing: Measures isolation from the host OS. 10 means fully virtualized or containerized (e.g., Docker, Wasm); 1 means direct local execution.
  • API Security: Evaluates how external integrations are handled. 10 means scoped, encrypted, and multi-user safe; 1 means plaintext keys or over-privileged access.
  • Network Isolation: Measures outbound traffic control. 10 means air-gapped or local-first with strict whitelisting; 1 means unrestricted internet access.
  • Telemetry Safety: Focuses on privacy. 10 means zero telemetry or tracking; 1 means extensive data logging and reporting to external servers.
  • Shell Protection: Evaluates command execution safety. 10 means no unsupervised shell access or strict human-in-the-loop; 1 means raw, unmonitored shell access.
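
The sketch promised above: the five axes as a typed record, plus a clamp that keeps scores on the 1-10 scale. Axis comments mirror the list; the field names and the clamping helper are illustrative.

interface SecurityRadar {
  sandboxing: number;       // 10 = fully containerized/virtualized, 1 = direct local execution
  apiSecurity: number;      // 10 = scoped and encrypted, 1 = plaintext keys
  networkIsolation: number; // 10 = air-gapped or strictly whitelisted, 1 = unrestricted
  telemetrySafety: number;  // 10 = zero telemetry, 1 = extensive external reporting
  shellProtection: number;  // 10 = strict human-in-the-loop, 1 = raw unmonitored shell
}

function clampAxis(raw: number): number {
  return Math.min(10, Math.max(1, Math.round(raw)));
}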

What external sources feed the analysis?

The main external streams today are GitHub for repository facts, Reddit for community discussion, and public search/news coverage for broader ecosystem signals. We treat them as complementary evidence, not as substitutes for direct repo inspection.

Should I trust the compare verdict more than the raw metrics?

Use the verdict as a shortcut, not as the only truth. The compare page is meant to summarize the most decision-relevant differences quickly, while the underlying metrics and profile sections show the evidence behind that call. If the verdict and the evidence block disagree, the evidence should win.

Can I suggest a correction or a new clone?

Yes. Use the site submission flow or email us if a profile is materially wrong, out of date, or missing a strong candidate. We would rather queue a review than leave a misleading claim live.

Need a correction or a deeper review?

We are actively tuning prompts, cadence rules, and fallback handling. If a clone looks misread or stale, send the evidence and we will review it.
