The short version
ClaimBuster helps teams decide what deserves checking first. Its public pages describe automated live fact-checking, a fact-checker module, and API endpoints for scoring text, batch-scoring sentences, querying knowledge bases, matching claims against a database of verified fact checks, and comparing claim similarity.
Use ClaimBuster for triage.
Start there when you have speeches, transcripts, debates, articles, or streams of text and need to identify factual claims that may be worth checking.
Use FactSentinel for the claim at hand.
Use it when a selected claim, citation, or source trail needs reasoning, caveats, model agreement, and linked evidence before it is shared or published.
What ClaimBuster does well
ClaimBuster is built around automated live fact-checking. Its API documentation says the score endpoint uses the ClaimSpotter algorithm to determine how check-worthy a text input is, and that its sentence endpoint batch-scores many sentences more efficiently than scoring them one at a time.
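Calling the score endpoint might look like the sketch below. The base URL, path, header name, and response shape are assumptions drawn from ClaimBuster's public API docs, not guarantees, so verify them against the current documentation before relying on this.

```python
import json
import urllib.request

# Assumed base URL and path, per ClaimBuster's public API docs --
# confirm against the live documentation before use.
API_BASE = "https://idir.uta.edu/claimbuster/api/v2"

def build_score_request(text: str, api_key: str) -> urllib.request.Request:
    """Build (but do not send) a request to the text-scoring endpoint."""
    return urllib.request.Request(
        f"{API_BASE}/score/text/",
        data=json.dumps({"input_text": text}).encode("utf-8"),
        headers={"x-api-key": api_key, "Content-Type": "application/json"},
        method="POST",
    )

def extract_score(payload: dict) -> float:
    """Pull the 0-1 check-worthiness score from an assumed response shape."""
    return payload["results"][0]["score"]

# To actually send: urllib.request.urlopen(build_score_request(text, key))
```

Separating request construction from sending keeps the sketch testable offline; swap in your preferred HTTP client for production use.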
The API also documents endpoints for querying knowledge bases, retrieving associated fact checks from a database of verified fact checks, and comparing similarity between two claims. That makes ClaimBuster a strong fit for researchers, data journalists, and technical teams building a claim-triage pipeline.
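A claim-triage pipeline built on these endpoints might rank the batch-scored sentences and route only the most check-worthy ones to the fact-matcher. In this sketch the result field names (`text`, `score`), the threshold, and the fact-matcher path are all illustrative assumptions; check ClaimBuster's API docs for the real shapes.

```python
from urllib.parse import quote

# Assumed base URL and fact-matcher path -- verify against the API docs.
API_BASE = "https://idir.uta.edu/claimbuster/api/v2"

def triage(sentence_results: list[dict], threshold: float = 0.5) -> list[dict]:
    """Keep sentences scoring at or above the threshold, most check-worthy first."""
    keep = [r for r in sentence_results if r["score"] >= threshold]
    return sorted(keep, key=lambda r: r["score"], reverse=True)

def fact_matcher_url(claim: str) -> str:
    """URL for retrieving fact checks matching a claim (path is an assumption)."""
    return f"{API_BASE}/query/fact_matcher/{quote(claim)}"

# Example batch, shaped like an assumed sentence-endpoint response:
batch = [
    {"text": "Good evening, everyone.", "score": 0.08},
    {"text": "Unemployment fell to 3.9 percent last year.", "score": 0.93},
    {"text": "We will keep fighting for families.", "score": 0.31},
]
ranked = triage(batch)
# ranked[0] is the unemployment claim; route it to the fact matcher next.
```

The threshold is a tuning knob: lower it for debates where almost every sentence matters, raise it for long transcripts where reviewer time is the bottleneck.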
Where claim spotting stops
A check-worthiness score is not the same as a verdict. It can help prioritize attention, but a reviewer still needs to read the sources, inspect evidence quality, check context, and decide whether the exact wording is supported.
FactSentinel is different: it starts from the live claim or citation in front of the reviewer and keeps verdict, confidence, reasoning, source links, caveats, and model agreement visible in the browser or web checker.