
AI Text Detector — Written by Human or AI?

Paste text and find out if an AI wrote it. The classifier runs locally in your browser — useful for teachers, editors, and journalists.


How It Works

  1. Paste text or code

     Paste your content into the input field or type directly.

  2. Instant processing

     The tool processes your content immediately and shows the result.

  3. Copy result

     Copy the result to your clipboard with one click.

Privacy

All calculations run directly in your browser. No data is sent to any server.

ChatGPT, Claude, and Gemini produce text that is increasingly hard to distinguish from human writing. This tool runs a statistical classifier in your browser to estimate the probability that a text was LLM-generated, analyzing perplexity, burstiness, and token distribution. It is a useful first-pass screen, though no detector is infallible; combine it with your own judgment.

How to Use

How do you use this tool?

  1. Paste the text you want to analyze into the input field (minimum 150 words for reliable results).
  2. Click Analyze — the classifier processes the text locally in your browser.
  3. Read the probability score: higher percentages indicate a stronger AI-authorship signal.
  4. Check the per-sentence heatmap to see which sentences contributed most to the AI score.
  5. For borderline results, try re-analyzing with just the most suspicious paragraphs.
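As a rough illustration of step 4, the heatmap idea boils down to scoring each sentence independently. The sketch below uses a toy repetition measure (1 minus the type/token ratio) as a stand-in for a real per-sentence model score; the function name and scoring rule are illustrative, not this tool's actual implementation:

```python
def sentence_scores(text):
    """Toy per-sentence score: 1 - (unique tokens / total tokens).
    Higher = more repetitive wording. A real detector would use
    per-sentence model log-probabilities instead of this proxy."""
    scores = {}
    for sent in (s.strip() for s in text.split(".") if s.strip()):
        tokens = sent.lower().split()
        scores[sent] = round(1 - len(set(tokens)) / len(tokens), 2)
    return scores

text = "The model writes the same the same way. Humans surprise you constantly."
for sent, score in sentence_scores(text).items():
    print(f"{score:.2f}  {sent}")
```

Sentences with more repeated tokens score higher here, which is the same shape of output a heatmap needs: one number per sentence, ready to color.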

What This Tool Does

AI writing assistants have made LLM-generated text ubiquitous in classrooms, newsrooms, and content pipelines. This tool helps you screen text for machine authorship using a browser-based statistical classifier — no account, no server, no data leaving your device.

It is designed for teachers checking student submissions, editors reviewing freelance copy, journalists verifying sources, and anyone who needs a fast first-pass opinion on whether a piece of text was written by a human or an AI model.

How Does It Work?

AI text detectors use several statistical signals that differ between human and LLM writing:

| Signal | What It Measures | Human Pattern | LLM Pattern |
| --- | --- | --- | --- |
| Perplexity | How “surprising” the word choices are | High variability | Low — LLMs choose predictable tokens |
| Burstiness | Variation in sentence length and complexity | High bursts | Uniformly smooth |
| Token distribution | Which words and phrases appear, and how often | Idiosyncratic | Close to training distribution |
| Entropy | Randomness of the text | Higher | Lower (over long runs) |
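As a toy illustration of the entropy and perplexity rows, the sketch below scores tokens with a simple empirical unigram model. A real detector uses a trained language model, so treat this as a stand-in for the idea, not the actual signal computation:

```python
import math
from collections import Counter

def unigram_entropy(tokens):
    """Shannon entropy (bits per token) of the empirical token distribution."""
    counts = Counter(tokens)
    total = len(tokens)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def unigram_perplexity(tokens):
    """Perplexity = 2^entropy under the same empirical unigram model."""
    return 2 ** unigram_entropy(tokens)

repetitive = "the cat sat on the mat the cat sat on the mat".split()
varied = "sudden storms rearrange quiet harbors while gulls argue overhead".split()

# More varied word choice -> higher entropy and higher perplexity.
assert unigram_entropy(varied) > unigram_entropy(repetitive)
```

The same intuition carries over to real detectors: text whose tokens a language model finds highly predictable yields low perplexity, which is the classic LLM signature.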

The classifier combines these signals using logistic regression or a lightweight neural model, producing a probability from 0% (confidently human) to 100% (confidently AI).
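A minimal sketch of that combination step, with made-up weights standing in for a fitted model (the real classifier's weights and feature scaling are not published here):

```python
import math

# Hypothetical weights over z-scored signals; a real model would fit
# these on labeled human/AI text. Negative weights mean that LOW
# perplexity, burstiness, and entropy push the AI score up.
WEIGHTS = {"perplexity": -1.2, "burstiness": -0.8, "entropy": -0.5}
BIAS = 2.0

def ai_probability(features):
    """Logistic regression: sigmoid of a weighted sum of the signals."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1 / (1 + math.exp(-z))

# Unusually low (negative z-scored) signals -> high AI probability.
print(ai_probability({"perplexity": -1.0, "burstiness": -1.0, "entropy": -0.5}))
```

Logistic regression is a common choice for this step because the output is already a calibrated-looking probability, which maps directly onto the 0–100% score the tool displays.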

What Are Common Use Cases?

  • Education: Teachers and professors screen essay submissions to flag potential AI assistance for further review.
  • Publishing: Book editors and magazine fact-checkers verify that freelance submissions are original human writing.
  • Journalism: Reporters check quotes and contributed articles before publication.
  • Content marketing: Agencies audit content libraries for AI-generated posts that may violate platform policies.
  • Legal and compliance: Law firms reviewing AI-use policies screen documents for LLM authorship.
  • HR: Recruiters check cover letters and work-sample submissions for AI generation.

Frequently Asked Questions

Is this useful if a student only used AI to “help” with their essay? Mixed human/AI text is the hardest case for any detector. If someone writes a draft and uses AI to improve specific sentences, the output may score anywhere from 20% to 70%. A high score is more meaningful when the text shows other signs: uniform sentence structure, generic arguments, missing personal voice.

What does “burstiness” mean in human writing? Human writers naturally vary their sentence length dramatically — a short punchy sentence followed by a complex multi-clause one. LLMs tend to produce text with much more uniform sentence length and complexity, making the rhythm feel “flat.” This statistical regularity is one of the strongest AI signals.
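Burstiness can be approximated as the coefficient of variation of sentence lengths (standard deviation divided by mean). A minimal sketch, with an assumed naive sentence splitter:

```python
import statistics

def burstiness(text):
    """Coefficient of variation of sentence lengths, in words.
    Higher values mean a more varied rhythm, a human-writing signal.
    The period-based splitter is a deliberate simplification."""
    normalized = text.replace("!", ".").replace("?", ".")
    sentences = [s for s in normalized.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pstdev(lengths) / statistics.mean(lengths)

human = ("It rained. The storm tore through the valley for three "
         "relentless days. Then quiet.")
flat = "The rain fell steadily. The storm continued for days. The valley stayed wet."

# Short-long-short rhythm scores much higher than uniform sentences.
assert burstiness(human) > burstiness(flat)
```

Perfectly uniform sentence lengths give a burstiness of exactly zero, which is the “flat” rhythm the answer above describes.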

Can this detect AI writing in languages other than English? The classifier is primarily trained on English-language corpora. Detection accuracy in other languages is substantially lower and should not be relied upon. For non-English text, use a detector specifically trained on that language.

Why do different AI detectors give different scores for the same text? Each tool uses a different underlying model, training data, and decision threshold. Disagreement across tools is common, especially in the 40–75% range. When tools disagree significantly, treat the text as ambiguous rather than AI-authored.
