I had used a chatbot once for grammar, panicked, and ran every detector I could find. Each gave a different number. Their report had sentence-level highlights and a confidence column. That's the only one my supervisor accepted.
Find out where a document reads as AI — before your reviewer does.
Original AI-detection report on your draft, page by page. Tuned for academic English. Structured AI-probability output with confidence per flag, delivered to your inbox in under ten minutes.
- ~10-minute turnaround. Faster than re-reading the chapter you're worried about.
- Sentence-level highlighting. Not just a number: the exact sentences flagged, with confidence per flag.
An AI percentage isn't what you think it is. Here's what to actually look at.
A single number on a cover page tells you almost nothing. The report that arrives in your inbox is built to be read in four passes — number, heatmap, confidence, distribution. In that order.
i. The cover number is a sentence-share, not authorship
"24% AI" doesn't mean a fourth of your work was generated. It means 24% of sentences read as AI-like to the model. Your sentence cadence — long, even, hedged — can match that pattern even when no tool was used.
ii. The heatmap matters more than the number
If 24% spreads thinly across the document, it's likely register. If it concentrates in two paragraphs, those two are worth a manual look. Read the highlighted sentences before you panic.
iii. Confidence is on the right margin
Each flagged sentence has a confidence — high, medium, low. A 30% report made of low-confidence flags is a different conversation than a 12% report made of high-confidence ones.
iv. Cross-reference confidence with concentration
The cover summary breaks down the score by confidence band and points to where flags concentrate in the document. If high-confidence flags cluster in a couple of paragraphs, those are the passages to review by hand. If the score is spread thinly at low confidence, your style is the issue, not synthesis. Don't act on the number alone. A short code sketch of this four-pass read follows.
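If it helps to see the arithmetic, here is a minimal sketch of that four-pass read in Python. The flag structure, window size, and thresholds are hypothetical stand-ins for illustration, not the report's actual schema:

```python
from collections import Counter

# Hypothetical flag structure: (sentence_index, confidence_band).
# Illustrates the arithmetic behind the four passes; not the real schema.
flags = [(3, "low"), (4, "low"), (41, "high"), (42, "high"), (43, "medium")]
total_sentences = 180

# Pass 1 - the cover number: a sentence-share, not authorship.
score = len(flags) / total_sentences
print(f"cover number: {score:.0%}")

# Pass 2 - the heatmap: do flags spread thinly or cluster?
# Flags per 10-sentence window is a crude concentration measure.
window = 10
per_window = Counter(idx // window for idx, _ in flags)
hot = [w for w, n in per_window.items() if n >= 3]
print("hot spans:", [(w * window, w * window + window - 1) for w in hot])

# Pass 3 - confidence: how many flags sit in each band?
bands = Counter(conf for _, conf in flags)
print("confidence breakdown:", dict(bands))

# Pass 4 - cross-reference: high-confidence flags inside hot spans are
# the ones worth a manual read; a thin low-confidence spread usually
# means register, not synthesis.
review = [idx for idx, conf in flags if conf == "high" and idx // window in hot]
print("sentences to review by hand:", review)
```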
Two things make this report defensible when a free detector's number won't be.
Every flag carries a confidence rating — not just a single number.
Most AI detectors output a percentage and stop. Ours layers a structured probability view on top: each flagged sentence is rated high, medium, or low. A human-written passage that pattern-matches as AI is the false positive that ruins a viva, and the confidence column is what lets you distinguish stylistic register from likely synthetic prose at a glance.
- Sentence-level highlights with confidence ratings — high, medium, low — on every flag.
- A 30% report made of low-confidence flags reads differently from a 12% report made of high-confidence ones.
- Cover summary explains the score distribution and where flags concentrate, so you act on signal — not on the headline number.
Tuned for academic English. Citations and equations excluded.
Most detectors are tuned on web text: blog posts, marketing copy. Methods chapters, citation-dense paragraphs, and equations trip them constantly. Our pipeline excludes references, equations, and direct quotes before scoring, the same way a Turnitin similarity report excludes the bibliography. The number you see is the number that matters. A rough sketch of this exclusion step follows.
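As an illustration of what "excluded before scoring" means, here is a simplified pre-scoring filter. The function name and regex patterns are my own stand-ins, not the service's actual pipeline:

```python
import re

def strip_nonprose(text: str) -> str:
    """Illustrative pre-scoring filter: drop the spans an academic-English
    detector should never score. Simplified patterns, not the real pipeline."""
    # Drop everything from a References/Bibliography heading onward.
    text = re.split(r"\n(?:References|Bibliography)\s*\n", text,
                    maxsplit=1, flags=re.IGNORECASE)[0]
    # Drop display and inline LaTeX equations.
    text = re.sub(r"\$\$.*?\$\$|\$[^$]*\$", " ", text, flags=re.DOTALL)
    # Drop direct quotes (straight or curly double quotes).
    text = re.sub(r"\"[^\"]*\"|“[^”]*”", " ", text)
    # Drop parenthetical citations like (Smith, 2020) or (Smith et al., 2020).
    text = re.sub(r"\([A-Z][A-Za-z'’\-]+(?:\s+et al\.)?,\s*\d{4}[a-z]?\)",
                  " ", text)
    return text

sample = ('Prior work reports gains (Smith et al., 2020). '
          'We minimise $L = \\sum_i (y_i - \\hat{y}_i)^2$ as "the usual loss".')
# Prints the sentence with the citation, equation, and quote blanked out,
# so only the author's own prose would reach the scorer.
print(strip_nonprose(sample))
```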
You've read the report. Here's how the rest works.
What happens after upload — step by step.
i. Upload
Drop your document. Pick the page slab. Submit. Takes a minute.
ii. Quote & confirm
Price appears the moment you pick the slab. One payment, one click, one queue.
iii. Report in inbox
PDF report with cover summary, sentence-level highlights, and a confidence breakdown by section. We email the ETA when the file enters the queue.
iv. Clarifications, free for 24h
Got a question about a flag? Reply to the report email and we'll explain how the score and confidence ratings were derived. No additional fee for clarifications.
If your report comes back above your institution's threshold, we'll point you to AI Reduction: manual rewriting by a team of 27 PhD editors that brings AI scores down while keeping voice, meaning, and citations intact. The check itself never auto-converts to a reduction. That's your call.
What scholars say after the AI report lands.
Wrote my methods chapter myself and got 32% on a free detector. Theirs came back at 6% — the confidence breakdown showed the formal register was the trigger, not synthesis. I needed that sanity check.
For a Springer submission I needed an independent AI report on the side. Twenty-minute turnaround, structured cover summary that the desk editor accepted at face value. Worth the fee.
Submitted at 11 PM, report at 11:14 with sentence-level highlighting. The two flagged paragraphs were exactly the ones I knew were ChatGPT. Used AI Reduction the next day.
Free detectors gave me numbers between 4% and 78% on the same document. This was the only report with sentence-level reasoning and a confidence column. The numbers were defensible.
We send batch AI checks for every M.Phil. cohort now. Two PhD desks rejected our cohort in 2024 over AI use — they haven't since we started this. The structured confidence breakdowns are the reason.
FAQ
What is an AI report, and how is it different from a plagiarism report?
An AI report analyzes your document to estimate the percentage of content that appears to have been generated by AI tools such as ChatGPT, Gemini, or similar software. This is different from a plagiarism report, which checks for textual similarity with existing published sources. A plagiarism report tells you if your text matches other documents; an AI report tells you how much of your writing resembles AI-generated output. Your document can score low on plagiarism but high on AI — or vice versa — because they measure entirely different things.
Why was my work flagged as AI when I wrote it myself?
AI detection tools analyze patterns in writing such as sentence predictability, repetitive structures, and uniformity in phrasing. If your writing style is very structured, formal, or pattern-driven — even if written entirely by you — it can sometimes resemble AI-generated content and be flagged. Additionally, using grammar tools like Grammarly or paraphrasing tools like QuillBot can increase AI scores because these tools often produce AI-like writing patterns. If you believe your work was incorrectly flagged, you can share the report with us for guidance.
How do AI detection tools work?
AI detection tools use algorithms trained on large datasets of both human-written and AI-generated text. They analyze writing patterns such as sentence predictability, perplexity (how surprising each word choice is), burstiness (variation in sentence length and complexity), and uniformity. Content generated by AI tends to be highly predictable and uniform, which these tools can detect. The result is expressed as a percentage representing the estimated likelihood of AI involvement.
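To make "perplexity" and "burstiness" concrete, here is a toy sketch. The unigram model standing in for a real language model is mine, for illustration only; production detectors score log-probabilities with large language models, but the formulas have the same shape:

```python
import math
import re
from collections import Counter

def burstiness(text: str) -> float:
    """Variation in sentence length: std dev / mean of words per sentence.
    Human prose tends to vary more than AI prose."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
    lengths = [len(s.split()) for s in sentences]
    mean = sum(lengths) / len(lengths)
    var = sum((n - mean) ** 2 for n in lengths) / len(lengths)
    return math.sqrt(var) / mean

def perplexity(text: str, logprob) -> float:
    """PPL = exp(-(1/N) * sum_i log p(w_i)). `logprob` is any function
    returning log p(word); real detectors get it from a language model."""
    words = text.lower().split()
    return math.exp(-sum(logprob(w) for w in words) / len(words))

# Toy stand-in "model": add-one-smoothed unigram counts from a tiny corpus.
corpus = "the cat sat on the mat and the dog sat on the rug".split()
counts = Counter(corpus)
total, vocab = len(corpus), len(counts)
toy_logprob = lambda w: math.log((counts[w] + 1) / (total + vocab + 1))

uniform = "The cat sat on the mat. The dog sat on the rug."
varied = "The cat sat. Meanwhile, against every expectation, the dog slept."
print(f"burstiness  uniform={burstiness(uniform):.2f}  "
      f"varied={burstiness(varied):.2f}")
print(f"perplexity  uniform={perplexity(uniform, toy_logprob):.1f}  "
      f"varied={perplexity(varied, toy_logprob):.1f}")
```

The uniform text scores low on both measures, the varied text higher on both, which is the pattern detectors read as "human".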
Which tools do you use to run the AI check?
We use trusted industry-standard tools such as Turnitin's AI detection feature and iThenticate for AI content analysis. We do not use Grammarly, QuillBot, or similar tools for any part of the process — these tools can actually increase AI detection scores by making writing appear more uniform and pattern-driven. Our detection process uses only established, reliable platforms.
Which file formats do you accept?
For generating an AI report, we accept DOC, DOCX, and PDF formats. For AI reduction (rewriting to lower AI content), the document must be submitted in DOC or DOCX format since the file needs to be editable. Scanned or image-based PDFs are not accepted for either service.
How long does the AI report take?
AI reports are typically delivered within a few hours of submission, with a maximum turnaround of 24 hours. For urgent requests, you can contact our team for priority handling; in most cases, urgent reports are delivered within 1 to 3 hours depending on document size and availability.
What does the AI percentage mean?
The AI percentage indicates the estimated proportion of your document that appears to have been generated by AI. For example, a score of 40% suggests that approximately 40% of the content shows characteristics consistent with AI-generated writing. Higher percentages are more likely to cause issues with institutional submissions. Most universities and journals now require minimal AI involvement, typically below 10% or 20%, though policies vary.
Can detectors flag text that went through a paraphrasing tool?
Yes. Modern AI detection tools can often identify content produced by paraphrasing tools. These tools create text with predictable sentence restructuring patterns that can still be flagged as AI-like. Even if the original source was human-written, running it through a paraphrasing tool may increase its AI detection score. The only reliable way to reduce AI content is through genuine manual rewriting by an experienced professional.
What can I do if my institution's AI result seems wrong?
You can share your report or a screenshot with our team for review. We will help you understand the specific sections flagged and explain the possible reasons for the score. If you believe your institution has misapplied the tool or the result is genuinely incorrect, we can guide you on how to present your case. Each institution may have its own appeal or review process. We can only advise on the technical interpretation — the final decision rests with your institution.
Do you issue a certificate with the report?
No, we do not issue a separate certificate. The AI report itself — showing the updated percentage — serves as the primary proof of the analysis or reduction work. This is generally accepted by supervisors and institutions as documentation of the AI content level.
What does the asterisk (*) in a Turnitin AI report mean?
When Turnitin displays an asterisk (*) instead of a numeric AI percentage, it indicates that the AI-written content in your document falls below a certain threshold — typically less than 20%. Rather than showing a specific low percentage, Turnitin uses the asterisk symbol to represent this minimal AI presence. It is a good sign and generally indicates that your document's AI involvement is low.
Is a * result safe?
Yes, in most cases a * value is considered safe. It indicates that AI involvement in your document is minimal — below the threshold where Turnitin considers it significant enough to quantify. Most universities and journals prefer minimal AI content, and a * result generally falls within acceptable ranges. However, policies vary by institution, so it is advisable to confirm with your supervisor if you are unsure.
Does * mean zero AI content?
No. The asterisk does not mean your document has zero AI content. It means the AI percentage is low — typically below 20% — but not necessarily zero. Some minor AI-like patterns may still be present but are below the level that Turnitin considers significant. The asterisk is generally viewed positively and does not indicate a problem for most institutional submissions.
Will universities accept a * result?
Most universities that permit some level of AI content in academic work will consider a * result acceptable, as it represents minimal AI involvement. However, institutional policies on AI use in academic writing vary widely — some universities have strict zero-tolerance policies, while others set a threshold of 10% or 20%. Always verify your institution's specific AI policy with your supervisor or department before submitting.
Submit your document. Receive your AI report.
Tuned for academic English. Sentence-level highlights. Confidence rating on every flag. Delivered before you've made tea.