A Wikipedia Group Made a Guide to Detect AI Writing. Now a Plug-In Uses It to ‘Humanize’ Chatbots


On Saturday, tech entrepreneur Siqi Chen released an open source plug-in for Anthropic's Claude Code AI assistant that instructs the AI model to stop writing like an AI model.

Called Humanizer, the simple prompt plug-in feeds Claude a list of 24 language and formatting patterns that Wikipedia editors have flagged as chatbot giveaways. Chen published the plug-in on GitHub, where it has picked up more than 1,600 stars as of Monday.

“It’s really useful that Wikipedia went and collated a detailed list of ‘signs of AI writing,’” Chen wrote on X. “So much so that you can just tell your LLM to … not do that.”

The source material is a guide from WikiProject AI Cleanup, a group of Wikipedia editors who have been hunting AI-generated articles since late 2023. French Wikipedia editor Ilyas Lebleu founded the project. The volunteers have tagged over 500 articles for review and, in August 2025, published a formal list of the patterns they kept seeing.

Chen’s tool is a “skill file” for Claude Code, Anthropic’s terminal-based coding assistant: a Markdown-formatted file containing a list of written instructions (you can see them here) that gets appended to the prompt fed into the large language model that powers the assistant. The skill information is formatted in a standardized way that Claude models are fine-tuned to interpret with more precision than a plain system prompt. (Custom skills require a paid Claude subscription with code execution turned on.)
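For readers who want a concrete picture of what “appended to the prompt” means, here is a minimal Python sketch of that idea using Anthropic’s Messages API. It is not how Claude Code loads skills internally; the file path, model name, and prompts are placeholder assumptions, and the snippet only illustrates skill instructions riding along with the prompt sent to the model.

```python
# Conceptual sketch (not Claude Code's internal implementation): read a
# skill file's Markdown instructions and append them to the prompt sent
# to the model through Anthropic's Messages API.
from pathlib import Path

import anthropic  # pip install anthropic

# Hypothetical path; Humanizer ships its instructions as a Markdown skill file.
skill_instructions = Path("humanizer/SKILL.md").read_text(encoding="utf-8")

# Append the skill's writing rules to the system prompt, roughly mirroring
# how skill instructions end up in the context fed to the model.
system_prompt = (
    "You are a helpful writing assistant.\n\n"
    "Follow these writing rules:\n"
    f"{skill_instructions}"
)

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment
response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model name
    max_tokens=1024,
    system=system_prompt,
    messages=[{"role": "user", "content": "Summarize this changelog in plain language."}],
)
print(response.content[0].text)
```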

But as with all AI prompts, language models don't always follow skill files perfectly, so does Humanizer really work? In our limited testing, Chen's skill file made the AI agent's output sound less precise and more casual, but it could have some drawbacks: It won't improve factuality and might harm coding ability.

In particular, some of Humanizer's instructions might lead you astray, depending on the task. For example, the Humanizer skill includes this line: "Have opinions. Don't just report facts—react to them. 'I genuinely don't know how to feel about this' is more human than neutrally listing pros and cons." While being imperfect seems human, this kind of advice would probably not do you any favors if you were using Claude to write technical documentation.

Even with its drawbacks, it's ironic that one of the web's most referenced rule sets for detecting AI-assisted writing may help some people subvert it.

Spotting the Patterns

So what does AI writing look like? The Wikipedia guide is specific, with many examples, but we'll give you just one here for brevity's sake.

Some chatbots love to pump up their subjects with phrases like "marking a pivotal moment" or "stands as a testament to," according to the guide. They write like tourism brochures, calling views "breathtaking" and describing towns as "nestled within" scenic regions. They tack "-ing" phrases onto the end of sentences to sound analytical: "symbolizing the region's commitment to innovation."

To work around those patterns, the Humanizer skill tells Claude to replace inflated language with plain facts and offers this example transformation:

Before: "The Statistical Institute of Catalonia was officially established in 1989, marking a pivotal moment in the development of regional statistics in Spain."

After: "The Statistical Institute of Catalonia was established in 1989 to collect and publish regional statistics."

Claude will read that and do its best, as a pattern-matching machine, to generate an output that matches the context of the conversation or task at hand.

Why AI Writing Detection Fails

Even with such a confident set of rules crafted by Wikipedia editors, we've previously written about why AI writing detectors don't work reliably: There is nothing inherently unique about human writing that reliably differentiates it from LLM writing.

One reason is that even though most AI language models tend toward certain types of language, they can also be prompted to avoid them, as with the Humanizer skill. (Although sometimes it's very difficult, as OpenAI found in its yearslong battle against the em dash.)
