The Fake Review Witch Hunt is Protecting the Wrong People

Regulators are chasing ghosts.

The Competition and Markets Authority (CMA) is currently breathing down the necks of Autotrader, Just Eat, and others over "fake review failings." The narrative is predictable: big tech is lazy, consumers are being duped, and if we just audit the databases harder, the internet will become a bastion of truth.

It is a fantasy.

The obsession with purging fake reviews is a fundamental misunderstanding of how modern digital trust works. We are witnessing a regulatory "witch hunt" that ignores the underlying mechanics of the reputation economy. By forcing platforms to become the ultimate arbiters of truth, we aren't protecting consumers; we are making them more vulnerable to sophisticated manipulation while stifling the very competition that keeps these platforms honest.

The Myth of the "Clean" Platform

The lazy consensus holds that a platform can, with enough effort, achieve a 0% fraud rate. That is mathematically and socially impossible. Trust on the internet is not a binary state. It is a shifting probability.

When the CMA investigates a firm for failing to "do enough," they are operating on a 1990s definition of verification. They want a paper trail for every five-star rating. But in the real world, the line between a "fake" review and a "biased" one is non-existent.

Is a review fake if a restaurant owner asks a regular customer for a favor? Is it fake if a car dealer offers a $20 discount for an "honest" mention on Autotrader? Technically, yes, according to strict regulatory guidelines. In reality, these are the lubricants of small business. By cracking down on these "failings," regulators are essentially punishing the clumsy while the truly professional bot farms—the ones using rotating residential proxies and GPT-level LLMs—sail right through the filters.

Your Five-Star Rating is a Liability

If you are a consumer who still makes a purchase based solely on an aggregate star rating, you are the problem.

The industry insider truth is that "high-trust" platforms are the easiest to weaponize. When a platform like Just Eat or Autotrader is perceived as "clean" because of regulatory oversight, users lower their guard. This creates a "trust vacuum" that professional bad actors exploit.

I have seen companies spend six figures on reputation management firms that don't just "delete" bad reviews—they "drown" them with high-velocity, algorithmically perfect sentiment. These reviews pass every "robust" check the CMA demands. They have verified purchase histories. They have realistic dwell times on the page. They have "human-like" typos.

By demanding platforms police their content more strictly, we are simply raising the barrier to entry for the fraudsters. We are ensuring that only the most well-funded, sophisticated liars survive. The mom-and-pop shop gets banned for a few enthusiastic cousins, while the predatory chain with a "growth hacking" budget stays at the top of the search results.

The Verification Paradox

The push for mandatory ID verification or "verified purchaser" badges is the most dangerous "solution" currently on the table.

  1. Privacy Erosion: Do you really want to upload your passport to a pizza delivery app just so your review of a pepperoni melt is "trusted"?
  2. Centralization of Power: Verification turns platforms into state-level surveillance tools.
  3. The Black Market for Accounts: The moment you require "Verified" status, you create a massive black market for aged, verified accounts. A "verified" fake review is ten times more damaging than an anonymous one because the user's skepticism is deactivated.

Consider the $100 billion car market. If Autotrader is forced to implement draconian verification, the friction of leaving a review increases. Who leaves reviews when the friction is high? People who are angry. You don't verify your identity and jump through hoops to say, "The car was fine, and the dealer was polite." You do it when you want blood.

The result? "Clean" platforms don't become more accurate; they become reservoirs of extreme negativity, which is just as misleading as a wall of fake five-star praise.

Why We Should Stop Fixing Reviews

The obsession with "fixing" reviews is a distraction from the real solution: devaluation.

We need to stop treating reviews as a primary source of truth and start treating them as what they are: noisy, subjective metadata. Instead of more regulation, we need more transparency about how the sausage is made.

Instead of asking, "Is this review fake?" we should be asking, "Why does this platform's algorithm weight this review so heavily?"

The "People Also Ask" crowd wants to know: How can I tell if a review is fake?
The brutal answer: Assume they all are. If you want to know if a car is good, look at the service records and the independent mechanical inspection. If you want to know if the food is good, look at the turnover of the kitchen staff. Star ratings are a psychological shortcut for the lazy. Regulation that tries to make these shortcuts "safe" is just enabling that laziness.
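"Devaluation" has a concrete statistical shape. One minimal sketch, assuming nothing about any real platform's algorithm: a Bayesian average shrinks a listing's raw star mean toward a global prior, so a perfect 5.0 from three reviews scores below a 4.37 from five hundred mixed ones. The prior mean (3.5) and prior weight (20 "phantom" reviews) are invented for illustration.

```python
# Hypothetical sketch: treat star ratings as noisy metadata, not ground truth.
# A Bayesian average shrinks the raw mean toward a global prior, penalizing
# small (and therefore easily gamed) review counts. All numbers are
# illustrative assumptions, not any platform's real parameters.

def bayesian_average(ratings, prior_mean=3.5, prior_weight=20):
    """Shrink the raw mean toward prior_mean; prior_weight behaves like
    that many 'phantom' reviews sitting at the prior."""
    n = len(ratings)
    if n == 0:
        return prior_mean
    raw_mean = sum(ratings) / n
    return (prior_weight * prior_mean + n * raw_mean) / (prior_weight + n)

suspicious = [5, 5, 5]               # three perfect scores, zero history
established = [4, 5, 4, 4, 5] * 100  # five hundred mixed scores

print(round(bayesian_average(suspicious), 2))   # 3.7
print(round(bayesian_average(established), 2))  # 4.37
```

The point is not this particular formula; it is that any scheme which discounts thin evidence makes the "few enthusiastic cousins" problem mostly irrelevant without anyone adjudicating truth.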

The Superior Alternative: The Adversarial Model

Instead of a "Ministry of Truth" approach, we should move toward an adversarial reputation model.

  • Open the Data: Force platforms to allow third-party auditors to run their own "scam-detection" layers over the top of the reviews.
  • Weighted Negativity: Create a "skepticism score" for every user based on their historical behavior.
  • Liability Shift: If a platform explicitly promotes a "Verified" review that turns out to be fraudulent, they shouldn't just be fined—they should be liable for the consumer's loss.

This approach acknowledges that fraud is a permanent feature of the system, not a bug that can be patched out by a government agency.
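To make the "skepticism score" idea concrete, here is a minimal sketch. The signals (account age, review count, rating variance) and every threshold are invented assumptions; a real adversarial system would learn these weights rather than hard-code them. The sketch shows how down-weighting thin, uniform accounts drags a bot-inflated average back toward the organic signal.

```python
# Hypothetical "skepticism score" sketch: weight each review by signals from
# the reviewer's history before aggregating. Signal names and thresholds are
# invented for illustration only.

from dataclasses import dataclass

@dataclass
class Review:
    stars: int
    account_age_days: int   # brand-new accounts are easy to farm
    reviews_posted: int     # one-shot accounts are suspect
    rating_stddev: float    # accounts that only ever give 5s are suspect

def review_weight(r: Review) -> float:
    weight = 1.0
    if r.account_age_days < 30:
        weight *= 0.2       # heavy discount for fresh accounts
    if r.reviews_posted < 3:
        weight *= 0.5       # thin posting history
    if r.rating_stddev < 0.5:
        weight *= 0.5       # suspiciously uniform ratings
    return weight

def weighted_rating(reviews):
    total = sum(review_weight(r) for r in reviews)
    if total == 0:
        return None
    return sum(review_weight(r) * r.stars for r in reviews) / total

burst = [Review(5, 3, 1, 0.0) for _ in range(50)]              # bot-like burst
organic = [Review(4, 900, 40, 1.1), Review(3, 1200, 15, 0.9)]  # real history

# Raw mean of all 52 reviews is ~4.94; the skepticism-weighted rating
# lands near the organic reviews instead.
print(round(weighted_rating(burst + organic), 2))  # 4.33
```

Crucially, nothing here declares any single review "fake." It just prices in the probability, which is the only claim the data can actually support.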

The Cost of the Crackdown

Every time the CMA or the FTC goes on a crusade against "fake reviews," the cost of doing business for legitimate companies skyrockets. Compliance isn't free. Autotrader and Just Eat will hire hundreds of moderators and buy expensive AI tools to scan for "anomalies."

Who pays for that? You do.

The price of your car goes up. The delivery fee on your burger increases. And for what? A slightly "cleaner" list of reviews that the professional scammers have already figured out how to bypass?

It is a tax on the honest to provide a false sense of security for the gullible.

The "Five firms investigated" headline is a PR win for regulators, but it’s a net loss for the market. It reinforces the delusion that the internet can be curated into a risk-free environment. It can't. The moment a platform becomes "trusted," it becomes a target.

Stop looking for the government to tell you which used car dealer is honest. Stop expecting a platform to curate your reality. If you want the truth, look at the data the platform isn't showing you. Look at the return rates, the litigation history, and the churn.

The five-star rating is dead. Let it stay dead.

Get off the platforms. Talk to the mechanics. Visit the kitchens. Use your eyes, not your feed. The only person responsible for your "consumer protection" is you. Stop asking for a safer cage and start learning how to hunt in the wild.

Brooklyn Adams

With a background in both technology and communication, Brooklyn Adams excels at explaining complex digital trends to everyday readers.