Facebook has evolved from a friendly social network into a sprawling, automated system where ordinary users, small businesses, and even nonprofits can lose access to their accounts overnight, while scammers and fraudulent advertisers often operate with shocking freedom and little accountability.
Recent privacy settlements and lawsuits show that these are not isolated glitches but symptoms of a deeper structural problem:
Facebook’s business model and governance consistently prioritize data extraction and ad revenue over user safety, privacy, and due process.
Facebook’s recent privacy settlement and a new book, What’s Wrong with Facebook: The Growing Problems of Facebook with Selfies, Scams, Ad Failures, and Frauds, provide a harsh look at these problems and what to do about them.
A platform built on privacy failures
Over the past several years, Facebook’s most visible “fixes” have come only after public scandal and intense legal pressure, not proactive responsibility. The landmark $725 million consumer privacy settlement — sparked by years of litigation over Facebook sharing user data with third parties like Cambridge Analytica — was granted final approval in 2023 and upheld on appeal in early 2025, making it the largest recovery ever in a data-privacy class action and the largest amount Facebook has ever paid to resolve a private class action. Millions of users are now receiving small payouts, but the underlying story is stark: for more than a decade, Facebook allowed extensive data harvesting and profiling while insisting users were in control of their information.
This case is not an outlier. Facebook has repeatedly been accused by regulators and plaintiffs of violating its own promises to protect user privacy and comply with earlier FTC orders, leading to a patchwork of settlements and consent decrees that still have not fully stopped questionable data-sharing and tracking practices.
Selfie “verification” and the human cost of lockouts
One of the most disturbing recent developments is Facebook’s growing reliance on automated “video selfie” verification systems that can lock people out of their accounts with little explanation and almost no meaningful path to appeal. In theory, these tools are supposed to fight scams and restore compromised accounts; in practice, many legitimate users report that their video selfies are rejected, their accounts remain suspended, and their attempts to reach support lead to silence.
Users describe being prompted to upload a facial-recognition-style video selfie after a routine login or after creating a new account, only to be told that the verification failed and their profile is under review indefinitely. Some report losing years of photos, contacts, and business pages when the system decides — often wrongly — that they are underage, a bot, or otherwise “inauthentic.” Multiple community posts note that once a selfie is rejected, it is extremely rare to regain access without insider help or third-party services, so most locked-out users never get their accounts back.
For ordinary users, this means losing a digital scrapbook of their lives. For small businesses, nonprofits, and creators, it can mean losing customer lists, ongoing ad campaigns, and brand pages built over a decade — erased in an instant by an opaque biometric gatekeeper with no clear human oversight or due process.
Scam bots, fake ads, and the fraud economy
While legitimate users and advertisers struggle with lockouts, scammers and fraudsters often find Facebook surprisingly hospitable. Class-action suits from users who were tricked by fraudulent ads describe a familiar pattern: fake e-commerce shops, impersonated brands, bogus investment schemes, and deep-discount offers that never ship. Yet these ads are routinely approved and heavily promoted.
Internal documents obtained by reporters show that Meta projected around 10% of its 2024 revenue could come from ads for scams and banned goods, suggesting that the company knowingly benefited from fraudulent advertising that harmed its own users. External commentary on those documents notes that Meta sometimes allowed “high-value” advertiser accounts to rack up hundreds of violations without being shut down, effectively tolerating serial scammers because they were profitable.
In lawsuits filed by victims, Facebook users argue that the company broke explicit promises in its terms of service and community standards to “take appropriate action” against harmful and fraudulent content. When it failed to do so, a federal judge found that, if proven, those allegations could support claims that Meta breached its contractual duties and its duty of good faith.
Meanwhile, some media coverage highlights how these scam ads mimic trusted brands and trick people into clicking dangerous links or handing over credit card details, turning the platform into a high-risk environment for anyone who assumes ads are vetted.
Broken appeals, opaque bans, and advertiser losses
Another core problem lies in how Facebook enforces its rules and how little recourse users have when automated systems get things wrong. Reports from business owners and group admins show accounts flagged or disabled for vague reasons like “account integrity” or “violations of community standards,” sometimes for posting ordinary content or even offering free items in local groups.
These users often find that:
Automated systems disable accounts with little explanation.
Appeal links lead to canned responses or no reply at all.
There is no clear way to escalate to a human decision-maker.
In business and advertising contexts, the stakes are higher. Once an ad account is disabled — sometimes after a hack, sometimes after unexplained “policy violations” — advertisers report losing access to ongoing campaigns and outstanding balances, with limited success getting refunds or restoring access despite years of spending on the platform.
For many small businesses, Facebook and Instagram are their primary marketing channels, so a sudden lockout can mean losing a major share of revenue and customer contact overnight, with no equivalent way to reach their audiences elsewhere.
What needs to change
The pattern across privacy settlements, selfie lockouts, scam ads, and failed appeals points to a single underlying problem: Facebook has built an infrastructure where automation and revenue optimization dominate, and human accountability is an afterthought. Users are asked to trust black-box systems with their identities, data, and livelihoods, but when those systems fail, the burden falls entirely on the individual, even though they have almost no information, rights, or remedies.
Several concrete changes are urgently needed:
Stronger enforcement of privacy and consumer-protection laws, with real penalties when platforms violate their own promises or enable large-scale fraud.
Clear due-process rights for users and advertisers, including timely human review, transparent explanations for bans, and meaningful appeal mechanisms.
Limits on automated biometric checks — like video selfies — without robust accuracy standards, independent audits, and strict rules against repurposing biometric data.
Greater liability for hosting and profiting from scam ads, especially when internal documents show prior knowledge and financial dependence on that revenue.
Until regulators set these guardrails and Facebook redesigns its systems around safety and fairness — not just engagement and ad spend — users and advertisers will remain exposed. The stories behind the lawsuits and complaints make one thing clear: what is wrong with Facebook is not just a series of bugs, but a business model that treats human lives, memories, and livelihoods as disposable collateral in the pursuit of growth.
For more information, What’s Wrong with Facebook? is available on Amazon.
To set up interviews or speaking engagements with Gini Graham Scott, contact:
Karen Andrews
Executive Assistant
Changemakers Publishing and Writing
2145 San Ramon Valley Blvd., #4-366
San Ramon, CA 94583
(925) 804-6333


