
Much of the internet is fake: fake ad clicks, fake comments, fake traffic, fake followers, fake articles, fake polls, fake opinions, fake election campaigns, fake everything.
Public opinion (especially during elections) can be swayed easily by the massive bot farms employed by state-run operations.
China, Russia, North Korea and Iran perpetrate the majority of this fakery, alongside a plethora of organised criminal activity.
The highly treasonous actions of Keir Starmer have also seriously endangered the UK and its Western allies: approving the massive Chinese spy embassy in central London, and paying to give away the Chagos Islands to Mauritius, a participant in the CCP's Belt and Road Initiative. These decisions will, in future, pose a fundamental military and security risk to the UK and its allies.
CHINA
The Chinese Communist Party (CCP) and its affiliated entities, including elements linked to the People’s Liberation Army (PLA), have been implicated in operating extensive bot farms—networks of automated or semi-automated social media accounts designed to spread disinformation, harass critics, and influence public opinion. These operations often employ virtual private networks (VPNs) or proxy servers to route traffic through IP addresses in Western countries like Canada, the US, and the UK, masking their Chinese origins and evading platform restrictions (since platforms like X/Twitter are blocked in mainland China).
This tactic helps the bots appear local and authentic, amplifying their reach in targeted regions.

A prominent example is the "Spamouflage" (or "Dragonbridge") campaign, identified by researchers and governments as the world's largest known online disinformation operation, involving hundreds of thousands of fake accounts across platforms like X/Twitter, Facebook, YouTube, and TikTok.
This is what a bot farm controlled by a single bot looks like.
There’s no need for physical phones anymore to manage the accounts.
Thousands of comments, hundreds of shares that we interact with and get frustrated over, all coming from a piece of code. pic.twitter.com/JXmuEduiUF
— PaulC (@PaulConRO) March 9, 2025
Linked to China’s Ministry of Public Security (MPS)—a CCP-controlled entity with ties to intelligence and PLA cyber units—this network has harassed US residents, politicians, and businesses critical of Beijing, sometimes with threats of violence.
In 2023, the US Department of Justice charged 34 MPS officers with running a “912 Special Project Working Group” based in Beijing, involving hundreds of officers nationwide to target dissidents, discredit US politicians, and undermine companies at odds with CCP interests.
These bots often pivot topics en masse, such as shifting from pro-Russia narratives on Ukraine to attacking Western leaders, and have been traced to origins in China while using Western IPs to post in local languages.
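One detection heuristic for this kind of en-masse pivot can be sketched as follows. The data, topic labels, and logic here are entirely hypothetical illustrations, not any platform's actual method: the idea is simply that many accounts switching their dominant topic in unison is a coordination signal.

```python
from collections import Counter

# Hypothetical posting histories: account -> list of (day, topic) pairs.
history = {
    "bot_1": [(1, "ukraine"), (1, "ukraine"), (2, "uk_politics"), (2, "uk_politics")],
    "bot_2": [(1, "ukraine"), (2, "uk_politics")],
    "human": [(1, "football"), (2, "football")],
}

def dominant_topic(posts, day):
    """Most frequent topic an account posted about on a given day."""
    topics = [topic for d, topic in posts if d == day]
    return Counter(topics).most_common(1)[0][0] if topics else None

def lockstep_pivots(history, day_a, day_b):
    """Group accounts by their (before, after) dominant-topic change;
    a large group pivoting in unison suggests coordination."""
    movers = {}
    for account, posts in history.items():
        before = dominant_topic(posts, day_a)
        after = dominant_topic(posts, day_b)
        if before and after and before != after:
            movers.setdefault((before, after), []).append(account)
    return movers

print(lockstep_pivots(history, 1, 2))
# {('ukraine', 'uk_politics'): ['bot_1', 'bot_2']}
```

Real investigations use far richer signals (posting times, shared media, account creation dates), but the grouping logic is the same in spirit.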
In Canada, Global Affairs Canada detected a Spamouflage-linked bot network in August 2023 that accelerated in September, posting thousands of comments in English and French on politicians’ social media accounts, including those of Prime Minister Justin Trudeau.
The bots falsely claimed a CCP critic had accused politicians of crimes and ethical breaches, aiming to sow division.
Similar efforts have targeted Canadian MPs and been attributed to CCP/PLA operations using Canadian IPs to blend in.
In the US, bot farms have supported broader cyber espionage, such as APT31 (a group tied to China’s Ministry of State Security, with PLA connections), which targeted critics, journalists, and politicians via phishing emails that collected victims’ IP addresses and locations for further hacks.
In March 2024, the US indicted seven Chinese hackers for a “prolific global hacking operation” that compromised thousands, including by using US IPs to send over 10,000 malicious emails.
These efforts align with CCP goals like stealing intellectual property and undermining democratic processes, as seen in campaigns targeting US businesses in sectors outlined in “Made in China 2025.”
Bots have also flooded discussions on sensitive topics, such as harassing dissidents like billionaire Guo Wengui, using commercial bot networks from mainland China routed through US IPs.
The UK has faced similar intrusions, with APT31 hacking emails of politicians in the Inter-Parliamentary Alliance on China (IPAC), a group critical of the CCP.
In 2024, the UK imposed sanctions on groups linked to these operations, which used UK IPs for phishing and data collection to enable device compromises.
Overall, these bot farms support CCP/PLA objectives of global influence, with operations often traced to units in cities like Chengdu, using Western IPs to evade detection and amplify propaganda.
Governments and platforms continue to counter them through takedowns, but the scale remains vast, with over 5,000 accounts recently blocked in one instance alone.
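One reason such networks are detectable at all is that "local-looking" accounts often share hosting infrastructure. A minimal sketch of that idea, with invented account records and hard-coded ASN labels standing in for a real GeoIP/ASN lookup, might group accounts by network prefix and flag clusters hosted in datacenters:

```python
from collections import defaultdict

# Hypothetical account records: (account_id, posting_ip, asn_type).
# In practice the asn_type would come from an ASN/GeoIP database;
# here it is hard-coded purely for illustration.
accounts = [
    ("user_a", "203.0.113.5", "datacenter"),
    ("user_b", "203.0.113.9", "datacenter"),
    ("user_c", "198.51.100.2", "residential"),
    ("user_d", "203.0.113.14", "datacenter"),
]

def flag_shared_infrastructure(records, min_cluster=2):
    """Group accounts by /24 prefix and flag clusters of
    datacenter-hosted accounts posing as local users."""
    clusters = defaultdict(list)
    for account_id, ip, asn_type in records:
        prefix = ".".join(ip.split(".")[:3])
        clusters[(prefix, asn_type)].append(account_id)
    return {
        key: members
        for key, members in clusters.items()
        if key[1] == "datacenter" and len(members) >= min_cluster
    }

print(flag_shared_infrastructure(accounts))
# {('203.0.113', 'datacenter'): ['user_a', 'user_b', 'user_d']}
```

This is why proxy networks and residential IPs matter so much to the operators: datacenter clustering is one of the cheapest signals for defenders to check.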
Axis of Evil led by Russia
Russia, North Korea, and Iran are all heavily involved in state-sponsored bot farms, disinformation networks, and influence operations, often routing activity through Western IP addresses (US, UK, Canada, etc.) to make their efforts appear local, evade detection, and blend into target societies. These actors frequently share tactics, tools, or even indirect coordination in broader "axis" alignments against Western interests, though full alliances in bot operations are more opportunistic than tightly integrated.

Russia's Bot Farms and Use of Western IPs

Russia remains the most prolific and sophisticated in this space, with operations like the Internet Research Agency (IRA) "troll factory" and newer AI-enhanced networks. In 2024, the US Justice Department (with Canada and the Netherlands) disrupted a major Russian bot farm run by RT (state media) and the FSB (security service). It used AI software called Meliorator to create nearly 1,000 fake profiles impersonating Americans, posting pro-Russia, anti-Ukraine content on X (formerly Twitter). Domains and infrastructure were US-based (e.g., .com registrations), and bots routed through Western IPs to mimic real US users. This scaled propaganda on Ukraine, elections, and social divisions. Similar campaigns have targeted Europe and Israel, often using proxies or bought domains in the West for authenticity.

Russia's tactics include: fake personas with AI-generated content; amplification via bots reposting and liking each other; and spoofed sites mimicking local news.
These ops often appear organic from US/UK/Canada IPs, making takedowns harder.

North Korea's Operations

North Korea's efforts lean more toward cybercrime and revenue generation than pure disinformation bot farms, but they overlap. The regime runs massive schemes where operatives pose as remote IT workers (using stolen or fake US identities) to get jobs at Western companies, often routing through "laptop farms" in the US: physical devices hosted stateside but controlled remotely from abroad, including via China/Russia proxies. This evades sanctions, funnels millions back to Pyongyang for weapons programmes, and sometimes steals data or plants malware.

In 2025, the US DOJ indicted multiple North Koreans and facilitators, seizing laptop farms across several states along with fraudulent sites. While not classic "bot farms" like Russia's, these schemes use Western IPs extensively (US-based laptops and addresses) to appear legitimate. Disinformation is secondary (e.g., occasional anti-South Korea/US propaganda), but the infrastructure of proxies and stolen credentials supports influence operations when needed. No major 2025-2026 bot-farm takedowns have been tied directly to Pyongyang, but the model enables covert ops.

Iran's Efforts

Iran's IRGC (Revolutionary Guard) runs aggressive disinformation via bot networks, fake personas, and narrative laundering. Campaigns target US/UK politics, amplify anti-Israel/Western narratives, and exploit divisions (e.g., Scottish independence in 2024 UK ops, or Gaza-related content). They use fake UK/US accounts, often with Western-sounding personas, posting in English, sometimes routing through proxies to appear local.

In 2024-2025, Iran-linked bots pushed election meddling (e.g., fake news sites, AI deepfakes), harassed critics, and coordinated with pro-Iran narratives. US seizures of IRGC-linked domains in older cases showed use of Western infrastructure.
During escalations (e.g., Israel-Iran tensions), bots flood social media with doctored images and videos, often from IPs masked as Western to evade bans.

Overlaps and "Alliances"

These states (Russia, China as discussed above, Iran, North Korea) form a loose "axis" of authoritarian actors sharing cyber tactics, malware, or influence goals against the West. Examples include: coordinated disinformation (e.g., anti-Western narratives on Ukraine and elections); shared tools and proxies for anonymity; and mutual support (e.g., Russian amplification of Iranian and Iran-backed content).
This is not a formal bot-farm alliance, but a convergence: Russia pioneered the troll farm, and the others have adapted the model (Iran for regional influence, North Korea for revenue-masking ops). Western IPs and proxies are common to all of them, used to bypass platform blocks and appear grassroots.

Platforms (X, Meta) and governments continue takedowns, but the scale persists, often reaching thousands of accounts before detection. If you are seeing specific examples on social media, they are likely part of these patterns.
Individual Bot Farms by Organised Criminals
Followers on social media are meaningless if they can be bought/farmed and created out of nothing.
Traffic stats for websites can be manipulated to gain SEO traction on search engines.
Advertisements can be clicked by sophisticated bots raising vast amounts of revenue.
The internet is essentially fake, riddled with countless bots making money every millisecond for ruthless people with no qualms about how they make it.
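Ad networks defend against the click fraud described above partly with timing heuristics. The toy sketch below (hypothetical threshold, vastly simpler than real fraud-detection systems) illustrates one such signal: crude bots click with machine-like regularity, while humans are bursty, so a low coefficient of variation in inter-click gaps is suspicious.

```python
import statistics

def looks_automated(timestamps, max_cv=0.1):
    """Flag a click stream whose inter-click gaps are suspiciously
    regular. max_cv is a hypothetical coefficient-of-variation
    threshold chosen purely for illustration."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2:
        return False  # too little data to judge
    mean = statistics.mean(gaps)
    return mean > 0 and statistics.stdev(gaps) / mean < max_cv

bot_clicks = [0.0, 5.0, 10.1, 15.0, 20.0]    # metronomic spacing
human_clicks = [0.0, 2.1, 9.4, 11.0, 31.5]   # bursty spacing

print(looks_automated(bot_clicks), looks_automated(human_clicks))
# True False
```

Sophisticated farms randomise their timing precisely to defeat checks like this, which is why detection in practice layers many independent signals.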
Whilst state actors like the CCP, Russia, Iran, and North Korea run bot farms for propaganda, influence, espionage, or revenue generation (e.g., funding weapons programmes), a huge chunk of the ecosystem is purely commercial and profit-driven. Private individuals, small crews, organised cybercrime groups, and even some shady companies treat bot farms as a business, often operating as "Bot-as-a-Service" (BaaS) setups. They rent out their networks or sell services directly, turning automation into cash with low overhead and high margins.

The main ways these private operators make serious money include:

Ad fraud and click fraud. This is the biggest moneymaker. Bot farms simulate massive numbers of fake clicks on pay-per-click (PPC) ads (Google Ads, programmatic display, etc.), video views, or impressions. Advertisers pay thinking real people are engaging, but the money flows to the fraudster who controls the bots or the fake sites hosting the ads. Reports from 2025 put global ad fraud losses in the tens of billions annually (estimates of around $37-41 billion in recent years), with bot farms capturing a big slice. Some operations arbitrage this: spend a little on cheap traffic sources, inflate clicks massively, and pocket the difference; returns can hit 100x or more in extreme cases. Individuals or small teams run "phone farms" (arrays of cheap devices running bots) or use hijacked devices to scale this without much investment.

Selling fake engagement and boosting services. Influencers, brands, politicians, or anyone wanting to look popular pays for fake followers, likes, comments, shares, or views on platforms like X/Twitter, Instagram, TikTok, YouTube, or Twitch. This is a massive underground market, with buyers dropping six or seven figures yearly on it. Operators charge per thousand engagements or offer monthly subscriptions. Some "engagement farming" even ties into platform monetisation: on X, for instance, bots can artificially boost replies and likes from Premium users to trigger payouts under revenue-sharing programmes. It has turned into a "bot wars" arms race in which farms redesign tactics to game the system for ad revenue or creator funds.

Streaming and view botting. Streamers on Twitch, YouTube, or similar buy bot traffic to inflate viewer counts, which attracts real sponsors, higher ad rates, or platform bonuses. Farms provide this "view botting" as a service, often using residential proxies to look legitimate.

Other cybercrime angles. Beyond ads, bots scrape data for resale (e.g., emails, credentials), send spam and phishing at scale, spread malware, or run credential-stuffing attacks. In crypto spaces, bot farms enable pump-and-dump schemes, fake trading volume to lure investors, or scam operations like pig butchering (building trust via fake romance profiles, then stealing crypto). Some farms even mine crypto or farm in-game rewards (e.g., in games like Counter-Strike or OSRS) for resale.

Legit-adjacent or grey-area plays. Not everything is outright illegal. Some companies sell "bot services" for automation (e.g., social media management tools, trading bots for crypto arbitrage, or AI agents), charging subscriptions or commissions. Crypto arbitrage bots, for example, exploit price differences across exchanges and make money legally for users (and fees for bot creators). But the line blurs when these tools get repurposed for fraud.

The economics are brutal for victims but sweet for operators: setup costs are low (cheap phones, VPS hosting, proxies, open-source scripts), detection is asymmetric (cheap to attack, expensive to defend), and profits scale enormously. A single sophisticated farm can net hundreds of thousands to millions monthly, with some documented cases pulling $600k+ from game rewards alone, or millions from ad fraud.
Platforms and governments keep cracking down (takedowns, indictments), but new operations pop up fast, often in regions with lax enforcement.

It's a shadowy digital economy where state ops meet pure greed, and everyone from lone hackers to venture-backed startups (yes, some "AI bot" firms skirt the edges) cashes in. The losers? Advertisers, real creators, and everyday users wading through the noise.
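The arbitrage economics described above can be sanity-checked with a back-of-the-envelope calculation. Every number here is hypothetical, chosen only to show why the margins attract operators:

```python
# Back-of-the-envelope sketch of ad-fraud arbitrage: buy cheap junk
# traffic, host ads on a fake site, collect payouts on inflated clicks.
# All figures are assumptions for illustration, not measured values.
traffic_cost_per_1k = 0.50   # dollars per 1,000 bot visits (assumed)
clicks_per_visit = 0.8       # bots click far more than humans (assumed)
payout_per_click = 0.05      # low-end programmatic CPC (assumed)

visits = 1_000_000
cost = visits / 1000 * traffic_cost_per_1k
revenue = visits * clicks_per_visit * payout_per_click

print(f"cost=${cost:,.0f} revenue=${revenue:,.0f} multiple={revenue/cost:.0f}x")
# cost=$500 revenue=$40,000 multiple=80x
```

Even with payouts an order of magnitude lower, the spread stays attractive, which matches the "cheap to attack, expensive to defend" asymmetry noted above.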
Day 3 warming up my TikTok Shop account.
Listen to the video below. pic.twitter.com/8k40ZsqOqP
— WiFi Money Guy (@WiFiMoneyGuy) February 12, 2026
A creator hired 3 interns to spam AI-generated videos across 150 TikTok accounts…
Their current monthly income is $37,000. pic.twitter.com/soGdYX4Jup
— China pulse 🇨🇳 (@Eng_china5) February 14, 2026
bro
You literally CANT be lazy right now
This is your competition
HUNDREDS of AI agents working autonomously at once (thousands in revenue btw)
Lock tf in pic.twitter.com/B7lJ4KfJaR
— Miles Deutscher (@milesdeutscher) February 13, 2026
Disclaimer : This story is auto aggregated by a computer programme and has not been created or edited by DOWNTHENEWS. Publisher: dailysquib.co.uk