It is another evening in lockdown and you decide to absentmindedly browse the web for things you do not urgently need but find some joy in purchasing. You flippantly press the “I Accept Cookies” popup on each site you visit, browse for 15 minutes, then close the window and revert to social media. Only, the products and brands you were browsing are now being advertised on your Instagram home page and in your Google search results, at heavily discounted prices. You glance at the URL and register some level of familiarity. Shaking off your suspicion, you take mental note of the brands on display, flick through the professional pictures and scroll down to the payment options, where you see a sea of security seals. Reassured, you enter your card details and click “Buy Now” after verifying with your online bank, only to be shown an “error” message. The items are out of stock.
Or worse, the items are never delivered. In the cases where they are, they are counterfeit: anything from counterfeit Canada Goose parkas to counterfeit personal protective equipment (PPE). Meanwhile, your payment has been processed by an anonymous and now increasingly dubious merchant who holds your card details and personal data. The funds likely route to a personal account, as the payment mechanism is unsecured. Thanks to COVID-19 ravaging the high street, your money is now part of the £16.6m lost to fake shops since March. Far from being every Boomer’s acute nightmare, “fake shops” are the reality of modern e-commerce cybercrime.
In many ways, deception is a tale about humanity as old as time. It is Shakespearean, Machiavellian, Biblical and even Babylonian: a crime acknowledged and accounted for in the Code of Hammurabi circa 1754 BC. It is no secret that we routinely deceive for perceived personal gain, whether financial, professional, social or sexual, at the expense of law, social codes and ephemeral moral compasses. Yet what may have once predominantly existed on a micro-level is no longer the case. In 2006, Britain’s Parliament passed its first cohesive piece of legislation criminalising the offence of fraud, 38 years after its predecessor on theft. The late introduction of the Fraud Act suggests fraud was not committed often enough, or to the same scale and severity, as straightforward theft; hence its label of an “evolved offence”. Prior to the Act, one could only commit fraud by conspiring with another. Although forgery and counterfeiting offences existed as far back as the late 18th century, the defendant’s intention to deceive was never the focus of criminal law.
A climax in globalisation, a major financial crisis and a digital revolution later, deception is a macro-scale phenomenon. Its hard exterior, fraud, is now the most prevalent crime in Britain, while its soft interior, fakery, has become a global cultural zeitgeist.
Counterfeits, for instance, are no longer confined to luxury designer products. Throughout the pandemic, fake pharmaceuticals, unregulated websites offering bogus or outright harmful medicinal products at heavily discounted prices, were pervasive. In March, as the novel COVID-19 virus spread across the globe, so too did the number of platforms purporting to sell vaccines and cures. Not only did UK-based fake pharma sites fail to appear on the British Register of Authorised Online Sellers of Medicines, but several also displayed misleading certificates purportedly issued by regulatory bodies such as the Royal Pharmaceutical Society. Nor is it merely a British problem. Internationally, Interpol’s pharmaceutical crime-fighting unit made 121 arrests across 90 countries in a single week, seizing dangerous pharmaceuticals worth over £11m.
The risks of fake pharmaceuticals vary from death, to serious injury, to antibiotic resistance caused by improperly advised dosages. But the most damaging injury remains vulnerability to the pandemic of deceptive exploitation for profit.
If commerce has traditionally been considered the bread and butter of capitalism, its virtual counterpart is the bedrock of cyber capitalism. Yet fake shops and their big brother, fake services, are growing thorns in our quest to be fully digital. In the tangible world, professional services rarely warrant our doubt. We trust in the process of several years of education, the legitimacy of degrees, the expense and time it takes to qualify. Or if we do not, we trust in the regulatory bodies set up to oversee financial, legal and accountancy services. Only, cyberspace does not promise the same: it gives refuge to a minefield of bogus firms. From investment scams, to recovery funds, to “fake law” firms, the online commercial space is rampant with fraudulent services.
Often, these are vastly sophisticated operations. First, the fraudster entices victims to invest in non-existent financial products, ranging from traditional investment vehicles to cryptocurrency and fake cryptocurrency giveaways, where consumers are asked to send crypto to a wallet address in order to receive double in return. Purporting to be legitimate, the fraudulent “professional” hides behind the cover of a glossy website, enhanced by photoshopped celebrity testimonials and fake security seals, and all too often rips off the layout of well-known news platforms. Second, the fraudster asks consumers to deposit money into online accounts on the promise of an unrealistic predetermined margin. According to Britain’s Action Fraud, victims of investment fraud lost at least £657m in the past year, a rise of 28% on the year before. Consumers are not the only ones who lose out; the Investment Association reports that fraudsters also clone legitimate fund managers’ websites, products and documents, stealing £10m from UK investors in the last year. Likewise, they routinely create fake price comparison websites to add another layer of authenticity and promote their fraudulent sites.
The operation, however, does not stop there. Many fraudsters subsequently set up a follow-up “recovery room” scam that purports to recover the lost funds for these victims. This takes two forms: a recovery fund service affiliated with a bogus law firm, or a fraudulent claims management company. The latter operates in a sector tightly regulated by the Financial Conduct Authority in the UK and by the Securities and Exchange Commission in the U.S.
Take “John Reid”, a fraudulent investment professional on the BinaryBook and BigOption trading platforms. John duped victims into investing $185m into online accounts, and then duped them out of a second sum of money while acting as a representative of Wealth Recovery International. According to the U.S. authorities, he demanded upfront payments of $5,000-45,000, an additional 20% cut of the total amount to be recovered, and $50,000 for “further investigations” upon successful recovery. In one instance, this totalled a $125,000 commission. Except John Reid turned out to be an alias for the Israeli American fraudster Austin Smith, who has been sentenced to 12 months in U.S. federal prison.
Arguably, the financial services industry has flirted with fraud in the past, as illustrated by the notorious ritual of traders cold-calling consumers and selling heavily inflated or exaggerated financial products, yet the calculated malice of fake services still proves unsettling. In particular, fake law firms show how easily fraudsters impersonate bastions of the law. True, unqualified lawyers in the real world are commonplace. But the key distinction is this: deception online is not confined to the quality of the service, but extends to the intention to serve at all. In contrast to their real-world counterparts, fake lawyers steal from their clients by selling services that never existed.
Of course, this strategy is only successful insofar as victims land in the mousetrap. The success of online fraud is largely attributable to strategic proliferation. Platforms ranging from Facebook and Instagram to e-commerce behemoths like Amazon, eBay and MercadoLibre are shielded from intermediary liability by safe harbour laws. Until fraudulent content is reported with sufficient detail, intermediaries are not obliged to take any action. This reactive, rather than proactive, model of combating fraudulent and infringing content prolongs its half-life. Further, micro-targeted advertising allows fraudulent vendors to serve ads to consumers who browse legitimate brands. While Google Ads policies prohibit fraudulent advertisers, lax oversight enables fake shops and services to pop up easily in search engine results for the genuine product or service.
The fragmentation of cyberspace infrastructure likewise plays an important role: the cross-border network of registrars (who register domain names), webmasters (who host websites) and domain owners creates a culture of averting blame. “Bulletproof” hosts and registrars profit from being notoriously lax, forwarding reports of fraud to the registrant without taking any action themselves. Consequently, the virtual world offers an edge: it allows fraudsters to hide under the cloak of cyber-anonymity and cyber-bureaucracy. Whereas consumers would otherwise rely on intuitive judgment formed in person-to-person interaction, their judgment now hinges on a technical understanding of e-commerce. Deception thus permeates every porous thread of our social fabric, thriving in the cold, fractured conditions of a cross-border and decentralised cyberspace.
Nor will this change anytime soon: “deepfakes” will be the final nail in the coffin of authenticity. Once upon a time, the term “deepfakes” was synonymous with the unpleasant and invasive editing of celebrities’ faces onto nude bodies for pornographic purposes. Then it became a gimmick for disgruntled techies to get one over on an unpopular politician or celebrity by editing their video with extremely lifelike touches, thanks to the AI technique known as Generative Adversarial Networks. Anyone could be a victim: Mark Zuckerberg, Rowan Atkinson and Matteo Renzi have all been featured in deepfakes. Now, increasingly accurate AI-generated pictures and videos open a floodgate of possibilities for sophisticated fraud no longer reliant on identity theft. As academic and commercial labs explore non-pornographic applications of deepfake technology, fake shops and services will grow more sophisticated and damaging.
Of course, there will be a push back. Machine learning is already being applied to detect fraudulent e-commerce by flagging anomalies and filtering out false positives. The London-based start-up Ravelin currently helps e-commerce platforms and their payment service providers minimise losses to fraud and improve customer confidence. In the near future, machine learning will preemptively identify patterns of risk, playing a perpetual game of cat and mouse with the technology behind fake shops and commercial deepfakes.
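To make the idea of anomaly-based detection concrete, here is a deliberately minimal sketch in Python. It is a toy z-score model, not Ravelin's or any vendor's actual system: a transaction is flagged when its amount sits many standard deviations from a customer's historical spending. The function names and the threshold of three standard deviations are illustrative assumptions; production systems use far richer features (device, location, velocity) and learned models.

```python
# Toy anomaly detector: flag transactions whose amount deviates sharply
# from a customer's historical spending pattern. Illustrative only.
from statistics import mean, stdev

def anomaly_scores(history, candidates):
    """Score each candidate amount by how many (sample) standard
    deviations it sits from the customer's historical mean."""
    mu = mean(history)
    sigma = stdev(history)
    return [abs(amount - mu) / sigma for amount in candidates]

def flag_suspicious(history, candidates, threshold=3.0):
    """Return the candidate amounts whose anomaly score exceeds
    the chosen threshold."""
    scores = anomaly_scores(history, candidates)
    return [amt for amt, s in zip(candidates, scores) if s > threshold]

# A customer who usually spends £25-£55 suddenly attempts a £900 purchase.
history = [25.0, 40.0, 32.0, 55.0, 28.0, 47.0, 36.0, 50.0]
print(flag_suspicious(history, [45.0, 900.0]))  # → [900.0]
```

The design choice worth noting is the reactive/proactive split discussed above: a rule like this scores transactions before payment is processed, rather than waiting for a victim's report after the fact.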
To be clear, the ascent of deepfakes goes beyond our politics. Undoubtedly, politicians will exploit these doubts as a free get-out-of-jail card when caught in a compromising scenario. But engineered faces, footage and audio will not only complicate the online battle to take down fraudulent material; they will weaken our ability to decipher truth from falsity. This goes to the root of the problem: no longer being able to decide reality based on perception. The tragic death of George Floyd, and the global outpouring of outrage that ensued, attests to video footage as our strongest currency of objective reality. The single video of Derek Chauvin’s knee pressing on George Floyd’s neck galvanised the BLM movement and demands for police reform across the United States. Police brutality, political misconduct and national scandals have been exposed through the phone recorder, but what happens when the presumption of reality no longer holds? In the words of Ian Goodfellow, a scientist at Apple, “manipulated video will ultimately destroy faith in our strongest remaining tether to the idea of common reality.”
But just how strong is that tether? In many ways, we have already become well acquainted with the assault on a common reality. “Fake news” has become the zeitgeist of our times, a propaganda term popularised by the disgraced former President Trump to undermine unfavourable information and affirm an ideological view. The consequent mass distrust of the media is hardly news; we have heard of its corrosive effect on our democracy for the past four years. What we have not acknowledged is that it is only the tip of the iceberg of fakery lurking beneath the cracks of our civility.
The 2019 U.S. college admissions scandal is a case in point. Sending shockwaves through the academic community, the FBI charged thirty-three wealthy parents over their meticulous gaming of the U.S. higher education system. To secure admission to top universities including Georgetown, Stanford, Yale and the University of Southern California, parents had been hiring consultancy services specialising in deception since 2011. Enter William Singer, the key defendant and architect of the commercial “side door” entrance. Whereas over-privileged and under-qualified students traditionally gained entrance via family donations and alumni networks (the “back door”), the side door consisted of bribing SAT and ACT administrators to correct applicants’ answer sheets, bribing coaches to vouch for applicants as athletes regardless of their athletic ability, and forging application credentials through Photoshop. Transacting in fraud proved profitable: Singer raked in $25m for his bribing services and up to $75,000 for every test taken within the purview of his cheating apparatus.
When fraud infiltrates the sacred reserves of Western meritocracy, no facet of cultural life is immune. Of course, the deep-rooted cultural grip Photoshop holds comes as no surprise to those well versed in social media, the poster boy of ritualised inauthenticity. Although far from algorithmic fakery, Instagram’s beauty and lifestyle aesthetic exudes deception, bolstered by a newly minted caste of pseudo-celebrity “influencers” who monetise their online personas for an income. From their “single, cyborgian face” to their entourage of filters, editing and Photoshop, fakery has been embedded into the code of the optimal Instagram post.
Then comes the faux wealth aesthetic. A studio set in Los Angeles was built solely to replicate the ultimate flex of the uber rich: a casual shot inside a private jet. Finished with plush carpeting, faux leather seats and round plane windows, the fake private jet set symbolised the height of deceptive lifestyle relativism, and it successfully duped many. In the age of fakery, curating artificial stills of our lives to falsify indicators of wealth has become normalised and justified by a single phrase: “for the ‘gram”.
Perhaps the age of fakes is fitting in a time where augmented and virtual reality technologies toy with the idea of a common reality. No longer confined to the gaming industry, our capital and time are increasingly invested in the subversion of objective reality—reducing it to a mode that can be opted out of—in place of the elusively appealing cyberspace. But the key distinction remains: the age of fakes removes our ability to choose the truth.
Some of the implications are already being felt: counterfeiting hurts economic growth, constituting 4% of global imports and resulting in losses of $323bn in 2017 alone. It has also been linked to funding terrorism by providing revenue streams for political assailants to buy the arms required for their attacks. The brothers who attacked the Charlie Hebdo offices in France were partially funded by the sale proceeds of fake Chinese trainers. The 2004 bombing of Madrid commuter trains, which killed 191 people, was likewise part-funded by the proceeds of pirate music CDs sold in the U.S. In the long term, the damage extends to an erosion of trust in competence and a distaste for the brand of expertise, in turn harming the reputations of regulatory bodies like the Solicitors Regulation Authority and the Financial Conduct Authority. This amounts to disillusionment with “the system”, which defrauds rather than protects the masses, slowly cutting a deeper wound that festers into the cause of future populism.
Above all, the age of fakes is an unintended consequence of our economic and technological growth. Unnaturally accelerated by the pandemic, explosive digital growth has strained our social contract by embedding the additional unspoken condition of deception, fraud and fakery. As its sophistication surpasses the counterintelligence of national governments, we risk reverting to a new, covert state of nature. Rather than a war of man against man, it will be a permanent culture of distrust, defensiveness and cynicism, with no Sovereign to seek refuge from. The British Office for National Statistics estimates that 80% of fraud incidents are not reported to the police, leaving the majority to fend for themselves. Without the comfort of our own “System 1” judgment, paranoia and a deep sense of unsettlement will rupture the remaining strands of the contract. In the end, we will be left in need of a new bargain fit for an age of fakes.