Preventing Fake Accounts

Verified Accounts

Big Tech companies like Twitter face an interesting set of incentives, especially in their early stages of growth. The more users they can show investors, the better their valuation, so early on they are not too concerned about preventing fake accounts created by spammers and bots. Only later does this become a big deal:

Twitter had a blue checkmark through which people outsourced trust to the company, but people were still able to “impersonate” others by copying their name and icon. The username, which is unique, can also be spoofed with similar-looking characters, since Twitter allows non-ASCII characters in usernames.
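As a rough illustration, here is a minimal Python sketch (the handles are made up) showing how two visually identical usernames can be entirely different strings:

```python
import unicodedata

# Two handles that render almost identically, but one substitutes a
# Cyrillic lookalike for the Latin "B" (the names are hypothetical).
real = "BillGates"
spoof = "\u0412illGates"   # CYRILLIC CAPITAL LETTER VE instead of "B"

print(real == spoof)       # False: they are different code points
# unicodedata.name() exposes what the eye cannot see
print(spoof[0], unicodedata.name(spoof[0]))
```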

The “Official” account badge that was tested briefly was retired in less than 24 hours:

While both Robin Williams and Robbie Williams were famous, the assumption was that most “famous” people do not have the exact same name. But what should all those other people named Bill Gates do? They don’t get the blue checkmark because they’re not as famous as “the real” Bill Gates, but they are no less real.

Overkill Requirements

Facebook has famously had its Real Names policy since before 2010, yet many people have mangled their names in order to be harder for strangers to find:

Google’s now-shuttered Google Plus maintained a similar policy for three years before retiring it:

What, after all, is the value of your real name to strangers on the internet? It lets them find you easily and read your posts. While this may be fine for one-way communication, it becomes unwieldy when people try to write to you (want to handle Will Smith’s email?) or dig into your life (paparazzi) to report things you didn’t intend to share. And if you live under an oppressive government, having your real name attached to plans for some peaceful activity the government doesn’t like may land you in hot water with the authorities.

And those same authorities, in some countries, can now legally impersonate you!

Trusting Our Interfaces

WhatsApp and Signal often remind us that they use end-to-end encryption, but we just have to take their word for it. Since they are not open-source software with reproducible builds, we can’t verify that there isn’t, for example, a backdoor that makes exceptions in certain cases without telling us.
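A reproducible build would let anyone check this for themselves. The sketch below shows the basic idea (the file names are hypothetical): hash the binary the company ships and compare it to one built independently from the published source.

```python
import hashlib

def sha256(path: str) -> str:
    """Hash a file in chunks so large binaries don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical file names: the binary distributed to users, and one
# rebuilt independently from the published source code.
shipped = sha256("messenger-official.apk")
rebuilt = sha256("messenger-built-from-source.apk")

print("Builds match" if shipped == rebuilt
      else "Builds differ: the shipped binary is not what the source says it is")
```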

Typically, on Web3, people control private keys (in wallets) that they use to self-sign certain transactions or statements. The assumption is that if the same key was used to sign multiple statements, it was the same person doing it.
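Here is a minimal sketch of that pattern, using the third-party PyNaCl library (the statement text is made up):

```python
from nacl.signing import SigningKey   # third-party: PyNaCl

# The wallet generates a private signing key; only its public half is shared.
wallet_key = SigningKey.generate()
public_key = wallet_key.verify_key

statement = b"I agree to transfer 10 tokens to the marketplace"
signed = wallet_key.sign(statement)

# Anyone holding the public key can check that this exact key signed this
# exact statement; verification raises BadSignatureError otherwise.
public_key.verify(signed.message, signed.signature)
print("Same key, same (presumed) person")
```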

However, the key could be lost or stolen. Moreover, if a person publishes their private key on the internet, all the transactions and posts signed with it since then become repudiable: they gain plausible deniability, since anyone else on the internet could have signed that material. (“We are all Spartacus.”)

Thus, for transactions of serious value, identity is typically verified with multiple factors (one of these is sketched in code after the list):

  1. something you have (a key, a dongle),
  2. something you know (a password that the interface pinky-swears it won’t phone home),
  3. something you are (biometrics, which the device maker also pinky-swears about).
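For instance, the “something you have” factor is often a time-based one-time password (TOTP) generated on a device. A minimal sketch, using the third-party pyotp library:

```python
import pyotp   # third-party library implementing RFC 6238 TOTP

# At enrollment, the service generates a shared secret, which the user's
# device (the "something you have") stores in an authenticator app.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Later, the device derives a short-lived code from the secret and the clock.
code = totp.now()

# The service, holding the same secret, checks the code within a small time window.
print(totp.verify(code))   # True while the code is still fresh
```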

Verifiable Credentials

On the open web, there is a standard by which entities on the Web can sign that they’ve verified something about someone:

The nice thing is that this doesn’t require a central authority, but allows any organization to sign claims about you, which you can then choose to present. Examples of claims include:

  • Graduated in 2006 Magna Cum Laude in Physics from Columbia University
  • Worked 2004-2008 as Chief Analyst at Ernst & Young
  • Won the prestigious Tchaikovsky Piano Competition in 2017
  • Was born before December 2018, according to Mount Sinai Hospital in New York

You can present these credentials and expect them to be recognized by a wide array of people, assuming that the issuing entities (Columbia University, Ernst & Young, and so on) are, in fact, well-known and trusted. That trust may be enforced in a variety of ways, including through self-regulatory organizations like FINRA.
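To make the shape of such a credential concrete, here is a simplified Python sketch in the spirit of the W3C Verifiable Credentials data model (the identifiers and bare-bones signature are illustrative; real credentials use standardized proof suites):

```python
import json
from nacl.signing import SigningKey   # third-party: PyNaCl

issuer_key = SigningKey.generate()    # held by, say, the university

credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential"],
    "issuer": "did:example:university",          # hypothetical identifier
    "issuanceDate": "2006-05-20T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:graduate-1234",       # hypothetical identifier
        "degree": "B.S. in Physics, Magna Cum Laude",
    },
}

# The issuer signs the serialized claim; the holder presents both, and a
# verifier checks the signature against the issuer's public key.
payload = json.dumps(credential, sort_keys=True).encode()
signature = issuer_key.sign(payload).signature
issuer_key.verify_key.verify(payload, signature)  # raises if tampered with
print("Claim verified against the issuer's public key")
```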

Unlike people, organizations can grow and come to control a lot of value (money, power, branding, trust). That is why, in Qbix and Intercoin, we have chosen to require organizations, rather than people, to publicly earn and establish their identity and reputation over time. Much of the Internet’s security infrastructure works the same way:

Root of Trust

On the open web, we need to trust that the servers we are talking to are authorized to host websites at specific domains. To accomplish this, certificate authorities sign certificates with their private keys. A certificate chain runs from a root Certificate Authority, through intermediate authorities, down to an issuer (e.g. letsencrypt.org) that has verified that a given web server was hosting a specific domain at a given time. A certificate may last a few months; after the domain is sold or transferred, it comes under new management, but the old owner would still hold a valid certificate for a few months. Luckily, the domain registrar and other DNS servers would only be working with the new owner, so they could redirect requests away from the old servers, even while the old owner still had a valid certificate.
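For instance, here is a short sketch using Python’s standard library, letting the system’s trusted roots validate a server’s chain and then inspecting the certificate’s issuer and expiry (the hostname is just a placeholder):

```python
import socket
import ssl

hostname = "example.com"   # placeholder host

# create_default_context() loads the system's trusted root CAs and will
# refuse the connection if the server's certificate chain doesn't check out.
context = ssl.create_default_context()

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()
        issuer = dict(item[0] for item in cert["issuer"])
        print("Issued by:", issuer.get("organizationName"))
        print("Valid until:", cert["notAfter"])
```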

Believable Fake Accounts

With the rise of bots online, our old assumptions about human accounts and CAPTCHAs may be overturned. Software like GPT-4 can generate believable text, and deepfakes can generate believable photos. Résumés can be copied and remixed into believable new ones, so entirely believable accounts can be created. This has already become a problem on the professional networking site LinkedIn, among many others:

This means that we can no longer outsource our trust to the Big Tech companies running large social networks (yes, even Microsoft). People are left to protect themselves within the network: recruiters have to get on the phone with candidates, but sometimes people are even hired to pretend to be someone else:

And things go the other way as well:

Believable Bullshit, Bots and GPT-3

However, bots generating content can make things far worse. Since most “interaction” with articles, posts and comments is passive, it doesn’t even function as a Turing Test, and some of the new AI bots can already pass a Turing Test occasionally anyway!

What this means is that the public can be convinced by bullshit arguments generated at scale. In 10 years, the proportion of human-generated content online may become vanishingly small. Those “friends” you added on Facebook, and those accounts you “followed” on Twitter and Telegram, may in fact have been taken over by bots.

The problem is when bot swarms get a signal to collude and push the public conversation in this or that direction. These “armies” online may subvert our public discourse entirely. And that is something we are ill-prepared for as a society. We need better models of trust and communication.