The AI war is not over yet

At the end of the last post about Facebook's hopefully failing AI plans, we decided not to rely on hope. Hope isn’t good enough when the stakes are this high.

The only way to ensure that future AIs have incentives aligned with the individuals who interact with them is by building an incentive structure. One powerful enough that the organisations building AI have no choice but to align with individuals.

How can you force Facebook, Google, Amazon, Twitter, Snap and TikTok to all swing their priorities away from serving advertisers to serving individuals?

You lower switching costs, making platform-exit a real option for individuals and therefore making anti-user behaviour extremely costly.

Forcing AI organisations to align incentives with individuals

We need to give individuals the power to enforce accountability on companies and AI systems that are not aligned with them.

Other than accountability through democracy, it seems like there aren’t many ways individuals can exercise power against the tech giants.

The way to give individuals that power is to build tools that help them easily understand and control what data about them AIs/companies can access. Given that AIs need data to know us well & perform successfully, if individuals only give data where they benefit and can revoke access en masse when they don’t, then AIs have to be aligned with users or they won’t be able to compete.

The ability for individuals to withdraw their data en masse would be a powerful accountability mechanism & incentive to do what’s best for individuals. If we can build that future, where individuals have fluid, granular control of the data companies collect about them & that AIs use, we’re 90% of the way to achieving AI-individual alignment, because we’ll have created the right incentive system.

The path to fluid, granular data control

But how could individuals control what AIs know about them? Most people don’t even know what companies like Facebook and Google know about them. Just understanding what companies and their AI systems know about you is hard: data isn’t easy to understand & companies don’t *want* to tell you.

But before you truly control *anything* you need to understand it. You need to know:

  • what data exists about you
  • who has it
  • what it means
  • what they use it for

Then, once you understand it, you can:

  • reclaim data to keep for yourself
  • decide who gets to access it
  • decide what they can use it for
  • delete it from anywhere you want

That’s a start, and it would force these companies to step up their respect for individuals as humans, not just as “users”.
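To make this concrete, here’s a minimal sketch, in TypeScript, of what fluid, granular data control could look like as a data structure. Everything in it is hypothetical: names like `DataGrant` and `ConsentLedger` don’t come from any existing API, they just illustrate the shape of the idea.

```typescript
// Hypothetical sketch: a machine-readable record of who holds what
// data about you, and what they're allowed to do with it.

type Purpose = "personalisation" | "recommendations" | "advertising";

interface DataGrant {
  holder: string;      // who has the data, e.g. "facebook.com"
  category: string;    // what data it is, e.g. "location-history"
  purposes: Purpose[]; // what they may use it for
  expiresAt?: Date;    // grants can be time-limited
}

class ConsentLedger {
  private grants: DataGrant[] = [];

  // Answer the "understand" questions: what exists, who has it,
  // what it's used for.
  list(): DataGrant[] {
    return [...this.grants];
  }

  grant(g: DataGrant): void {
    this.grants.push(g);
  }

  // Revocation is a first-class, bulk operation: one call can
  // withdraw everything a given company holds.
  revoke(matches: (g: DataGrant) => boolean): void {
    this.grants = this.grants.filter((g) => !matches(g));
  }
}

// Usage: withdraw everything a misbehaving platform holds, en masse.
const ledger = new ConsentLedger();
ledger.grant({
  holder: "facebook.com",
  category: "location-history",
  purposes: ["personalisation"],
});
ledger.revoke((g) => g.holder === "facebook.com");
```

The design choice that matters is that revocation is cheap and bulk: if withdrawing your data is one call rather than a maze of settings pages, exit becomes a real option, and anti-user behaviour becomes extremely costly.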

But even then you still don’t have true autonomy over your data. You’re fighting with companies over what *they* do with it. There’s a fundamental constraint on what your data can be used for: the incentives of the companies that hold it.

When individuals have control

If there were a company with no outside incentives shaping how it uses your data, a company that served only you, then your incentives would be aligned. Imagine if your data were no longer constrained by the imagination and incentives of ad platforms like Facebook and Google. What happens when the primary purpose of your data isn’t to make money selling you ads? What happens when data is broken out of the corporate silos where it sits, providing only a partial picture of who you are, and is brought together to represent the full you?

Giving users control will unlock magic.

It’s hard to grasp the magnitude of this shift except by analogy. When computing was only for businesses, the question was asked: but how would this be useful to consumers? A personal computer seemed pointless; there was nothing to use it for. The software to make them useful hadn’t been built, because there were no personal computers or users to build it for. 🐔 & 🥚

Right now businesses use personal data for their own ends. There are no consumer use-cases beyond ad-driven feeds, apart from a hobbyist fringe tinkering with quantified self. For students of technology/Apple history: a hobbyist fringe tinkering with a tool that businesses find extremely valuable & powerful. Sound familiar?

New tools: from businesses to consumers

With new and powerful tools, businesses come first. They have the willingness to pay & the urgent need. But: consumer companies grow bigger.

Computing: IBM (1911) < Apple (1976)

Social Networking: LinkedIn (2002) < Facebook (2004)

Data infrastructure: Segment (2011) < ???

When personal data gets centred around the user, and users are empowered to choose what it gets used for, the use-cases will explode. Data will become a tool for thought, introspection, self-improvement, health, finding friends and partners, and for building personalised AI. Data will make us superhuman.

Who will win the AI war

There is a huge amount of responsibility placed on the company entrusted with the user’s data. Sufficiently good and complete data will give that company the power to manipulate users even more than Facebook and Google can do now. Do you really want that power to be held by companies selling ads?

Trust becomes the key differentiator in this world. Individuals need to trust that the company behind the AI puts their goals first. That the company wins when they win.

The real question here is: if you *can* control what AIs/companies know about you, will you let any AI access all your data? An AI system with all your data that’s aligned perfectly with you sounds incredibly useful. But who could you trust to make one? It’s difficult to pick a company - only Apple is making a case for itself. Really, you could only trust a company that has no external incentives. A company built entirely around you.

  • If they are funded by advertisers targeting you with ads - NOPE!
  • If they haven’t been transparent with how they use your data - NOPE!

i.e. no company we currently associate with AI fits the bill.

This company would need to be built from the ground up to be focused around serving the human user. Its revenue would need to come directly from users themselves, giving it no outside incentives. It would need to be the most transparent and trusted company in the world. It would give users true control. By being the most user-focused, ethical, and trusted company, it would have an unassailable comparative advantage over companies with baggage.

The company would need to stake its reputation on its pro-user behaviour.

No platitudes, instead: extreme commitment. Not “Don’t be evil”, but “Be Good”. Prioritising individuals’ autonomy, joy, and self-actualisation above profit. By being so pro-user, ironically it could become the most valuable company in the world.
