TL;DR: Facebook’s lack of alignment with individuals will slowly weaken their ability to win the AI war. As more people choose to not use Facebook, employees leave the company, and users choose to delete data from Facebook, they will lose their data advantage, and soon their talent advantage.
In the next decade, it will become painfully obvious that we should pay for any software that learns about us as individuals. If we don’t, that software will use knowledge about us to achieve the goals of the organisation that made it, rather than our goals.
Facebook is not on your side
Nobody believes that Facebook is on their side. This is a structural consequence of their business model, where they sell our attention and “the gradual, slight, imperceptible change in our own behaviour and perception” [Jaron Lanier]. Facebook serves their advertisers, not you, because that’s where they get their money from.
This means that there is an incentive misalignment between you, a single human individual, and Facebook. On the other side of your phone’s glowing screen, Facebook’s incentives have been encoded into a powerful AI system built and optimised by thousands of the world’s smartest software engineers.
Maximising Facebook’s profits means addicting us to feeds of junk-food information, and maximally surveilling every individual they can. They need to maximise daily time spent on platform, and to target you with the ads that you’re most likely to click. Unsurprisingly, this has led to 85% of US adults thinking that social media companies know too much about them.
You, the user, are being farmed
We interact with AI systems that have their goals set by the companies that build them. We, as humans, try to use those systems to fulfil our needs and goals, while those systems try to use us to fulfil theirs. We interact with these semi-intelligent agents through our phones for hours every day. They control large portions of the information we see, and have different goals from us. This is nuts. It is making people’s lives worse every day, whether through damaged mental health, political polarisation, or lost productivity. This is not the good future.
We need to be building AIs that are aligned with the interests of their users. I call this “AI-Individual alignment”.
AI-individual alignment
AI alignment is about making sure the AIs we build are good for humanity. But this alignment can’t just be between AI and humanity as a whole. “Humanity” is too vague; there are so many questions for which I don’t know if we have answers:
- Who gets to decide what the interests of humanity as a whole are?
- Who will build the AI which is aligned with humanity?
- How are the incentives of that organisation aligned with humanity?
- Are my interests in lock-step with humanity’s?
- Should there even be a centralised notion of humanity’s interests?
AI alignment is a great thing to work on, but it can’t just be at the scale of the entire human race. A better approach, which also has more actionable next steps, is to aim for alignment between an AI and the individual humans who interact with it.
To nail down why this is, let’s look at two organisations building AI systems today.
Facebook and the NSA aren’t aligned with you
Consider AI built by Facebook or the NSA, perfectly aligned with those organisations. Would it be aligned with normal users/citizens? Certainly not. Facebook’s AI would be trying to maximise profit for Facebook, while the NSA’s would compromise individual freedoms in order to keep “the country” safe.
The NSA’s goals lead to 1984’s Big Brother. Facebook’s AI algorithms optimise for Engagement (time you spend scrolling), Growth (new users), and Monetisation (showing you more ads, and having you click them). These are not your goals. Because Facebook as a company is wedded to the ad-driven business model, their incentives cannot be aligned with yours. They can’t build products with AI-Individual alignment.
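To make the objective gap concrete, here is a toy sketch in Python (every name and number is made up; this is not Facebook’s actual ranking code) contrasting a feed ranked for engagement with the same feed ranked for a goal the user has chosen themselves:

```python
# Toy sketch (all names and numbers hypothetical): the same feed, ranked two ways.
posts = [
    {"title": "Outrage bait",            "predicted_minutes_watched": 9.0, "helps_user_goal": 0.1},
    {"title": "Friend's wedding",        "predicted_minutes_watched": 2.0, "helps_user_goal": 0.8},
    {"title": "Course you enrolled in",  "predicted_minutes_watched": 1.5, "helps_user_goal": 0.9},
]

def platform_aligned_rank(feed):
    # Engagement objective: maximise predicted time on platform (what an ad-driven model rewards).
    return sorted(feed, key=lambda p: p["predicted_minutes_watched"], reverse=True)

def user_aligned_rank(feed):
    # Individual-alignment objective: maximise progress on a goal the user stated themselves.
    return sorted(feed, key=lambda p: p["helps_user_goal"], reverse=True)

print([p["title"] for p in platform_aligned_rank(posts)])  # outrage bait first
print([p["title"] for p in user_aligned_rank(posts)])      # the user's own priorities first
```

The code is almost identical in both cases; the only thing that changes is the objective being maximised, which is exactly where the misalignment lives.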
AIs can only get to know us through data about us. The amount of data they collect about us, our friends, and our socio-economic demographics makes it trivial for them to target us so well that it feels like our phones are listening to us. Nobody wants to give Facebook more data; in fact, people want to delete their data from Facebook, or even delete their accounts entirely.
Facebook won’t win the AI war
Facebook’s lack of alignment with individuals will slowly weaken their ability to win the AI war. As more people choose not to use Facebook, employees leave the company, and users delete their data, they will lose their data advantage, and soon their talent advantage.
Users move from the old and unfashionable ad-driven platforms to the new and delightful ones: from Facebook to Instagram to TikTok. In time, these new platforms also compromise on user experience and trust to serve their advertisers, and are in turn superseded by newer, more fashionable ones. Users will always fall out of love with companies who don’t first and foremost serve them.
Can we achieve AI-individual alignment?
What are our options here?
- Hope AI is developed by a benevolent organisation that aligns its interests with individuals & humanity
- Force AI organisations to align with individuals
Hope? Option 1 doesn’t cut it - hoping alone doth no future build. We need to make sure AI companies have to align with individuals if they want to survive.