Friday, January 15, 2021

The Public Debate On Social Networks We Aren't Having

Trump's recent de-platforming is of course controversial and has stirred up the usual furious "Is not! Is so!" yammering by one side shouting past the other, and vice versa.  FWIW, I'm glad Trump got de-platformed, at least for a while, but I'm pretty squeamish about allowing private companies with natural monopolies to wield this amount of power over public discourse.  I think I favor some kind of public utilities model, or at least a set of public regulations that codify what behavior will result in a user's ejection, and some kind of public appeal process.

That's an important discussion, but it's not the one I want to have today.

The interaction that users have with their social platforms represents the first large-scale experiment in the symbiosis between humans and AIs with a specific objective function.  The results are simply horrible, but there may be reason to be encouraged.

All social platform AIs have essentially the same objective function:  maximize engagement.  The more minutes a user spends interacting with the platform, the more data can be harvested, and the more valuable that user's digital persona becomes to advertisers.  To do this, the AI, which is a fairly straightforward machine learning system, recognizes patterns in what motivates a user to interact more strongly (longer, more often, with more posting, and so on), compares those patterns to those of other users with strong interactions, and feeds the user the sort of content that hooked those similar users.
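
For illustration only, here's a minimal sketch of that loop: a nearest-neighbors stand-in for whatever the real systems actually do, with made-up names throughout.  Nothing here is any platform's actual code.

```python
import numpy as np

# Hypothetical sketch of an engagement-maximizing feed ranker.
# All names and the nearest-neighbors approach are illustrative only.

def similar_users(user_vector, all_user_vectors, k=50):
    """Find the k users whose interaction patterns most resemble this user's."""
    # Cosine similarity between this user's interaction vector and everyone else's.
    norms = np.linalg.norm(all_user_vectors, axis=1) * np.linalg.norm(user_vector)
    sims = all_user_vectors @ user_vector / np.where(norms == 0, 1.0, norms)
    return np.argsort(-sims)[:k]

def rank_feed(user_vector, all_user_vectors, engagement, candidate_items):
    """Order candidate items by how strongly similar users engaged with them."""
    neighbors = similar_users(user_vector, all_user_vectors)
    # Predicted engagement for each candidate = average engagement that the
    # user's nearest neighbors showed for that item.
    predicted = engagement[neighbors][:, candidate_items].mean(axis=0)
    # The objective function in action: show whatever maximizes predicted engagement.
    order = np.argsort(-predicted)
    return [candidate_items[i] for i in order]
```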

It works really well.  But the process is completely amoral, because the objective function is amoral.  The unanticipated consequence of this amorality is that the machine learning system optimizes for content that provokes strong emotions, because strong emotions drive the most engagement.  Unfortunately, for a particular type of person, the strongest emotion is rage, or self-righteousness, or some other emotion that leads to craziness and danger.

In the course of maximizing engagement for these people, the AI has essentially become the perfect vehicle to send them down the rabbit hole, spiraling tighter and tighter into a community of like-minded crazies who, because they've also been led down the same hole by the same AI, reinforce each other with progressively crazier and crazier ideas.  It's perfect for driving engagement levels even higher.

But it's terrible for the people in the hole, and it's even more terrible for civil society.  In the Olden Times (ca. 2010), people with these predilections were strongly discouraged by their friends, neighbors, and civic leaders.  But now those people don't have friends, neighbors, and leaders outside of the rabbit hole.

So the platform makes lots of money at the expense of civility.

There's a fairly famous thought experiment concerning poor choice of AI objective function.  In it, we give a super-intelligent AI the objective function to maximize the production of paperclips.  It starts out optimizing manufacturing and supply chains.  Then it goes looking for more iron supplies because it needs more steel, and it ignores the environmental damage--that's not part of the objective function.  Then it goes looking for more power for manufacturing and notices that people are consuming a lot of power that could otherwise go toward paperclip production, so it eliminates the people.  It ends with the AI spreading out over the galaxy, converting entire solar systems to paperclips.  Mind you, the AI is extremely smart.  But it still has a purpose, and it uses its intelligence in service to that purpose.

Compared to a super-intelligent AI, the social platforms' AIs are dumber than stumps.  But they've still managed to worm their way into our behavior to an unprecedented degree.  As illustrations of how a poor choice of objective function can have disastrous consequences go, this one gives the paperclip AI a run for its money.

Up at the top, I mentioned that there might be something encouraging here:  You can change an objective function.  Instead of "maximize engagement", you could choose "maximize engagement without allowing users to fall down a rabbit hole".  That's slightly more complex from a machine learning standpoint, but well within the state of the art, and there's a real possibility that a lot of the insanity would abate.
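
As a hedged sketch of what that alternative objective could look like: keep the engagement score, but subtract a penalty whenever an item would narrow a user's recent topic mix.  The entropy-based penalty and all the names below are my own illustration, not anyone's production ranking code.

```python
import numpy as np

def topic_entropy(topic_counts):
    """Shannon entropy of a user's recent topic mix; low entropy = rabbit hole."""
    p = topic_counts / topic_counts.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def score_item(predicted_engagement, item_topic, recent_topic_counts, alpha=0.5):
    """Engagement score, minus a penalty for items that narrow the topic mix."""
    before = topic_entropy(recent_topic_counts)
    after_counts = recent_topic_counts.astype(float)
    after_counts[item_topic] += 1
    after = topic_entropy(after_counts)
    # Items that make the feed more monomaniacal (entropy drops) get penalized;
    # items that broaden it get a small boost.  alpha trades engagement off
    # against diversity.
    return predicted_engagement + alpha * (after - before)
```

Turning alpha up is the "gently guide people away from the hole" knob; set it to zero and you're back to pure engagement maximization.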

Note, however, that this objective function does not maximize revenue for the social platforms.  I don't think it does huge damage to their bottom lines, though: if you gently guide people away from armed insurrection--or quilting fanaticism, or an unhealthy obsession with cat videos, or whatever--you can still drive high engagement across a sufficiently broad set of interests, mitigating the worst effects while keeping a perfectly fine business model.  Who knows?  The platforms might even discover that broadening their users' interests opens up new advertising niches that are more lucrative.  After all, there's only so much camo gear and 5.56mm ammunition you can sell.

If you're getting a creepy-crawly feeling that's whispering "miiiiiiiind controooolll" in the back of your head: good.  It's not a slam-dunk to engineer this sort of thing so that it's neutral about people's... enthusiasms.  But we're already seeing the results of a different, brute-force kind of mind control.  Doing better than that seems like it ought to be pretty easy.
