Andrew Yang has released a plan for Internet regulation. Some of it is good, some of it is anodyne, and some of it is terrible. You may wonder why I’ve chosen to write about this, since he’s not going to be the nominee. Two reasons: one, we should all be talking about this stuff more. Two, his plan is a great example of how Americans go astray when thinking about regulating tech companies.
From his website, we can start with the good:
Regulate the use of data and privacy by establishing data as a property right. The associated rights will enable individuals to retain ownership and share in the economic value generated by their data.
Wonderful! Big fan. I used to work at a startup that sought to enable customers to do exactly this. Alas, it failed because investors thought our user base would consist only of the Very Online™ and the sorts of people with something to hide. Without widespread adoption (or government regulation), this doesn’t work.
Then we have the “sure, alright, I could see that working out”:
Minimize health impacts of modern tech on our people, particularly our children. I will create a Department of the Attention Economy that focuses on smartphones, social media, gaming, and apps, and how to responsibly design and use them, including age restrictions and guidelines.
He proposes things like “removing autoplay video for children under 16,” though personally I’d like to see that apply to children under 116. This one also contains stupider bullet points like “removing the queues that allow infinite scrolling,” which would be laughably easy to work around, if anybody can even figure out what it means.
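To see why a ban on “infinite scrolling” would be trivial to circumvent, consider what infinite scroll actually is under the hood: repeated calls to the same paginated feed endpoint. A minimal sketch (all names here are hypothetical, not any real platform’s API) shows that swapping the scroll trigger for a “Load more” button leaves the product functionally identical:

```python
# Stand-in for a server-side feed of 100 posts.
FEED = [f"post-{i}" for i in range(100)]

def fetch_page(cursor: int, page_size: int = 10) -> tuple[list[str], int]:
    """One paginated fetch -- the only primitive either UI needs."""
    page = FEED[cursor:cursor + page_size]
    return page, cursor + len(page)

# "Infinite scroll": the client calls fetch_page whenever you near the bottom.
# "Load more" button: the client calls fetch_page whenever you click.
# Same endpoint, same data, same engagement loop -- only the trigger differs.
cursor, shown = 0, []
for _ in range(3):  # simulate three scrolls (or three button clicks)
    page, cursor = fetch_page(cursor)
    shown.extend(page)

print(len(shown), cursor)  # 30 posts shown, cursor at 30
```

Any regulation targeting the scroll gesture rather than the underlying feed mechanics just moves the trigger one click away.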
And then we reach his dangerous ideas, which many politicians across the spectrum like, and which represent a particular failure mode in American regulatory thought.
There are essentially two ideas united by the same fallacy:
Social media platforms have catalyzed mass disinformation campaigns over the past decade, threatening not just our wellbeing but our democracy… Algorithms driving recommendations towards conspiracy theory content or other types of disinformation need to be reined in…
We must address once and for all the publisher vs. platform grey area that tech companies have lived in for years. Facebook, Twitter, and other social media sites are using algorithms to make recommendations. These recommendations drive the majority of traffic, up to 70% for Google owned YouTube.
Section 230 of the Communications Decency Act absolves platforms from all responsibility for any content published on them. However, given the role of recommendation algorithms—which push negative, polarizing, and false content to maximize engagement—there needs to be some accountability.
I will leave aside the fact that Section 230 does no such thing and write instead about this inane focus on recommendation algorithms. (If you’re curious about my opinions on Section 230, I’ve written here about upcoming EU regulations that touch on it.)
Yang notes that large social networking sites are used to push massive amounts of misinformation. Further, such sites use recommendation algorithms that often reward bad-faith actors. Since these sites are so large and ubiquitous, this represents a threat to consumers and democracy. So far this is not controversial; even Facebook and Twitter acknowledge this.
However, Yang’s proposed remedy is to require said recommendation algorithms to be either posted in full online or pre-approved by the government. The former would needlessly stifle expression and innovation, and as for the latter, well, imagine Trump with that power.
So where is this faulty reasoning I’m talking about? It’s very simple. If there’s a tech company that’s so big and ubiquitous you feel it’s a danger to democracy, what you have isn’t a justification for an algorithm-policing bureaucracy—what you have is a company that’s too big. And the government already has the authority to address that.
Yang’s proposal also says some dumb shit about how antitrust won’t work, which demonstrates a follow-up point: the concept of antitrust enforcement is so foreign to modern Americans that they often reach for bizarre and harmful over-regulation to mitigate the damage caused by huge companies instead of just banning huge companies.
While Yang is a sideshow, and I suspect he does not understand what a recommendation algorithm is, this mode of thinking is very common, so I wanted to start a conversation. I’m happy to get into more detail in the comments.