
Will California’s New Bot Law Strengthen Democracy?

July 2, 2019

California is the first state to try to reduce the power of bots by requiring that they reveal their “artificial identity” when they are used to sell a product or influence a voter.

Photograph by Emma Innocenti / Getty

When you ask experts how bots influence politics—that is, what specifically these bits of computer code that purport to be human can accomplish during an election—they will give you a list: bots can smear the opposition through personal attacks; they can exaggerate voters’ fears and anger by repeating short, simple slogans; they can overstate popularity; they can derail conversations and draw attention to symbolic and ultimately meaningless ideas; they can spread false narratives. In other words, they are an especially useful tool, considering how politics is played today.

On July 1st, California became the first state in the nation to try to reduce the power of bots by requiring that they reveal their “artificial identity” when they are used to sell a product or influence a voter. Violators could face fines under state statutes related to unfair competition. Just as pharmaceutical companies must disclose that the happy people who say a new drug has miraculously improved their lives are paid actors, bots in California—or rather, the people who deploy them—will have to level with their audience.

“It’s literally taking these high-end technological concepts and bringing them home to basic common-law principles,” Robert Hertzberg, a California state senator who is the author of the bot-disclosure law, told me. “You can’t defraud people. You can’t lie. You can’t cheat them economically. You can’t cheat ’em in elections.”

California’s bot-disclosure law is more than a run-of-the-mill anti-fraud rule. By attempting to regulate a technology that thrives on social networks, the state will be testing society’s resolve to get our (virtual) house in order after more than two decades of a runaway Internet. We are in new terrain, where the microtargeting of audiences on social networks, the perception of false news stories as genuine, and the bot-led amplification of some voices and drowning-out of others have combined to create angry, ill-informed online communities that are suspicious of one another and of the government.

Regulating bots should be low-hanging fruit when it comes to improving the Internet. The California law doesn’t even ban them outright but, rather, insists that they identify themselves in a manner that is “clear, conspicuous, and reasonably designed.”

But the path from bill to law was hardly easy. Initial versions of the legislation were far more sweeping: large platforms would have been required to take down bots that didn’t reveal themselves, and all bots were covered, not just explicitly political or commercial ones. The trade group the Internet Association and the digital-rights group the Electronic Frontier Foundation, among others, mobilized quickly in opposition, and those provisions were dropped from the draft bill.

Opposition to the bot bill came both from the large social-network platforms that profit from an unregulated public square and from adherents to the familiar libertarian ideology of Silicon Valley, which sees the Internet as a reservoir of unfettered individual freedom. Together, they try to block government encroachment. As John Perry Barlow, an early cyberlibertarian and a founder of E.F.F., said to the “Governments of the Industrial World” in his 1996 “Declaration of Independence of Cyberspace”: “You have no moral right to rule us nor do you possess any methods of enforcement we have true reason to fear.”

The point where economic self-interest stops and libertarian ideology begins can be hard to identify. Mark Zuckerberg, of Facebook, speaking at the Aspen Ideas Festival last week, appealed to personal freedom to defend his platform’s decision to allow the microtargeting of false, incendiary information. “I do not think we want to go so far towards saying that a private company prevents you from saying something that it thinks is factually incorrect,” he said. “That to me just feels like it’s too far and goes away from the tradition of free expression.”

In Aspen, Zuckerberg was responding to a question about why his platform declined to take down an altered video that was meant to fool viewers into thinking that Nancy Pelosi was slurring her speech. In an interview last year with Recode, he tried to explain why Facebook allows Holocaust deniers to spread false conspiracy theories.

To be clear, Facebook isn’t the government (yet). As a private company, it can and does take down speech it doesn’t like—nude pictures, for example. What Zuckerberg was describing was the kind of political speech he believes the government should protect and the policy he wants Facebook to follow.