The Cyberlaw Podcast

Sen. Schumer (D-N.Y.) has announced an ambitious plan to produce a bipartisan AI regulation program in a matter of months. Jordan Schneider admires the project; I’m more skeptical. The rest of our commentators, Chessie Lockhart and Michael Ellis, also weigh in on AI issues. Chessie lays out the case against panicking over existential AI threats, canvassed this week in the MIT Technology Review. I suggest that anyone complaining that the EU or China is getting ahead of the U.S. in AI regulation (lookin’ at you, Sen. Warner!) doesn’t quite understand the race we’re running. Jordan explains the difficulty the U.S. faces in trying to keep China from surprising us in AI.

Michael catches us up on Canada’s ill-advised effort to force Google and Meta to pay Canadian media whenever a user links to a Canadian story. Meta has already said it would rather end such links. The end result could be that even more Canadian news gets filtered through American media, hardly a popular outcome north of the border.

Speaking of ill-advised regulatory initiatives, Michael and I comment on Australia’s threatening Twitter with a fine for allowing too much hate speech on the platform post-Elon.  

Chessie gives an overview of the Data Elimination and Limiting Extensive Tracking and Exchange Act or the DELETE Act, a relatively modest bipartisan effort to regulate data brokers’ control of personal data. Michael and I talk about the growing tension between EU member states with real national security tasks to complete and the Brussels establishment, which has enjoyed a 70-year holiday from national security history and expects the next 70 to be more of the same. The latest conflict is over how much leeway to give member states when they feel the need to plant spyware on journalists’ phones. Remarkably, both sides think the government should have such leeway; the fight is over how much.  

Michael and I are surprised that the BBC feels obliged to ask, “Why is it so rare to hear about Western cyber-attacks?” Because, BBC, the agencies carrying out those attacks are on our side and mostly respect rules we support.

In updates and quick hits:

Download 464th Episode (mp3)


You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.

Direct download: TheCyberlawPodcast-464.mp3
Category:general -- posted at: 10:49am EDT

Senator Ron Wyden (D-Ore.) is to moral panics over privacy what Andreessen Horowitz is to cryptocurrency startups. He’s constantly trying to breathe life into them, hoping to justify new restrictions on government or private uses of data. His latest crusade is against the intelligence community’s purchase of behavioral data, which is generally available to everyone from Amazon to the GRU. He has launched his campaign several times, introducing legislation, holding up Avril Haines’s confirmation over the issue, and extracting a Director of National Intelligence report on the topic that has now been declassified. It was a sober and reasonable explanation of why commercial data is valuable for intelligence purposes, so naturally WIRED magazine’s headline summary was, “The U.S. Is Openly Stockpiling Dirt on All Its Citizens.” Matthew Heiman takes us through the story, sparking a debate that pulls in Michael Karanicolas and Cristin Flynn Goodwin.

Next, Michael explains IBM’s announcement that it has made a big step forward in quantum computing. 

Meanwhile, Cristin tells us, the EU has taken another incremental step forward in producing its AI Act—mainly by piling even more demands on artificial intelligence companies. We debate whether Europe can be a leader in AI regulation if it has no AI industry. (I think it makes the whole effort easier, pointing to a Stanford study suggesting that every AI model we’ve seen is already in violation of the AI Act’s requirements.)

Michael and I discuss a story claiming persuasively that an Amazon driver’s allegation of racism led to an Amazon customer being booted out of his own “smart” home system for days. This leads us to the question of how Silicon Valley’s many “local” monopolies enable its unaccountable power to dish out punishment to customers it doesn’t approve of.

Matthew recaps the administration’s effort to turn the debate over renewal of section 702 of FISA. This week, it rolled out some impressive claims about the cyber value of 702, including identifying the Colonial Pipeline attackers (and getting back some of the ransom). It also introduced yet another set of FBI reforms designed to ensure that agents face career consequences for breaking the rules on accessing 702 data. 

Cristin and I award North Korea the “Most Improved Nation State Hacker” prize for the decade, as the country triples its cryptocurrency thefts and shows real talent for social engineering and supply chain exploits. Meanwhile, the Russians who are likely behind Anonymous Sudan decided to embarrass Microsoft with a DDoS attack at its application level. The real puzzle is what Russia gains from the stunt.

Finally, in updates and quick hits, we give deputy national cyber director Rob Knake a fond sendoff, as he moves to the private sector, we anticipate an important competition decision in a couple of months as the FTC tries to stop the Microsoft-Activision Blizzard merger in court, and I speculate on what could be a Very Big Deal – the possible breakup of Google’s adtech business.

Download 463rd Episode (mp3)



Direct download: TheCyberlawPodcast-463.mp3
Category:general -- posted at: 11:01am EDT

It was a disastrous week for cryptocurrency in the United States, as the Securities and Exchange Commission (SEC) filed suit against the two biggest exchanges, Binance and Coinbase, on a theory that makes it nearly impossible to run a cryptocurrency exchange that is competitive with overseas exchanges. Nick Weaver lays out the differences between “process crimes” and “crime crimes,” and how they help distinguish the two lawsuits. The SEC action marks the end of an uneasy truce, but not the end of the debate. Both exchanges have the funds for a hundred-million-dollar defense and lobbying campaign. So you can expect to hear more about this issue for years (and years) to come.

I touch on two AI regulation stories. First, I found Marc Andreessen’s post trying to head off AI regulation pretty persuasive until the end, where he said that the risk of bad people using AI for bad things can be addressed by using AI to stop them. Sorry, Marc, it doesn’t work that way. We aren’t stopping the crimes that modern encryption makes possible by throwing more crypto at the culprits.

My nominee for the AI Regulation Hall of Fame, though, goes to Japan, which has decided to address the phony issue of AI copyright infringement by declaring that it’s a phony issue and there’ll be no copyright liability for their AI industry when they train models on copyrighted content. This is the right answer, but it’s also a brilliant way of borrowing and subverting the EU’s GDPR model (“We regulate the world, and help EU industry too”). If Japan applies this policy to models built and trained in Japan, it will give Japanese AI companies at least an arguable immunity from copyright claims around the world. Companies will flock to Japan to train their models and build their datasets in relative regulatory certainty. The rest of the world can follow suit or watch their industries set up shop in Japan. It helps, of course, that copyright claims against AI are mostly rent-seeking by Big Content, but this has to be the smartest piece of international AI regulation any jurisdiction has come up with so far.

Kurt Sanger, just back from a NATO cyber conference in Estonia, explains why military cyber defenders are stressing their need for access to the private networks they’ll be defending. Whether they’ll get it, we agree, is another kettle of fish entirely.

David Kris turns to public-private cooperation issues in another context. The Cyberspace Solarium Commission has another report out. It calls on the government to refresh and rethink the aging orders that regulate how the government deals with the private sector on cyber matters.

Kurt and I consider whether Russia is committing war crimes by DDoSing emergency services in Ukraine at the same time as its bombing of Ukrainian cities. We agree that the evidence isn’t there yet.

Nick and I dig into two recent exploits that stand out from the crowd. It turns out that Barracuda’s security appliance has been so badly compromised that the only remedial measure involves a woodchipper. Nick is confident that the tradecraft here suggests a nation-state attacker. I wonder if it’s also a way to move Barracuda’s customers to the cloud.

The other compromise is an attack on MOVEit Transfer. The attack on the secure file transfer system has allowed the ransomware gang Clop to download so much proprietary data that it has resorted to telling its victims to self-identify and pay the ransom rather than wait for Clop to figure out who it has pwned.

Kurt, David, and I talk about the White House effort to sell section 702 of FISA for its cybersecurity value and my effort, with Michael Ellis, to sell 702 (packaged with intelligence reform) to a conservative caucus that is newly skeptical of the intelligence community. David finds himself uncomfortably close to endorsing our efforts.

Finally, in quick updates:

Download 462nd Episode (mp3)



Direct download: TheCyberlawPodcast-462_1.mp3
Category:general -- posted at: 11:31am EDT

This episode of the Cyberlaw Podcast kicks off with a spirited debate over AI regulation. Mark MacCarthy dismisses AI researchers’ recent call for attention to the existential risks posed by AI; he thinks it’s a sci-fi distraction from the real issues that need regulation—copyright, privacy, fraud, and competition. I’m utterly flummoxed by the determination on the left to insist that existential threats are not worth discussing, at least while other, more immediate regulatory proposals have not been addressed. Mark and I cross swords about whether anything on his list really needs new, AI-specific regulation when Big Content is already pursuing copyright claims in court, the FTC is already primed to look at AI-enabled fraud and monopolization, and privacy harms are still speculative. Paul Rosenzweig reminds us that we are apparently recapitulating a debate being held behind closed doors in the Biden administration. Paul also points to potentially promising research from OpenAI on reducing AI hallucination.

Gus Hurwitz breaks down the week in FTC news. Amazon settled an FTC claim over children’s privacy and another over security failings at Amazon’s Ring doorbell operation. The bigger story is the FTC’s effort to issue a commercial death sentence on Meta’s children’s business for what looks to Gus and me more like a misdemeanor. Meta thinks, with some justice, that the FTC is looking for an excuse to rewrite the 2019 consent decree, something Meta says only a court can do.

Paul flags a batch of China stories:

Gus tells us that Microsoft has effectively lost a data protection case in Ireland and will face a fine of more than $400 million. I seize the opportunity to plug my upcoming debate with Max Schrems over the Privacy Framework. 

Paul is surprised to find even the State Department rising to the defense of section 702 of the Foreign Intelligence Surveillance Act (“FISA”).

Gus asks whether automated tip suggestions should be condemned as “dark patterns” and whether the FTC needs to investigate the New York Times’s stubborn refusal to let him cancel his subscription. He also previews California’s impending Journalism Preservation Act.

Download 461st Episode (mp3)


Direct download: TheCyberlawPodcast-461.mp3
Category:general -- posted at: 9:12am EDT

In this bonus episode of the Cyberlaw Podcast, I interview Jimmy Wales, the cofounder of Wikipedia. Wikipedia is a rare survivor from the Internet Hippie Age, coexisting like a great herbivorous dinosaur with Facebook, Twitter, and the other carnivorous mammals of Web 2.0. Perhaps not coincidentally, Jimmy is the most prominent founder of a massive internet institution not to become a billionaire. We explore why that is, and how he feels about it. 

I ask Jimmy whether Wikipedia’s model is sustainable, and what new challenges lie ahead for the online encyclopedia. We explore the claim that Wikipedia has a lefty bias, whether a neutral point of view can be maintained by including only material from trusted sources, and I ask Jimmy about a concrete, and in my view weirdly biased, entry in Wikipedia on “Communism.”

We close with an exploration of the opportunities and risks posed for Wikipedia by ChatGPT and other large language models.

Download 460th Episode (mp3) 


Direct download: TheCyberlawPodcast-460.mp3
Category:general -- posted at: 4:12pm EDT
