The Cyberlaw Podcast

Okay, yes, I promised to take a hiatus after episode 500. Yet here it is a week later, and I'm releasing episode 501. Here's my excuse. I read and liked Dmitri Alperovitch's book, "World on the Brink: How America Can Beat China in the Race for the 21st Century."  I told him I wanted to do an interview about it. Then the interview got pushed into late April because that's when the book is actually coming out.

So sue me. I'm back on hiatus.

The conversation in the episode begins with Dmitri's background in cybersecurity and geopolitics, from his emigration from the Soviet Union as a child through his co-founding of CrowdStrike to his founding of the Silverado Policy Accelerator and his work advising the Defense Department. Dmitri shares his journey, including his early start in cryptography and his role in investigating the 2010 Chinese hack of Google and other companies, which he named Operation Aurora.

Dmitri opens his book with a chillingly realistic scenario of a Chinese invasion of Taiwan. He explains that this is not merely a hypothetical exercise, but a well-researched depiction based on his extensive discussions with Taiwanese leadership, military experts, and his own analysis of the terrain.

Then we dive into the main theme of his book: how to prevent that scenario from coming true. Dmitri stresses the similarities and differences between the U.S.-Soviet Cold War and what he sees as Cold War II between the U.S. and China. He argues that, like Cold War I, Cold War II will require a comprehensive strategy that leverages military, economic, diplomatic, and technological deterrence.

Dmitri also highlights the structural economic problems facing China, such as the middle-income trap and a looming population collapse. Despite these challenges, he stresses that the U.S. will face tough decisions as it seeks to deter conflict with China while maintaining its other global obligations.

We talk about diversifying critical supply chains away from China and slowing China's technological progress in areas like semiconductors. This will require continuing collaboration with allies like Japan and the Netherlands to restrict China's access to advanced chip-making equipment.

Finally, I note the remarkable role played in Cold War I by Henry Kissinger and Zbigniew Brzezinski, two influential national security advisers who were also first-generation immigrants.  I ask whether it's too late to nominate Dmitri to play the same role in Cold War II. You heard it here first!

Direct download: The_Cyberlaw_Podcast_501.mp3
Category:general -- posted at: 9:45am EDT

There’s a whiff of Auld Lang Syne about episode 500 of the Cyberlaw Podcast, since after this it will be going on hiatus for some time and maybe forever. (Okay, there will be an interview with Dmitri Alperovitch about his forthcoming book, but the news commentary is done for now.) Perhaps it’s appropriate, then, for our two lead stories to revive a theme from the 90s – who’s better, Microsoft or Linux? Sadly for both, the current debate is over who’s worse, at least for cybersecurity.

 

Microsoft’s sins against cybersecurity are laid bare in a report of the Cyber Safety Review Board, Paul Rosenzweig reports. The Board digs into the disastrous compromise of a Microsoft signing key that gave China access to U.S. government email. The language of the report is sober, and all the more devastating because of its restraint. Microsoft seems to have entirely lost the security focus it so famously pivoted to twenty years ago. Getting it back will require renewed attention to security at a time when the company feels compelled to focus relentlessly on building AI into its offerings. The signs for improvement are not good. The only people who come out of the report looking good are the State Department security team, whose mad cyber skillz deserve to be celebrated – not least because they’ve been questioned by the rest of government for decades.

 

With Microsoft down, you might think open source would be up. Think again, Nick Weaver tells us. The strategic vulnerability of open source, as well as its appeal, is that anyone can contribute code to a project they like. And in the case of the XZ backdoor, somebody did just that. A well-organized, well-financed, and knowledgeable group of hackers cajoled and bullied their way into a contributing role on an open source project that provides widely used compression tools. Once in, they contributed a backdoored feature that used public key encryption to ensure that only the feature’s authors could use it. It was weeks away from being in every Linux distro when a Microsoft employee discovered the implant. But the people who almost pulled this off seemed well-practiced and well-resourced. They’ve likely done this before, and will likely do it again, leaving every open source project facing the same strategic vulnerability.
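For readers curious about the mechanism Nick describes, the gating trick can be sketched in a few lines. To be clear, this is a toy illustration and not the actual implant: the real XZ backdoor hid an Ed25519 signature check deep inside sshd’s certificate handling, whereas this sketch uses textbook RSA with small toy primes purely to show why only the key-holding authors could trigger the hidden behavior.

```python
import hashlib

# --- Attacker side: a keypair only the attacker fully controls ---------
# Toy primes for illustration only; the real implant used Ed25519.
p, q = 1_000_003, 1_000_033
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))           # private signing exponent

def digest_int(payload: bytes) -> int:
    # Hash the payload and reduce it into the RSA modulus range.
    return int.from_bytes(hashlib.sha256(payload).digest(), "big") % n

def attacker_sign(payload: bytes) -> int:
    # Only someone holding d can produce a valid signature.
    return pow(digest_int(payload), d, n)

# --- Implanted side: check hidden inside the compromised library -------
def backdoor_gate(payload: bytes, sig: int) -> bool:
    # The public values e and n can be read out of the binary, but
    # without d nobody else can forge a signature that verifies, so
    # only the backdoor's authors can trigger the hidden code path.
    return pow(sig, e, n) == digest_int(payload)

cmd = b"run this command"
assert backdoor_gate(cmd, attacker_sign(cmd))   # the attacker gets in
assert not backdoor_gate(cmd, 12345)            # everyone else is refused
```

The design choice is the point: by gating the implant on a signature rather than a shared password, the authors made sure that even researchers who fully reverse-engineered the backdoor could not use it themselves.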

 

It wouldn’t be the Cyberlaw Podcast without at least one Baker rant about political correctness.  The much-touted bipartisan privacy bill threatening to sweep to enactment in this Congress turns out to be a disaster for anyone who opposes identity politics.  To get liberals on board with a modest amount of privacy preemption, I charge, the bill would effectively overturn the Supreme Court’s Harvard admissions decision and impose race, gender, and other quotas on a host of other activities that have avoided them so far. Adam Hickey and I debate the language of the bill.  Why would the Republicans who control the House go along with this?  I offer two reasons:  first, business lobbyists want both preemption and a way to avoid charges of racial discrimination, even if it means relying on quotas; second, maybe Sen. Alan Simpson was right that the Republican Party really is the Stupid Party.

 

Nick and I turn to a difficult AI story about how Israel is using algorithms to identify and kill even low-level Hamas operatives in their homes. Far more than killer robots, this use of AI in war is likely to sweep the world. Nick is critical of Israel’s approach; I am less so. But there’s no doubt that the story forces a sober assessment of just how personal and how ugly war will soon be.

 

Paul takes the next story, in which Microsoft serves up leftover “AI gonna steal yer election” tales that are not much different than all the others we’ve heard since 2016 (when straight social media was the villain).  The bottom line: China is using AI in social media to advance its interests and probe US weaknesses, but it doesn’t seem to be having much effect.

 

Nick answers the question, “Will AI companies run out of training data?” with a clear viewpoint: “They already have.”  He invokes the Hapsburgs to explain what’s going wrong. We also touch on the likelihood that demand for training data will lead to copyright liability,  or that hallucinations will lead to defamation liability.  Color me skeptical.

 

Paul comments on two U.S. quasi-agreements, with the UK and the EU, on AI cooperation. And Adam breaks down the FCC’s burst of initiatives celebrating the arrival of a Democratic majority on the Commission for the first time since President Biden’s inauguration. The Commission is now ready to move out on net neutrality, on regulating cars as oddly shaped phones with benefits, and on SS7 security.

 

Faced with a security researcher who responded to a hacking attack by taking down North Korea’s internet, Adam acknowledges that maybe my advocacy of hacking back wasn’t quite as crazy as he thought when he was in government.

 

In Cyberlaw Podcast alumni news, I note that Paul Rosenzweig has been appointed an advocate at the Data Protection Review Court, where he’ll be expected to channel Max Schrems.  And Paul offers a summary of what has made the last 500 episodes so much fun for me, for our guests, and for our audience.  Thanks to you all for the gift of your time and your tolerance!

Direct download: The_Cyberlaw_Podcast_500.mp3
Category:general -- posted at: 4:00am EDT

This episode is notable not just for cyberlaw commentary, but for its imminent disappearance from these pages and from podcast playlists everywhere.  Having promised to take stock of the podcast when it reached episode 500, I’ve decided that I, the podcast, and the listeners all deserve a break.  So I’ll be taking one after the next episode.  No final decisions have been made, so don’t delete your subscription, but don’t expect a new episode any time soon.  It’s been a great run, from the dawn of the podcast age, through the ad-fueled podcast boom, which I manfully resisted, to the market correction that’s still under way.  It was a pleasure to engage with listeners from all over the world. Yes, even the EU! 

 

As they say, in the podcast age, everyone is famous for fifteen people. That’s certainly been true for me, and I’ll always be grateful for your support – not to mention for all the great contributors who’ve joined the podcast over the years.

 

Back to cyberlaw: there are a surprising number of people arguing that there’s no reason to worry about existential and catastrophic risks from proliferating or runaway AI. Some of that is people seeking clever takes; a lot of it is ideological, driven by fear that worrying about the end of the world will distract attention from the dire but unidentified dangers of face recognition. One useful antidote is the Gladstone Report, written for the State Department’s export control agency. David Kris gives an overview of the report for this episode of the Cyberlaw Podcast. The report explains the dynamic, and some of the evidence, behind all the doom-saying; that discussion is more persuasive than its prescriptions for regulation.

 

Speaking of the dire but unidentified dangers of face recognition, Paul Stephan and I unpack a New York Times piece saying that Israel is using face recognition in its Gaza conflict. Actually, we don’t so much unpack it as turn it over and shake it, only to discover it’s largely empty.  Apparently the editors of the NYT thought that tying face recognition to Israel and Gaza was all we needed to understand that the technology is evil.

 

More interesting is the story arguing that the National Security Agency, traditionally at the forefront of computers and national security, may have to sit out the AI revolution. The reason, David tells us, is that NSA’s access to mass quantities of data for training is complicated by rules and traditions against intelligence agencies accessing data about Americans. And there are few training databases not contaminated with data about and by Americans.

 

While we’re feeling sorry for the intelligence community as it struggles with new technology, Paul notes that Yahoo News has assembled a long analysis of all the ways that personalized technology is making undercover operations impossible for CIA and FBI alike.

 

Michael Ellis weighs in with a review of a report by the Foundation for Defense of Democracies on the need for a U.S. Cyber Force to man, train, and equip fighting nerds for Cyber Command. It’s a bit of an inside-baseball solution, heavy on organizational boxology, but we’re both persuaded that the current system for attracting and retaining cyberwarriors is not working. In the spirit of “Yes, Minister,” we must do something, and this is something.

 

In that same spirit, it’s fair to say that the latest Senate Judiciary proposal for a “compromise” 702 renewal bill is nothing much – a largely phony compromise chock full of ideological baggage. David Kris and I are unimpressed, and surprised at how muted the Biden administration has been in trying to wrangle the Democratic Senate into producing a workable bill.

 

Paul and Michael review the latest trouble for TikTok – a likely FTC lawsuit over privacy. Michael and I puzzle over the stories claiming that Meta may have “wiretapped” Snapchat analytic data.  It comes from a trial lawyer suing Meta, and there are a lot of unanswered questions, such as whether users consented to the collection of the data.  In the end, we can’t help thinking that if Meta had 41 of its lawyers review the project, they found a way to avoid wiretapping liability.

 

The most intriguing story of the week is the complex and surprising three- or four-cornered fight in northern Myanmar over hundreds of thousands of women trapped in call centers to run romance and pig-butchering scams.  Angry that many of the women and many victims are Chinese, China fostered a warlord’s attack on the call centers that freed many women, and deeply embarrassed the current Myanmar ruling junta and its warlord allies, who’d been running the scams.  And we thought our southern border was a mess!

And in quick hits:

·         Elon Musk's X Corp has lost its lawsuit against the left-wing smear artists at CCDH

·         AT&T has lost millions of customer records in a data breach

·         Utah has passed an AI regulation bill

·         The US is still in the cyber sanctions business, tagging several Russian fintech firms and a collection of Chinese state hackers.

·         The SEC isn’t done investigating SolarWinds; now it’s investigating companies harmed by the supply chain attack.

·         Apple’s reluctant compliance with EU law has attracted the expected EU investigation of its App Store policies; if its App Store changes are rejected, Apple could be fined 10% of global turnover.

·         And in a story that will send chills through large parts of the financial and tech elite, it turns out that Jeffrey Epstein’s visitor records didn’t die with him.  Thanks to geolocation adtech, they can be reconstructed.

 

Direct download: The_Cyberlaw_Podcast_499_.mp3
Category:general -- posted at: 3:00am EDT
