The Cyberlaw Podcast

This episode of the Cyberlaw Podcast features the second half of my interview with Paul Stephan, author of The World Crisis and International Law. But it begins the way many recent episodes have begun, with the latest AI news. And, since it’s so squarely in scope for a cyberlaw podcast, we devote some time to the so-appalling-you-have-to-laugh-to-keep-from-crying story of the lawyer who relied on ChatGPT to write his brief. As Eugene Volokh noted in his post, the model returned exactly the case law the lawyer wanted—because it made up the cases, the citations, and even the quotes. The lawyer said he had no idea that AI would do such a thing. I cast a skeptical eye on that excuse, since when challenged by the court to produce the cases he relied on, the lawyer turned not to LexisNexis or Westlaw but to ChatGPT, which this time made up eight cases on point. And when the lawyer asked, “Are the other cases you provided fake?” the model denied it. Well, all right then. Who among us has not asked Westlaw, “Are the cases you provided fake?” Somehow, I can’t help suspecting that the lawyer’s claim to be an innocent victim of ChatGPT is going to get a closer look before this story ends. So if you’re wondering whether AI poses existential risk, the answer for at least one lawyer’s license is almost certainly “yes.”

But the bigger story of the week was the calls from Google and Microsoft leadership for government regulation. Jeffery Atik and Richard Stiennon weigh in. Microsoft President Brad Smith has, as usual, written a thoughtful policy paper on what AI regulation might look like, and they point out that, as usual, Smith is advocating for a process that Microsoft could master pretty easily. Google’s Sundar Pichai also joins the “regulate me” party, though a bit half-heartedly. I argue that the best way to judge Silicon Valley’s confidence in the accuracy of AI is to ask when Google and Apple will be willing to use AI to identify photos of gorillas as gorillas. Because if there’s anything close to an extinction event for those companies, it would be rolling out an AI that once again fails to differentiate between people and apes.

Moving from policy to tech, Richard and I talk about Google’s integration of AI into search; I see some glimmer of explainability and accuracy in Google’s willingness to provide citations (real ones, I presume) for its answers. And on the same topic, the National Academy of Sciences has posted research suggesting that explainability might not be quite as impossible as researchers once thought.

Jeffery takes us through the latest chapters in the U.S.-China decoupling story. China has retaliated, surprisingly weakly, for U.S. moves to cut off high-end chip sales to China: it has banned sales of memory chips made by U.S.-based Micron to critical infrastructure companies. In the long run, the chip wars may be the disaster that Nvidia’s CEO foresees. Jeffery and I agree that Nvidia has much to fear from a Chinese effort to build a national champion to compete in AI chipmaking. Meanwhile, the Biden administration is building a new model for international agreements in an age of decoupling and industrial policy. Whether its effort to build a China-free IT supply chain will succeed is an open question, but we agree that it marks an end to the old free-trade agreements rejected by both former President Trump and President Biden.

China, meanwhile, is overplaying its hand in Africa. Richard notes reports that Chinese hackers attacked the Kenyan government when Kenya looked like it wouldn’t be able to repay China’s infrastructure loans. As Richard points out, lending money to a friend rarely works out. You are likely to lose both the friend and the money. 

Finally, Richard and Jeffery both opine on Ireland’s imposition, under protest, of a $1.3 billion fine on Facebook for sending data to the United States despite the Court of Justice of the European Union’s (CJEU) two Schrems decisions. We agree that the order simply sets a deadline for the U.S. and the EU to close their deal on a third effort to satisfy the CJEU that U.S. law is “adequate” to protect the rights of Europeans. Speaking of which, anyone who’s enjoyed my rants about the EU will want to tune in for a June 15 Teleforum in which Max Schrems and I will debate the latest privacy framework. If we can, we’ll release it as a bonus episode of this podcast, but listening live should be even more fun!

Download the 459th Episode (mp3)

You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.

Direct download: TheCyberlawPodcast-459.mp3
Category:general -- posted at: 2:43pm EDT

This episode features part 1 of our two-part interview with Paul Stephan, author of The World Crisis and International Law—a deeper and more entertaining read than the title suggests. Paul lays out the long historical arc that links the 1980s to the present day. It’s not a pretty picture, and it gets worse as he ties that history to the demands of the Knowledge Economy. How will these profound political and economic clashes resolve themselves? We’ll cover that in part 2.

Meanwhile, in this episode of the Cyberlaw Podcast I tweak Sam Altman for his relentless embrace of regulation for his industry during testimony last week in the Senate. I compare him to another Sam with a similar regulation-embracing approach to Washington, but Chinny Sharma thinks it’s more accurate to say he did the opposite of everything Mark Zuckerberg did in past testimony. Chinny and Sultan Meghji unpack some of Altman’s proposals, from a new government agency to license large AI models to safety standards and audits. I mock Sen. Richard Blumenthal, D-Conn., for panicking that “Europe is ahead of us” in industry-killing regulation. That earns him immortality in the form of a new Cybertoon. Speaking of Cybertoonz, I note that an earlier Cybertoon scooped a prominent Wall Street Journal article covering bias in AI models by two weeks.

Paul explains the Supreme Court’s ruling on social media liability for assisting ISIS, and why it didn’t tell us anything of significance about section 230. 

Chinny and I analyze reports that the FBI misused its access to a section 702 database.  All of the access mistakes came before the latest round of procedural reforms, but on reflection, I think the fault lies with the Justice Department and the Director of National Intelligence, who came up with access rules that all but guarantee mistakes and don’t ensure that the database will be searched when security requires it. 

Chinny reviews a bunch of privacy-scandal-wannabe stories.

Download the 458th Episode (mp3) 

You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.

Direct download: TheCyberlawPodcast-458.mp3
Category:general -- posted at: 3:01pm EDT

Maury Shenk opens this episode with an exploration of three efforts to overcome notable gaps in the performance of large language models. The first is OpenAI’s tool meant to address the models’ lack of explainability. It uses, naturally, another large language model to identify what makes individual neurons fire the way they do. Maury is skeptical that this is a path forward, but it’s nice to see someone trying. The second effort, Anthropic’s creation of an explicit “constitution” of rules for its models, is more familiar and perhaps more likely to succeed. The third is the use of “open source” principles to overcome the massive cost of developing new models and then training them. That has proved to be a surprisingly successful fast-follower strategy thanks to a few publicly available models and datasets. The question is whether those resources will continue to be available as competition heats up.

The European Union has to hope that open source will succeed, because the entire continent is a desert when it comes to big institutions making the big investments that look necessary to compete in the field. Despite having no AI companies to speak of (or maybe because of that), the EU is moving forward with its AI Act, an attempt to do for AI what the EU did for privacy with GDPR. Maury and I doubt the AI Act will have the same impact, at least outside Europe. Partly that’s because Europe doesn’t have the same jurisdictional hooks in AI as in data protection. It is essentially regulating what AI can be sold inside the EU, and companies are quite willing to develop their products for the rest of the world and bolt on European use restrictions as an afterthought. In addition, the AI Act, which started life as a coherent if aggressive policy about high-risk models, has collapsed into a welter of half-thought-out improvisations in response to the unanticipated success of ChatGPT.

Anne-Gabrielle Haie is friendlier to the EU’s data protection policies, and she takes us through a group of legal rulings that will shape liability for data protection violations. She also notes the potentially protectionist impact of a recent EU proposal under which U.S. companies could not offer secure cloud computing in Europe unless they partner with a European cloud provider.

Paul Rosenzweig introduces us to one of the U.S. government’s most impressive technical achievements in cyberdefense—tracking down, reverse engineering, and then killing Snake, one of Russia’s best hacking tools.

Paul and I chew over China’s most recent self-inflicted wound in attracting global investment—the raid on Capvision. I agree that it’s going to discourage investors who need information before they part with their cash. But I offer a lukewarm justification for China’s fear that Capvision’s business model encourages leaks.

Maury reviews Chinese tech giant Baidu’s ChatGPT-like search add-on. I ask whether we can ever trust models like ChatGPT for search, given their love affair with plausible falsehoods.

Paul reviews the technology that will be needed to meet what looks like a national trend to require social media age verification. Maury reviews the ruling upholding the lawfulness of the UK’s interception of Encrochat users. And Paul describes the latest crimeware for phones, this time centered in Italy.

Finally, in quick hits:

Download the 457th Episode (mp3)

You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.

Direct download: TheCyberlawPodcast-457.mp3
Category:general -- posted at: 10:01am EDT

The “godfather of AI” has left Google, offering warnings about the technology’s existential risks for humanity. Mark MacCarthy calls those risks a fantasy, and a debate breaks out between Mark, Nate Jones, and me. There’s more agreement on the White House summit on AI risks, which seems to have followed Mark’s “let’s worry about tomorrow tomorrow” prescription. I think existential risks are a bigger concern, but I am deeply skeptical about other efforts to regulate AI, especially for bias, as readers of Cybertoonz know. I argue again that regulatory efforts to eliminate bias are an ill-disguised effort to impose quotas more widely, which provokes lively pushback from Jim Dempsey and Mark.

Other prospective AI regulators, from Lina Khan at the Federal Trade Commission (FTC) to the Italian data protection agency, come in for commentary. I’m struck by the caution both have shown, perhaps due to their recognizing the difficulty of applying old regulatory frameworks to this new technology. It’s not, I suspect, because Lina Khan’s FTC has lost its enthusiasm for pushing the law further than it can be pushed. This week’s examples of litigation overreach at the FTC include a dismissed complaint in a location data case against Kochava and a wildly disproportionate “remedy” for what look like Facebook foot faults in complying with an earlier FTC order.

Jim brings us up to date on a slew of new state privacy laws in Montana, Indiana, and Tennessee. He sees them as business-friendly alternatives to the General Data Protection Regulation (GDPR) and California’s privacy law. Mark reviews Pornhub’s reaction to the Utah law on kids’ access to porn. He thinks age verification requirements are due for another look by the courts.

Jim explains the state appellate court decision ruling that the NotPetya attack on Merck was not an act of war and thus not excluded from its insurance coverage.

Nate and I recommend Kim Zetter’s revealing story on the SolarWinds hack. The details help to explain why the Cyber Safety Review Board hasn’t examined SolarWinds—and why it absolutely has to: the full story is going to embarrass a lot of powerful institutions.

In quick hits:

  • Mark makes a bold prediction about the fate of Canada’s law requiring Google and Facebook to pay when they link to Canadian media stories: Just like in Australia, the tech giants and the news industry will reach a deal. 

  • Jim and I comment on the three-year probation sentence for Joe Sullivan in the Uber “misprision of felony” case—and the sentencing judge’s wide-ranging commentary. 

  • I savor the impudence of the hacker who has broken into Russian intelligence’s bitcoin wallets and burned the money to post messages doxing the agencies involved.

  • And for those who missed it, Rick Salgado and I wrote a Lawfare article on why CISOs should support renewal of Foreign Intelligence Surveillance Act (FISA) section 702, and Metacurity named it one of the week’s “Best Infosec-related Long Reads.” 

Download the 456th Episode (mp3)

You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.

Direct download: TheCyberlawPodcast-456.mp3
Category:general -- posted at: 1:59pm EDT

We open this episode of the Cyberlaw Podcast with some actual news about the debate over renewing section 702 of FISA. That’s the law that allows the government to target foreigners for a national security purpose and to intercept their communications in and out of the U.S. A lot of attention has been focused on what happens to those communications after they’ve been intercepted and stored, and particularly whether the FBI should get a second court authorization—maybe even a warrant based on probable cause—to search for records about an American. Michael J. Ellis reports that the Office of the Director of National Intelligence has released new data on such FBI searches. Turns out, they’ve dropped from almost 3 million last year to nearly 120 thousand this year. In large part the drop reflects the tougher restrictions imposed by the FBI on such searches. Those restrictions were also made public this week. It has also emerged that the government is using section 702 millions of times a year to identify the victims of cyberattacks (makes sense: foreign hackers are often a national security concern, and their whole business model is to use U.S. infrastructure to communicate [in a very special way] with U.S. networks.) So it turns out that all those civil libertarians who want to make it hard for the government to search 702 for the names of Americans are proposing ways to slow down and complicate the process of warning hacking victims. Thanks a bunch, folks!

Justin Sherman covers China’s push to attack and even take over enemy (U.S.) satellites. This story is apparently drawn from the Discord leaks, and it has the ring of truth. I opine that the Defense Department has gotten a little too comfortable waging war against people who don’t really have an army, and that the Ukraine conflict shows how much tougher things get when there’s an organized military on the other side. (Again, credit for our artwork goes to Bing Image Creator.)

Adam Candeub flags the next Supreme Court case to nibble away at the problem of social media and the law. We can look forward to an argument next year about the constitutionality of public officials blocking people who post mean comments on the officials’ Facebook pages. 

Justin and I break down a story about whether Twitter is complying with more government demands under Elon Musk. The short answer is yes. This leads me to ask why we expect social media companies to spend large sums fighting government takedown and surveillance requests when it’s much cheaper just to comply. So far, the answer has been that mainstream media and Good People Everywhere will criticize companies that don’t fight. But with criticism of Elon Musk’s Twitter already turned up to 11, that’s not likely to persuade him.

Adam and I are impressed by Citizen Lab’s report on search censorship in China. We’d both kind of like to see Citizen Lab do the same thing for U.S. censorship, which somehow gets less transparency. If you suspect that’s because there’s more censorship than U.S. companies want to admit, here’s a straw in the wind: Citizen Lab reports that the one American company still providing search services in China, Microsoft Bing, is actually more aggressive about stifling political speech than China’s main search engine, Baidu. This fits with my discovery that Bing’s Image Creator refused to construct an image using Taiwan’s flag. (It was OK using U.S. and German flags, but not China’s.) I also credit Microsoft for fixing that particular bit of overreach: You can now create images with both Taiwanese and Chinese flags. 

Adam covers the EU’s enthusiasm for regulating other countries’ companies. It has designated 19 tech giants as subject to its online content rules. Of the 19, one is a European company, and two are Chinese (counting TikTok). The rest are American companies. 

I cover a case that I think could be a big problem for the Biden administration as it ramps up its campaign for cybersecurity regulation. Iowa and a couple of other states are suing to block the Environmental Protection Agency’s legally questionable effort to impose cybersecurity requirements on public water systems by “interpreting” those requirements into a law that doesn’t say much about cybersecurity and never imposed them before.

Michael Ellis and I cover the story detailing a former NSA director’s business ties to Saudi Arabia—and expand it to confess our unease at the number of generals and admirals moving from command of U.S. forces to consulting gigs with the countries they were just negotiating with. Recent restrictions on the revolving door for intelligence officers get a mention.

Adam covers the Quebec decision awarding $500 thousand to a man who couldn’t get Google to consistently delete a false story portraying him as a pedophile and conman.

Justin and I debate whether Meta’s Reels feature has what it takes to be a plausible TikTok competitor. Justin is skeptical. I’m a little less so. Meta’s claims about the success of Reels aren’t entirely persuasive, but perhaps it’s too early to tell.

The D.C. Circuit has killed off the state antitrust case trying to undo Meta’s long-ago acquisition of WhatsApp and Instagram. The states waited too long, the court held. That doctrine doesn’t apply the same way to the Federal Trade Commission (FTC), which will get to pursue a lonely battle against long odds for years. If the FTC is going to keep sending its lawyers into battle like conscripts in Bakhmut, I ask, when will the commission start recruiting in Russian prisons?

That was fast. Adam tells us that the Brazil court order banning Telegram because it wouldn’t turn over information on neo-Nazi groups has been overturned on appeal. But Telegram isn’t out of the woods. The appeals court left in place fines of $200 thousand a day for noncompliance.

And in another regulatory walkback, Italy’s privacy watchdog is letting ChatGPT back into the country. I suspect the Italian government of cutting a deal to save face as it abandons its initial position on ChatGPT’s scraping of public data to train the model.

Finally, in policies I wish they would walk back, four U.S. regulatory agencies claimed (plausibly) that they had authority to bring bias claims against companies using AI in a discriminatory fashion. Since I don’t see any way to bring those claims without arguing that any deviation from proportional representation constitutes discrimination, this feels like a surreptitious introduction of quotas into several new parts of the economy, just as the Supreme Court seems poised to cast doubt on such quotas in higher education. 

Download the 455th Episode (mp3)

You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.

Direct download: TheCyberlawPodcast-455.mp3
Category:general -- posted at: 10:18am EDT
