The Cyberlaw Podcast

This bonus episode is an interview with Josephine Wolff and Dan Schwarcz, who along with Daniel Woods have written an article with the same title as this post. Their thesis is that breach lawyers have lost perspective in their no-holds-barred pursuit of attorney-client privilege to protect the confidentiality of forensic reports that diagnose the breach. Remarkably for a law review article, it contains actual field research. The authors interviewed all the players in breach response, from company information security teams and breach lawyers to forensic investigators, insurers, and insurance brokers. I remind them of Tracy Kidder’s astute observation that, in building a house, there are three main players—owner, architect, and builder—and that if you get any two of them in the room alone, they will spend all their time bad-mouthing the third. Wolff, Schwarcz, and Woods seem to have done that with the breach response players, and the bad-mouthing falls hardest on the lawyers.

The main problem is that using attorney-client privilege to keep a breach forensics process confidential is a reach, so the courts have been unsympathetic. That forces lawyers to impose more and more restrictions on the forensic investigator and its communications in the hope of maintaining confidentiality. The upshot is that no forensics report at all is written for many breaches (up to 95 percent, Josephine estimates). How does the breached company find out what it did wrong and what it should do to avoid the next breach? Simple. Their lawyer translates the forensic firm’s advice into a PowerPoint and briefs management. Really, what could go wrong?

In closing, Dan and Josephine offer some ideas for how to get out of this dysfunctional mess. I push back. All in all, it’s the most fun I’ve ever had talking about insurance law.

Direct download: TheCyberlawPodcast-435.mp3
Category:general -- posted at: 7:32pm EDT

It’s been a news-heavy week, but we have the most fun in this episode with ChatGPT. Jane Bambauer, Richard Stiennon, and I pick over the astonishing number of use cases and misuse cases disclosed by the release of ChatGPT for public access. It is talented—writing dozens of term papers in seconds. It is sociopathic—the term papers are full of falsehoods, down to the made-up citations to plausible but nonexistent New York Times stories. And it has too many lawyers—Richard’s request that it provide his bio (or even Einstein’s) was refused on what are almost certainly data protection grounds. Luckily, either ChatGPT or its lawyers are also bone stupid, since reframing the question fools the machine into subverting the legal and PC limits it labors under. I speculate that it beat Google to a public relations triumph precisely because Google had even more lawyers telling their artificial intelligence what not to say.

In a surprisingly undercovered story, Apple has gone all in on child pornography. Its phone encryption already makes the iPhone a safe place to record child sexual abuse material (CSAM); now Apple will encrypt users’ cloud storage with keys it cannot access, allowing customers to upload CSAM without fear of law enforcement. And it has abandoned its effort to identify such material by doing phone-based screening. All that’s left of its effort is a weak option that allows parents to force their kids to activate a setting that prevents them from sending or receiving nude photos. Jane and I dig into the story, as well as Apple’s questionable claim to be offering the same encryption to its Chinese customers.

Nate Jones brings us up to date on the National Defense Authorization Act, or NDAA. Lots of second-tier cyber provisions made it into the bill, but not the provision requiring that critical infrastructure companies report security breaches. A contested provision on spyware purchases by the U.S. government was compromised into a useful requirement that the intelligence community identify spyware that poses risks to the government.

Jane updates us on what European data protectionists have in store for Meta, and it’s not pretty. The EU data protection supervisory board intends to tell the Meta companies that they cannot give people a free social media network in exchange for watching what they do on the network and serving ads based on their behavior. If so, it’s a one-two punch. Apple delivered the first blow by curtailing Meta’s access to third-party behavioral data. Now even first-party data could be off limits in Europe. That’s a big revenue hit, and it raises questions about whether Facebook will want to keep giving away its services in Europe.

Mike Masnick is Glenn Greenwald with a tech bent—often wrong but never in doubt, and contemptuous of anyone who disagrees. But when he is right, he is right. Jane and I discuss his article recognizing that data protection is becoming a tool that the rich and powerful can use to squash annoying journalist-investigators. I have been saying this for decades. But still, welcome to the party, Mike!

Nate points to a plea for more controls on the export of personal data from the U.S. It comes not from the usual privacy enthusiasts but from the U.S. Naval Institute, and it makes sense.

It was a bad week for Europe on the Cyberlaw Podcast. Jane and I take time to marvel at the story of France’s Mr. Privacy and the endless appetite of Europe’s bureaucrats for his serial grifting.

Nate and I cover what could be a good resolution to the snake-bitten cloud contract process at the Department of Defense. The Pentagon is going to let four cloud companies—Google, Amazon, Oracle, and Microsoft—share the prize.

You did not think we would forget Twitter, did you? Jane, Richard, and I all comment on the Twitter Files. Consensus: the journalists claiming these stories are nothingburgers are more driven by ideology than news. Especially newsworthy are the remarkable proliferation of shadowbanning tools Twitter developed for suppressing speech it didn’t like, and some considerable though anecdotal evidence that the many speech rules at the company were twisted to suppress speech from the right, even when the rules did not quite fit, as with LibsofTikTok, while similar behavior on the left went unpunished. Richard tells us what it feels like to be on the receiving end of a Twitter shadowban. 

The podcast introduces a new feature: “We Read It So You Don’t Have To,” and Nate provides the tl;dr on a New York Times story: How the Global Spyware Industry Spiraled Out of Control.

And in quick hits and updates:

Direct download: TheCyberlawPodcast-434.mp3
Category:general -- posted at: 9:55am EDT

This episode of the Cyberlaw Podcast delves into the use of location technology in two big events—the surprisingly outspoken lockdown protests in China and the Jan. 6 riot at the U.S. Capitol. Both were seen as big threats to the government, and both produced aggressive police responses that relied heavily on government access to phone location data. Jamil Jaffer and Mark MacCarthy walk us through both stories and respond to the provocative question, what’s the difference? Jamil’s answer (and mine, for what it’s worth) is that the U.S. government gained access to location information from Google only after a multi-stage process meant to protect innocent users’ information, and that there is now a court case that will determine whether the government actually did protect users whose privacy should not have been invaded. 

Whether we should be relying on Google’s made-up and self-protective rules for access to location data is a separate question. It becomes more pointed as Silicon Valley has started making up a set of self-protective penalties for companies that assist law enforcement in gaining access to phones that Silicon Valley has made inaccessible. The movement to punish law enforcement access providers has moved from trashing companies like NSO, whose technology has been widely misused, to punishing companies on far less evidence. This week, TrustCor lost its certificate authority status mostly for looking suspiciously close to the National Security Agency, and Google outed Variston of Spain for ties to a vulnerability exploitation system. Nick Weaver is there to hose me down.

The U.K. is working on an online safety bill, likely to be finalized in January, Mark reports, but this week the government agreed to drop its direct regulation of “lawful but awful” speech on social media. The step was a symbolic victory for free speech advocates, but the details of the bill before and after the change suggest it was more modest than the brouhaha implied.

The Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) has finished taking comments on its proposed cyber incident reporting regulation. Jamil summarizes industry’s complaints, which focus on the risk of having to file multiple reports with multiple agencies. Industry has a point, I suggest, and CISA should take the other agencies in hand to agree on a report format that doesn’t resemble the State of the Union address.

It turns out that the collapse of FTX is going to curtail a lot of artificial intelligence (AI) safety research. Nick explains why, and offers reasons to be skeptical of the “effective altruism” movement that has made AI safety one of its priorities.

Today, Jamil notes, the U.S. and EU are getting together for a divisive discussion of the U.S. subsidies for electric vehicles (EV) made in North America but not Germany. That’s very likely a World Trade Organization (WTO) violation, I offer, but one that pales in comparison to thirty years of WTO-violating threats to constrain European data exports to the U.S. When you think of it as retaliation for the use of the General Data Protection Regulation (GDPR) to attack U.S. intelligence programs, the EV subsidy is easy to defend.

I ask Nick what we learned this week from Twitter coverage. His answer—that Elon Musk doesn’t understand how hard content moderation is—doesn’t exactly come as news. Nor, really, does most of what we learned from Matt Taibbi’s review of Twitter’s internal discussion of the Hunter Biden laptop story and whether to suppress it. Twitter doesn’t come out of that review looking better. It just looks bad in ways we already suspected were true. One person who does come out of the mess looking good is Rep. Ro Khanna (D-Calif.), who vigorously advocated that Twitter reverse its ban, on both prudential and principled grounds. Good for him.

Speaking of San Francisco Dems who surprised us this week, Nick notes that the San Francisco Board of Supervisors approved the use of remote-controlled bomb “robots” to kill suspects. He does not think the robots are fit for that purpose.

Finally, in quick hits:

And I try to explain why the decision of the DHS cyber safety board to look into the Lapsus$ hacks seems to be drawing fire.

Direct download: TheCyberlawPodcast-433.mp3
Category:general -- posted at: 10:17am EDT
