The Cyberlaw Podcast

Geopolitics has always played a role in prosecuting hackers. But it’s getting a lot more complicated, as Kurt Sanger reports. Responding to a U.S. request, a Russian cybersecurity executive has been arrested in Kazakhstan, accused of having hacked Dropbox and LinkedIn more than ten years ago. The executive, Nikita Kislitsin, has been hammered by geopolitics in that time. The firm he joined after the alleged hacking, Group-IB, has seen its CEO arrested by Russia for treason—probably for getting too close to U.S. investigators. Group-IB sold off all its Russian assets and moved to Singapore, while Kislitsin stayed behind, but he showed up in Kazakhstan recently, perhaps as a result of the Ukraine war. Now both Russia and the U.S. have dueling extradition requests before the Kazakh authorities; Paul Stephan points out that Kazakhstan’s tenuous independence from Russia will be tested by the tug of war.

In more hacker geopolitics, Kurt and Justin Sherman examine the hacking of a Russian satellite communication system that served military and civilian users. It’s reminiscent of the Viasat hack that complicated Ukrainian communications, and a bunch of unrelated commercial services, when Russia invaded. Kurt explores the law of war issues raised by an attack with multiple impacts. Justin and I consider the claim that the Wagner Group carried it out as part of its aborted protest march on Moscow. We end up thinking that this makes more sense as the Ukrainians serving up revenge for Viasat at a time when it might complicate Russia’s response to the Wagner Group. But when it’s hacking and geopolitics, who really knows?

Paul outlines the legal theory—and antitrust nostalgia—behind the FTC’s planned lawsuit targeting Amazon’s exploitation of its sales platform.

We also ask whether the FTC will file the case in court or before the FTC’s own administrative law judge. The latter may smooth the lawsuit’s early steps, but it will also bring to the fore arguments that Lina Khan should recuse herself because she’s already expressed a view on the issues to be raised by the lawsuit. I’m not Chairman Khan’s biggest fan, but I don’t see why her policy views should lead to recusal; they are, after all, why she was appointed in the first place.

Justin and I cover the latest Chinese law raising the risk of doing business in that country by adopting a vague and sweeping view of espionage. 

Paul and I try to straighten out the EU’s apparently endless series of laws governing data, from the General Data Protection Regulation (GDPR) and the AI Act to the Data Act (not to be confused with the Data Governance Act). This week, Paul summarizes the Data Act, which sets the terms for access to and control over nonpersonal data. It’s based on a plausible idea—that government can unleash the value of data by clarifying and making fair the rules for who can use data in new businesses. Of course, the EU is unable to resist imposing its own views of fairness, thus upsetting existing commercial arrangements without really providing any certainty about what will replace them. The outcome is likely to reduce, not improve, the certainty that new data businesses want.

Speaking of which, that’s the critique of the AI Act now being offered by dozens of European business executives, whose open letter slams the way the AI Act kludged the regulation of generative AI into a framework where it didn’t really fit. They accuse the European Parliament of “wanting to anchor the regulation of generative AI in law and proceeding with a rigid compliance logic [that] is as bureaucratic … as it is ineffective in fulfilling its purpose.” And you thought I was the EU-basher.

Justin recaps an Indian court’s rejection of Twitter’s lawsuit challenging the Indian government’s orders to block users who’ve earned the government’s ire. Kurt covers a matching story about whether Facebook should suspend Hun Sen’s Facebook account for threatening users with violence. I take us to Nigeria and question why social media thinks governments can be punished for threatening violence.

Finally, in two updates,

  • I note that Google has joined Facebook in calling Canada’s bluff by refusing to link to Canadian news media in order to avoid the Canadian link tax. 

  • And I do a victory lap for the Cyberlaw Podcast’s Amber Alert feature. One week after we nominated the Commerce Department’s IT supply chain security program for an Amber Alert, the Department answered the call by posting the supply chain czar position in USAJOBS.

Download 466th Episode (mp3)

You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.

 

Direct download: TheCyberlawPodcast-466.mp3
Category:general -- posted at: 10:44am EDT

Max Schrems is the lawyer and activist behind two (and, probably soon, a third) legal challenges to the adequacy of U.S. law to protect European personal data. Thanks to the Federalist Society’s Regulatory Transparency Project, Max and I were able to spend an hour debating the law and policy behind Europe’s generation-long fight with the United States over transatlantic data flows. It’s civil, pointed, occasionally raucous, and wide-ranging – a fun, detailed introduction to the issues that will almost certainly feature in the next round of litigation over the latest agreement between Europe and the U.S. Don’t miss it!

Download 465th Episode (mp3)


Direct download: TheCyberlawPodcast-465.mp3
Category:general -- posted at: 8:51am EDT

Sen. Schumer (D-N.Y.) has announced an ambitious plan to produce a bipartisan AI regulation program in a matter of months. Jordan Schneider admires the project; I’m more skeptical. The rest of our commentators, Chessie Lockhart and Michael Ellis, also weigh in on AI issues. Chessie lays out the case against panicking over existential AI threats, this week canvassed in the MIT Technology Review. I suggest that anyone complaining that the EU or China is getting ahead of the U.S. in AI regulation (lookin’ at you, Sen. Warner!) doesn’t quite understand the race we’re running. Jordan explains the difficulty the U.S. faces in trying to keep China from surprising us in AI.

Michael catches us up on Canada’s ill-advised effort to force Google and Meta to pay Canadian media whenever a user links to a Canadian story. Meta has already said it would rather end such links. The end result could be that even more Canadian news gets filtered through American media, hardly a popular outcome north of the border.

Speaking of ill-advised regulatory initiatives, Michael and I comment on Australia’s threatening Twitter with a fine for allowing too much hate speech on the platform post-Elon.  

Chessie gives an overview of the Data Elimination and Limiting Extensive Tracking and Exchange Act (DELETE Act), a relatively modest bipartisan effort to regulate data brokers’ control of personal data. Michael and I talk about the growing tension between EU member states with real national security tasks to complete and the Brussels establishment, which has enjoyed a 70-year holiday from national security history and expects the next 70 to be more of the same. The latest conflict is over how much leeway to give member states when they feel the need to plant spyware on journalists’ phones. Remarkably, both sides think the government should have such leeway; the fight is over how much.

Michael and I are surprised that the BBC feels obliged to ask, “Why is it so rare to hear about Western cyber-attacks?” Because, BBC, the agencies carrying out those attacks are on our side and mostly respect rules we support.

In updates and quick hits:

Download 464th Episode (mp3)



Direct download: TheCyberlawPodcast-464.mp3
Category:general -- posted at: 10:49am EDT

Senator Ron Wyden (D-Ore.) is to moral panics over privacy what Andreessen Horowitz is to cryptocurrency startups. He’s constantly trying to blow life into them, hoping to justify new restrictions on government or private uses of data. His latest crusade is against the intelligence community’s purchase of behavioral data, which is generally available to everyone from Amazon to the GRU. He has launched his campaign several times, introducing legislation, holding up Avril Haines’s confirmation over the issue, and extracting a Director of National Intelligence report on the topic that has now been declassified. It was a sober and reasonable explanation of why commercial data is valuable for intelligence purposes, so naturally WIRED magazine’s headline summary was, “The U.S. Is Openly Stockpiling Dirt on All Its Citizens.” Matthew Heiman takes us through the story, sparking a debate that pulls in Michael Karanicolas and Cristin Flynn Goodwin.

Next, Michael explains IBM’s announcement that it has made a big step forward in quantum computing. 

Meanwhile, Cristin tells us, the EU has taken another incremental step forward in producing its AI Act—mainly by piling even more demands on artificial intelligence companies. We debate whether Europe can be a leader in AI regulation if it has no AI industry. (I think it makes the whole effort easier, pointing to a Stanford study suggesting that every AI model we’ve seen is already in violation of the AI Act’s requirements.)

Michael and I discuss a story claiming persuasively that an Amazon driver’s allegation of racism led to an Amazon customer being booted out of his own “smart” home system for days. This leads us to the question of how Silicon Valley’s many “local” monopolies enable its unaccountable power to dish out punishment to customers it doesn’t approve of.

Matthew recaps the administration’s effort to turn around the debate over renewal of section 702 of FISA. This week, it rolled out some impressive claims about the cyber value of 702, including identifying the Colonial Pipeline attackers (and getting back some of the ransom). It also introduced yet another set of FBI reforms designed to ensure that agents face career consequences for breaking the rules on accessing 702 data.

Cristin and I award North Korea the “Most Improved Nation State Hacker” prize for the decade, as the country triples its cryptocurrency thefts and shows real talent for social engineering and supply chain exploits. Meanwhile, the Russians who are likely behind Anonymous Sudan decided to embarrass Microsoft with a DDoS attack at the application level. The real puzzle is what Russia gains from the stunt.

Finally, in updates and quick hits, we give deputy national cyber director Rob Knake a fond sendoff, as he moves to the private sector, we anticipate an important competition decision in a couple of months as the FTC tries to stop the Microsoft-Activision Blizzard merger in court, and I speculate on what could be a Very Big Deal – the possible breakup of Google’s adtech business.

Download 463rd Episode (mp3)



Direct download: TheCyberlawPodcast-463.mp3
Category:general -- posted at: 11:01am EDT

It was a disastrous week for cryptocurrency in the United States, as the Securities and Exchange Commission (SEC) filed suit against the two biggest exchanges, Binance and Coinbase, on a theory that makes it nearly impossible to run a cryptocurrency exchange that is competitive with overseas exchanges. Nick Weaver lays out the differences between “process crimes” and “crime crimes,” and how they help distinguish the two lawsuits. The SEC action marks the end of an uneasy truce, but not the end of the debate. Both exchanges have the funds for a hundred-million-dollar defense and lobbying campaign. So you can expect to hear more about this issue for years (and years) to come.

I touch on two AI regulation stories. First, I found Marc Andreessen’s post trying to head off AI regulation pretty persuasive until the end, where he said that the risk of bad people using AI for bad things can be addressed by using AI to stop them. Sorry, Marc, it doesn’t work that way. We aren’t stopping the crimes that modern encryption makes possible by throwing more crypto at the culprits.

My nominee for the AI Regulation Hall of Fame, though, goes to Japan, which has decided to address the phony issue of AI copyright infringement by declaring that it’s a phony issue and there’ll be no copyright liability for its AI industry when companies train models on copyrighted content. This is the right answer, but it’s also a brilliant way of borrowing and subverting the EU’s GDPR model (“We regulate the world, and help EU industry too”). If Japan applies this policy to models built and trained in Japan, it will give Japanese AI companies at least an arguable immunity from copyright claims around the world. Companies will flock to Japan to train their models and build their datasets in relative regulatory certainty. The rest of the world can follow suit or watch their industries set up shop in Japan. It helps, of course, that copyright claims against AI are mostly rent-seeking by Big Content, but this has to be the smartest piece of international AI regulation any jurisdiction has come up with so far.

Kurt Sanger, just back from a NATO cyber conference in Estonia, explains why military cyber defenders are stressing their need for access to the private networks they’ll be defending. Whether they’ll get it, we agree, is another kettle of fish entirely.

David Kris turns to public-private cooperation issues in another context. The Cyberspace Solarium Commission has another report out. It calls on the government to refresh and rethink the aging orders that regulate how the government deals with the private sector on cyber matters.

Kurt and I consider whether Russia is committing war crimes by DDOSing emergency services in Ukraine at the same time as its bombing of Ukrainian cities. We agree that the evidence isn’t there yet. 

Nick and I dig into two recent exploits that stand out from the crowd. It turns out that Barracuda’s security appliance has been so badly compromised that the only remedial measure involves a woodchipper. Nick is confident that the tradecraft here suggests a nation-state attacker. I wonder if it’s also a way to move Barracuda’s customers to the cloud.

The other compromise is an attack on MOVEit Transfer. The attack on the secure file transfer system has allowed the ransomware gang Clop to download so much proprietary data that they have resorted to telling their victims to self-identify and pay the ransom rather than wait for Clop to figure out who they’ve pwned.

Kurt, David, and I talk about the White House effort to sell section 702 of FISA for its cybersecurity value and my effort, with Michael Ellis, to sell 702 (packaged with intelligence reform) to a conservative caucus that is newly skeptical of the intelligence community. David finds himself uncomfortably close to endorsing our efforts.

Finally, in quick updates:

Download 462nd Episode (mp3)


 

Direct download: TheCyberlawPodcast-462_1.mp3
Category:general -- posted at: 11:31am EDT

This episode of the Cyberlaw Podcast kicks off with a spirited debate over AI regulation. Mark MacCarthy dismisses AI researchers’ recent call for attention to the existential risks posed by AI; he thinks it’s a sci-fi distraction from the real issues that need regulation—copyright, privacy, fraud, and competition. I’m utterly flummoxed by the determination on the left to insist that existential threats are not worth discussing, at least while other, more immediate regulatory proposals have not been addressed. Mark and I cross swords about whether anything on his list really needs new, AI-specific regulation when Big Content is already pursuing copyright claims in court, the FTC is already primed to look at AI-enabled fraud and monopolization, and privacy harms are still speculative. Paul Rosenzweig reminds us that we are apparently recapitulating a debate being held behind closed doors in the Biden administration. Paul also points to potentially promising research from OpenAI on reducing AI hallucination.

Gus Hurwitz breaks down the week in FTC news. Amazon settled an FTC claim over children’s privacy and another over security failings at Amazon’s Ring doorbell operation. The bigger story is the FTC’s effort to issue a commercial death sentence on Meta’s children’s business for what looks to Gus and me more like a misdemeanor. Meta thinks, with some justice, that the FTC is looking for an excuse to rewrite the 2019 consent decree, something Meta says only a court can do.

Paul flags a batch of China stories:

Gus tells us that Microsoft has effectively lost a data protection case in Ireland and will face a fine of more than $400 million. I seize the opportunity to plug my upcoming debate with Max Schrems over the Privacy Framework. 

Paul is surprised to find even the State Department rising to the defense of section 702 of the Foreign Intelligence Surveillance Act (“FISA”).

Gus asks whether automated tip suggestions should be condemned as “dark patterns” and whether the FTC needs to investigate the New York Times’s stubborn refusal to let him cancel his subscription. He also previews California’s impending Journalism Preservation Act.

Download 461st Episode (mp3)


Direct download: TheCyberlawPodcast-461.mp3
Category:general -- posted at: 9:12am EDT

In this bonus episode of the Cyberlaw Podcast, I interview Jimmy Wales, the cofounder of Wikipedia. Wikipedia is a rare survivor from the Internet Hippie Age, coexisting like a great herbivorous dinosaur with Facebook, Twitter, and the other carnivorous mammals of Web 2.0. Perhaps not coincidentally, Jimmy is the most prominent founder of a massive internet institution not to become a billionaire. We explore why that is, and how he feels about it. 

I ask Jimmy whether Wikipedia’s model is sustainable, and what new challenges lie ahead for the online encyclopedia. We explore the claim that Wikipedia has a lefty bias, whether a neutral point of view can be maintained by including only material from trusted sources, and I ask Jimmy about a concrete, and in my view weirdly biased, entry in Wikipedia on “Communism.”

We close with an exploration of the opportunities and risks posed for Wikipedia from ChatGPT and other large language AI models.  

Download 460th Episode (mp3) 


Direct download: TheCyberlawPodcast-460.mp3
Category:general -- posted at: 4:12pm EDT

This episode of the Cyberlaw Podcast features the second half of my interview with Paul Stephan, author of The World Crisis and International Law. But it begins the way many recent episodes have begun, with the latest AI news. And, since it’s so squarely in scope for a cyberlaw podcast, we devote some time to the so-appalling-you-have-to-laugh-to-keep-from-crying story of the lawyer who relied on ChatGPT to write his brief. As Eugene Volokh noted in his post, the model returned exactly the case law the lawyer wanted—because it made up the cases, the citations, and even the quotes. The lawyer said he had no idea that AI would do such a thing. I cast a skeptical eye on that excuse, since when challenged by the court to produce the cases he relied on, the lawyer turned not to LexisNexis or Westlaw but to ChatGPT, which this time made up eight cases on point. And when the lawyer asked, “Are the other cases you provided fake?” the model denied it. Well, all right then. Who among us has not asked Westlaw, “Are the cases you provided fake?” Somehow, I can’t help suspecting that the lawyer’s claim to be an innocent victim of ChatGPT is going to get a closer look before this story ends. So if you’re wondering whether AI poses existential risk, the answer for at least one lawyer’s license is almost certainly “yes.”

But the bigger story of the week was the cries from Google and Microsoft leadership for government regulation. Jeffery Atik and Richard Stiennon weigh in. Microsoft’s President Brad Smith has, as usual, written a thoughtful policy paper on what AI regulation might look like. And they point out that, as usual, Smith is advocating for a process that Microsoft could master pretty easily. Google’s Sundar Pichai also joins the “regulate me” party, but a bit half-heartedly. I argue that the best way to judge Silicon Valley’s confidence in the accuracy of AI is by asking when Google and Apple will be willing to use AI to identify photos of gorillas as gorillas. Because if there’s anything close to an extinction event for those companies, it would be rolling out an AI that once again fails to differentiate between people and apes.

Moving from policy to tech, Richard and I talk about Google’s integration of AI into search; I see some glimmer of explainability and accuracy in Google’s willingness to provide citations (real ones, I presume) for its answers. And on the same topic, the National Academy of Sciences has posted research suggesting that explainability might not be quite as impossible as researchers once thought.

Jeffery takes us through the latest chapters in the U.S.-China decoupling story. China has retaliated, surprisingly weakly, for U.S. moves to cut off high-end chip sales to China. It has banned sales of U.S.-based Micron memory chips to critical infrastructure companies. In the long run, the chip wars may be the disaster that Nvidia’s CEO foresees. Jeffery and I agree that Nvidia has much to fear from a Chinese effort to build a national champion to compete in AI chipmaking. Meanwhile, the Biden administration is building a new model for international agreements in an age of decoupling and industrial policy. Whether its effort to build a China-free IT supply chain will succeed is an open question, but we agree that it marks an end to the old free-trade agreements rejected by both former President Trump and President Biden.

China, meanwhile, is overplaying its hand in Africa. Richard notes reports that Chinese hackers attacked the Kenyan government when Kenya looked like it wouldn’t be able to repay China’s infrastructure loans. As Richard points out, lending money to a friend rarely works out. You are likely to lose both the friend and the money. 

Finally, Richard and Jeffery both opine on Ireland’s imposition—under protest—of a $1.3 billion fine on Facebook for sending data to the United States despite the Court of Justice of the European Union’s (CJEU) two Schrems decisions. We agree that the order simply sets a deadline for the U.S. and the EU to close their deal on a third effort to satisfy the CJEU that U.S. law is “adequate” to protect the rights of Europeans. Speaking of which, anyone who’s enjoyed my rants about the EU will want to tune in for a June 15 Teleforum in which Max Schrems and I will debate the latest privacy framework. If we can, we’ll release it as a bonus episode of this podcast, but listening live should be even more fun!

Download 459th Episode (mp3)


Direct download: TheCyberlawPodcast-459.mp3
Category:general -- posted at: 2:43pm EDT

This episode features part 1 of our two-part interview with Paul Stephan, author of The World Crisis and International Law—a deeper and more entertaining read than the title suggests. Paul lays out the long historical arc that links the 1980s to the present day. It’s not a pretty picture, and it gets worse as he ties those changes to the demands of the Knowledge Economy. How will these profound political and economic clashes resolve themselves?  We’ll cover that in part 2.

Meanwhile, in this episode of the Cyberlaw Podcast I tweak Sam Altman for his relentless embrace of regulation for his industry during testimony last week in the Senate. I compare him to another Sam with a similar regulation-embracing approach to Washington, but Chinny Sharma thinks it’s more accurate to say he did the opposite of everything Mark Zuckerberg did in past testimony. Chinny and Sultan Meghji unpack some of Altman’s proposals, from a new government agency to license large AI models, to safety standards and audits. I mock Sen. Richard Blumenthal, D-Conn., for panicking that “Europe is ahead of us” in industry-killing regulation. That earns him immortality in the form of a new Cybertoon. Speaking of Cybertoonz, I note that an earlier Cybertoon scooped a prominent Wall Street Journal article covering bias in AI models – by two weeks.

Paul explains the Supreme Court’s ruling on social media liability for assisting ISIS, and why it didn’t tell us anything of significance about section 230. 

Chinny and I analyze reports that the FBI misused its access to a section 702 database.  All of the access mistakes came before the latest round of procedural reforms, but on reflection, I think the fault lies with the Justice Department and the Director of National Intelligence, who came up with access rules that all but guarantee mistakes and don’t ensure that the database will be searched when security requires it. 

Chinny reviews a bunch of privacy scandal wannabe stories:

Download the 458th Episode (mp3) 


Direct download: TheCyberlawPodcast-458.mp3
Category:general -- posted at: 3:01pm EDT

Maury Shenk opens this episode with an exploration of three efforts to overcome notable gaps in the performance of large language AI models. OpenAI has developed a tool meant to address the models’ lack of explainability. It uses, naturally, another large language model to identify what makes individual neurons fire the way they do. Maury is skeptical that this is a path forward, but it’s nice to see someone trying. A second effort, Anthropic’s creation of an explicit “constitution” of rules for its models, is more familiar and perhaps more likely to succeed. We also look at the use of “open source” principles to overcome the massive cost of developing new models and then training them. That has proved to be a surprisingly successful fast-follower strategy thanks to a few publicly available models and datasets. The question is whether those resources will continue to be available as competition heats up.

The European Union has to hope that open source will succeed, because the entire continent is a desert when it comes to big institutions making the big investments that look necessary to compete in the field. Despite (or maybe because of) the fact that it has no AI companies to speak of, the EU is moving forward with its AI Act, an attempt to do for AI what the EU did for privacy with GDPR. Maury and I doubt the AI Act will have the same impact, at least outside Europe. Partly that’s because Europe doesn’t have the same jurisdictional hooks in AI as in data protection. It is essentially regulating what AI can be sold inside the EU, and companies are quite willing to develop their products for the rest of the world and bolt on European use restrictions as an afterthought. In addition, the AI Act, which started life as a coherent if aggressive policy about high-risk models, has collapsed into a welter of half-thought-out improvisations in response to the unanticipated success of ChatGPT.

Anne-Gabrielle Haie is more friendly to the EU’s data protection policies, and she takes us through a group of legal rulings that will shape liability for data protection violations. She also notes the potentially protectionist impact of a recent EU proposal to say that U.S. companies cannot offer secure cloud computing in Europe unless they partner with a European cloud provider.

Paul Rosenzweig introduces us to one of the U.S. government’s most impressive technical achievements in cyberdefense—tracking down, reverse engineering, and then killing Snake, one of Russia’s best hacking tools.

Paul and I chew over China’s most recent self-inflicted wound in attracting global investment—the raid on Capvision. I agree that it’s going to discourage investors who need information before they part with their cash. But I offer a lukewarm justification for China’s fear that Capvision’s business model encourages leaks.

Maury reviews Chinese tech giant Baidu’s ChatGPT-like search add-on. I ask whether we can ever trust models like ChatGPT for search, given their love affair with plausible falsehoods.

Paul reviews the technology that will be needed to meet what’s looking like a national trend to require social media age verification. Maury reviews the ruling upholding the lawfulness of the UK’s interception of Encrochat users. And Paul describes the latest crimeware for phones, this time centered in Italy.

Finally, in quick hits:

Download the 457th Episode (mp3)


Direct download: TheCyberlawPodcast-457.mp3
Category:general -- posted at: 10:01am EDT