The Cyberlaw Podcast (general)

This episode features part 1 of our two-part interview with Paul Stephan, author of The World Crisis and International Law—a deeper and more entertaining read than the title suggests. Paul lays out the long historical arc that links the 1980s to the present day. It’s not a pretty picture, and it gets worse as he ties those changes to the demands of the Knowledge Economy. How will these profound political and economic clashes resolve themselves?  We’ll cover that in part 2.

Meanwhile, in this episode of the Cyberlaw Podcast I tweak Sam Altman for his relentless embrace of regulation for his industry during testimony last week in the Senate. I compare him to another Sam with a similar regulation-embracing approach to Washington, but Chinny Sharma thinks it’s more accurate to say he did the opposite of everything Mark Zuckerberg did in past testimony. Chinny and Sultan Meghji unpack some of Altman’s proposals, from a new government agency to license large AI models, to safety standards and audits. I mock Sen. Richard Blumenthal, D-Conn., for panicking that “Europe is ahead of us” in industry-killing regulation. That earns him immortality in the form of a new Cybertoon, left. Speaking of Cybertoonz, I note that an earlier Cybertoon scooped a prominent Wall Street Journal article covering bias in AI models – by two weeks.

Paul explains the Supreme Court’s ruling on social media liability for assisting ISIS, and why it didn’t tell us anything of significance about section 230. 

Chinny and I analyze reports that the FBI misused its access to a section 702 database.  All of the access mistakes came before the latest round of procedural reforms, but on reflection, I think the fault lies with the Justice Department and the Director of National Intelligence, who came up with access rules that all but guarantee mistakes and don’t ensure that the database will be searched when security requires it. 

Chinny reviews a bunch of privacy-scandal-wannabe stories.

Download the 458th Episode (mp3) 

You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.

Direct download: TheCyberlawPodcast-458.mp3
Category:general -- posted at: 3:01pm EDT

Maury Shenk opens this episode with an exploration of three efforts to overcome notable gaps in the performance of large language AI models. OpenAI has developed a tool meant to address the models’ lack of explainability. It uses, naturally, another large language model to identify what makes individual neurons fire the way they do. Maury is skeptical that this is a path forward, but it’s nice to see someone trying. The second effort, Anthropic’s creation of an explicit “constitution” of rules for its models, is more familiar and perhaps more likely to succeed. The third is the use of “open source” principles to overcome the massive cost of developing new models and then training them. That has proved to be a surprisingly successful fast-follower strategy, thanks to a few publicly available models and datasets. The question is whether those resources will continue to be available as competition heats up.

The European Union has to hope that open source will succeed, because the entire continent is a desert when it comes to big institutions making the big investments that look necessary to compete in the field. Despite (or maybe because of) the fact that it has no AI companies to speak of, the EU is moving forward with its AI Act, an attempt to do for AI what the EU did for privacy with GDPR. Maury and I doubt the AI Act will have the same impact, at least outside Europe. Partly that’s because Europe doesn’t have the same jurisdictional hooks in AI as in data protection. It is essentially regulating what AI can be sold inside the EU, and companies are quite willing to develop their products for the rest of the world and bolt on European use restrictions as an afterthought. In addition, the AI Act, which started life as a coherent if aggressive policy about high-risk models, has collapsed into a welter of half-thought-out improvisations in response to the unanticipated success of ChatGPT.

Anne-Gabrielle Haie is more friendly to the EU’s data protection policies, and she takes us through a group of legal rulings that will shape liability for data protection violations. She also notes the potentially protectionist impact of a recent EU proposal to say that U.S. companies cannot offer secure cloud computing in Europe unless they partner with a European cloud provider.

Paul Rosenzweig introduces us to one of the U.S. government’s most impressive technical achievements in cyberdefense—tracking down, reverse engineering, and then killing Snake, one of Russia’s best hacking tools.

Paul and I chew over China’s most recent self-inflicted wound in attracting global investment—the raid on Capvision. I agree that it’s going to discourage investors who need information before they part with their cash. But I offer a lukewarm justification for China’s fear that Capvision’s business model encourages leaks.

Maury reviews Chinese tech giant Baidu’s ChatGPT-like search add-on. I ask whether we can ever trust models like ChatGPT for search, given their love affair with plausible falsehoods.

Paul reviews the technology that will be needed to meet what’s looking like a national trend to  require social media age verification. Maury reviews the ruling upholding the lawfulness of the UK’s interception of Encrochat users. And Paul describes the latest crimeware for phones, this time centered in Italy.

Finally, in quick hits:

Download the 457th Episode (mp3)

You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.

Direct download: TheCyberlawPodcast-457.mp3
Category:general -- posted at: 10:01am EDT

The “godfather of AI” has left Google, offering warnings about the existential risks the technology poses for humanity. Mark MacCarthy calls those risks a fantasy, and a debate breaks out between Mark, Nate Jones, and me. There’s more agreement on the White House summit on AI risks, which seems to have followed Mark’s “let’s worry about tomorrow tomorrow” prescription. I think existential risks are a bigger concern, but I am deeply skeptical about other efforts to regulate AI, especially for bias, as readers of Cybertoonz know. I argue again that regulatory efforts to eliminate bias are an ill-disguised effort to impose quotas more widely, which provokes lively pushback from Jim Dempsey and Mark.

Other prospective AI regulators, from the Federal Trade Commission (FTC)’s Lina Khan to the Italian data protection agency, come in for commentary. I’m struck by the caution both have shown, perhaps due to their recognizing the difficulty of applying old regulatory frameworks to this new technology. It’s not, I suspect, because Lina Khan’s FTC has lost its enthusiasm for pushing the law further than it can be pushed. This week’s examples of litigation overreach at the FTC include a dismissed complaint in a location data case against Kochava and a wildly disproportionate “remedy” for what look like Facebook foot faults in complying with an earlier FTC order.

Jim brings us up to date on a slew of new state privacy laws in Montana, Indiana, and Tennessee. Jim sees them as business-friendly alternatives to General Data Protection Regulation (GDPR) and California’s privacy law. Mark reviews Pornhub’s reaction to the Utah law on kids’ access to porn. He thinks age verification requirements are due for another look by the courts.  

Jim explains the state appellate court decision ruling that the NotPetya attack on Merck was not an act of war and thus not excluded from its insurance coverage.

Nate and I recommend Kim Zetter’s revealing story on the SolarWinds hack. The details help to explain why the Cyber Safety Review Board hasn’t examined SolarWinds—and why it absolutely has to—because the full story is going to embarrass a lot of powerful institutions.

In quick hits, 

  • Mark makes a bold prediction about the fate of Canada’s law requiring Google and Facebook to pay when they link to Canadian media stories: Just like in Australia, the tech giants and the industry will reach a deal. 

  • Jim and I comment on the three-year probation sentence for Joe Sullivan in the Uber “misprision of felony” case—and the sentencing judge’s wide-ranging commentary. 

  • I savor the impudence of the hacker who has broken into Russian intelligence’s bitcoin wallets and burned the money to post messages doxing the agencies involved.

  • And for those who missed it, Rick Salgado and I wrote a Lawfare article on why CISOs should support renewal of Foreign Intelligence Surveillance Act (FISA) section 702, and Metacurity named it one of the week’s “Best Infosec-related Long Reads.” 

Download 456th Episode (mp3) 

You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.

Direct download: TheCyberlawPodcast-456.mp3
Category:general -- posted at: 1:59pm EDT

We open this episode of the Cyberlaw Podcast with some actual news about the debate over renewing section 702 of FISA. That’s the law that allows the government to target foreigners for a national security purpose and to intercept their communications in and out of the U.S. A lot of attention has been focused on what happens to those communications after they’ve been intercepted and stored, and particularly whether the FBI should get a second court authorization—maybe even a warrant based on probable cause—to search for records about an American. Michael J. Ellis reports that the Office of the Director of National Intelligence has released new data on such FBI searches. Turns out, they’ve dropped from almost 3 million last year to nearly 120 thousand this year. In large part the drop reflects the tougher restrictions imposed by the FBI on such searches. Those restrictions were also made public this week. It has also emerged that the government is using section 702 millions of times a year to identify the victims of cyberattacks (makes sense: foreign hackers are often a national security concern, and their whole business model is to use U.S. infrastructure to communicate [in a very special way] with U.S. networks). So it turns out that all those civil libertarians who want to make it hard for the government to search 702 for the names of Americans are proposing ways to slow down and complicate the process of warning hacking victims. Thanks a bunch, folks!

Justin Sherman covers China’s push to attack and even take over enemy (U.S.) satellites. This story is apparently drawn from the Discord leaks, and it has the ring of truth. I opine that the Defense Department has gotten a little too comfortable waging war against people who don’t really have an army, and that the Ukraine conflict shows how much tougher things get when there’s an organized military on the other side. (Again, credit for our artwork goes to Bing Image Creator.)

Adam Candeub flags the next Supreme Court case to nibble away at the problem of social media and the law. We can look forward to an argument next year about the constitutionality of public officials blocking people who post mean comments on the officials’ Facebook pages. 

Justin and I break down a story about whether Twitter is complying with more government demands under Elon Musk. The short answer is yes. This leads me to ask why we expect social media companies to spend large sums fighting government takedown and surveillance requests when it’s much cheaper just to comply. So far, the answer has been that mainstream media and Good People Everywhere will criticize companies that don’t fight. But with criticism of Elon Musk’s Twitter already turned up to 11, that’s not likely to persuade him.

Adam and I are impressed by Citizen Lab’s report on search censorship in China. We’d both kind of like to see Citizen Lab do the same thing for U.S. censorship, which somehow gets less transparency. If you suspect that’s because there’s more censorship than U.S. companies want to admit, here’s a straw in the wind: Citizen Lab reports that the one American company still providing search services in China, Microsoft Bing, is actually more aggressive about stifling political speech than China’s main search engine, Baidu. This fits with my discovery that Bing’s Image Creator refused to construct an image using Taiwan’s flag. (It was OK using U.S. and German flags, but not China’s.) I also credit Microsoft for fixing that particular bit of overreach: You can now create images with both Taiwanese and Chinese flags.

Adam covers the EU’s enthusiasm for regulating other countries’ companies. It has designated 19 tech giants as subject to its online content rules. Of the 19, one is a European company, and two are Chinese (counting TikTok). The rest are American companies. 

I cover a case that I think could be a big problem for the Biden administration as it ramps up its campaign for cybersecurity regulation. Iowa and a couple of other states are suing to block the Environmental Protection Agency’s legally questionable effort to impose cybersecurity requirements on public water systems by “interpreting” them into a law that doesn’t say much about cybersecurity and never had such requirements before.

Michael Ellis and I cover the story detailing a former NSA director’s business ties to Saudi Arabia—and expand it to confess our unease at the number of generals and admirals moving from command of U.S. forces to consulting gigs with the countries they were just negotiating with. Recent restrictions on the revolving door for intelligence officers get a mention.

Adam covers the Quebec decision awarding $500 thousand to a man who couldn’t get Google to consistently delete a false story portraying him as a pedophile and conman.

Justin and I debate whether Meta’s Reels feature has what it takes to be a plausible TikTok competitor. Justin is skeptical. I’m a little less so. Meta’s claims about the success of Reels aren’t entirely persuasive, but perhaps it’s too early to tell.

The D.C. Circuit has killed off the state antitrust case trying to undo Meta’s long-ago acquisition of WhatsApp and Instagram. The states waited too long, the court held. That doctrine doesn’t apply the same way to the Federal Trade Commission (FTC), which will get to pursue a lonely battle against long odds for years. If the FTC is going to keep sending its lawyers into battle like conscripts in Bakhmut, I ask, when will the commission start recruiting in Russian prisons?

That was fast. Adam tells us that the Brazil court order banning Telegram because it wouldn’t turn over information on neo-Nazi groups has been overturned on appeal. But Telegram isn’t out of the woods. The appeals court left in place fines of $200 thousand a day for noncompliance.

And in another regulatory walkback, Italy’s privacy watchdog is letting ChatGPT back into the country. I suspect the Italian government of cutting a deal to save face as it abandons its initial position on ChatGPT’s scraping of public data to train the model.

Finally, in policies I wish they would walk back, four U.S. regulatory agencies claimed (plausibly) that they had authority to bring bias claims against companies using AI in a discriminatory fashion. Since I don’t see any way to bring those claims without arguing that any deviation from proportional representation constitutes discrimination, this feels like a surreptitious introduction of quotas into several new parts of the economy, just as the Supreme Court seems poised to cast doubt on such quotas in higher education. 

Download 455th Episode (mp3)

You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.

Direct download: TheCyberlawPodcast-455.mp3
Category:general -- posted at: 10:18am EDT

The latest episode of The Cyberlaw Podcast was not created by chatbots (we swear!). Guest host Brian Fleming, along with guests Jay Healey, Maury Shenk, and Nick Weaver, discuss the latest news on the AI revolution, including Google’s efforts to protect its search engine dominance, a fascinating look at the websites that feed tools like ChatGPT (leading some on the panel to argue that quality over quantity should be the goal), and a possible regulatory speed bump for total AI world domination, at least as far as the EU’s General Data Protection Regulation is concerned. Next, Jay lends some perspective on where we’ve been and where we’re going with respect to cybersecurity by reflecting on some notable recent and upcoming anniversaries. The panel then discusses recent charges brought by the Justice Department, and two arrests, aimed at China’s alleged attempt to harass dissidents living in the U.S. (including with fake social media accounts) and ponders how much of Russia’s playbook China is willing to adopt. Nick and Brian then discuss the Securities and Exchange Commission’s complaint against Bittrex and what it could portend for others in the crypto space and, more broadly, the future of crypto regulation and enforcement in the U.S. Maury then discusses the new EU-wide crypto regulations, and what the EU’s approach to regulating this industry could mean going forward. The panel then takes a hard look at an alarming story out of Taiwan and debates what the recent “invisible blockade” on Matsu means for China’s future designs on the island and Taiwan’s ability to bolster the resiliency of its communications infrastructure. Finally, Nick covers a recent report on the Mexican government’s continued reliance on Pegasus spyware. To wrap things up in the week’s quick hits, Jay proposes updating the Insurrection Act to avoid its use as a justification for deploying military cyber capabilities against U.S. citizens, Nick discusses the dangers of computer-generated swatting services, Brian highlights the recent Supreme Court argument that may settle whether online stalking is a “true threat” or protected First Amendment activity, and, last but not least, Nick checks in on Elon Musk’s threat to sue Microsoft after Twitter is dropped from its ad platform.

Download 454th Episode (mp3)

You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.

Direct download: TheCyberlawPodcast-454.mp3
Category:general -- posted at: 11:33am EDT

Every government on the planet announced last week an ambition to regulate artificial intelligence. Nate Jones and Jamil Jaffer take us through the announcements. What’s particularly discouraging is the lack of imagination, as governments dusted off their old prejudices to handle this new problem. Europe is obsessed with data protection, the Biden administration just wants to talk and wait and talk some more, while China must have asked ChatGPT to assemble every regulatory proposal for AI ever made by anyone and translate it into Chinese law. 

Meanwhile, companies trying to satisfy everyone are imposing weird limits on their AI, such as Microsoft’s rule that asking for an image of Taiwan’s flag violates its terms of service. (For the record, asking for China’s flag is also a violation; asking for an American or German flag is not.)

Matthew Heiman and Jamil take us through the strange case of the airman who leaked classified secrets on Discord. Jamil thinks we brought this on ourselves by not taking past leaks sufficiently seriously.

Jamil and I cover the imminent Montana statewide ban on TikTok. He thinks it’s a harbinger; I think it may be a distraction that, like Trump’s ban, produces more hostile judicial rulings.

Nate unpacks the California Court of Appeals’ unpersuasive opinion on law enforcement use of geofencing warrants.

Matthew and I dig into the unanimous Supreme Court decision that should have independent administrative agencies like the Federal Trade Commission and Securities and Exchange Commission trembling. The court held that litigants don’t need to wend their way through years of proceedings in front of the agencies before they can go to court and challenge the agencies’ constitutional status. We both think that this is just the first shoe to drop. The next will be a full-bore challenge to the constitutionality of agencies beholden neither to the executive nor to Congress. If the FTC loses that one, I predict, the old socialist realist statue “Man Controlling Trade” that graces its entry may be replaced by one that PETA and the Chamber of Commerce would like better. Bing’s Image Creator allowed me to illustrate that possible outcome. See attached.

 In quick hits: 

Download 453rd Episode (mp3)

You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.

Direct download: TheCyberlawPodcast-453.mp3
Category:general -- posted at: 9:58am EDT

We do a long take on some of the AI safety reports that have been issued in recent weeks. Jeffery Atik first takes us through the basics of attention-based AI, and then into reports from OpenAI and Stanford on AI safety. Exactly what AI safety covers remains opaque (and toxic, in my view, after the ideological purges committed by Silicon Valley’s “trust and safety” bureaucracies), but there’s no doubt that a potential existential issue lurks below the surface of the most ambitious efforts. Whether or not ChatGPT’s stochastic parroting will ever pose a threat to humanity, it clearly poses a threat to a lot of people’s reputations, Nick Weaver reports.
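For readers who want the gist of Jeffery's walkthrough: attention-based models are built on one small operation. Each query vector scores itself against every key, the scores are softmaxed into weights, and the weights mix the value vectors. A minimal, dependency-free sketch (my illustration, not anything from the episode):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention over lists of vectors.

    Each query attends to every key; the resulting weights mix the values.
    This is the core operation inside transformer-style language models.
    """
    d = len(keys[0])
    outputs = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        outputs.append([sum(w * v[j] for w, v in zip(weights, values))
                        for j in range(len(values[0]))])
    return outputs

# One query, two key/value pairs: the query matches the first key more
# strongly, so the output leans toward the first value vector.
out = attention([[1.0, 0.0]], [[1.0, 0.0], [0.0, 1.0]], [[10.0, 0.0], [0.0, 10.0]])
```

Production transformers do exactly this with matrices and many attention heads in parallel; the 1/√d scaling keeps the softmax from saturating as vector dimensions grow.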

One of the biggest intel leaks of the last decade may not have anything to do with cybersecurity. Instead, the disclosure of multiple highly classified documents seems to have depended on the ability to fold, carry, and photograph the documents. While there’s some evidence that the Russian government may have piggybacked on the leak to sow disinformation, Nick says, the real puzzle is the leaker’s motivation. That leads us to the question whether being a griefer is grounds for losing your clearance.  

Paul Rosenzweig educates us about the Restricting the Emergence of Security Threats that Risk Information and Communications Technology (RESTRICT) Act, which would empower the administration to limit or ban TikTok. He highlights the most prominent argument against the bill, which is, no surprise, the discretion the act would confer on the executive branch. The bill’s authors, Sen. Mark Warner (D-Va.) and Sen. John Thune (R-S.D.), have responded to this criticism, but it looks as though they’ll be offering substantive limits on executive discretion only in the heat of Congressional action. 

Nick is impressed by the law enforcement operation to shutter Genesis Market, where credentials were widely sold to hackers. The data seized by the FBI in the operation will pay dividends for years.  

I give a warning to anyone who has left a sensitive intelligence job to work in the private sector: If your new employer has ties to a foreign government, the Director of National Intelligence has issued a new directive that (sort of) puts you on notice that you could be violating federal law. The directive means the intelligence community will do a pretty good job of telling its employees when they take a job that comes with post-employment restrictions, but IC alumni are so far getting very little guidance. 

Nick exults in the tough tone taken by the Treasury in its report on the illicit finance risk in decentralized finance.

Paul and I cover Utah’s bill requiring teens to get parental approval to join social media sites. After twenty years of mocking red states for trying to control the internet’s impact on kids, it looks to me as though Knowledge Class parents are getting worried for their own kids. When the idea of age-checking internet users gets endorsed by the UK, Utah, and the New Yorker, I suggest, those arguing against the proposal may have a tougher time than they did in the 90s. 

And in quick hits: 

Download 452nd Episode (mp3) 

You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.

Direct download: TheCyberlawPodcast-452.mp3
Category:general -- posted at: 11:03am EDT

Dmitri Alperovitch joins the Cyberlaw Podcast to discuss the state of semiconductor decoupling between China and the West. It’s a broad movement, fed by both sides. China has announced that it’s investigating Micron to see if its memory chips should still be allowed into China’s supply chain (spoiler: almost certainly not). Japan has tightened up its chip-making export control rules, which will align it with U.S. and Dutch restrictions, all with the aim of slowing China’s ability to make the most powerful chips. Meanwhile, South Korea is boosting its chipmakers with new tax breaks, and Huawei is reporting a profit squeeze.

The Biden administration spent much of last week on spyware policy, Winnona DeSombre Berners reports. How much it actually accomplished isn’t clear. The spyware executive order restricts U.S. government purchases of surveillance tools that threaten U.S. security or that have been misused against civil society targets. And a group of like-minded nations have set forth the principles they think should govern sales of spyware. But it’s not as though countries that want spyware are going to have a tough time finding it, I observe, despite all the virtue signaling. Case in point: Iran is getting plenty of new surveillance tech from Russia these days. And spyware campaigns continue to proliferate.

Winnona and Dmitri nominate North Korea for the title “Most Innovative Cyber Power,” acknowledging its creative use of social engineering to steal cryptocurrency and gain access to U.S. policy influencers.

Dmitri covers the TikTok beat, including the prospects of the Restricting the Emergence of Security Threats that Risk Information and Communications Technology (RESTRICT) Act, which he still rates high despite some criticism from the right. Winnona and I debate the need for another piece of legislation given the breadth of CFIUS review and International Emergency Economic Powers Act sanctions.

Dmitri and I note the arrival of GPT-4 in cybersecurity, as Microsoft introduces “Security Copilot.” We question whether this will turn out to be a game changer, but it does suggest that bespoke AI tools could play a role in cybersecurity (and pretty much everything else).

In other AI news, Dmitri and I wonder at Italy’s decision to cut itself off from access to ChatGPT by claiming that it violates Italian data protection law. That may turn out to be a hard case to prove, especially since the regulator has no clear jurisdiction over OpenAI, which is now selling nothing in Italy. In the same vein, there may be a safety reason to be worried by how fast AI is proceeding these days, but the letter proposing a six-month pause for more safety review is hardly persuasive—especially in a world where “safety” seems to mostly be about stamping out bad pronouns.

In news Nick Weaver will kick himself for missing, Binance is facing a bombshell complaint from the Commodity Futures Trading Commission (CFTC) (the Binance response is here). The CFTC clearly had access to the suicidally candid messages exchanged among Binance’s compliance team. I predict criminal indictments in the near future and wonder if the CFTC’s taking the lead on the issue has given it a jurisdictional leg up on the SEC in the turf fight over who regulates cryptocurrency.

Finally, we close with a review of a book arguing that pretty much anyone who ever uttered the words “China’s peaceful rise” was the victim of a well-planned and highly successful Chinese influence operation.

Download 451st Episode (mp3)

You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.

Direct download: TheCyberlawPodcast-451.mp3
Category:general -- posted at: 11:56am EDT

The Capitol Hill hearings featuring TikTok’s CEO lead off episode 450 of the Cyberlaw Podcast. The CEO handled the endless stream of Congressional accusations and suspicion about as well as could have been expected.  And it did him as little good as a cynic would have expected. Jim Dempsey and Mark MacCarthy think Congress is moving toward action on Chinese IT products—probably in the form of the bipartisan Restricting the Emergence of Security Threats that Risk Information and Communications Technology (RESTRICT) Act. But passing legislation and actually doing something about China’s IT successes are two very different things.

The FTC is jumping into the arena on cloud services, Mark tells us, and it can’t escape its DNA—dwelling on possible industry concentration and lock-in and not asking much about the national security implications of knocking off a bunch of American cloud providers when the alternatives are largely Chinese cloud providers. The FTC’s myopia means that the administration won’t get as much help as it could from the FTC on cloud security measures. I reissue my standard objection to the FTC’s refusal to follow the FCC’s lead in deferring on national security to executive branch concerns. Mark and I disagree about whether the FTC Act forces the Commission to limit itself to consumer protection.

Jim Dempsey reviews the latest AI releases, including Google’s Bard, which seems to have many of the same hallucination problems as OpenAI’s engines. Jim and I debate what I consider the wacky and unjustified fascination in the press with catching AI engaging in wrongthink. I believe it’s just a mechanism for justifying the imposition of left-wing values on AI’s output, which already scores left/libertarian on 14 of 15 standard tests for identifying ideological affiliation. Similarly, I question the effort to stop AI from hallucinating footnotes in support of its erroneous facts. If ever there were a case for generative AI correction of AI errors, the fake citation problem seems like a natural.

Speaking of Silicon Valley’s lying problem, Mark reminds us that social media is absolutely immune for user speech, even after it gets notice that the speech is harmful and false. He reminds us of his thoughtful argument in favor of tweaking section 230 to more closely resemble the notice and action obligations found in the Digital Millennium Copyright Act (DMCA). I argue that the DMCA has not so much solved the incentives for overcensoring speech as it has surrendered to them.  

Jim introduces us to an emerging trend in state privacy law: bills that industry supports. Iowa’s new law is the exemplar; Jim questions whether it will satisfy users in the long run.  

I summarize Hachette v. Internet Archive, in which Judge John G. Koeltl delivers a harsh rebuke to internet hippies everywhere, ruling that the Internet Archive violated copyright in its effort to create a digital equivalent to public library lending. The judge’s lesson for the rest of us: You might think fair use is a thing, but it’s not. Get over it.

In quick hits, 

Download 450th Episode (mp3)


Direct download: TheCyberlawPodcast-450_1.mp3
Category:general -- posted at: 10:46am EDT

GPT-4’s rapid and tangible improvement over ChatGPT has more or less guaranteed that it or a competitor will be built into most new and legacy information technology (IT) products. Some applications will be pointless; but some will change users’ world. In this episode, Sultan Meghji, Jordan Schneider, and Siobhan Gorman explore the likely impact of GPT-4 from Silicon Valley to China.

Kurt Sanger joins us to explain why Ukraine’s IT Army of volunteer hackers creates political, legal, and maybe even physical risks for the hackers and for Ukraine. This may explain why Ukraine is looking for ways to “regularize” its international supporters, with a view to steering them toward defending Ukrainian infrastructure.

Siobhan and I dig into the Biden administration’s latest target for cybersecurity regulation: cloud providers.  I wonder if there is not a bit of bait and switch in operation here. The administration seems at least as intent on regulating cloud providers to catch hackers as to improve defenses.

Say this for China – it never lets a bit of leverage go to waste, even when it should. To further buttress its nine-dash-line claim to the South China Sea, China is demanding that companies get Chinese licenses to lay submarine cable within the contested territory. That, of course, incentivizes the laying of cables much further from China, out where they’re harder for the Chinese to deal with in a conflict. But some Beijing bureaucrat will no doubt claim it as a win for the wolf warriors. Ditto for the Chinese ambassador’s statement about the Netherlands joining the U.S. in restricting chip-making equipment sales to China, which boiled down to “We will make you pay for that. We just do not know how yet.” The U.S. is not always good at dealing with its companies and other countries, but it is nice to be competing with a country that is demonstrably worse at it.

The Securities and Exchange Commission has gone from catatonic to hyperactive on cybersecurity. Siobhan notes its latest 48-hour incident reporting requirement and the difficulty of reporting anything useful in that time frame.

Kurt and Siobhan bring their expertise as parents of teens and aspiring teens to the TikTok debate.

I linger over the extraordinary and undercovered mess created by “18F”—the General Services Administration’s effort to bring Silicon Valley to the government’s IT infrastructure. It looks like they brought Silicon Valley’s arrogance, its political correctness, and its penchant for breaking things but forgot to bring either competence or honesty. 18F lied to its federal customers about how or whether it was checking the identities of people logging in through login.gov. When it finally admitted the lie, it brazenly claimed it was not checking because the technology was biased, contrary to the only available evidence. Oh, and it refused to give back the $10 million it charged because the work it did cost more than that. This breakdown in the middle of coronavirus handouts undoubtedly juiced fraud, but no one has figured out how much. Among the victims: Sen. Ron Wyden (D-Ore.), who used login.gov and its phony biometric checks as the “good” alternative that would let the Internal Revenue Service (IRS) cancel its politically inconvenient contract with ID.me. Really, guys, it’s time to start scrubbing 18F from your LinkedIn profiles.

The Knicks have won some games. Blind pigs have found some acorns. But Madison Square Garden (and Knicks) owner Jimmy Dolan is still investing good money in his unwinnable fight to use facial recognition to keep lawyers he does not like out of the Garden. Kurt offers commentary, thereby saving himself the cost of Knicks tickets for future playoff games.

Finally, I read Simson Garfinkel’s explanation of a question I asked (and should have known the answer to) in episode 448.

Direct download: TheCyberlawPodcast-449.mp3
Category:general -- posted at: 9:24am EDT