The Cyberlaw Podcast (general)

This episode of the Cyberlaw Podcast delves into the use of location technology in two big events—the surprisingly outspoken lockdown protests in China and the Jan. 6 riot at the U.S. Capitol. Both were seen as big threats to the government, and both produced aggressive police responses that relied heavily on government access to phone location data. Jamil Jaffer and Mark MacCarthy walk us through both stories and respond to the provocative question: What's the difference? Jamil's answer (and mine, for what it's worth) is that the U.S. government gained access to location information from Google only after a multi-stage process meant to protect innocent users' information, and that there is now a court case that will determine whether the government actually did protect users whose privacy should not have been invaded.

Whether we should be relying on Google's made-up and self-protective rules for access to location data is a separate question. It becomes more pointed as Silicon Valley starts imposing its own self-protective penalties on companies that help law enforcement gain access to phones that Silicon Valley has made inaccessible. The movement to punish law enforcement access providers has moved from trashing companies like NSO, whose technology has been widely misused, to punishing companies on far less evidence. This week, TrustCor lost its certificate authority status mostly for looking suspiciously close to the National Security Agency, and Google outed Variston of Spain for ties to a vulnerability exploitation system. Nick Weaver is there to hose me down.

The U.K. is working on an online safety bill, likely to be finalized in January, Mark reports, but this week the government agreed to drop its direct regulation of "lawful but awful" speech on social media. The step was a symbolic victory for free speech advocates, but the details of the bill before and after the change suggest it was a more modest concession than the brouhaha implied.

The Department of Homeland Security's Cybersecurity and Infrastructure Security Agency (CISA) has finished taking comments on its proposed cyber incident reporting regulation. Jamil summarizes industry's complaints, which focus on the risk of having to file multiple reports with multiple agencies. Industry has a point, I suggest, and CISA should take the other agencies in hand to agree on a report format that doesn't resemble the State of the Union address.

It turns out that the collapse of FTX is going to curtail a lot of artificial intelligence (AI) safety research. Nick explains why, and offers reasons to be skeptical of the “effective altruism” movement that has made AI safety one of its priorities.

Today, Jamil notes, the U.S. and EU are getting together for a divisive discussion of the U.S. subsidies for electric vehicles (EVs) made in North America but not Germany. That's very likely a World Trade Organization (WTO) violation, I offer, but one that pales in comparison to thirty years of WTO-violating threats to constrain European data exports to the U.S. When you think of it as retaliation for the use of the General Data Protection Regulation (GDPR) to attack U.S. intelligence programs, the EV subsidy is easy to defend.

I ask Nick what we learned this week from Twitter coverage. His answer—that Elon Musk doesn’t understand how hard content moderation is—doesn’t exactly come as news. Nor, really, does most of what we learned from Matt Taibbi’s review of Twitter’s internal discussion of the Hunter Biden laptop story and whether to suppress it. Twitter doesn’t come out of that review looking better. It just looks bad in ways we already suspected were true. One person who does come out of the mess looking good is Rep. Ro Khanna (D.-Calif.), who vigorously advocated that Twitter reverse its ban, on both prudential and principled grounds. Good for him.

Speaking of San Francisco Dems who surprised us this week, Nick notes that the city's Board of Supervisors approved the use of remote-controlled bomb "robots" to kill suspects. He does not think the robots are fit for that purpose.

Finally, in quick hits:

And I try to explain why the decision of the DHS Cyber Safety Review Board to look into the Lapsus$ hacks seems to be drawing fire.

Direct download: TheCyberlawPodcast-433.mp3
Category:general -- posted at: 10:17am EDT

We spend much of this episode of the Cyberlaw Podcast talking about toxified technology – new tech that is being demonized for a variety of reasons. Exhibit One, of course, is “spyware,” essentially hacking tools that allow governments to access phones or computers otherwise closed to them, usually by end-to-end encryption. The Washington Post and the New York Times have led a campaign to turn NSO’s Pegasus tool for hacking phones into radioactive waste. Jim Dempsey, though, reminds us that not too long ago, in defending end-to-end encryption, tech policy advocates insisted that the government did not need mandated access to encrypted phones because they could engage in self-help in the form of hacking. David Kris points out that, used with a warrant, there’s nothing uniquely dangerous about hacking tools of this kind. I offer an explanation for why the public policy community and its Silicon Valley funders have changed their tune on the issue: having won the end-to-end encryption debate, they feel free to move on to the next anti-law-enforcement campaign.

That campaign includes private lawsuits against NSO by companies like WhatsApp, whose lawsuit was briefly delayed by NSO’s claim of sovereign immunity on behalf of the (unnamed) countries it builds its products for. That claim made it to the Supreme Court, David reports, where the U.S. government recently filed a brief that will almost certainly send NSO back to court without any sovereign immunity protection.

Meanwhile, in France, Amesys and its executives are being prosecuted for facilitating the torture of Libyan citizens at the hands of the Muammar Qaddafi regime. Amesys evidently sold an earlier and less completely toxified technology—packet inspection tools—to Libya. The criminal case is pending.

And in the U.S., a whole set of tech toxification campaigns are under way, aimed at Chinese products. This week, Jim notes, the Federal Communications Commission came to the end of a long road that began with jawboning in the 2000s and culminated in a flat ban on installing Chinese telecom gear in U.S. networks. On deck for China are DJI’s drones, which several Senators see as a comparable national security threat that should be handled with a similar ban. Maury Shenk tells us that the British government is taking the first steps on a similar path, this time with a ban on some government uses of Chinese surveillance camera systems.

Those measures do not always work, Maury tells us, pointing to a story that hints at trouble ahead for U.S. efforts to decouple Chinese from American artificial intelligence research and development. 

Maury and I take a moment to debunk efforts to persuade readers that artificial intelligence (AI) is toxic because Silicon Valley will use it to take our jobs. AI code writing is not likely to graduate beyond facilitating human coders any time soon, we agree. What AI can do in human resources (HR) may be limited by a different toxification campaign—the largely phony claim that AI is full of bias. Amazon's effort to use AI in HR, I predict, will be sabotaged by this claim. The effort to avoid bias will almost certainly lead Amazon to build race and gender quotas into its engine.

And in a few quick hits:

And we close with a downbeat assessment of Elon Musk’s chances of withstanding the combined hostility of European and U.S. regulators, the press, and the left-wing tech-toxifiers in civil society. He is a talented guy, I argue, and with a three-year runway, he could succeed, but he does not have three years.

Direct download: TheCyberlawPodcast-432.mp3
Category:general -- posted at: 10:39am EDT

The Cyberlaw Podcast leads with the legal cost of Elon Musk’s anti-authoritarian takeover of Twitter. Turns out that authority figures have a lot of weapons, many grounded in law, and Twitter is at risk of being on the receiving end of those weapons. Brian Fleming explores the apparently unkillable notion that the Committee on Foreign Investment in the U.S. (CFIUS) should review Musk’s Twitter deal because of a relatively small share that went to investors with Chinese and Persian Gulf ties. It appears that CFIUS may still be seeking information on what Twitter data those investors will have access to, but I am skeptical that CFIUS will be moved to act on what it learns. More dangerous for Twitter and Musk, says Charles-Albert Helleputte, is the possibility that the company will lose its one-stop-shop privacy regulator for failure to meet the elaborate compliance machinery set up by European privacy bureaucrats. At a quick calculation, that could expose Twitter to fines up to 120% of annual turnover. Finally, I reprise my skeptical take on all the people leaving Twitter for Mastodon as a protest against Musk allowing the Babylon Bee and President Trump back on the platform. If the protestors really think Mastodon’s system is better, I recommend that Twitter adopt it, or at least the version that Francis Fukuyama and Roberta Katz have described.

If you are looking for the far edge of the Establishment's Overton Window on China policy, you will not do better than the U.S.-China Economic and Security Review Commission, a consistently China-skeptical but mainstream body. Brian reprises the Commission's latest report. The headline, we conclude, is about Chinese hacking, but the recommendations do not offer much hope of a solution to that problem, other than more decoupling.

Chalk up one more victory for Trump-Biden continuity, and one more loss for the State Department. Michael Ellis reminds us that the Trump administration took much of Cyber Command’s cyber offense decision making out of the National Security Council and put it back in the Pentagon. This made it much harder for the State Department to stall cyber offense operations. When it turned out that this made Cyber Command more effective and no more irresponsible, the Biden Administration prepared to ratify Trump’s order, with tweaks.

I unpack Google’s expensive (nearly $400 million) settlement with 40 States over location history. Google’s promise to stop storing location history if the feature was turned off was poorly and misleadingly drafted, but I doubt there is anyone who actually wanted to keep Google from using location for most of the apps where it remained operative, so the settlement is a good deal for the states, and a reminder of how unpopular Silicon Valley has become in red and blue states.

Michael tells the doubly embarrassing story of an Iranian hack of the U.S. Merit Systems Protection Board. It is embarrassing to be hacked with a log4j exploit that should have been patched. But it is worse when an Iranian government hacker gets access to a U.S. government network—and decides that the access is only good for mining cryptocurrency.

Brian tells us that the U.S. goal of reshoring chip production is making progress, with Apple planning to use TSMC chips from a new fab in Arizona.

In a few updates and quick hits:

  • I remind listeners that a lot of tech companies are laying employees off, but that overall Silicon Valley employment is still way up over the past couple of years.
  • I give a lick and a promise to the mess at cryptocurrency exchange FTX, which just keeps getting worse.
  • Charles updates us on the next U.S.-E.U. adequacy negotiations, and the prospects for Schrems 3 (and 4, and 5) litigation.

And I sound a note of both admiration and caution about Australia’s plan to “unleash the hounds” – in the form of its own Cyber Command equivalent – on ransomware gangs. As U.S. experience reveals, it makes for a great speech, but actual impact can be hard to achieve.

Direct download: TheCyberlawPodcast-431.mp3
Category:general -- posted at: 10:09am EDT

We open this episode of the Cyberlaw Podcast by considering the (still evolving) results of the 2022 midterm election. Adam Klein and I trade thoughts on what Congress will do. Adam sees two years in which the Senate does nominations, the House does investigations, and neither does much legislation—which could leave renewal of the critically important intelligence authority, Section 702 of the Foreign Intelligence Surveillance Act (FISA), out in the cold. As supporters of renewal, we conclude that the best hope for the provision is to package it with trust-building measures to restore Republicans’ willingness to give national security agencies broad surveillance authorities.

I also note that foreign government cyberattacks on our election, which have been much anticipated in election after election, failed once again to make an appearance. At this point, election interference is somewhere between Y2K and Bigfoot on the “things we should have worried about” scale.

In other news, cryptocurrency conglomerate FTX has collapsed into bankruptcy, stolen funds, and criminal investigations. Nick Weaver lays out the gory details.

A new panelist on the podcast, Chinny Sharma, explains to a disbelieving U.S. audience the U.K. government’s plan to scan all the country’s internet-connected devices for vulnerabilities. Adam and I agree that it could never happen here. Nick wonders why the U.K. government does not use a private service for the task. 

Nick also covers This Week in the Twitter Dogpile. He recognizes that this whole story is turning into a tragedy for all concerned, but he is determined to linger on the comic relief. Dunning-Kruger makes an appearance.

Chinny and I speculate on what may emerge from the Biden administration’s plan to reconsider the relationship between the Cybersecurity and Infrastructure Security Agency (CISA) and the Sector Risk Management Agencies that otherwise regulate important sectors. I predict turf wars and new authorities for CISA in response. The Obama administration’s egregious exemption of Silicon Valley from regulation as critical infrastructure should also be on the chopping block. Finally, if the next two Supreme Court decisions go the way I hope, the Federal Trade Commission will finally have to coordinate its privacy enforcement efforts with CISA’s cybersecurity standards and priorities. 

Adam reviews the European Parliament’s report on Europe’s spyware problems. He’s impressed (as am I) by the report’s willingness to acknowledge that this is not a privacy problem made in America. Governments in at least four European countries by our count have recently used spyware to surveil members of the opposition, a problem that was unthinkable for fifty years in the United States. This, we agree, is another reason that Congress needs to put guardrails against such abuse in place quickly.

Nick notes the U.S. government’s seizure of what was $3 billion in bitcoin. Shrinkflation has brought that value down to around $800 million. But it is still worth noting that an immutable blockchain brought James Zhong to justice ten years after he took the money.  

Disinformation—or the appalling acronym MDM (for mis-, dis-, and mal-information)—has been in the news lately. A recent paper counted the staggering cost of “disinformation” suppression during coronavirus times. And Adam published a recent piece in City Journal explaining just how dangerous the concept has become. We end up agreeing that national security agencies need to focus on foreign government dezinformatsiya—falsehoods and propaganda from abroad – and not get in the business of policing domestic speech, even when it sounds a lot like foreign leaders we do not like. 

Chinny takes us into a new and fascinating dispute between the copyleft movement, GitHub, and Artificial Intelligence (AI) that writes code. The short version is that GitHub has been training an AI engine on all the open source code on the site so that it can "autosuggest" lines of new code as you are writing the boring parts of your program. The upshot is that the AI strips off the license conditions, such as copyleft, that are attached to some of that open source code. Not surprisingly, copyleft advocates are suing on the ground that important information has been left off their code, particularly the provision that turns all code that uses the open source into open source itself. I remind listeners that this is why Microsoft famously likened open source code to cancer. Nick tells me that it is really more like herpes, thus demonstrating that he has a lot more fun coding than I ever had.

In updates and quick hits:

Adam and I flag the Department of Justice’s release of basic rules for what I am calling the Euroappeasement court: the quasi-judicial body that will hear European complaints that the U.S. is not living up to human rights standards that no country in Europe even pretends to live up to. 

Direct download: TheCyberlawPodcast-430.mp3
Category:general -- posted at: 3:45pm EDT

The war that began with the Russian invasion of Ukraine grinds on. Cybersecurity experts have spent much of 2022 trying to draw lessons about cyberwar strategies from the conflict. Dmitri Alperovitch takes us through the latest lessons, cautioning that all of them could look different in a few months, as both sides adapt to the others’ actions. 

David Kris joins Dmitri to evaluate a Microsoft report hinting that China may be abusing its recent edict requiring that software vulnerabilities be reported first to the Chinese government. The temptation to turn such reports into zero-day exploits may be irresistible, and Microsoft notes with suspicion a recent rise in Chinese zero-day exploits. Dmitri worried about just such a development while serving on the Cyber Safety Review Board, but he is not yet convinced that we have the evidence to prove the case against the Chinese mandatory disclosure law. 

Sultan Meghji keeps us in Redmond, digging through a deep Protocol story on how Microsoft has helped build Artificial Intelligence (AI) in China. The amount of money invested, and the deep bench of AI researchers from China, raise real questions about how the United States can decouple from China—and whether China may eventually decide to do the decoupling.

I express skepticism about the White House's latest initiative on ransomware, a 30-plus nation summit that produced a modest set of concrete agreements. But Sultan and Dmitri have been on the receiving end of deputy national security adviser Anne Neuberger's forceful personality, and they think we will see results. We'd better. Banks reported that ransomware payments doubled last year, to $1.2 billion.

David introduces the high-stakes struggle over when cyberattacks can be excluded from insurance coverage as acts of war. A recent settlement between Mondelez and Zurich has left the law in limbo. 

Sultan tells me why AI is so bad at explaining the results it reaches. He sees light at the end of the tunnel. I see more stealthy imposition of woke academic values. But we find common ground in trashing the Facial Recognition Act, a lefty Democrat bill that throws together every bad proposal to regulate facial recognition ever put forward and adds a few more. A red wave will be worth it just to make sure this bill stays dead.

Finally, Sultan reviews the National Security Agency’s report on supply chain security. And I introduce the elephant in the room, or at least the mastodon: Elon Musk’s takeover at Twitter and the reaction to it. I downplay the probability of CFIUS reviewing the deal. And I mock the Elon-haters who fear that scrimping on content moderation will turn Twitter into a hellhole that includes *gasp!* Republican speech. Turns out that they are fleeing Twitter for Mastodon, which pretty much invented scrimping on content moderation.

Direct download: TheCyberlawPodcast-429.mp3
Category:general -- posted at: 10:53am EDT

You heard it on the Cyberlaw Podcast first, as we mash up the week’s top stories: Nate Jones commenting on Elon Musk’s expected troubles running Twitter at a profit and Jordan Schneider noting the U.S. government’s creeping, halting moves to constrain TikTok’s sway in the U.S. market. Since Twitter has never made a lot of money, even before it was carrying loads of new debt, and since pushing TikTok out of the U.S. market is going to be an option on the table for years, why doesn’t Elon Musk position Twitter to take its place? 

It’s another big week for China news, as Nate and Jordan cover the administration’s difficulties in finding a way to thwart China’s rise in quantum computing and artificial intelligence (AI). Jordan has a good post about the tech decoupling bombshell. But the most intriguing discussion concerns China’s remarkably limited options for striking back at the Biden administration for its harsh sanctions.

Meanwhile, under the heading, When It Rains, It Pours, Elon Musk’s Tesla faces a criminal investigation over its self-driving claims. Nate and I are skeptical that the probe will lead to charges, as Tesla’s message about Full Self-Driving has been a mix of manic hype and lawyerly caution. 

Jamil Jaffer introduces us to the Guacamaya “hacktivist” group whose data dumps have embarrassed governments all over Latin America—most recently with reports of Mexican arms sales to narco-terrorists. On the hard question—hacktivists or government agents?—Jamil and I lean ever so slightly toward hacktivists. 

Nate covers the remarkable indictment of two Chinese spies for recruiting a U.S. law enforcement officer in an effort to get inside information about the prosecution of a Chinese company believed to be Huawei. Plenty of great color from the indictment, and Nate notes the awkward spot that the defense team now finds itself in, since the point of the operation seems to have been, er, trial preparation. 

To balance the scales a bit, Nate also covers suggestions that Google's former CEO Eric Schmidt, who headed an AI advisory committee, had a conflict of interest because he also invested in AI startups. There’s no suggestion of illegality, though, and it is not clear how the government will get cutting edge advice on AI if it does not get it from investors like Schmidt.

Jamil and I have mildly divergent takes on the Transportation Security Administration's new railroad cybersecurity directive. He worries that it will produce more box-checking than security. I have a similar concern that it mostly reinforces current practice rather than raising the bar. 

And in quick updates:

I offer this public service announcement: Given the risk that your Prime Minister’s phone could be compromised, it’s important to change them every 45 days.

Direct download: TheCyberlawPodcast-428.mp3
Category:general -- posted at: 4:04pm EDT

This episode features Nick Weaver, Dave Aitel, and me covering a ProPublica story (and forthcoming book) on the difficulties the FBI has encountered in becoming the nation's principal resource on cybercrime and cybersecurity. We end up concluding that, for all its successes, the bureau's structural weaknesses in addressing cybersecurity are going to haunt it for years to come.

Speaking of haunting us for years, the effort to decouple U.S. and Chinese tech sectors continues to generate news. Nick and Dave weigh in on the latest (rumored) initiative: cutting off China’s access to U.S. quantum computing and AI technology, and what that could mean for the U.S. semiconductor companies, among others.

We could not stay away from the Elon Musk-Twitter story, which briefly had a national security dimension, due to news that the Biden Administration was considering a Committee on Foreign Investment in the United States review of the deal. That’s not a crazy idea, but in the end, we are skeptical that this will happen.

Dave and I exchange views on whether it is logical for the administration to pursue cybersecurity labels for cheap Internet of Things devices. He thinks it makes less sense than I do, but we agree that the end result will be to crowd the cheapest competitors out of the market.

Nick and I discuss the news that Kanye West is buying Parler. Neither of us thinks much of the deal as an investment. 

And in updates and quick takes:

And in another platform v. press story, TikTok's parent ByteDance has been accused by Forbes of planning to use TikTok to monitor the location of specific Americans. TikTok has denied the story. I predict that neither the story nor the denial is enough to bring closure. We'll be hearing more.

Direct download: TheCyberlawPodcast-427.mp3
Category:general -- posted at: 11:54am EDT

David Kris opens this episode of the Cyberlaw Podcast by laying out some of the massive disruption that the Biden Administration has kicked off in China’s semiconductor industry—and its Western suppliers. The reverberations of the administration’s new measures will be felt for years, and the Chinese government’s response, not to mention the ultimate consequences, remains uncertain.

Richard Stiennon, our industry analyst, gives us an overview of the cybersecurity market, where tech and cyber companies have taken a beating but cybersecurity startups continue to gain funding.

Mark MacCarthy reviews the industry from the viewpoint of the trustbusters. Google is facing what looks like a serious AdTech platform challenge from several directions—the EU, the Justice Department, and several states. Facebook, meanwhile, is lucky to be a target of the Federal Trade Commission, which rather embarrassingly had to withdraw claims that the acquisition of Within would remove an actual (as opposed to hypothetical) competitor from the market. No one seems to have challenged Google’s acquisition of Mandiant, meanwhile. Richard suspects that is because Google is not likely to do anything with the company. 

David walks us through the new White House national security strategy—and puts it in historical context. 

Mark and I cross swords over PayPal's determination to take my money for saying things PayPal doesn't like. Visa and Mastercard are less upfront about their ability to boycott businesses they consider beyond the pale, but all money transfer companies have rules of this kind, he says. We end up agreeing that transparency, the measure usually recommended for platform speech suppression, makes sense for PayPal and its ilk, especially since they're already subject to extensive government regulation.

Richard and I dive into the market for identity security. It’s hot, thanks to zero trust computing. Thoma Bravo is leading a rollup of identity companies. I predict security troubles ahead for the merged portfolio.  

In updates and quick hits:

And I predict much more coverage, not to mention prosecutorial attention, will result from accusations that a powerful partner at the establishment law firm, Dechert, engaged in hack-and-dox attacks on adversaries of his clients.

Direct download: TheCyberlawPodcast-426.mp3
Category:general -- posted at: 12:00pm EDT

It’s been a jam-packed week of cyberlaw news, but the big debate of the episode is triggered by the White House blueprint for an AI Bill of Rights. I’ve just released a long post about the campaign to end “AI bias” in general, and the blueprint in particular. In my view, the bill of rights will end up imposing racial and gender (and intersex!) quotas on a vast swath of American life. Nick Weaver argues that AI is in fact a source of secondhand racism and sexism, something that will not be fixed until we do a better job of forcing the algorithm to explain how it arrives at the outcomes it produces. We do not agree on much, but we do agree that lack of explainability is a big problem for the new technology.

President Biden has issued an executive order meant to resolve the U.S.-EU spat over transatlantic data flows. At least for a few years, until the anti-American EU Court of Justice finds it wanting again. Nick and I explore some of the mechanics. I think it’s bad for the privacy of U.S. persons and for the comprehensibility of U.S. intelligence reports, but the judicial system the order creates is cleverly designed to discourage litigant grandstanding.

Matthew Heiman covers the biggest CISO, or chief information security officer, news of the week, the month, and the year—the criminal conviction of Uber's CSO, Joe Sullivan, for failure to disclose a data breach to the Federal Trade Commission. He is less surprised by the verdict than others, but we agree that it will change the way CISOs do their jobs and relate to their fellow corporate officers.

Brian Fleming joins us to cover an earthquake in U.S.-China tech trade—the sweeping new export restrictions on U.S. chips and technology. This will be a big deal for all U.S. tech companies, we agree, and probably a disaster for them in the long run if U.S. allies don’t join the party. 

I go back to dig a little deeper into two cases we covered with just a couple of hours' notice last week—the Supreme Court's grants of review in cases touching on Big Tech's liability for hosting the content of terror groups. It turns out that only one of the cases is likely to turn on Section 230. That's Google's almost laughable claim that holding YouTube liable for recommending terrorist videos is holding it liable as a publisher. The other case will almost certainly turn on when distribution of terrorist content can be punished as "material assistance" to terror groups.

Brian walks us through the endless negotiations between TikTok and the U.S. over a security deal. We are both puzzled over the partisanization of TikTok security, although I suggest a reason why that might be happening.  

Matthew catches us up on a little-covered Russian hack and leak operation aimed at former MI6 boss Richard Dearlove and British Prime Minister Boris Johnson. Matthew gives Dearlove’s security awareness a low grade.

Finally, two updates: 

  • Nick catches us up on the Elon Musk-Twitter fight. Nick's gloating now, but he is sure he'll be booted off the platform when Musk takes over.
  • And I pass on some very unhappy feedback from a friend at the Election Integrity Partnership (EIP), who feels we were too credulous in commenting on a JustTheNews story that left a strong impression of unseemly cooperation in suppressing election integrity misinformation. The EIP's response makes several good points in its own defense, but I remain concerned about how tightly Silicon Valley embraced the suppression of speech "delegitimizing" election results.

Direct download: TheCyberlawPodcast-425.mp3
Category:general -- posted at: 3:48pm EDT

We open today’s episode by teasing the Supreme Court’s decision to review whether section 230 protects big platforms from liability for materially assisting terror groups whose speech they distribute (or even recommend). I predict that this is the beginning of the end of the house of cards that aggressive lawyering and good press have built on the back of section 230. Why? Because Big Tech stayed out of the Supreme Court too long. Now, just when section 230 gets to the Court, everyone hates Silicon Valley and its entitled content moderators. Jane Bambauer, Gus Hurwitz, and Mark MacCarthy weigh in, despite the unfairness of having to comment on a cert grant that is two hours old.

Just to remind us why everyone hates Big Tech’s content practices, we do a quick review of the week’s news in content suppression. 

  • A couple of conservative provocateurs prepared a video consisting of Democrats being “election deniers.” The purpose was to show the hypocrisy of those who criticize the GOP for a meme that belonged mainly to Dems until two years ago. And it worked. YouTube did a manual review before it was even released and demonetized the video because, well, who knows? An outcry led to reinstatement, too late for YouTube’s reputation. Jane has the story.
  • YouTube also steps into the same mess by first suppressing and then restoring a video by Giorgia Meloni, the biggest winner of Italy's recent election. She's on the right, but you already knew that from how YouTube dealt with her.
  • Mark covers an even more troubling story, in which government officials point to online posts about election security that they don’t like, NGOs that the government will soon be funding take those complaints to Silicon Valley, and the platforms take a lot of the posts down. Really, what could possibly go wrong?
  • Jane asks why Facebook is “moderating” private messages by the wife of an FBI whistleblower. I suspect that this is related to the government and big tech’s hyperaggressive joint pursuit of anything related to January 6. But it definitely requires investigation.
  • Across the Atlantic, Jane notes, the Brits are hating Facebook for the content it let 14-year-old Molly Russell read before her suicide. Exactly what was wrong with the content is a little obscure, but we agree that the material served to minors is ripe for more regulation, especially outside the United States.

For a change of pace, Mark has some largely unalloyed good news. The International Telecommunication Union will not be run by a Russian; instead it elected an American, Doreen Bogdan-Martin, to lead it.

Mark tells us that all the Sturm und Drang over tougher antitrust laws for Silicon Valley has wound down to a few modestly tougher provisions that have now passed the House. That may be all that can get passed this year, and perhaps in this Administration.

Gus gives us a few highlights from FTCland:

Jane unpacks a California law prohibiting cooperation with subpoenas from other states without an assurance that the subpoenas aren't investigating abortions that would be legal in California. I again nominate California to play the role in twenty-first-century federalism that South Carolina played in the nineteenth and twentieth centuries, and I predict that some enterprising red state attorney general is likely to enjoy litigating the validity of California's law – and likely winning.

Gus notes that private antitrust cases remain hard to win, especially without evidence, as Amazon and major book publishers gain the dismissal of antitrust lawsuits over book pricing.

Finally, in quick hits and updates:

I also note a large privacy flap Down Under, as the exposure of lots of personal data from a telco database seems likely to cost the carrier and its parent dearly.

Russian botmasters have suddenly discovered that extradition to the U.S. may be better than going home and facing mobilization.

Direct download: TheCyberlawPodcast-424.mp3
Category:general -- posted at: 10:07am EDT