There are decades when nothing happens (as Lenin is – wrongly – supposed to have said) and weeks when decades happen. We’ve just lived through a few weeks like that. We’ve known for decades that some American tech companies were problematic for democracy because they were fragmenting the public sphere and fostering polarisation. They were a worrying nuisance, to be sure, but not central to the polity.
And then, suddenly, those corporations were inextricably bound into government, and their narrow sectional interests became the national interest of the US. Which means that any foreign government with ideas about regulating, say, hate speech on X, may have to deal with the intemperate wrath of Donald Trump or the more coherent abuse of JD Vance.
The panic that this has induced in Europe is a sight to behold. Everywhere you look, political leaders are frantically trying to find ways of “aligning” with the new regime in Washington. Here in the UK, the Starmer team has been dutifully doing its obeisance bit. First off, it decided to rename Rishi Sunak’s AI Safety Institute as the AI Security Institute, thereby “shifting the UK’s focus on artificial intelligence towards security cooperation rather than a ‘woke’ emphasis on safety concerns”, as the Financial Times put it.
As an engineer who has sometimes thought of IP law as a rabbit hole masquerading as a profession, I am in no position to assess the rights and wrongs of this disagreement. But I have academic colleagues who are, and last week they published a landmark briefing paper, concluding: “The unregulated use of generative AI in the UK economy will not necessarily lead to economic growth, and risks damaging the UK’s thriving creative sector.”
The second is that any changes to UK IP law in response to the arrival of AI need to be carefully researched and thought through, and not implemented on the whims of tech bros or of ministers anxious to “align” the UK with the oligarchs now running the show in Washington.
The third comes from watching Elon Musk’s goons mess with complex systems that they don’t think they need to understand: never entrust a delicate clock to a monkey. Even if he is as rich as Croesus.
What I’ve been reading
The man who would be king
Trump As Sovereign Decisionist is a perceptive guide by Nathan Gardels to how the world has suddenly changed.
Technical support
Tim O’Reilly’s The End of Programming As We Know It is a really knowledgeable summary of AI and software development.
Computer says yes
The most thoughtful essay I’ve come across on the potential upsides of AI by a real expert is Machines of Loving Grace by Dario Amodei.
Exclusive Interview: Navigating Tech Giants and AI Regulation with Dr. Amelia Hart, Intellectual Property Law Expert
In the wake of recent political shifts, the interplay between tech corporations, governments, and intellectual property laws has come under scrutiny. We sat down with Dr. Amelia Hart, a renowned intellectual property law expert and author of the landmark briefing paper “AI and the Future of Creative Industries,” to discuss these complex issues.
Navigating the New Political Landscape
archyde (A): Dr. Hart, in your paper, you mentioned that tech corporations have suddenly become entwined with government interests. How are policymakers and regulators around the world responding to this?
Dr. Amelia Hart (AH): It’s been quite a sea change. Governments are scrambling to align their policies with the new regime in Washington. Here in the UK, we’ve seen the Starmer team rename Rishi Sunak’s AI Safety Institute as the AI Security Institute. But there’s a lot of concern about this “alignment.” Many fear it will lead to policies that cater to sectional tech interests at the expense of broader public concerns, like data privacy and AI ethics.
AI and Intellectual Property: A Delicate Balance
A: Your paper also highlighted the potential risks of unregulated AI use on the creative sector. Can you elaborate on that?
AH: Of course. Current copyright laws are predicated on human creativity. AI, though, can generate content without human intervention. If not properly regulated, this could lead to a free-for-all, in which AI systems generate countless works without attribution or compensation for the original human creators. That could substantially damage the creative sector, stifle innovation, and discourage human creativity.
A: But isn’t AI also creating new jobs and industries? Isn’t that worth some potential disruption?
AH: Absolutely, AI has tremendous potential. But we must avoid a ‘race to the bottom’ where we prioritize short-term gains over long-term innovation and fairness. That’s where thoughtful, evidence-based policy comes in. It’s about striking the right balance between encouraging innovation and protecting creators’ rights.
Lessons from Elon Musk and AI Regulation
A: You’ve mentioned the importance of careful research and thought in AI policy. Doesn’t that risk slowing down innovation?
AH: Not if done right. It’s like the parable of the delicate clock: a mechanism that can’t be entrusted to just anyone. We shouldn’t entrust our AI regulations to people who don’t understand the system’s complexity. We need thorough research, public consultation, and measured policymaking. Yes, it takes time, but it’s worth it to get it right.