
FLI March 2022 Newsletter

Published
14 April, 2022
Author
Will Jones


Max Tegmark Addresses EU Parliament on AI Act


Last week, the Future of Life Institute (FLI) made its debut appearance at the European Parliament. Addressing the parliamentary hearing on the EU AI Act on Monday 21st March by video link, FLI President Max Tegmark emphasised the need to make the Act more ‘future proof’, given how fast the power of this technology is growing.

He explained that the main way to do this was to require that general purpose AI systems abide by Article 15 on accuracy, robustness and cybersecurity. Currently, the draft text of the Act suggests exempting these technologies from regulation altogether. This has far-reaching consequences, such as shifting liability onto European companies that use these systems in specific applications – rather than, for instance, the American or Chinese firms that own the AI systems themselves. Max stressed that general purpose AI is ‘where the future of AI technology is going’. Only by regulating this can we ‘harness this power’ for the good of humanity. The FLI President was quoted in this Science Business piece, which came out the following day: ‘Is the act in its current draft future proof enough? I think my answer is clear: no.’ The French website, Contexte Numerique, likewise summarised the hearing here: ‘Experts judge the AI Act not durable enough over time’.

Indeed, there was broad agreement among the expert panellists present, as well as the EU co-rapporteurs, about how the AI Act should be improved. In the first panel, Catelijne Muller, President of ALLAI, argued that just because general purpose AI systems might not have a specific ‘intended purpose’ does not mean they should be excluded from the Act; she drew an analogy with chemicals: ‘you might not always know what the purpose of the chemical is, what it’s going to be used for. But that doesn’t mean that you cannot be held liable and that you don’t have any responsibility’. Andrea Renda, Head of the Centre for European Policy Studies, agreed, pointing out that ‘the AI Act is conceived for a one AI system, one type of risk, one user, a very linear process. But reality will not be like this’. Co-rapporteur Dragos Tudorache seemed to endorse these points when he commented that the AI Act needed to be ‘truly horizontal’, with ‘as few exclusions as possible’.

Professor Stuart Russell of the University of California, Berkeley, speaking in the second panel, added, ‘Many AI researchers find the exemption of general purpose AI systems puzzling’. Russell also explained that ‘It makes sense to assess their accuracy, fairness etc., at the source; that is, the large scale vendor of general purpose systems, rather than at a large number of presumably smaller European integrations’. Sarah Chander, of European Digital Rights (EDRi), concurred, emphasising that ‘we need governance obligations on deployers of high risk AI, and this should include an obligation on deployers to conduct and publish a fundamental rights impact assessment before each use.’ You can watch the complete hearing here, or a much shortened highlights video on our social media platforms (here, here, or here). To keep up to date with AI Act developments, sign up to this dedicated newsletter by FLI EU Policy Researcher Risto Uuk.

Other Policy and Outreach Efforts


Ongoing U.S. Policy Advocacy for AI Safety

Richard Mallah, FLI Director of AI Projects, participated in a National Institute of Standards and Technology (NIST) AI Risk Management Framework workshop panel on framing AI risk management, and co-organised a SafeAI workshop aiming to explore ‘new ideas on AI safety engineering, ethically aligned design, regulation and standards for AI-based systems’.

FLI U.S. Policy Director Jared Brown continues to advocate for an increase in the quantity and quality of U.S. Research and Development funding for AI safety and ethics research. Avenues include appropriations and the forthcoming COMPETES – USICA conference legislation in Congress.

FLI Worldbuilding Contest Deadline Fast Approaching

The FLI Worldbuilding Contest deadline is less than three weeks away, on April 15th. It might seem harder than ever to think positively about the future of the world, but when we specified that the worldbuilds must be positive, we were under no illusions about the problems humanity faces, or how unlikely it can seem that in 20 years’ time things will somehow be any better. Indeed, a major part of the challenge of envisioning a scientifically and geopolitically realistic world, one we could actually reach in the next 20 years, is to reckon with all the evils and struggles that afflict humanity today, including the recurrent problems of war, disease and nuclear risk.

You certainly have your work cut out for you, but don’t forget that this contest has a prize purse of up to $100,000! Equally, your world will need to feature Artificial General Intelligence, which poses plenty of its own dangers, of course, but also offers many potential answers to these questions. We can’t wait to see your worldbuilds and to be filled with hope for the brighter futures humanity can work towards. Your entries are more important than ever. Enter here, and find everything else you need to know about the contest at this site.

FLI Podcast

Anthropic Founders on their AI Safety and Research Company

In the latest episode of the Future of Life Institute Podcast – the last to be hosted by long-time Podcast Host Lucas Perry – Daniela Amodei and Dario Amodei lay out the vision and strategy behind their new AI safety and research company, Anthropic, of which they are President and CEO, respectively.

‘We’re building steerable, interpretable and reliable AI systems’, Daniela explains. Anthropic does this by training large-scale models and carrying out safety research on those models; ‘We’re aiming to make systems that are helpful, honest and harmless’. To learn more about Anthropic, you can listen to the full podcast episode here or, if you prefer, on YouTube.

News & Reading


Russian KUB-BLA Drone Sighting in Ukraine

It was perhaps only a matter of time. This month, as covered by Will Knight in Wired magazine, a Russian ‘suicide drone’ with the ability to identify targets using artificial intelligence was spotted in images from the invasion of Ukraine. As Knight mentions, Ukrainian forces have already been using remotely operated Turkish-made TB2 drones against their Russian adversaries, and the Biden administration has also pledged to send Ukraine small US-made loitering munitions called Switchblades. But whereas the autonomy of those drones has so far been limited, these new KUB-BLA drones mark a slippery slope towards AI weaponry with eventually no human involvement at all. Even if the KUB-BLA itself is still only partially autonomous, FLI President Max Tegmark is quoted in the article explaining that the proliferation of these weapons will only continue ‘unless more Western nations start supporting a ban on them’.

Earlier in the month, this video by VICE News gave an overview of the rise of autonomous drone warfare. The clip featured FLI’s Director of Lethal Autonomous Weapons Advocacy, Emilia Javorsky, explaining the broader implications of slaughterbots coming into use, namely algorithms deciding whom to kill.

New Report on Uncertainty and Complexity in Nuclear Decision-Making

This timely new report by Beyza Unal, Julia C., Calum Inverarity and Yasmin Afina at Chatham House explains how the nuclear policy community can better ‘navigate complexity and mitigate uncertainty’ by learning from past mistakes and employing a broader range of skills. The report argues that decision-makers should not ‘accept unacceptable levels of risk when such risks could be mitigated’. This might seem self-evident, but high levels of risk are still tolerated. With renewed fears surrounding this issue, now is the time to make the necessary improvements.

AI Model Invents 40,000 Chemical Weapons in Just 6 Hours

This Interesting Engineering piece highlights how even an AI built to find ‘helpful drugs’, when tweaked just a little, can find things that are rather less helpful. Collaborations Pharmaceuticals carried out a simple experiment to see what would happen if the AI they had built was slightly altered to search for chemical weapons rather than medical treatments. According to a paper they published in the journal Nature Machine Intelligence, the answer was not particularly reassuring: when repurposed to find chemical weapons, the machine learning algorithm generated 40,000 candidates in just six hours.

These researchers had ‘spent decades using computers and A.I. to improve human health’, yet they admitted, after the experiment, that they had been ‘naive in thinking about the potential misuse of [their] trade’. As Interesting Engineering puts it, the researchers were ‘blissfully unaware of the damage they could inflict’. Collaborations Pharmaceuticals concluded that the results provided ‘a clear indication of why we need to monitor A.I. models more closely and really think about the consequences of our work.’ As FLI has often commented in the past, we cannot afford to learn all these lessons the hard way; we must improve our pre-emptive modelling, rather than finding out we have gone too far only after the fact.

FLI is a 501(c)(3) non-profit organisation, meaning donations are tax-deductible in the United States.
If you need our organisation number (EIN) for your tax return, it’s 47-1052538.

FLI is registered in the EU Transparency Register. Our ID number is 787064543128-10.
