Grant

Paradigms of Artificial General Intelligence and Their Associated Risks

Amount recommended
$220,000.00
Grant program
Primary investigator
Jose Hernandez-Orallo, University of Cambridge
Technical abstract

Many paradigms exist for developing and understanding AI, and more will be created. Under these paradigms, the key benefits and risks materialise very differently. One dimension pervading all these paradigms is the notion of generality, which plays a central role in AGI, artificial general intelligence, and provides its middle letter. This project explores the safety issues of present and future AGI paradigms from the perspective of measures of generality, as a dimension complementary to performance. We investigate the following research questions:
1. Should we define generality in terms of tasks, goals or dominance? How does generality relate to capability, to computational resources, and ultimately to risks?
2. What are the safe trade-offs between general systems with limited capability and less general systems with higher capability? How is this related to the efficiency and risks of automation?
3. Can we replace the monolithic notion of performance explosion with breadth growth? How can this help develop safe pathways for more powerful AGI systems?
These questions are analysed for paradigms such as reinforcement learning, inverse reinforcement learning, adversarial settings (Turing learning), oracles, cognition as a service, learning by demonstration, control or traces, teaching scenarios, curriculum and transfer learning, naturalised induction, cognitive architectures, brain-inspired AI, among others.
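The distinction between capability and generality running through these questions can be illustrated with a toy calculation. The sketch below is only an illustration under simple assumptions, not a measure proposed by the project: scores are assumed to lie in [0, 1] on a shared task battery, mean score stands in for capability, and the fraction of tasks cleared at a minimum threshold stands in for generality. The function names, threshold value and toy scores are hypothetical.

```python
# Hypothetical illustration (not the project's actual measures): two toy agents
# evaluated on the same task battery. "Capability" is summarised as the mean
# score; "generality" as the share of tasks on which the agent reaches a
# minimum-competence threshold.

from statistics import mean

def capability(scores):
    """Average performance across the task battery."""
    return mean(scores.values())

def generality(scores, threshold=0.4):
    """Fraction of tasks on which performance reaches the threshold."""
    return sum(s >= threshold for s in scores.values()) / len(scores)

# Toy scores in [0, 1] on a shared battery of five tasks.
specialist = {"t1": 1.0, "t2": 0.95, "t3": 0.1, "t4": 0.1, "t5": 0.1}
generalist = {"t1": 0.45, "t2": 0.45, "t3": 0.45, "t4": 0.45, "t5": 0.45}

for name, scores in [("specialist", specialist), ("generalist", generalist)]:
    print(f"{name}: capability={capability(scores):.2f}, "
          f"generality={generality(scores):.2f}")
```

Under these proxies the two agents have identical capability (0.45) but very different generality (0.40 versus 1.00), which is the kind of trade-off that research question 2 is concerned with.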

Published by the Future of Life Institute on 1 February 2023
