Joseph Cozens
Why do you care about AI Existential Safety?
I care deeply about AI existential safety given the rapid pace at which AI is transforming education. This transformation is occurring faster than governments and educational bodies can adapt, and the ethical challenges posed by AI in education, including the risk of student data misuse, are significant. Despite AI's immense benefits in education, transparency is crucial at every level.
Existential threats also arise from a changing global economy and society. Today's students, who have little workplace experience, are likely to be among the first to face job displacement unless mitigating measures are put in place. Such measures would require an overhaul of traditional education systems, with a renewed focus on emotional development, innovation, and collaborative problem-solving for global issues.
Please give at least one example of your research interests related to AI existential safety:
My research interests related to AI existential safety currently lie in AI ethics and in implementing a new skills curriculum to support future generations, with a focus on mitigating risks and promoting transparency for students. I have studied AI ethics at the London School of Economics, and I sit on a panel for AI in Education chaired by Sir Anthony Seldon as well as on the Microsoft AI in Education panel, supporting its ethical decision-making. I am also collaborating with the Universities of Bristol, Oxford, and Glasgow, and with Google, on AI research in education.