Aysajan Eziz
Why do you care about AI Existential Safety?
AI safety fascinates me because it's where cutting-edge technology meets profound ethical questions. I'm drawn to this field not just as a tech enthusiast, but as someone who sees both the promise and the potential pitfalls of AI. My background in business has shown me how powerful AI can be, but also how unpredictable: I've seen firsthand how even small oversights in AI systems can lead to unexpected outcomes. That experience makes me eager to help ensure AI remains a positive force as it grows more advanced. I'm particularly intrigued by the challenge of aligning AI with human values. How do we encode concepts like fairness or empathy into algorithms? These aren't just abstract problems to me; they're puzzles I'd love to help solve. Joining this community isn't about saving humanity single-handedly. It's about being part of a group tackling one of the most interesting and important challenges of our time. I want to learn, contribute my ideas, and help shape a technology that I believe will define our future.
Please give at least one example of your research interests related to AI existential safety:
As an interdisciplinary researcher bridging management science, statistics, and AI/ML, I'm interested in developing frameworks for ethical AI implementation that account for both technical constraints and real-world applications. This interest relates directly to AI existential safety in several ways:
1. One specific research interest is investigating the potential long-term impacts of advanced AI systems, particularly Large Language Models (LLMs), on society, and developing strategies for their beneficial deployment. This aligns with my current project at Ivey Business School, where we're exploring the ethical implementation of LLMs and their impacts on business and society. For example, we're using agent-based modeling to simulate LLM adoption in business workflows, forecasting impacts on labor, productivity, and the broader economy (a toy sketch of this kind of simulation appears after this list). This research is crucial for understanding potential existential risks from widespread AI deployment, such as large-scale economic disruption or the concentration of power in the hands of those who control advanced AI systems.
2. Additionally, we're investigating LLMs' potential to spread misinformation and developing strategies for responsible AI use. This work is essential for mitigating the risks of AI-powered disinformation campaigns that could destabilize societies or manipulate decision-making at a global scale.
3. Another strand of my research focuses on aligning AI systems with human values and societal welfare, including the concrete problem of encoding concepts like fairness or empathy into algorithms. This work is fundamental to ensuring that as AI systems become more capable, they remain aligned with human interests and don't pursue goals that could be existentially threatening to humanity.
4. My background in optimization and complex systems analysis also positions me to contribute to robust and scalable approaches to AI alignment, with a focus on ensuring that AI systems reliably pursue their intended goals even as they become more sophisticated.
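To make item 1 less abstract, below is a minimal sketch of the kind of agent-based simulation I describe there. Everything in it (the Firm class, the threshold-based adoption rule, and the specific productivity and labor multipliers) is an illustrative assumption for this profile, not our actual Ivey model.

```python
import random

class Firm:
    """A firm deciding whether to adopt an LLM tool (toy model)."""

    def __init__(self, rng):
        self.adopted = False
        # Heterogeneous adoption threshold: how much peer adoption a
        # firm must observe before it adopts (assumed uniform here).
        self.threshold = rng.uniform(0.05, 0.6)
        self.productivity = 1.0
        self.labor = 100.0  # headcount, arbitrary units

    def step(self, peer_adoption_rate, rng):
        if not self.adopted and peer_adoption_rate >= self.threshold:
            self.adopted = True
            # Assumed first-order effects of adoption: a productivity
            # gain and a modest reduction in routine-task labor demand.
            self.productivity *= rng.uniform(1.05, 1.25)
            self.labor *= rng.uniform(0.90, 1.00)

def simulate(n_firms=500, n_periods=40, seed=42):
    rng = random.Random(seed)
    firms = [Firm(rng) for _ in range(n_firms)]
    # Seed the diffusion process with a few early adopters.
    for firm in rng.sample(firms, 10):
        firm.adopted = True
    history = []
    for t in range(n_periods):
        rate = sum(f.adopted for f in firms) / n_firms
        for firm in firms:
            firm.step(rate, rng)
        history.append({
            "period": t,
            "adoption": rate,
            "mean_productivity": sum(f.productivity for f in firms) / n_firms,
            "total_labor": sum(f.labor for f in firms),
        })
    return history

if __name__ == "__main__":
    # Print every tenth period to show the diffusion trajectory.
    for row in simulate()[::10]:
        print(row)
```

Even a toy model like this surfaces the dynamics that concern me: heterogeneous adoption thresholds produce S-shaped diffusion, and small per-firm labor effects can aggregate into large economy-wide shifts.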
Through this research, I aim to help ensure that as AI systems grow more powerful, they remain beneficial and aligned with human values, thereby mitigating the existential risks associated with advanced AI.