Newsletter 10

🙋🏻‍♀️ Editor’s Note:

Our website, aisafetyturkiye.org, is currently under development. In the meantime, you can reach us at aisafetyturkiye@gmail.com.

ANNOUNCEMENTS 🔊

Economics of Transformative AI

This 9-week course from AI Safety Fundamentals (Bluedot Impact) is designed for economists who want to develop their understanding of transformative AI and its economic impacts.

There is also a 5-day fast-track version of this course, run by the same team.

🗓️ Register by: February 13th for the multi-week course; rolling applications for the fast-track course

Source

Introduction to Transformative AI

Bluedot Impact’s 5-day “Intro to Transformative AI” course provides a focused curriculum and expert-led discussions to help individuals from diverse backgrounds swiftly develop the knowledge and network needed to contribute to shaping the future of AI.

🗓️ Register by: Rolling applications

Cooperative AI Summer School

This summer school aims to provide students and early-career professionals in AI, computer science, and related disciplines – such as sociology and economics – with a firm grounding in the emerging field of cooperative AI.

🗓️ Register by: March 7th

ARENA 5.0

The Alignment Research Engineer Accelerator (ARENA) is a 4–5 week ML bootcamp with a focus on AI safety. Participants work through ML programming exercises in pairs under the guidance of expert teaching assistants to develop their skills.

🗓️ Register by: February 15th

Apart Research Hardware Challenge

This Apart Sprint brings together hardware experts, security researchers, and AI safety enthusiasts to develop and test verification systems for detecting AI training activities through side-channel analysis. Participants will work with state-of-the-art hardware setups to create innovative solutions for monitoring compute usage.

🗓️ Register by: March 6th

Women in AI Safety Hackathon

This Apart Sprint encourages women and underrepresented groups to contribute their perspectives to critical areas in AI safety, including alignment, governance, security, and evaluation. No prior AI safety experience required.

🗓️ Register by: March 14th

Global Challenges Project AI Safety Workshop

A 3-day residential workshop for university students who want to learn more about what they can do to have a large social impact with their careers. Participants explore the case for improving safety measures for AI and biotechnology, and hear first-hand about the career journeys of professionals working in the fields.

🗓️ Register by: February 16th

TOP PICKS 📑 🎧

Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development

Many readers of this newsletter may know or have heard the arguments for why AI could disempower humanity. In a new paper, Jan Kulveit, Değer Turan and their co-authors provide a detailed analysis of how AI progress could gradually lead to this outcome. They argue that as AI systems become increasingly competitive alternatives to humans across key societal domains such as the economy, culture, and governance, the alignment of these societal systems with human interests, which has so far rested on the necessity of human participation, may gradually erode. The researchers caution that misalignment in one area can exacerbate misalignment in others, potentially culminating in an existential catastrophe and a permanent loss of power for humanity.

The International AI Safety Report Published Ahead of the AI Action Summit

Ahead of the AI Action Summit, the International AI Safety Report, written by leading scientists in the field including Yoshua Bengio, was published, with participating countries each nominating an official delegation. Building on the previous report, this edition adds a comprehensive section on AI risk management and is expected to serve as a scientific guide for discussions at the summit.

NEWS 📰

China’s DeepSeek Disrupts the Global AI Race:

  • China-based DeepSeek has made an unexpected breakthrough in artificial intelligence, challenging the U.S.’s dominance in the sector.
  • This development has increased interest in the upcoming meeting, to be attended by representatives from 80 countries, ahead of the AI Action Summit in Paris.
  • DeepSeek’s success was achieved despite U.S. restrictions on advanced chip exports, thanks to its innovative resource optimization strategies.
  • Experts note that DeepSeek’s achievement challenges assumptions about the resource and infrastructure requirements for AI development.

Major Shift in U.S. AI Policy:

  • The U.S. is undergoing a significant change in AI governance, with fundamental modifications to federal AI safety testing requirements.
  • The previous framework had established mandatory disclosure protocols for high-risk AI systems.
  • While these changes are seen as an effort to accelerate AI innovation, they have also sparked debate about the importance of safety frameworks.
  • This policy shift could influence global AI governance standards and set a precedent for other countries.

The UK’s AI Safety Institute (AISI):

  • The UK’s AI Safety Institute (AISI), with a budget of £100 million, stands out as the world’s most advanced government program for assessing AI risks.
  • The institute has secured agreements with leading AI companies such as OpenAI, Google DeepMind, and Anthropic, allowing it to access and test the latest models before their release.
  • While AISI does not have access to model weights, it has developed advanced testing methodologies for biological, chemical, and cyber risks.
  • Its work promotes international cooperation on AI safety and serves as a model for other countries.
  • By balancing technical expertise with diplomatic skills in managing relationships with powerful tech firms, AISI represents a crucial experiment in government oversight of AI development.

JOB POSTINGS 👩🏻‍💻

You can check out 80,000 Hours’ job board to explore new opportunities in AI Safety!

Take me there