Newsletter 9
🙋🏻‍♀️ Editor’s Note:

Our website, aisafetyturkiye.org, is currently under development. In the meantime, you can reach us at aisafetyturkiye@gmail.com.

ANNOUNCEMENTS 🔊

PIBBSS Fellowship

A 3-month interdisciplinary fellowship with mentors from AI safety. It is designed for researchers from various fields, primarily those studying complex and intelligent behavior in natural and social systems, but also mathematics, philosophy, and engineering, who are motivated by the mission of making AI systems safe and beneficial. Dates and location TBC.

🗓️ Register by: January 26th

AI Alignment Evaluations Hackathon

Bringing together researchers, developers, and curious minds to tackle the challenge of measuring (and advancing) AI alignment. This hackathon from AI-Plans is a chance to think critically, experiment with ideas, and contribute to making progress on this critical problem.

🗓️ Register by: January 14th

Anthropic AI Safety Research Fellowship

A pilot initiative designed to accelerate AI safety research and foster research talent. The program will provide funding and mentorship for 10-15 fellows to work full-time on AI safety research.

🗓️ Register by: January 20th

AI Safety Outreach Seminar & Social

A discussion and Q&A event on AI safety outreach, hosted by the organizers of AI Safety Camp.

🗓️ Register by: January 25th

AI Safety Entrepreneurship Hackathon

Last 3 days! Join a hackathon for entrepreneurial ideas to advance AI safety.

🗓️ Register by: January 27th

TOP PICKS 📑 🎧

Recommendations for Technical AI Safety Research Directions

Anthropic’s Sam Marks offers guidance for AI researchers who want to work on reducing catastrophic risks but aren’t sure where to start, outlining key technical areas where progress today could help ensure future AI systems remain safe and controllable.

Why AI Progress Is Increasingly Invisible

Is ChatGPT still your “aha” moment for AI? Most of us may not be aware of the real AI breakthroughs happening now.

o3 Wowed Us, Now What?

OpenAI released o3, its most recent reasoning model, which crushed benchmarks and wowed many experts in the AI world. What does this mean for the path toward artificial general intelligence?

NEWS 📰

Nobel Laureate Hinton Increases AI Risk Assessment:

  • Geoffrey Hinton, one of the pioneers of the field of artificial intelligence and a winner of the Nobel Prize in Physics, has raised his estimate of the risk of human extinction from AI to 10-20% over the next 30 years.
  • Hinton is concerned about the rapid development of AI and the difficulty of controlling systems that surpass human intelligence (ASI).
  • The pace of AI development has exceeded Hinton’s original predictions, and he expects systems surpassing human intelligence to emerge in about 20 years.
  • Hinton points out that examples in nature where less intelligent beings control more intelligent ones are rare, making the human-AI relationship a unique challenge.
  • Hinton advocates for government intervention to ensure the safe development of AI, suggesting that companies alone will not be sufficient.

OpenAI’s o3 Model Achieves Record Performance in ARC-AGI:

  • OpenAI’s o3 system achieved unprecedented performance on the ARC-AGI-1 benchmark, scoring 75.7% under standard computational constraints and 87.5% with extended resources.
  • The o3 system represents a critical step in the progress toward artificial general intelligence (AGI) and has significant implications for the future of AI.

OpenAI Released Sora:

  • OpenAI has extended access to its advanced text-to-video AI model, Sora, to ChatGPT Plus and Pro subscribers. This move positions OpenAI in competition with established players like Meta, Google, and Stability AI in the multimodal AI space. The new version of Sora offers features such as 1080p resolution, 20-second video generation, and support for various aspect ratios. OpenAI has implemented safety measures, including content filtering systems targeting abusive material, deepfake prevention protocols, and restrictions on content depicting real people.

OpenAI Published Details of Its Plan to Become a Public Benefit Corporation:

  • OpenAI published details of its plan to restructure as a public benefit corporation (PBC), marking a significant change in its governance model. Questions remain about the new structure’s safety safeguards, and there may be potential conflicts between profit motives and safety priorities.

JOB POSTINGS 👩🏻‍💻

You can check out 80,000 Hours’ job board to explore new opportunities in AI safety!

Take me there