
🙋🏻‍♀️ Editor’s Note:
Our website, aisafetyturkiye.org, is currently under development. In the meantime, you can reach us at aisafetyturkiye@gmail.com.
ANNOUNCEMENTS 🔊
Pivotal Research Fellowship
A 9-week program designed to enable promising researchers to produce impactful research and accelerate their careers in AI safety, AI governance, and biosecurity. Fellows will collaborate with experienced mentors, engage in workshops and seminars, and build strong networks within the AI safety research community in London and beyond.
🗓️ Register by: November 21st
Reprogramming AI Models Hackathon
In this Apart Sprint, participants will dive into AI observability and model behavior modification, with the chance to explore and innovate in the cutting-edge field of mechanistic interpretability.
🗓️ Register by: November 22nd
Bayesian oracles and safety bounds – Yoshua Bengio
In this seminar, Yoshua Bengio will discuss Bayesian oracles and how they might provide safety bounds for AI systems.
🗓️ Register by: November 14th
AI Safety Camp – Virtual 2025
A 3-month online research program where participants form teams to work on pre-selected projects.
🗓️ Register by: November 17th
AI Policy Clinic
The Centre for AI and Digital Policy has opened applications for its introductory and advanced policy clinic programs, which aim to train future leaders in the AI policy field in policy analysis, research, evaluation, team management, and policy formation.
🗓️ Register by: November 29th
TOP PICKS 📑 🎧
Machines of Loving Grace: How AI Could Transform the World for the Better
In “Machines of Loving Grace”, Anthropic CEO Dario Amodei envisions how responsible AI development could compress a century’s progress into a decade, from eliminating major diseases to enabling unprecedented economic growth in developing nations. Readers should keep in mind that Amodei’s position may incline him toward optimistic predictions, and may want to take a look at this response to his optimism about potential progress in biotech.
Ex-Policy Lead At OpenAI Explains: Why I’m Leaving OpenAI
Miles Brundage, one of OpenAI’s leading policy researchers, transparently explains the company’s shifting priorities and his reasons for deciding to leave.
NEWS 📰
AI Researchers Sweep Nobel Prizes
- The 2024 Nobel Prizes in Physics and Chemistry have been awarded to pioneering scientists in the field of artificial intelligence.
- John Hopfield and Geoffrey Hinton received the Physics prize for laying the foundations of neural networks.
- David Baker won half of the Chemistry prize for his work in protein design, with the other half going to Demis Hassabis and John Jumper, developers of AlphaFold2, for revolutionizing protein structure prediction.
White House AI Security Directive
- The Biden administration has issued a comprehensive directive regarding the use of AI in national security.
- The directive aims to ensure that federal agencies protect civil rights, privacy, and democratic values while utilizing AI technologies.
Microsoft Reopens Nuclear Plant for AI Energy Needs
- Microsoft is reopening the Three Mile Island nuclear power plant in Pennsylvania to power its data centers.
- The $1.6 billion deal with Constellation Energy marks a significant step in reviving the long-shuttered facility.
- Under a 20-year purchase agreement, Microsoft will pay at least $100 per megawatt-hour.
Chinese Military Uses Meta’s AI Model
- The Chinese military is utilizing Meta’s open-source AI model, Llama, for defense purposes.
- Dubbed “ChatBIT,” the system will aid in intelligence analysis and military decision-making.
- Meta has stated that this usage violates its terms of service.
Claude AI Can Now Control Computers
- Anthropic’s Claude 3.5 Sonnet can now control a computer directly on a user’s behalf, performing tasks like taking screenshots, searching for information, and filling out forms.
- This new feature is accessible via API and is equipped with security safeguards.
JOB POSTINGS 👩🏻‍💻
You can check out 80,000 Hours’ job board to explore new opportunities in AI Safety!