
On this page
- ANNOUNCEMENTS 🔊
- TOP PICKS 📑 🎧
- Is Superintelligent AI Becoming a National Strategy?
- How Do We Solve the Alignment Problem?
- AI Regulations Discussed at the Paris Summit
- US Rolls Back Federal AI Safety Testing Requirements
- Eric Schmidt Launches $10 Million Fund for AI Safety
- Joint Safety Initiative by Tech Companies: ROOST
- New Optimizer from Moonshot: Muon
🎉 Editor’s Note:
Our website is now live! For more information and updates, you can visit aisafetyturkiye.org.
ANNOUNCEMENTS 🔊
Ongoing AI Safety and Governance Courses by BlueDot Impact
BlueDot Impact runs 1-month-long fast-track courses on Introduction to Transformative AI and AI Alignment. See if you can catch the next application window!
🗓️ Rolling applications
Introduction to Transformative AI
BlueDot Impact’s 5-day “Intro to Transformative AI” course provides a focused curriculum and expert-led discussions, helping people from diverse backgrounds quickly build the knowledge and network needed to help shape the future of AI.
🗓️ Rolling applications
New Hackathons by Apart Research
Join two hackathons in March: one on hardware security for AI, and another on developing governance solutions.
🗓️ Register by: March 14th
Women in AI Safety Hackathon
This Apart Sprint encourages women and underrepresented groups to contribute their perspectives to critical areas in AI safety, including alignment, governance, security, and evaluation. No prior AI safety experience required.
🗓️ Register by: March 14th
MAISU - Minimal AI Safety Unconference
A flexible, participant-driven unconference for those working to prevent AI catastrophe, organized by the AI Safety Camp team.
🗓️ Register by: April 18th
TOP PICKS 📑 🎧
Is Superintelligent AI Becoming a National Strategy?
As AI development and safety increasingly take on a national-security framing, this recent piece by Dan Hendrycks, Eric Schmidt, and Alexandr Wang urges the US government to go beyond merely acknowledging the possibility of artificial superintelligence and build a national strategy around it.
How Do We Solve the Alignment Problem?
In “What is it to solve the alignment problem?”, Joe Carlsmith of Open Philanthropy presents a framework that defines solving AI alignment as balancing safety (avoiding scenarios where AI systems “go rogue”) with capturing the benefits of superintelligent AI. For readers struggling to pin down what solving alignment would actually mean, this readable essay offers conceptual clarity.
AI Regulations Discussed at the Paris Summit
- The AI Summit, hosted by France, highlighted the differences in regulatory approaches between the US and Europe.
- US Vice President JD Vance criticized Europe’s strict regulations as “innovation-stifling” and likened them to “authoritarian censorship.”
- The US and the UK declined to sign the joint statement released at the end of the summit, as disagreements made concrete commitments difficult.
US Rolls Back Federal AI Safety Testing Requirements
- The US is making a significant shift in AI governance, fundamentally revising federal AI safety testing requirements.
- The previous framework had established mandatory disclosure protocols for high-risk AI systems.
- While the changes are framed as an effort to accelerate AI innovation, they have also reignited debate over the importance of safety frameworks.
- This policy shift could influence global AI governance standards and set a precedent for other countries.
Eric Schmidt Launches $10 Million Fund for AI Safety
- Former Google CEO Eric Schmidt has set up a $10 million fund to support AI safety research.
- The program will provide grants of up to $500,000 and advanced computational resources to researchers worldwide.
- Yoshua Bengio is among the first grantees; funded projects will focus on technologies for reducing AI risks.
Joint Safety Initiative by Tech Companies: ROOST
- Companies including Google and OpenAI are backing ROOST (Robust Open Online Safety Tools), a nonprofit initiative for online safety.
- ROOST aims to make harmful online content easier to detect by developing open-source safety tools.
- Founded at Columbia University, ROOST has raised $27 million in funding.
New Optimizer from Moonshot: Muon
- Moonshot AI has open-sourced Muon, an optimizer that roughly doubles computational efficiency in AI training.
- Muon significantly reduces the training cost of large language models while improving performance.
- In tests, Muon matched the performance of the AdamW optimizer using only about 52% of the training compute.
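For the curious, Muon’s core idea (as described in the public Muon write-ups) is to take the ordinary SGD-momentum update for a 2-D weight matrix and replace it with an approximately orthogonalized version, computed cheaply with a few Newton-Schulz iterations. Below is a minimal, illustrative NumPy sketch, not Moonshot’s actual code: the quintic coefficients and iteration count follow the commonly cited reference implementation, while `muon_step`, `lr`, and `beta` are hypothetical names and values chosen for the example.

```python
import numpy as np

def newton_schulz_orthogonalize(G, steps=5, eps=1e-7):
    """Approximately orthogonalize G (push its singular values toward 1)
    using a quintic Newton-Schulz iteration."""
    a, b, c = 3.4445, -4.7750, 2.0315   # coefficients from the reference Muon implementation
    X = G / (np.linalg.norm(G) + eps)   # Frobenius normalization so the iteration converges
    transposed = X.shape[0] > X.shape[1]
    if transposed:                      # iterate on the "wide" orientation for a smaller X @ X.T
        X = X.T
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * A @ A) @ X
    return X.T if transposed else X

def muon_step(W, grad, momentum, lr=0.02, beta=0.95):
    """One Muon-style update: accumulate momentum, orthogonalize it, apply it."""
    momentum = beta * momentum + grad
    update = newton_schulz_orthogonalize(momentum)
    return W - lr * update, momentum

# Toy usage: one step on a random 2-D weight matrix.
rng = np.random.default_rng(0)
W = rng.standard_normal((16, 32))
grad = rng.standard_normal((16, 32))
momentum = np.zeros_like(W)
W, momentum = muon_step(W, grad, momentum)
print(W.shape)  # (16, 32)
```

The intuition behind the orthogonalization step is that raw gradient updates for weight matrices tend to be dominated by a few large singular directions; flattening the singular values lets the rarer directions contribute too, which is one explanation offered for Muon’s efficiency gains.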