
ANNOUNCEMENTS 🔊
🌟 BlueDot Impact Biosecurity course
Join an intensive course to protect the world against AI-engineered pandemics
🗓️ Register by: December 14
🌟 CBAI Spring Research Fellowship 2026
A fully funded, 10-week program run by the Cambridge Boston Alignment Initiative (CBAI), covering both technical and governance research. Fellows work closely with mentors, participate in workshops and seminars, and gain research experience and networking opportunities.
🗓️ Register by: December 14
🌟 Non-Trivial Fellowship: 2026
Fellowship helping people ages 14–20 launch research projects tackling the world’s most pressing problems – including AI safety.
🗓️ Register by: December 22
🌟 Tarbell Fellowship: 2026
A 1-year program for journalists interested in covering AI, comprising a 10-week course on AI and journalism fundamentals, a week-long journalism summit in the Bay Area, and a 9-month placement at a major newsroom.
🗓️ Register by: January 7
TOP PICKS 📑 🎧
Alignment Remains a Hard, Unsolved Problem
Evan Hubinger, who leads Alignment Stress-Testing at Anthropic, argues that alignment remains fundamentally unsolved even though current AI models behave in a fairly aligned way. He warns that the hard challenges of supervising superhuman systems and training on long-horizon, real-world tasks still lie ahead.
AI orchestrating full-scale cyber attacks is now tomorrow’s news
Anthropic reports the first documented AI-orchestrated cyber espionage campaign, attributed with high confidence to a Chinese state-sponsored group. The attackers manipulated Anthropic’s AI model Claude Code into autonomously infiltrating roughly thirty global targets, with AI performing 80–90% of the operation. After detecting the activity, Anthropic suspended the accounts involved, notified affected entities, and used Claude itself to analyze the attack data for defensive insights.
NEWS 🗞️
White House Launches “Genesis Mission” via Executive Order
The Trump Administration issued a new Executive Order launching the “Genesis Mission,” a national mobilization to use AI for scientific discovery.
The order directs the Department of Energy to build a secure, “closed-loop” AI platform integrating federal data and supercomputers.
It explicitly prioritizes “American dominance” in AI science and mandates strict security clearances for researchers to prevent data leakage.
HHS Unveils “Try-First” AI Strategy for Drug Development
The Department of Health and Human Services (HHS) released an aggressive strategy to accelerate AI in drug discovery, promoting a “try-first” regulatory culture.
The policy aims to reduce bureaucracy but has sparked alarm among safety experts.
Critics warn that rapid deployment on sensitive patient data—without strict safety testing—could lead to algorithmic bias and privacy violations.
EU Commission Proposes “Digital Omnibus” to Delay High-Risk AI Rules
The European Commission introduced the “Digital Omnibus” package, effectively delaying compliance deadlines for “high-risk” AI systems under the AI Act until late 2027.
Citing a lack of harmonized technical standards, the proposal pauses enforcement to reduce bureaucratic burdens.
Safety advocates warn this leaves European infrastructure vulnerable to untested AI systems for an additional 18 months.

