OpenAI’s latest safety report claims the company has always taken a cautious, step-by-step approach to AI development, but ex-researcher Miles Brundage says it’s whitewashing its own history. Remember GPT-2’s ‘staged’ release? Yeah, Brundage says that careful rollout was the plan all along, consistent with iterative deployment, not some overly cautious strategy OpenAI later outgrew. Worse, he warns that OpenAI’s new ‘prove it’s dangerous first’ burden of proof is playing with fire.
Key points: AI misinformation is already causing chaos (looking at you, ‘eat rocks’ Google AI), and OpenAI’s transparency? MIA. Critics say the company cares more about shipping shiny products than about safety. The AGI race is heating up, but at what cost?