Back in our day, we didn’t just throw code into the wild and hope for the best. We documented everything, from the first line of BASIC to the last Java applet. But here we are, watching Google sprint ahead with its Gemini AI models, leaving safety reports in the dust. Remember when model cards were a thing? Google’s own researchers proposed them back in 2019. Now it’s all about speed, with Gemini 2.5 Pro and 2.0 Flash hitting the scene without so much as a safety whisper.
It’s like the ’90s all over again, when every startup was racing to launch without a care for the aftermath. Only this time, it’s not just about crashing servers; it’s about AI that might just decide to ‘scheme’ against us, as OpenAI’s own reports have hinted. And Google? They’re calling these releases ‘experimental,’ as if that excuses the lack of transparency. Back in our day, experimental meant it stayed in the lab until it was ready.
Sure, Google promises to catch up on those safety reports… eventually. But in the meantime, they’re setting a worrying precedent. As these models grow more sophisticated, skimping on safety documentation isn’t just lazy; it’s reckless. Remember when we worried about Y2K? At least we had the decency to prepare for that mess.