> Leaked key blocking. They are defaulting to blocking API keys that are discovered as leaked and used with the Gemini API.
There are no "leaked" keys if Google hasn't been calling them a secret.
They should ideally prevent all keys created before Gemini from accessing Gemini. It would be funny (though not surprising) if their leaked-key "discovery" has false positives and starts blocking keys created for Gemini.
827a 16 minutes ago [-]
Yeah, it's tremendously unclear how they can even recover from this. I think the most surgical fix would be: at minimum, remove the Generative Language API grant from every API key created before that API was released. But even that isn't a full fix, because there are definitely keys created after the API was released that accidentally got the grant. They might have to just blanket-remove the Generative Language API grant from every API key ever issued.
This is going to break so many applications. No wonder they don't want to admit this is a problem. This is, like, a whole-number-percentage-of-Gemini-traffic level of fuck-up.
Jesus, and the keys leak cached context and Gemini uploads. This might be the worst security vulnerability Google has ever pushed to prod.
louison11 11 minutes ago [-]
This seems so… obvious? How can a company of this size, with its talent and expertise, not have standardized tests or specs preventing such a blatant flaw?
gamblor956 7 minutes ago [-]
They probably used the in house AI tools to build this.
warmedcookie 42 minutes ago [-]
What's frustrating is that a lot of these keys were generated a long time ago with only a small set of GCP services they could connect to (e.g., Firebase Remote Config, Firestore, etc.).
When Gemini came around, rather than being disabled by default for those keys, it was enabled, allowing attackers to easily use them (e.g., a "public" key shipped in an APK file).
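For anyone auditing their own projects, a sketch of how you might probe a key and then restrict it to only the services it needs. The `generativelanguage.googleapis.com` endpoint is the real Gemini API surface; the `gcloud services api-keys` flags below are my reading of the current docs, so treat the exact flag semantics as an assumption and confirm with `gcloud services api-keys update --help` on your version.

```shell
# Probe: does this key have Generative Language API access?
# A model list back means yes; a 403 means the key is restricted.
curl -s "https://generativelanguage.googleapis.com/v1beta/models?key=${API_KEY}"

# Find the key's resource name/ID in the project.
gcloud services api-keys list --project="${PROJECT_ID}"

# Restrict the key to only the services it actually needs.
# (Repeat --api-target per service; verify on your gcloud version
# whether this replaces or appends to existing targets.)
gcloud services api-keys update "${KEY_ID}" \
  --api-target=service=firestore.googleapis.com \
  --api-target=service=firebaseremoteconfig.googleapis.com
```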
827a 29 minutes ago [-]
Is the implication at the end that Google has not actually fixed this issue yet? This is really bad; a massive oversight, very clearly caused by a rush to get Gemini in customers' hands, and the remediation is in all likelihood going to nuke customer workflows by forcing them to disable keys. Extremely bad look for Google.
habosa 35 minutes ago [-]
This is true but also not as new as the author claims. There have been various ways to abuse Google API keys in the past (at least to abuse them financially) and it’s always been very confusing for developers.
evo 14 minutes ago [-]
Can’t wait til someone makes a Gemini prompt to find these public keys and launch a copy of itself using them.
phantomathkg 4 minutes ago [-]
> 2,863 Live Keys on the Public Internet
It will be more interesting if they scan GitHub code instead. The number terrified me, though I'm not sure how many of those are live.
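Google API keys do have a recognizable shape (an `AIza` prefix followed by 35 URL-safe characters), so a first-pass scan of a codebase is only a few lines. Note this only finds candidates matching the pattern, not live keys; verification still means calling the API. A minimal sketch:

```python
import re
from pathlib import Path

# Google API keys are "AIza" followed by 35 chars from [0-9A-Za-z_-].
KEY_RE = re.compile(r"AIza[0-9A-Za-z_-]{35}")

def find_candidate_keys(root: str) -> set[str]:
    """Return every string matching the Google API key shape under root."""
    hits: set[str] = set()
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        hits.update(KEY_RE.findall(text))
    return hits
```

This is exactly the "matches a regular expression" half; tools like TruffleHog go further by verifying candidates against the live API, which is what separates a regex hit from an active, exposed key.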
selridge 2 hours ago [-]
Great write-up. Hilarious situation where no one (except unwieldiness) is the villain.
bpodgursky 46 minutes ago [-]
ChatGPT writing a blog post attacking Gemini security flaws. It's their world now, we're just watching how it plays out.
bryanrasmussen 41 minutes ago [-]
How do you know that this blog post was written by ChatGPT?
solid_fuel 6 minutes ago [-]
It feels generated to me too. It’s this:
When you enable the Gemini API (Generative Language API) on a Google Cloud project, existing API keys in that project (including the ones sitting in public JavaScript on your website) can silently gain access to sensitive Gemini endpoints. No warning. No confirmation dialog. No email notification.
Specifically, the last bit - “No warning. No confirmation dialog. No email notification.” Immediately smells like LLM generated text to me. Punchy repetition in a set of 3.
If you scroll through TikTok or Instagram you can see the exact same pattern in a lot of LLM-generated descriptions.
bpodgursky 36 minutes ago [-]
> The Core Problem
> What You Should Do Right Now
> Bonus: Scan with TruffleHog.
> TruffleHog will verify whether discovered keys are live and have Gemini access, so you'll know exactly which keys are exposed and active, not just which ones match a regular expression.
I don't know exactly, but I'm sure. The cadence, the clarity, the bolding, the italics: it's all just crisp, clean, structured, and actionable in a way that a meandering human would not distill it down to.
cyral 6 minutes ago [-]
Yup, it was actually an interesting article but there are a few telltale parts that sound like every AI spam post on /r/webdev and similar. "No warning. No confirmation dialog. No email notification." is another. The three negatives repeated is present in so many AI generated promotional posts.
SecretDreams 38 minutes ago [-]
It's too structured and consistent. Imo. Has that AI smell to it, but I guess humans will eventually also start writing more like the AIs they learn from.
devsda 28 minutes ago [-]
> guess humans will eventually also start writing more like the AIs they learn from.
With the AI feedback loop being so fast and tight for some tasks, the focus shifts to delivery rather than learning. There is no incentive, space, or time for learning.
bpodgursky 21 minutes ago [-]
Won't be well received here, but this is the truth.
Hnrobert42 16 minutes ago [-]
AI was trained on human writing.
SecretDreams 12 minutes ago [-]
And now humans are trained on AI writing.
Like what happens to YouTube videos that go through the compression algorithm 20 times.
the_arun 37 minutes ago [-]
Private data should not be accessible using public keys. That is the core problem. It is not about whether Google API keys are secret or not.