This sounds like the approach the nono project took: it injects a phantom token, so the sandboxed agent never gets to see the real key; it only receives a session-scoped, time-limited dummy key. https://nono.sh/docs/cli/features/credential-injection
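A minimal sketch of that phantom-token pattern, in Python with hypothetical names (this is not nono's actual implementation): a broker mints a session-scoped, time-limited dummy token for the sandbox, and only the broker, sitting outside the sandbox at the egress boundary, can swap it back for the real secret.

```python
import secrets
import time


class PhantomTokenBroker:
    """Maps short-lived dummy tokens to real secrets.

    The sandboxed agent only ever sees the dummy; the broker resolves
    it to the real key when a request crosses the trust boundary.
    """

    def __init__(self, ttl_seconds: float = 900.0):
        self.ttl = ttl_seconds
        self._store: dict[str, tuple[str, float]] = {}  # dummy -> (real, expiry)

    def mint(self, real_secret: str) -> str:
        # The dummy is random, so it leaks nothing about the real key.
        dummy = "phantom-" + secrets.token_urlsafe(16)
        self._store[dummy] = (real_secret, time.time() + self.ttl)
        return dummy

    def resolve(self, dummy: str) -> str:
        entry = self._store.get(dummy)
        if entry is None:
            raise KeyError("unknown phantom token")
        real, expiry = entry
        if time.time() > expiry:
            del self._store[dummy]  # expired tokens are discarded
            raise KeyError("phantom token expired")
        return real


# Usage: the agent is handed `dummy`; the broker substitutes the real
# key only in outbound requests it proxies itself.
broker = PhantomTokenBroker(ttl_seconds=60)
dummy = broker.mint("sk-real-api-key")
```

Even if a poisoned page tricks the agent into exfiltrating its credentials, all it can leak is a dummy that is useless outside the broker's session window.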
tanbablack 41 minutes ago [-]
This is a really important area to tackle. Secret management for AI agents is something most teams are ignoring right now.
One adjacent risk worth noting: the URLs these agents visit during research.
Even with proper secret management, if an agent browses a poisoned page during research, the injected instructions could override its behavior before secrets ever come into play.
rossjudson 4 days ago [-]
Can create security risk "if you're not careful"?
The security risk is created whether you're careful or not. The best you can do is reduce the size of the fresh attack surface you're creating.
https://infisical.com/blog/secure-secrets-management-for-cur...
https://news.ycombinator.com/newsguidelines.html
Please tell your human you're wasting valuable humans-only spaces, and that they should feel bad for letting you intrude like this.