Lovable is marketed to non-developers, so their core users wouldn't understand a security flaw if it flashed red. A lot of my non-dev friends were posting the cool new apps they built on LinkedIn last year [0]. Several were made on Lovable. It's not on their users to understand these flaws.
The apps all look the same with a different color palette, and make for an engaging AI post on LinkedIn. Now they are mostly abandoned, waiting for the subscription to expire... and their personal data to get exposed, I guess.
Developers with decades of experience still ship basic security holes. The general public is screwed once they start hosting their own apps and serving them on the Internet.
cube00 4 hours ago [-]
There's something so innocent about the early days when even Microsoft thought we'd be running Personal Web Servers and hosting our own websites in a peer-to-peer fashion.
Although cynically, in 1996 Microsoft would probably tell you anything you wanted to hear if it got you using Internet Explorer.
> The Personal Web Server is ideal for intranets, homes, schools, small business workgroups and anyone who wants to set up a personal Web server.
I always held the belief that we (as programmers and as an industry) failed the initial premise of the "distributed internet". On one hand, the core of the internet (whether it's ARPANET or even TCP/IP) was designed to be fully distributed, trustless, self-hostable, etc. The idea was that if you want email, you do a `pkg_add email`; want a file server, `pkg_add file-server`; want remote access, `pkg_add openssh`, and you're done. But what we have today is [1].
Securing all that got very technical and nuanced, with hundreds of complex scenarios, tools, and protocols. Tech companies raced to produce services the mass public could use, hiring hordes of very smart, expensive, and technical developers to build and secure them, and they still get it wrong frequently. Meanwhile the FOSS community adopted the "get good or gtfo" approach, as in [1].
The average person has no chance. That's why closed, walled-garden platforms like iOS and Android are winning.
> Developers with decades of experience still make basic security holes.
You see this type of template response copy-pasted under basically any post/comment of this kind.
I think at the end of the day we’ll be able to look back and see what/who fared better, based on actual data.
carlgreene 4 hours ago [-]
The hardest part about this stuff is that as a user, you don't necessarily know if an app is vibe-coded or not. Previously, you were able to have _some_ reasonable expectation of security in that trained engineers were the ones building these things out, but that's no longer the case.
There's a lot of cool stuff being built, but also as a user, it's a scary time to be trying new things.
627467 3 hours ago [-]
The frequency with which I see contemporary apps updating (sometimes multiple times a day) suggests a change in culture that also makes professionals prone to mistakes.
I get that we'll never ship a perfect release, but if you have to push fixes once a day it seems you've lost perspective.
Vibe coding sloppiness is more acceptable now because we've lowered our standards.
cosmic_cheese 3 hours ago [-]
Devs' newfound ability to patch on the fly is absolutely being overleveraged. It's a wonderful capability to have that can do wonders in terms of disaster mitigation, but it's clearly become a crutch and has resulted in a situation where software has become a horrific amalgamation of haphazardly-developed panic-patches, taking the classic "ball of mud" problem and putting it into overdrive.
yoyohello13 4 hours ago [-]
Yeah, my trust for new open source projects is in the toilet. Hopefully we will eventually start taking security seriously again after the vibe code gold rush.
dizhn 3 hours ago [-]
This applies to all software not just open source.
esseph 4 hours ago [-]
> Hopefully we will eventually start taking security seriously again after the vibe code gold rush.
Companies don't take security seriously now (and didn't before vibe coding either).
ctoth 4 hours ago [-]
I'm sorry, what?
> Previously, you were able to have _some_ reasonable expectation of security in that trained engineers were the ones building these things
When was this? What world? Did I skip worldlines? Is this a new Universe?
The world I remember is that anybody could write a program and put it on the Internet. Is this not the world you remember?
Further, when those engineers were "trained" ... were there no data breaches before 2022?
carlgreene 4 hours ago [-]
Of course there were. Don't be pedantic. Anybody could write a program and put it on the internet. But to get a reasonably polished version with decent features and an enjoyable enough UX for someone to sign up and even pay money for, it generally took people who kind of knew what they were doing.
Of course shortcuts were taken. They always were and always will be. But don't try to compare shipping software today to even just 3 years ago.
kimixa 4 hours ago [-]
Yes - AI has completely destroyed the set of "signals" people used to judge the quality of software. They were never 100% accurate, sure, but they were often pretty good heuristics for "level of care": what the devs considered important (or didn't consider important) and similar.
And I mean that as both "end user" software signals, and "library" signals for other devs.
I assume that set of signals will slowly be updated. Whether one of them ends up being "any use of AI at all" is still an open question, depending on whether the promised hype actually ends up meeting capability as much as anything.
Flashtoo 2 hours ago [-]
This is true beyond software. It used to be that the proof of the thinking process was in the resulting artifact. No longer can you estimate from the existence of a piece of text and the level of polish behind it that the apparent author has put at least a reasonable amount of thought into it. This applies to comments, blogs, emails, and most troublingly I've seen this happen at my job with things like requirement specs. Now, the veneer of quality makes it much harder to know what is the appropriate amount of skepticism to judge the contents with. And it's too tiring to be maximally skeptical about everything.
melecas 4 hours ago [-]
Vibe coding democratized shipping without democratizing the accountability. The 18,000 users absorbed the downside of a risk they didn't know they were taking.
shimman 3 hours ago [-]
I don't think you know what democracy means. Democracy means that users can reject poorly made apps; if you can't reject or destroy something, it's not a democratic process.
Having someone dump shitty wares onto the public is only democracy if you think being held unaccountable is democratic.
SetTheorist 3 hours ago [-]
One of the meanings of the word "democratization" is "the action of making something accessible to everyone", which is clearly the sense meant here.
dizhn 3 hours ago [-]
It has a broader meaning of sharing, like when a factory is dumping waste in a river: they are democratizing pollution (i.e., they get the benefits but everybody pays the cost).
warkdarrior 3 hours ago [-]
I believe the phrase is "socialize the risk, privatize the profits". I have never heard it used with the word "democratize".
dizhn 2 hours ago [-]
I think I encountered it in Chomsky's writings, but I might be wrong.
andersmurphy 4 hours ago [-]
With the power of LLMs anyone can make and sell foot guns.
aitchnyu 3 hours ago [-]
One dev of a Lovable competitor pointed me to the rules that are supposed to ensure queries are limited to that user's data. This seems like "pretty please?" to my amateur eyes.
I've been thinking a bit about how to do security well with my generated code. I've been using tools that check deps for CVEs, static tools that check for SQL injection and similar problems, and baking some security requirements into the specs I hand Claude. I can't tell yet if this is better than what I did before or just theater. It seems like in this case you'd need/want to specify some tests around access.
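A minimal sketch of the kind of access test I mean: two users, one record each, and an assertion that cross-user reads fail hard. The `fetch_note` helper and its in-memory store are hypothetical stand-ins for whatever API a generated backend actually exposes.

```python
# Hypothetical in-memory store standing in for a real backend.
NOTES = {
    "note-1": {"owner": "alice", "body": "alice's note"},
    "note-2": {"owner": "bob", "body": "bob's note"},
}

class Forbidden(Exception):
    pass

def fetch_note(note_id: str, as_user: str) -> dict:
    """Server-side check: only the owner may read a note."""
    note = NOTES[note_id]
    if note["owner"] != as_user:
        raise Forbidden(f"{as_user} may not read {note_id}")
    return note

# The owner can read their own note...
assert fetch_note("note-1", as_user="alice")["body"] == "alice's note"

# ...and a cross-user read must raise, not silently succeed.
try:
    fetch_note("note-1", as_user="bob")
    raise AssertionError("expected Forbidden")
except Forbidden:
    pass
```

The point is that the test exercises the deny path, not just the happy path; that's the case vibe-coded access control seems to get wrong.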
I'm interested to hear how other people approach this.
adampunk 2 hours ago [-]
So the problem I'm having is that I don't know what I'm doing vis-à-vis security, so I can't audit my own understanding by just sitting in a chair, but here's what I've been doing.
I'm building a desktop app that has authentication needs because we need to connect our internal agents and also allow the user to connect theirs. We pay for our agents, the user pays for theirs (or pays us to use ours, etc.). These are, relatively speaking, VERY SIMPLE PROBLEMS. Nevertheless, agents are happy to consume and leak secrets, or break things in much stranger ways, like hooking the wrong agent up to the wrong auth, which would have charged a user for our API calls. That seemed very unlikely to me until I saw it.
So far what has "worked" (made me feel less anxious, aside from the niggling worry that this is theater) is:
1. Having a really strong and correct understanding of our data flows. That's not about security per se so at least that I can be ok at it. This allows me to...
2. Be aggressive and paranoid about not doing it at all, if it can be helped. Where I actually handle authentication is as minimal as possible (one should have some reasonable way to prove that to yourself). Done right the space is small enough to reason about.
How do I do 1 & 2 while not knowing anything? Painfully and slowly and by reading. The web agents are good if you're honest about your level of knowledge and you ask for help in terms of sources to read. It's much more effective than googling. Ask, read what the agents say, press them for good recommendations for YOU to read, not just anyone. Then go out and read those sources. Have I learned enough to supervise a frontier model? No. Absolutely not. Am I doing it anyway? Yes.
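Point 2 above ("handle auth in as small a space as possible") can be sketched as a single choke point for credentials. Everything here is hypothetical (function names, env var names); the idea is just that secrets are read in exactly one place, and the caller must state which party pays, so the wrong-agent/wrong-auth mixup becomes a hard error instead of a silent billing bug.

```python
import os

def get_agent_key(paid_by: str) -> str:
    """The ONLY place credentials are read. paid_by is 'us' or 'user'."""
    if paid_by == "us":
        # Our internal agents bill to our account.
        return os.environ.get("OUR_AGENT_KEY", "our-test-key")
    if paid_by == "user":
        # User-connected agents bill to the user's account.
        return os.environ.get("USER_AGENT_KEY", "user-test-key")
    # Anything else is a programming error, caught loudly at the choke point.
    raise ValueError(f"unknown payer: {paid_by!r}")
```

With one gate like this, "prove the auth surface is small" reduces to grepping for the one function that touches keys.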
s_ting765 4 hours ago [-]
Ask the LLM to create a POC for the vulnerability you have in mind. Last time I did this I had to repeatedly promise the LLM that it was for educational purposes, as it assumed this information was "dangerous".
ch4s3 3 hours ago [-]
This is actually pretty interesting. I guess I knew you could do this offensively but it didn’t occur to me to use it OWASP style to test my own work.
ctoth 4 hours ago [-]
Same way you handle preserving any other property you want to preserve while "vibecoding" -- ensure tests capture it, ensure the tests can't be skipped. It really is this simple.
julianlam 4 hours ago [-]
> One example of this was a malformed authentication function. The AI that vibe-coded the Supabase backend, which uses remote procedure calls, implemented it with flawed access control logic, essentially blocking authenticated users and allowing access to unauthenticated users.
Actually sounds like a typical mistake a human developer would make. Forget a `!` or get confused for a second about whether you want true or false returned, and the logic flips.
The difference is a human is more likely to actually test the output of the change.
[0]: https://idiallo.com/blog/my-non-programmer-friends-built-app...
https://news.microsoft.com/source/1996/10/24/microsoft-annou...
[1]: https://www.youtube.com/watch?v=40SnEd1RWUU
https://github.com/dyad-sh/dyad/blob/de2cc2b48f2c8bfa401608c...