It seems like this collection of tools gives you a ton of lethal-trifecta risk for prompt injection attacks. How have you mitigated this—are you doing something like CaMeL?
hgaddipa001 3 hours ago [-]
We do a lot of processing on our backend to prevent against prompt injection, but there definitely still is some risk. We can do better on as is always the case.
Need to read up on how CaMel does it. Do you have any good links?
Here’s a paper offering a survey of different mitigation techniques, including CaMeL. Design Patterns for Securing LLM Agents against Prompt Injections (2025):
https://arxiv.org/abs/2506.08837
Don't worry have worked with a few friends experienced in prompt injection to help with the platform.
But will read these too :)
bfeynman 3 hours ago [-]
what's interesting about this one is that their claims about what makes slashy different are almost entirely wrong... almost all the big models let you connect and do all of the things mentioned. Not understanding MCP at all is hilarious. If an agent has toosl to access multiple data sources it will make calls across them to resolve things, not sure whatever claiming but there's no way you are actually indexing at scale and probably doing just the exact same thing.
hgaddipa001 20 minutes ago [-]
Why wouldn't we be indexing at scale?
BrandiATMuhkuh 4 hours ago [-]
Congratulations on the launch.
I think it's a smart move to not use MCP here. Because your LLM really needs to understand how the different integrations work together.
Question: you say you do semantics search. If I understand correctly that means you must somehow index all data (Gmail, GDrive, ...) otherwise the AI would have to "download/scan" thousands of files each time you ask a question.
So how do you do the indexing?
For some background: I'm working on something similar. My clients are architects. They have about 300k files for just one building. With an added 50k issues and a couple of thousand emails. And don't forget all subcontractors.
Would Slashy be able to handle that?
hgaddipa001 4 hours ago [-]
Not sure we haven't ever done volume that size for one person, but in theory should be able too!
We use indexing similar to glean (but a bit less elegant without the ACLs)
Can talk more about your use case if you'd like to.
Send me a text at 262-271-5339
mritchie712 8 hours ago [-]
How does the scraper work? e.g. LinkedIn aggressively blocks scraping and you'd need to be logged in to see most things you'd care about. How do you handle that?
hgaddipa001 8 hours ago [-]
We don't scrape LinkedIn ourselves, instead work with large data providers who do live scraping.
Have a waterfall approach in case one source doesn't have the requested information!
who is it? give them a plug, seems like it works well.
hgaddipa001 5 hours ago [-]
Unfortunately have an NDA with them so can't disclose (most of our providers are still in stealth)
dcsan 6 hours ago [-]
nice launch!
Do you worry that AI browser agents (comet etc) will eat this market of light integrations? Since the user is already logged in to various services like linkedin/email etc it's easy for tasks to be scripted together - or fully prompted.
also what did you use to make the video? looks better than most looms.
hgaddipa001 6 hours ago [-]
Oh I used Screen Studio :)
Thanks for the compliment.
Not worried about browser agents, as we actually have pretty deep integrations (we include semantic search as well as user action graphs).
Naturally apis will always be better than browsers as apis are computer languages and browsers are human language.
The sale of Browser Company today too I think shows there's not that much of a ceiling for agentic browsers.
nikolayasdf123 8 hours ago [-]
> we build own MCP
> we use existing models via their API
> we use existing tools/services/platforms
> ChatGPT/OpenWebUI-like web interface
> mostly uses text, no image, no desktop control (?)
hardly can see what this app brings. also, it is paid and requests are routed to someone else? shouldn't this be free, local, and with bring-your-own–key already with things like ollama/llama.cpp?
hgaddipa001 8 hours ago [-]
We actually don't use MCP!
We just make our own tools in-house :)
Hmm the local open source model is something we've thought of, but currently haven't found open source models to be usable
AmazingTurtle 8 hours ago [-]
Why __don't__ you use MCP?
hgaddipa001 7 hours ago [-]
Find that the quality of them currently aren't there yet for a general system. They tend to be designed just to use that singular app instead of to be used in parallel with other apps.
esafak 7 hours ago [-]
But you are compatible with MCP, right? Otherwise users are going to miss out on the MCP ecosystem. And you are going to be spending all your time developing your own versions of MCP plugins. Wouldn't it be easier to improve the existing ones?
tptacek 4 hours ago [-]
MCP is what you use to make tools you own compatible with agents (like Claude Code) that you don't --- or vice/versa. It's not doing anything useful in the scenario where you own both the tool calling code and the agent.
esafak 3 hours ago [-]
The question is whether the tools are limited to what they offer.
tptacek 3 hours ago [-]
Are you sure they want to provide access to arbitrary random tools other people wrote? It's easy enough to add MCP support to native tool calls, but I don't know that that's a great idea given their problem domain.
hgaddipa001 6 hours ago [-]
It's a bit more complicated. We have a full custom single agent architecture, sort of like Manus that isn't fully compatible with MCP
bfeynman 2 hours ago [-]
that just sounds like you have no idea what MCP is, I don't even like MCPs but I can't even understand what angle you are coming from unless they specifically mean using external MCPs instead of your own, since it is you know open source...
brazukadev 6 hours ago [-]
[flagged]
hgaddipa001 6 hours ago [-]
It's useful for quality.
For example we can read and attach pdfs to gmail which not a lot of people can, since we have our own internal storage api.
brazukadev 2 hours ago [-]
oh so now we are flagging people that think not having MCP support is bad?
AtomicByte 8 hours ago [-]
[flagged]
dang 7 hours ago [-]
> Out of all the AI slopups I've seen, y'all might the worst. Have fun, clowns.
You can't attack others like this here. We've banned the account.
If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. They're here: https://news.ycombinator.com/newsguidelines.html.
hgaddipa001 7 hours ago [-]
We don't use open source models as of right now!
tehsuk 6 hours ago [-]
Anything that gives any sort of system access to sensitive data and lets agents carry out actions on basically unchecked input sounds like a complete security and privacy nightmare by design.
hgaddipa001 5 hours ago [-]
Why do you think privacy?
Security I understand, but if you consent to giving it access would it not be fine for privacy.
brazukadev 5 hours ago [-]
You give it access, it grabs your ssh keys and exfiltrate to some third party server. That is not the access the user gave to your platform but it is what it would be capable of doing.
hgaddipa001 5 hours ago [-]
Ohh we don't give it computer use access or anything like that. We inject tokens post tool call, so to protect users from the agent doing anything malicious.
namanyayg 8 hours ago [-]
Slashy is great and the founders are so talented. I've been following Pranjali on Twitter for a while -- they've got great weekly videos where they keep releasing new features.
The team ships fast and I'm excited to see where they go
hgaddipa001 8 hours ago [-]
Not sure if we've ever spoken before but appreciate the support <3
> We use a single agent architecture (as we found this reduces hallucinations)
Do you have a benchmark for this? in my experience, hallucinations have nothing to do with what framework you use.
hgaddipa001 6 hours ago [-]
We did a lot of internal testing but no official benchmark.
We find that the less the agent knows, the more it hallucinates
digitcatphd 8 hours ago [-]
I really hate to be the curmudgeon here but won't foundation models end up having their own AI workflows like the GPT store but with MCP?
I could really envision saving an 'AI Workflow' template with integrated MCP clients that will balloon once adoption is reached. Right now adoption is low so its not a priority for them, once it is, they will tack it on.
I really wish this the best of luck its a great concept, but surely you must be thinking ahead to plan for this situation.
hgaddipa001 8 hours ago [-]
Yep!
We do think there's a good chance they'll make their own version.
But we view it as a Dropbox situation where, the foundation models much like Apple and Google know that this will be the future, but are a bit slow to act on it.
digitcatphd 7 hours ago [-]
I would argue Dropbox was a new product category rather than a feature and as such, was a much deeper strategic decision to enter that category than add a feature. My only recommendation would be to focus on deep complex workflows (E.g. N8N style) with extensive integrations or build out a developer community so you can build some data lock in, because if they are surface level templates surely these will get easily disrupted.
hgaddipa001 6 hours ago [-]
Yep!
That's our goal long term to get better templates
FergusArgyll 8 hours ago [-]
How much time do you spend in gmail now? have you continued to track that?
hgaddipa001 8 hours ago [-]
Now probably ~1 hour or so
nikolayasdf123 8 hours ago [-]
> scraping LinkedIn profiles
is this legal? last time I checked linkedin.com/robots.txt do not allow scraping, unless explicit approval from linkedin
mandeepj 33 minutes ago [-]
LinkedIn has api. So why to scrap?
breadwinner 7 hours ago [-]
If it is publicly available information it is legal to scrape it, regardless of what robots.txt says.
As an attorney (and this is not legal advice), I don't think it's quite that simple. The court held that the CFAA does not proscribe scraping of pages to which the user already has access and in a way that doesn't harm the service, and thus it's not a crime. But there are other mechanisms that might impact a scraper, such as civil liability, that have not been addressed uniformly by the courts yet. And if you scrape in such a way that does harm the operator (e.g. by denying service), it might still be unlawful, even criminal.
There's a relevant footnote in the cited HiQ Labs v. LinkedIn case:
"LinkedIn’s cease-and-desist letter also asserted a state common law claim of trespass to chattels. Although we do not decide the question, it may be that web scraping exceeding the scope of the website owner’s consent gives rise to a common law tort claim for trespass to chattels, at least when it causes demonstrable harm."
They also said: "Internet companies and the public do have a substantial interest in thwarting denial-of-service attacks and blocking abusive users, identity thieves, and other ill-intentioned actors."
It's a good idea to take legal conclusions from media sites with a grain of salt. Same goes for any legal discussion on social media, including HN. If you want a thorough analysis of legal risk--either for your business or for personal matters--hire a good lawyer.
hgaddipa001 6 hours ago [-]
Smart
nhod 5 hours ago [-]
Or run your legal questions through a frontier model and then have a lawyer verify the answers. You can save a lot of money and time.
Yes, all LLM caveats apply. Due your diligence. But they are quite good at this now.
otterley 2 hours ago [-]
Have you actually tried this approach? I’m curious as to the result, especially when you took it to your lawyer. Not a contract review but a business practice risk evaluation.
hgaddipa001 4 hours ago [-]
Hmm this is a good idea too
hgaddipa001 8 hours ago [-]
We get our data from third party data vendors who we assume have gotten explicit approval from linkedin!
scblock 8 hours ago [-]
You assume! Such due diligence!
hgaddipa001 7 hours ago [-]
Unfortunately not able to get into their codebase
Disposal8433 6 hours ago [-]
Or yours...
hgaddipa001 6 hours ago [-]
What would you like to see?
Can tell you :)
milkshakes 46 minutes ago [-]
you're building a tool that is designed to sink its tentacles into peoples' most personal accounts and take unsupervised automated actions with them, using a technology that has serious, well known, documented security issues. you haven't demonstrated any experience with, awareness of, or consideration for the security issues at hand, so the ideal amount of code to share would likely be all of it.
hgaddipa001 31 minutes ago [-]
Fair enough makes sense to not have trust!
We like to believe we're pretty trustworthy, and do our best to make everything secure.
Jayakumark 8 hours ago [-]
Looks nice, but little hesitant to give access to emails. What model is being used on backend ?
hgaddipa001 7 hours ago [-]
We use Claude/OpenAI right now with Groq for tool routing!
I'd say maybe to get comfortable try out the non email features first, but we don't have access to any of your data.
stavros 7 hours ago [-]
How do you not have access to the data if I give you access to my email?
hgaddipa001 6 hours ago [-]
The agent does!
We don't, and agent pulls in data only when executing queries
stavros 6 hours ago [-]
Does the agent run on hardware you control?
hgaddipa001 6 hours ago [-]
Runs on AWS for now!
stavros 6 hours ago [-]
So you do have access to all the data. It's not really a great look if you're lying about what you have access to, and this is a technical audience, it's not like we don't know how agents work.
brazukadev 5 hours ago [-]
Sad state of current launch HNs where OP don't even know they are talking to hackers, not people that get easily impressed.
brazukadev 6 hours ago [-]
So you have access to the users Gmail, not "the agent".
hgaddipa001 6 hours ago [-]
Hmm ig yeah I can be more granular.
Yeah we store our user credentials on our side and manage them. Along with refreshing tokens and so forth
soniczentropy 5 hours ago [-]
This is horrifying. Everyone should be horrified.
stavros 4 hours ago [-]
I think they mean OAuth credentials (all these APIs use OAuth unless you're doing something terribly wrong).
hgaddipa001 31 minutes ago [-]
Yep we're using Oauth, so it's easy for a user to disconnect.
asah 7 hours ago [-]
Or an alt/throwaway email...
hgaddipa001 6 hours ago [-]
ooh good idea!
HeadphoneJunkie 8 hours ago [-]
This is quite useful where has this been all my life
Email drafting is decent since it reads my drive, previous emails, and everything else so it has a good bit of context
hgaddipa001 8 hours ago [-]
Nice!
Let me know how it goes, and feel free to text/call me at 262-271-5339 with any feedback
7 hours ago [-]
Xevion 8 hours ago [-]
> connects to apps and does tasks
Gosh, I hope it also does things too!
hgaddipa001 8 hours ago [-]
haha, it certainly does :)
brazukadev 6 hours ago [-]
Honestly, what have HN become?
These AI projects are looking more and more like shitcoins and their creators are shitcoin shillers.
hgaddipa001 6 hours ago [-]
Have you tried out Slashy?
What makes you say that
brazukadev 5 hours ago [-]
Not really and this is totally not related to Slashy, it just look like the same as the other 20 Slashys launched last month. Launch HNs used to be exciting.
Maybe HN/ycombinator is just not interesting anymore. I saw some of you commenting that this might be similar to the famous Dropbox situation. That could not be more delusional and representative of what HN became, a meme of itself.
EMM_386 2 hours ago [-]
The strategy is throw a little bit of money at everything, hope one of them will become a unicorn, everyone gets richer.
Rinse and repeat.
You're right though ... these YC batches are not what they used to be. AI is hot right now, so it seems YC is throwing money at anything that seems like it can at least actually do something (not that it is necessarily good). If that product doesn't get hot, who cares? Plenty more money to go around on the next batch, because one of them probably will!
hgaddipa001 5 hours ago [-]
Hmm that's fair, we're definitely not the most exciting launch out there compared to others in our batch.
I'd like to think the fact we do what we promise is exciting, but without trying the product hard to convey that well :)
8 hours ago [-]
hugedickfounder 6 hours ago [-]
[dead]
hugedickfounder 5 hours ago [-]
[dead]
Rendered at 00:54:10 GMT+0000 (Coordinated Universal Time) with Vercel.
Need to read up on how CaMel does it. Do you have any good links?
Regardless, here’s the CaMeL paper. Defeating Prompt Injections by Design (2025): https://arxiv.org/abs/2503.18813
Here’s a paper offering a survey of different mitigation techniques, including CaMeL. Design Patterns for Securing LLM Agents against Prompt Injections (2025): https://arxiv.org/abs/2506.08837
And here’s a high-level overview of the state of prompt injection from 'simonw (who coined the term), which includes links to summaries of both papers above: https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/
Don't worry have worked with a few friends experienced in prompt injection to help with the platform.
But will read these too :)
Question: you say you do semantics search. If I understand correctly that means you must somehow index all data (Gmail, GDrive, ...) otherwise the AI would have to "download/scan" thousands of files each time you ask a question. So how do you do the indexing?
For some background: I'm working on something similar. My clients are architects. They have about 300k files for just one building. With an added 50k issues and a couple of thousand emails. And don't forget all subcontractors.
Would Slashy be able to handle that?
We use indexing similar to glean (but a bit less elegant without the ACLs)
Can talk more about your use case if you'd like to.
Send me a text at 262-271-5339
Have a waterfall approach in case one source doesn't have the requested information!
Do you worry that AI browser agents (comet etc) will eat this market of light integrations? Since the user is already logged in to various services like linkedin/email etc it's easy for tasks to be scripted together - or fully prompted.
also what did you use to make the video? looks better than most looms.
Thanks for the compliment.
Not worried about browser agents, as we actually have pretty deep integrations (we include semantic search as well as user action graphs).
Naturally apis will always be better than browsers as apis are computer languages and browsers are human language.
The sale of Browser Company today too I think shows there's not that much of a ceiling for agentic browsers.
> we use existing models via their API
> we use existing tools/services/platforms
> ChatGPT/OpenWebUI-like web interface
> mostly uses text, no image, no desktop control (?)
hardly can see what this app brings. also, it is paid and requests are routed to someone else? shouldn't this be free, local, and with bring-your-own–key already with things like ollama/llama.cpp?
We just make our own tools in-house :)
Hmm the local open source model is something we've thought of, but currently haven't found open source models to be usable
For example we can read and attach pdfs to gmail which not a lot of people can, since we have our own internal storage api.
You can't attack others like this here. We've banned the account.
If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. They're here: https://news.ycombinator.com/newsguidelines.html.
Security I understand, but if you consent to giving it access would it not be fine for privacy.
The team ships fast and I'm excited to see where they go
https://x.com/raidingAI/status/1955890345927172359
Do you have a benchmark for this? in my experience, hallucinations have nothing to do with what framework you use.
We find that the less the agent knows, the more it hallucinates
I could really envision saving an 'AI Workflow' template with integrated MCP clients that will balloon once adoption is reached. Right now adoption is low so its not a priority for them, once it is, they will tack it on.
I really wish this the best of luck its a great concept, but surely you must be thinking ahead to plan for this situation.
We do think there's a good chance they'll make their own version.
But we view it as a Dropbox situation where, the foundation models much like Apple and Google know that this will be the future, but are a bit slow to act on it.
That's our goal long term to get better templates
is this legal? last time I checked linkedin.com/robots.txt do not allow scraping, unless explicit approval from linkedin
See: https://www.webspidermount.com/is-web-scraping-legal-yes/
There's a relevant footnote in the cited HiQ Labs v. LinkedIn case:
"LinkedIn’s cease-and-desist letter also asserted a state common law claim of trespass to chattels. Although we do not decide the question, it may be that web scraping exceeding the scope of the website owner’s consent gives rise to a common law tort claim for trespass to chattels, at least when it causes demonstrable harm."
They also said: "Internet companies and the public do have a substantial interest in thwarting denial-of-service attacks and blocking abusive users, identity thieves, and other ill-intentioned actors."
It's a good idea to take legal conclusions from media sites with a grain of salt. Same goes for any legal discussion on social media, including HN. If you want a thorough analysis of legal risk--either for your business or for personal matters--hire a good lawyer.
Yes, all LLM caveats apply. Due your diligence. But they are quite good at this now.
Can tell you :)
We like to believe we're pretty trustworthy, and do our best to make everything secure.
I'd say maybe to get comfortable try out the non email features first, but we don't have access to any of your data.
We don't, and agent pulls in data only when executing queries
Yeah we store our user credentials on our side and manage them. Along with refreshing tokens and so forth
Email drafting is decent since it reads my drive, previous emails, and everything else so it has a good bit of context
Let me know how it goes, and feel free to text/call me at 262-271-5339 with any feedback
Gosh, I hope it also does things too!
What makes you say that
Maybe HN/ycombinator is just not interesting anymore. I saw some of you commenting that this might be similar to the famous Dropbox situation. That could not be more delusional and representative of what HN became, a meme of itself.
Rinse and repeat.
You're right though ... these YC batches are not what they used to be. AI is hot right now, so it seems YC is throwing money at anything that seems like it can at least actually do something (not that it is necessarily good). If that product doesn't get hot, who cares? Plenty more money to go around on the next batch, because one of them probably will!
I'd like to think the fact we do what we promise is exciting, but without trying the product hard to convey that well :)