MinIO repository is no longer maintained (github.com)
muragekibicho 5 hours ago [-]
I ran a moderately large open-source service, and my chronic back pain was cured the day I stopped maintaining the project.

Working for free is not fun. Having a paid offering with a free community version is not fun. Ultimately, dealing with people who don't pay for your product is not fun. I learnt this the hard way and I guess the MinIO team learnt this as well.

bojangleslover 4 hours ago [-]
Completely different situations. None of the MinIO team worked for free. MinIO is a COSS company (commercial open source software). They give a basic version of it away for free hoping that some people, usually at companies, will want to pay for the premium features. MinIO going closed source is a business decision and there is nothing wrong with that.

I highly recommend SeaweedFS. I used it in production for a long time before partnering with Wasabi. We still run SeaweedFS for scorching-hot, 1 GiB/s colocated object storage, but Wasabi is our bread-and-butter object storage now.

Ensorceled 1 hour ago [-]
> > Working for free is not fun. Having a paid offering with a free community version is not fun. Ultimately, dealing with people who don't pay for your product is not fun.

> Completely different situations. None of the MinIO team worked for free. MinIO is a COSS company (commercial open source software).

MinIO is dealing with two out of the three issues, and the company is partially providing work for free, so how is that "completely different"?

mbreese 1 hour ago [-]
The MinIO business model was a freemium model (well, Open Source + commercial support, which is slightly different). They used the free OSS version to drive demand for the commercially licensed version. It’s not like they had a free community version with users they needed to support thrust upon them — this was their plan. They weren’t volunteers.

You could argue that they got to the point where the benefit wasn’t worth the cost, but this was their business model. They would not have gotten to the point where they could have a commercial-only operation without the adoption and demand generated from the OSS version.

Running a successful OSS project is often a thankless job. Thanks for doing it. But this isn’t that.

Ensorceled 37 minutes ago [-]
> Running a successful OSS project is often a thankless job. Thanks for doing it. But this isn’t that.

No, even if you are being paid, it's a thankless, painful job to deal with demanding, entitled free users. It's worse if you are not being paid, but I'm not sure why you are asserting dealing with bullshit is just peachy if you are being paid.

hobofan 4 hours ago [-]
I can also highly recommend SeaweedFS for development purposes, where you want to test general behaviour when using S3-compatible storage. That's what I mainly used MinIO for before, and SeaweedFS, especially with their new `weed mini` command that runs all the services together in one process, is a great replacement for local development and CI purposes.
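For what it's worth, a local-dev setup along these lines might look like the sketch below. Only the `weed mini` command itself comes from the comment above; the flag name and the port are assumptions (8333 is the usual SeaweedFS S3 default) and should be checked against `weed mini -h` for your version:

```shell
# Hypothetical local-dev/CI setup; flag names and port are assumptions,
# verify against `weed mini -h`.
weed mini -dir=/tmp/seaweed &   # master, volume, filer and S3 gateway in one process
sleep 2                         # give the services a moment to start

# Any S3-compatible client can then target the local endpoint
# (dummy credentials, since no auth is configured here):
export AWS_ACCESS_KEY_ID=any AWS_SECRET_ACCESS_KEY=any
aws --endpoint-url http://localhost:8333 s3 mb s3://ci-bucket
aws --endpoint-url http://localhost:8333 s3 cp ./testdata.bin s3://ci-bucket/
```

In CI this would replace a MinIO service container with a single binary and a temp directory.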
dizhn 1 hour ago [-]
I've been using rustfs for some very light local development and it looks... fine :)
codegladiator 4 hours ago [-]
can vouch for SeaweedFS, been using it since the time it was called weedfs and my managers were like "are you sure you really want to use that?"
sshine 2 hours ago [-]
Wasabi looks like a hosted service.

Any recommendation for an in-cluster alternative in production?

Is that SeaweedFS?

jodrellblank 1 hour ago [-]
I’ve never heard of SeaweedFS, but Ceph cluster storage system has an S3-compatible layer (Object Gateway).

It’s used by CERN to build petabyte-scale storage capable of ingesting data from particle-collider experiments, and they're now up to 17 clusters and 74 PB, which speaks to its production stability. Apparently people use it down to 3-host Proxmox virtualisation clusters, in a similar place as VMware vSAN.

Ceph has been pretty good to us for ~1 PB of scalable backup storage for many years, except that it’s a non-trivial system-administration effort and needs good hardware and networking investment, and my employer wasn't fully backing that commitment. (We’re moving off it to Wasabi for S3 storage.) It also leans more towards data integrity than performance: it's great at being massively parallel and not so rapid at single-thread, high-IOPS work.
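For context, the S3 layer mentioned above goes through the Object Gateway (RGW). A minimal sketch, assuming an RGW daemon is already deployed; the hostname is a placeholder, and 7480 is the gateway's default port:

```shell
# Create an S3 user on the cluster; run on a node with an admin keyring.
# The JSON output contains the generated access_key and secret_key.
radosgw-admin user create --uid=backup --display-name="Backup user"

# Point any S3 client at the RGW endpoint with those keys
# (rgw.example.internal is a made-up hostname):
aws --endpoint-url http://rgw.example.internal:7480 s3 ls
```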

https://ceph.io/en/users/documentation/

https://docs.ceph.com/en/latest/

https://indico.cern.ch/event/1337241/contributions/5629430/a...

ranger_danger 15 minutes ago [-]
Ceph is a non-starter for me because you cannot have an existing filesystem on the disk. Previously I used GlusterFS on top of ZFS and made heavy use of Gluster's async geo-replication feature to keep two storage arrays, far apart over a slow link, in sync. This was done after getting fed up with rsync being so slow and always thrashing the disks, having to scan many TBs every day.

While there is a geo-replication feature for Ceph, I cannot keep using ZFS at the same time, and gluster is no longer developed, so I'm currently looking for an alternative that would work for my use case if anyone knows of a solution.

phoronixrly 47 minutes ago [-]
Nothing wrong? Does MinIO grant the basic freedoms of being able to run the software, study it, change it, and distribute it?

Did MinIO give its contributors the impression that it would continue being FLOSS?

ufocia 35 minutes ago [-]
Yes, the software is under the AGPL. Go forth and forkify.

The choice of AGPL tells you that they wanted to be the only commercial source of the software from the beginning.

phoronixrly 10 minutes ago [-]
> the software is under AGPL. Go forth and forkify.

No, what was MinIO is now AIStor, closed-source proprietary software. Tell me how to fork it and I will.

> they wanted to be the only commercial source of the software

The choice of AGPL tells me nothing more than what is stated in the license. And I definitely don't intend to close the source of any of my AGPL-licensed projects.

jbstack 5 hours ago [-]
There's nothing wrong at all with charging for your product. What I do take issue with, however, is convincing everyone that your product is FOSS, waiting until people undertake a lot of work to integrate your product into their infrastructure, and then doing a bait-and-switch.

Just be honest since the start that your product will eventually abandon its FOSS licence. Then people can make an informed decision. Or, if you haven't done that, do the right thing and continue to stand by what you originally promised.

Someone 2 hours ago [-]
> What I do take issue with, however, is convincing everyone that your product is FOSS, waiting until people undertake a lot of work to integrate your product into their infrastructure, and then doing a bait-and-switch.

But FOSS means “this particular set of source files is free to use and modify”. It doesn’t include “and we will keep developing and maintaining it forever for free”.

It’s only different if people, in addition to the FOSS license, promise any further updates will be under the same license and then change course.

And yes, there is a gray area where such a promise is sort-of implied, but even then, what do you prefer, the developers abandoning the project, or at least having the option of a paid-for version?

ufocia 28 minutes ago [-]
> what do you prefer, the developers abandoning the project, or at least having the option of a paid-for version?

It's not a binary choice. I prefer the developers releasing the software under a permissive license. I agree that relying on freemium maintenance is naive. The community source lives on; perhaps the community should fork it and run with it for the common good, absorbing the real costs of maintenance.

devsda 3 hours ago [-]
Everyone is quick to assert the rights granted by the license terms, and just as quick to say the authors should have chosen a better license from the start when the license doesn't fit the current situation.

License terms don't end there. There is a no-warranty clause too in almost every open source license, and it is as important as the other parts of the license. There are no promises or guarantees of updates or future versions.

direwolf20 3 hours ago [-]
They're not saying they violated the license, they're saying they're assholes. It may not be illegal to say you'll do something for free and then not do it, but it's assholish, especially if you said it to gain customers.
rofrol 2 hours ago [-]
They gave code for free, under open source, but you call them assholes if they do not release more code for free. So who is the asshole here? You or them?
skeledrew 2 hours ago [-]
There's no broken promise though. It's the users who decide+assume, on their own going in, that X project is good for their needs and they'll have access to future versions in a way they're comfortable with. The developers just go along with the decision+assumption, and may choose to break it at any point. They'd only be assholes if they'd explicitly promised the project would unconditionally remain Y for perpetuity, which is a bs promise nobody should listen to, cuz life.
devsda 2 hours ago [-]
> say you'll do something for free

I think this is where the problem/misunderstanding is. There's no "I will do/release" in OSS unless promised explicitly. Every single release/version is "I released this version. You are free to use it". There is no implied promise for future versions.

Released software is not clawed back. Everyone is free to modify(per license) and/or use the released versions as long as they please.

ufocia 25 minutes ago [-]
Customers are the ones that continue to pay. If they continue to pay, they will likely receive maintenance from the devs. If they don't, they either are no longer, or never were, customers.
ranger_danger 8 minutes ago [-]
It would be interesting to see if there could be a sustainable OSS model where customers are required to pay for the product, and that was the only way to get support for it as well.

Even if the source was always provided (and even if it were GPL), any bug reports/support requests etc. would be limited to paying customers.

I realize there is already a similar model where the product/source itself is always free and then they have a company behind it that charges for support... but in those cases they are almost always providing support/accepting bug reports for free as well. And maybe having the customer pay to receive the product itself in the first place might motivate the developers to help more than if they were just paying for a support plan or something.

puszczyk 4 hours ago [-]
> Just be honest since the start

While I agree with the sentiment, keep in mind that circumstances change over the years. What made sense (and what you've believed in) a few years ago may be different now. This is especially true when it comes to business models.

hirako2000 3 hours ago [-]
What typically happens is that your product enters the mainstream, with integrations that would yield millions once users are virtually obliged to get a license.

When a project is backed by a company, there is an ethical obligation to keep up at least maintenance. Of course, legally they can do what they wish. It isn't unfair to call it bad practice.

skeledrew 2 hours ago [-]
There's no way that maintaining something is an ethical obligation, regardless of popularity. There is only legal obligation, for commercial products.
hirako2000 24 minutes ago [-]
If you offer a tie-in, supposedly free of charge, without warning that it will end once it no longer serves a party's profit purpose, then yes.

Ethics are not obligations; they are moral principles. Not having principles doesn't send you to prison; that is why it isn't law. It makes you lose moral credit, though.

rofrol 2 hours ago [-]
There is no ethical obligation. You just want them to release new work under open source licence.
hirako2000 27 minutes ago [-]
They already had. And for what purpose, do you think?
j1elo 1 hour ago [-]
The only meaningful informed decision, but sadly a much less known one (and I think we should talk and insist more on it), is to be wary if you see a CLA. Not all CLAs do this, but most perform copyright assignment, and that's detrimental to the long-term robustness of open source.

Having a FOSS license is NOT enough. Ideally the copyright should be distributed across all contributors. That's the only way to make overall consensus a required step before relicensing (except via reimplementation).

Pick FOSS projects without CLAs that perform copyright assignment to an untrusted entity (a few exceptions apply, e.g. the FSF in the past).

timcobb 1 hour ago [-]
> Just be honest since the start that your product will eventually abandon its FOSS licence.

How does this look? How does one "just" do this? What if the whole thing was an evolution over time?

hiAndrewQuinn 5 hours ago [-]
>Just be honest since the start that your product will eventually abandon its FOSS licence. Then people can make an informed decision.

"An informed decision" is not a black or white category, and it definitely isn't when we're talking about risk pricing for B2B services and goods, like what MinIO largely was for those who paid.

Any business with financial modelling worth their salt knows that very few things which are good and free today will stay that way tomorrow. The leadership of a firm you transact with may or may not state this in words, but there are many other ways to infer the likelihood of this covertly by paying close attention.

And, if you're not paying close attention, it's probably just not that important to your own product. Which risks you consider worth tracking is a direct extension of how you view the world. The primary selling point of MinIO for many businesses was "it's cheaper than AWS for our needs". That's probably still true for many businesses, and so there's money to be made, at least in the short term.

berkes 4 hours ago [-]
"Informed decisions" mean you need to have the information.

Like with software development, we often lack the information on which we have to base architectural, technical, or business decisions.

The common solution for that is to embrace this. Defer decisions. Make changing easy once you do receive the information. And build "getting information" into the fabric. We call this "Agile", "Lean", "data driven" and so on.

I think this applies here too.

There's a very big chance that the MinIO team honestly thought they'd keep it open source, but only now gathered enough "information" to make this "informed decision".

mactavish88 1 hour ago [-]
I hear this perspective a lot in relation to open source projects.

What it fails to recognize is the reality that life changes. Shit happens. There's no way to predict the future when you start out building an open source project.

(Coming from having contributed to and run several open source projects myself)

vladms 5 hours ago [-]
Isn't this the normal sales approach for many products anyhow? One attracts a customer with unreasonable promises and features, gets them to sign a deal to integrate, and then issues appear in production that make them realize they will need to invest more.

When you start something (a startup, a FOSS project, damn, even a marriage) you might start with the best intentions and then learn/change/lose interest. I find it unreasonable to "demand" clarity "at the start" because there is no such thing.

Turning it around, any company that adopts a FOSS project should be honest and pay for something if it does not accept the idea that at some point the project will change course (which, obviously, does not guarantee much, because even if you pay for something they can decide to shut it down).

praptak 4 hours ago [-]
> I find it unreasonable to "demand" clarity "at the start" because there is no such thing.

Obviously you cannot "demand" stuff, but you can do your due diligence as the person who chooses a technical solution. Some projects have more clarity than others; for example, the Linux Foundation or the CNCF are basically companies sharing costs for stuff they all benefit from, like Linux or Prometheus monitoring, and it is highly unlikely they'd do a rug pull.

On the other end of the spectrum there are companies with a "free" version of a paid product and the incentive to make the free product crappier so that people pay for the paid version. These should be avoided.

qudat 35 minutes ago [-]
At this point I don’t trust any company that offers a core free tool with an upsell. Trials or limited access are one thing, but for a free-forever product that needs active maintaining, be skeptical.

It’s been tough for us at https://pico.sh trying to figure out the right balance between free and paid, and our north star is: how much does it cost us to maintain and support? If the answer scales with the number of users we have, then we charge for it. We also have a litmus test for abuse: can someone abuse the service? If so, we put it behind a paywall.

StopDisinfo910 4 hours ago [-]
> then doing a bait-and-switch

FOSS is not a moral contract. People working for free owe nothing to anyone. You got what's on the tin: the code is as open source once they stop as it was when they started.

The underlying assumption of your message is that you are somehow entitled to their continued labour which is absolutely not the case.

growse 4 hours ago [-]
It's a social contract, which for many people is a moral contract.
627467 3 hours ago [-]
Show me a FOSS license where a commitment to indefinite maintenance is promised.
account42 3 hours ago [-]
Social contracts are typically unwritten so the license would be the wrong place to look for it.
skeledrew 2 hours ago [-]
If it's neither written nor explicitly spoken, then it's not a contract of any kind. It's just an expectation, and usually a naive one.
dbacar 7 minutes ago [-]
It was not just an expectation: when they started, they did a lot to lure many into the ecosystem. When you release it free, wait for the momentum to build, and then cut it off, that is something else. And worse, they did it in a very short time. Check out Elasticsearch: same route, but they did not abandon the 7.x release like this.
account42 48 minutes ago [-]
A social contract isn't a legal contract to begin with, but even for those, "written or explicitly spoken" is not a hard requirement.
StopDisinfo910 4 hours ago [-]
Where is this mythical social contract found? I stand by my point: it's a software license, not a marriage.

Free users certainly would like it to be a social contract like I would like to be gifted a million dollars. Sadly, I still have to work and can't infinitely rely on the generosity of others.

growse 54 minutes ago [-]
The social contract is found (and implicitly negotiated) in the interactions between humans, ie: society.
account42 2 hours ago [-]
Where is the contract to return the shopping cart to the corral?
StopDisinfo910 2 hours ago [-]
Your analogy doesn't make sense. You are getting benefits from using the shopping cart, and you bring it back because that's expected as part of the exchange. You bring the cart back to where you took it, which is a low-effort commitment entirely proportional to what you got from it.

Free software developers are gifting you something. Expecting indefinite free work is not mutual respect. That's entitlement.

The commons is still there. You have the code. Open source is not a perpetual service agreement. It is not indentured servitude to the community.

Stop trying to guilt trip people into giving you free work.

imtringued 23 minutes ago [-]
In this context the social contract would be an expectation that specifically software developers must return the shopping cart for you, but you would never expect the same from cashiers, construction workers, etc.

If the software developer doesn't return your cart, he betrayed the social contract.

This sounds very manipulative and narcissistic.

PunchyHamster 4 hours ago [-]
It's still a bait and switch, considering they started removing features before the abandonment.
Ekaros 4 hours ago [-]
Users can fork it from point they started removing features. Fully inside social, moral and spiritual contract of open source.
dangus 2 hours ago [-]
This isn’t about people working for free.

Nobody sensible is upset when a true FOSS “working for free” person hangs up their boots and calls it quits.

The issue here is that these are commercial products that abuse the FOSS ideals to run a bait and switch.

They look like they are open source in their growth phase then they rug pull when people start to depend on their underlying technology.

The company still exists and still makes money, but they stopped supporting their open source variant to try and push more people to pay, or they changed licenses to be more restrictive.

It has happened over and over, just look at Progress Chef, MongoDB, ElasticSearch, Redis, Terraform, etc.

skeledrew 1 hour ago [-]
In this particular case, it's the fault of the "abused" for even seeing themselves as such in the first place. Many times it's not even a "bait-and-switch", but reality hitting. But even if it was, just deal with it and move on.
imtringued 15 minutes ago [-]
This is definitely the case because the accusations and supposed social contract seem extremely one-sided towards free riding.

Nobody here is saying they should donate the last version of MinIO to the Apache software foundation under the Apache license. Nobody is arguing for a formalized "end of life" exit strategy for company oriented open source software or implying that such a strategy was promised and then betrayed.

The demand is always "keep doing work for me for free".

nubinetwork 2 hours ago [-]
> bait and switch

Is it really though? They're replacing one product with another, and the replacement comes with a free version.

yread 2 hours ago [-]
Easy. If you see open source software maintained by a company, assume they will make it closed source or enshittify the free version. If it's maintained by an individual assume he will get bored with it. Plan accordingly. It may not happen and then you'll be pleasantly surprised
jillesvangurp 4 hours ago [-]
It's part of the due diligence process for users to decide if they can trust a project.

I use a few simple heuristics:

- Evaluate who contributes regularly to a project. The more diverse this group is, the better. If it's a handful of individuals from one company, see the other points. This doesn't have to be a show stopper. If it's a bit niche and only a handful of people contribute, you might want to think about what happens when these people stop doing that (as is happening here).

- Look at required contributor agreements and license. A serious red flag here is if a single company can effectively decide to change the license at any point they want to. Major projects like Terraform, Redis, Elasticsearch (repeatedly), etc. have exercised that option. It can be very disruptive when that happens.

- Evaluate whether the license allows you to do what you need to do. Licenses like the AGPLv3 (which min.io used here) can be problematic on that front and come with restrictions that corporate legal departments generally don't like. In the end, choosing to use software is a business decision you take. Just make sure you understand what you are getting into and that this is OK with your company and compatible with business goals.

- Permissive licenses (MIT, BSD, Apache, etc.) are popular with larger companies and widely used on GitHub. They facilitate a neutral ground for competitors to collaborate. One aspect you should be aware of is that the very feature that makes them popular also means that contributors can take the software and create modifications under a different license. They generally can't re-license existing software retroactively. But companies like Elastic have switched Elasticsearch from Apache 2.0 to closed source, and recently to AGPLv3. OpenSearch remains Apache 2.0 and has a thriving community at this point.

- Look at the wider community behind a project. Who runs it; how professional are they (e.g. a foundation), etc. How likely would it be to survive something happening to the main company behind a thing? Companies tend to be less resilient than the open source projects they create over time. They fail, are subject to mergers and acquisitions, can end up in the hands of hedge funds, or big consulting companies like IBM. Many decades old OSS projects have survived multiple such events. Which makes them very safe bets.

None of these points have to be decisive. If you really like a company, you might be willing to overlook their less than ideal licensing or other potential red flags. And some things are not that critical if you have to replace them. This is about assessing risk and balancing the tradeoff of value against that.

Forks are always an option when bad things happen to projects. But that only works if there's a strong community capable of supporting such a fork and a license that makes that practical. The devil is in the details. When Redis announced their license change, the creation of Valkey was a foregone conclusion. There was just no way that wasn't going to happen. I think it only took a few months for the community to get organized around that. That's a good example of a good community.

adamcrow64 5 hours ago [-]
exactly
alexpadula 4 hours ago [-]
I don’t feel that way at all. I’ve been maintaining open source storage systems for a few years. I love it. Absolutely love it. I maintain TidesDB, a storage engine. I also have back pain, but that doesn’t mean you can’t do what you love.
XCSme 16 minutes ago [-]
Thanks, you finally settled my dilemma of whether I should have a free version of UXWizz...
suyash 4 hours ago [-]
If your main motivation for creating/maintaining a popular open source project was to make money, then you don't really understand the open source ethos.
skeledrew 1 hour ago [-]
Even if motivation isn't about making money, people still need to eat, and deal with online toxicity.
krystalgamer 4 hours ago [-]
it's not about the money. for large open source projects you need to allocate time to deal with the community. for someone that just wants to put code out there that is very draining and unpleasant.

most projects won't ever reach that level though.

imiric 3 hours ago [-]
> it's not about the money

OP sure makes it sound like it's about the money.

> for someone that just wants to put code out there that is very draining and unpleasant.

I never understood this. Then why publish the code in the first place? If the goal is to help others, then the decent thing would be to add documentation and support the people who care enough to use your project. This doesn't mean bending to all their wishes and doing work you don't enjoy, but a certain level of communication and collaboration is core to the idea of open source. Throwing some code over the fence and forgetting about it is only marginally better than releasing proprietary software. I can only interpret this behavior as self-serving for some reason (self-promotion, branding, etc.).

duckmysick 3 hours ago [-]
Most open source projects start small. The author writes code that solves some issue they have. Likely, someone else has the same problem and they would find the code useful. So it's published. For a while it's quiet, but one day a second user shows up and they like it. Maybe something isn't clear or they have a suggestion. That's reasonable and supporting one person doesn't take much.

Then the third user shows up. They have an odd edge case and the code isn't working. Fixing it will take some back and forth but it still can be done in a respectable amount of time. All is good. A few more users might show up, but most open source projects will maintain a small audience. Everyone is happy.

Sometimes, projects keep gaining popularity. Slowly at first, but the growth in interest is there. More bug reports, more discussions, more pull requests. The author didn't expect it. What was doable before takes more effort now. Even if the author adds contributors, they are now a project and a community manager. It requires different skills and a certain mindset. Not everyone is cut out for this. They might even handle a small community pretty well, but at a certain size it gets difficult.

The level of communication and collaboration required can only grow. Not everyone can deal with this and that's ok.

imiric 35 minutes ago [-]
All of that sounds reasonable. But it also doesn't need to be a reason to find maintaining OSS very draining or unpleasant, as GP put it.

First of all, when a project grows, its core team of maintainers can also grow, so that the maintenance burden can be shared. This is up to the original author(s) to address if they think their workload is a problem.

Secondly, and coming back to the post that started this thread, the comment was "working for free is not fun", implying that if people paid for their work, then it would be "fun". They didn't complain about the amount of work, but about the fact that they weren't financially compensated for it. These are just skewed incentives to have when working on an open source project. It means that they would prioritize support of paying customers over non-paying users, which indirectly also guides the direction of the project, and eventually leads to enshittification and rugpulls, as in MinIO's case.

The approach that actually makes open source projects thrive is to see it as an opportunity to build a community of people who are passionate about a common topic, and deal with the good and the bad aspects as they come. This does mean that you will have annoying and entitled users, which is the case for any project regardless of its license, but it also means that your project will be improved by the community itself, and that the maintenance burden doesn't have to be entirely on your shoulders. Any successful OSS project in history has been managed this way, while those that aren't remain a footnote in some person's GitHub profile, or are forked by people who actually understand open source.

skeledrew 1 hour ago [-]
The person "throwing" the software has 0 obligation to any potential or actual users of said software. Just the act of making it available, even without any kind of license, is already benevolent. Anything further just continues to add to that benevolence, and nothing can take away from it, not even if they decide to push a malware-ridden update.

There is obligation to a given user only if it's explicitly specified in a license or some other communication to which the user is privy.

orphea 2 hours ago [-]

> but a certain level of communication and collaboration is core to the idea of open source.

Ugh, no. Open source is "I made something cool, here, you can have it too"; anything beyond that is your own expectations.
account42 2 hours ago [-]
> I never understood this. Then why publish the code in the first place? If the goal is to help others, then the decent thing would be to add documentation and support the people who care enough to use your project.

Because these things take entirely different skill sets and the latter might be a huge burden for someone who is good at the former.

bdauvergne 2 hours ago [-]
Who gave you the right to "decent" things anyway? Yeah, it would be cool, but do you have any legal/social/moral right to it? Absolutely not.
jamespo 3 hours ago [-]
That collaboration goes both ways, or not as is often the case.
einpoklum 5 hours ago [-]
> Ultimately, dealing with people who don't pay for your product is not fun.

I find it the other way around. I feel a bit embarrassed and stressed out working with people who have paid for a copy of software I've made (which admittedly is rather rare). When they haven't paid, every exchange is about what's best for humanity and the public in general, i.e. they're not supposed to get some special treatment at the expense of anyone else, and nobody has a right to lord over the other party.

berkes 4 hours ago [-]
You can achieve something like this with a pricing strategy.

As DHH and Jason Fried discuss in their books REWORK and It Doesn’t Have to Be Crazy at Work, and on their blog:

> The worst customer is the one you can’t afford to lose. The big whale that can crush your spirit and fray your nerves with just a hint of their dissatisfaction.

(It Doesn’t Have to Be Crazy at Work)

> First, since no one customer could pay us an outsized amount, no one customer’s demands for features or fixes or exceptions would automatically rise to the top. This left us free to make software for ourselves and on behalf of a broad base of customers, not at the behest of any single one. It’s a lot easier to do the right thing for the many when you don’t fear displeasing a few super customers could spell trouble.

(https://signalvnoise.com/svn3/why-we-never-sold-basecamp-by-...)

But this mechanism proposed by DHH and Fried only removes differences amongst the paying customers, not between "paying" and "non-paying".

I'd think, however, that there are some good ideas in there to manage that difference as well. For example, let all customers, paying or non-paying, go through the exact same flow for support, features, bugs, etc., so these aren't the distinctive "drivers" of why people pay (e.g. "you must be a paying customer to get support"). Obviously it depends on the service, but if you have other distinctive features that people would pay for (e.g. a hosted version), that could work out.

jcgl 1 hours ago [-]
I think this is a good point and a true point.

However, I understood GP's mention of "embarrassment" to speak more to their own feelings of responsibility. Which would be more or less decoupled from the pressure that a particular client exerts.

StopDisinfo910 4 hours ago [-]
People who paid for your software don't really have a right to lord it over you. You can choose to be accommodating because they are your customers, but you hold approximately as much if not more weight in the relationship. They need your work. It's not so much special treatment as it is commissioned work.

People who don't pay are often not really invested. The link between more work and more costs doesn't exist for them. That can make them quite a pain in my experience.

darkwater 3 hours ago [-]
I'm probably projecting the idea I have of myself here but if someone says

> every exchange is about what's best for humanity and the public in general

it means that they are the kind of individual who deeply cares about things working and relationships being good and fruitful, and thus if they made someone pay for something, they think they must listen to them and comply with their requests, because, well, they are a paying customer, the customer is always right, they gave me their money, etc.

StopDisinfo910 2 hours ago [-]
There is no tension there.

You can care about the work and your customer while still setting healthy boundaries, and accept that wanting to do good work for them doesn't mean you are beholden to them.

Business is fundamentally about partnership: transactional, moneyed partnership, but partnership still. It's best when both suppliers and customers are aware of that; like any partnership, it is structured and can be ended by either partner. You don't technically owe them more than what's in the contract, and that puts a hard stop which is easy to identify if needed.

account42 2 hours ago [-]
Legally speaking, accepting payment makes it very clear that there is a contract under which you have obligations, both explicitly spelled out and implied.
ForHackernews 4 hours ago [-]
Maybe open source developers should stop imagining the things they choose to give away for free as "products". I maintain a small open source library. It doesn't make any money, it will never make any money, people are free to use or not as they choose. If someone doesn't like the way I maintain the repository they are free to fork it.
palata 3 hours ago [-]
Agreed, but that's only half of it. The second half is that open source users should stop imagining the things they choose to use for free as "products".

Users of open source often feel entitled, open issues like they would open a support ticket for a product they actually paid for, and don't hesitate to show their frustration.

Of course that's not all the users, but the maintainers only see those (the happy users are usually quiet).

I have open sourced a few libraries under a weak copyleft licence, and every single time, some "people from the community" have been putting a lot of pressure on me, e.g. claiming everywhere that the project was unmaintained/dead (it wasn't, I was just working on it in my free time on a best-effort basis) or that anything not permissive had "strings attached" and was therefore "not viable", etc.

The only times I'm not getting those is when nobody uses my project or when I don't open source it. I have been open sourcing less of my stuff, and it's a net positive: I get less stress, and anyway I wasn't getting anything from the happy, quiet users.

EdiX 2 hours ago [-]
It used to be that annoying noobs were aggressively told to RTFM, their feelings got hurt and they would go away. That probably was too harsh. But then came corporate OSS and with it corporate HR in OSS. Being the BOFH was now bad, gatekeeping was bad. Now everyone feels entitled to the maintainer time and maintainers burn out.

It's a trade off, we made it collectively.

account42 2 hours ago [-]
I think this gets complicated with larger open source projects where contributors change over time. By taking over stewardship of something that people depend on, you take on some obligation not to intentionally fuck those people over, even if you are not paid for it.

This is also true to some extent when it's a project you started. I don't think you should, e.g., be able to point to the typical liability disclaimer in free software licenses when you add features that intentionally harm your users.

imiric 3 hours ago [-]
It's remarkable how many people wrongly assume that open source projects can't be monetized. Business models and open source are orthogonal but compatible concepts. However, if your primary goal while maintaining an open source project is profiting financially from it, your incentives are skewed. If you feel this way, you should also stop using any open source projects, unless you financially support them as well.

Good luck with the back pain.

samrith 4 hours ago [-]
[dead]
PhilippGille 6 hours ago [-]
mickael-kerjean 5 hours ago [-]
I'm the author of another option (https://github.com/mickael-kerjean/filestash) which has an S3 gateway that exposes itself as an S3 server but is just a proxy that forwards your S3 calls onto anything else, like SFTP, local FS, FTP, NFS, SMB, IPFS, Sharepoint, Azure, a git repo, Dropbox, Google Drive, another S3, ... It's entirely stateless and acts as a proxy translating S3 calls onto whatever you have connected on the other end.
GCUMstlyHarmls 43 minutes ago [-]
Is this some dark pattern or what?

https://imgur.com/a/WN2Mr1z (UK: https://files.catbox.moe/m0lxbr.png)

I clicked settings, this appeared, clicking away hid it, but now I can't see any setting for it.

The nasty way of reading that popup, my first way of reading it, was that filestash sends crash reports and usage data, and I have the option to have it not be shared with third parties, but that it is always sent, and it defaults to sharing with third parties. The OK is always consenting to share crash reports and usage.

I'm not sure if it's actually operating that way, but if it's not the language should probably be

    Help make this software better by sending crash reports and anonymous usage statistics.

    Your data is never shared with a third party.

    [ ] Send crash reports & anonymous usage data.
    

    [ OK ]
Zambyte 52 minutes ago [-]
Another alternative that follows this paradigm is rclone

https://rclone.org/commands/rclone_serve/

havnagiggle 4 hours ago [-]
I was looking at running [versitygw](https://github.com/versity/versitygw) but filestash looks pretty sweet! Any chance you're familiar with Versity and how the S3 proxy may differ?
mickael-kerjean 4 hours ago [-]
I did a project with Monash university who were using Versity on their storage to handle multi tiers storage on their 12PB cluster, with glacier like capabilities on tape storage with a robot picking up data on their tape backup and a hot storage tier for better access performance, lifecycle rules to move data from hot storage to cold, etc.... The underlying storage was all Versity and they had Filestash working on top, effectively we did some custom plugins so you could recall the data on their own selfhosted glacier while using it through the frontend so their user had a Dropbox like experience. Depending on what you want to do they can be very much complimentary
antongribok 2 hours ago [-]
Monash University is also a Ceph Foundation member.

They've been active in the Ceph community for a long time.

I don't know any specifics, but I'm pretty sure their Ceph installation is pretty big and used to support critical data.

cookiengineer 4 hours ago [-]
Didn't know about filestash yet. Kudos, this framework seems to be really well implemented, I really like the plugin and interface based architecture.
PunchyHamster 4 hours ago [-]
from my experiences

rustfs has promise: it supports a lot of features and even lets you bring your own secret/access keys (if you want to migrate without changing creds on clients), but it's very much still in development, and they have already prepared for a bait-and-switch in the code ( https://github.com/rustfs/rustfs/blob/main/rustfs/src/licens... )

Ceph is closest to the actual S3 feature set, but it's a lot to set up. It pretty much wants a few local servers; you can replicate to another site, but each site on its own is pretty latency-sensitive between storage servers. It also offers many other features besides, as S3 is just built on top of their object store, which can also be used for VM storage or even a FUSE-compatible FS.

Garage is great, but it is very much "just to store stuff". It lacks features on both the S3 side (S3 has a bunch of advanced ACLs many of the alternatives don't support, and stuff for HTTP headers too) and the management side (stuff like "allow this access key to access only a certain path on the bucket" is impossible, for example). Also, the clustering feature is very WAN-aware, unlike Ceph, where you pretty much have to have all your storage servers in the same rack if you want a single site to have replication.

runiq 1 hours ago [-]
> [rustfs] have already prepared for bait-and-switch in code

There's also a CLA with full copyright assignment, so yeah, I'd steer clear of that one: https://github.com/rustfs/rustfs/blob/main/CLA.md

antongribok 2 hours ago [-]
Not sure what you mean about Ceph wanting to be in a single rack.

I run Ceph at work. We have some clusters spanning 20 racks in a network fabric that has over 100 racks.

In a typical Leaf-Spine network architecture, you can easily have sub 100 microsecond network latency which would translate to sub millisecond Ceph latencies.

We have one site that is Leaf-Spine-SuperSpine, and the difference in network latency is barely measurable between machines in the same network pod and between different network pods.

wvh 5 hours ago [-]
Apart from Minio, we tried Garage and Ceph. I think there's definitely a need for something that interfaces using S3 API but is just a simple file system underneath, for local, testing and small scale deployments. Not sure that exists? Of course a lot of stuff is being bolted onto S3 and it's not as simple as it initially claimed to be.
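To illustrate how small the core really is, here is a toy sketch (hypothetical, stdlib-only, no auth, no listing, never for production) that maps S3-style PUT/GET/DELETE onto files under a local directory:

```python
# Toy "S3 over a directory": object GET/PUT/DELETE only. No auth, no
# SigV4, no ListObjects, no multipart - purely for local smoke tests.
import pathlib
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

ROOT = pathlib.Path("./s3data")  # objects live at ROOT/<bucket>/<key>

class Handler(BaseHTTPRequestHandler):
    def log_message(self, fmt, *args):
        pass  # keep the console quiet

    def _path(self):
        # Map /bucket/key to a file under ROOT, refusing path escapes.
        p = (ROOT / self.path.lstrip("/")).resolve()
        if ROOT.resolve() not in p.parents:
            raise PermissionError(self.path)
        return p

    def do_PUT(self):
        p = self._path()
        p.parent.mkdir(parents=True, exist_ok=True)
        length = int(self.headers.get("Content-Length", 0))
        p.write_bytes(self.rfile.read(length))
        self.send_response(200)
        self.send_header("ETag", '"0"')  # real S3 returns a content hash here
        self.end_headers()

    def do_GET(self):
        p = self._path()
        if not p.is_file():
            self.send_response(404)
            self.end_headers()
            return
        body = p.read_bytes()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_DELETE(self):
        p = self._path()
        if p.is_file():
            p.unlink()
        self.send_response(204)
        self.end_headers()

def run(port=9000):
    # Blocks forever, serving ./s3data on localhost:<port>.
    ThreadingHTTPServer(("127.0.0.1", port), Handler).serve_forever()
```

Real clients immediately want listing, multipart uploads, and SigV4 auth, which is where the "simple" part stops; still, something this size covers a surprising amount of local testing.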
hobofan 4 hours ago [-]
SeaweedFS's new `weed mini` command[0] does a great job at that. Previously our most flakey tests in CI were due to MinIO sometimes not starting up properly, but with `weed mini` that was completely resolved.

[0]: https://github.com/seaweedfs/seaweedfs/wiki/Quick-Start-with...

egorfine 4 hours ago [-]
> for local, testing and small scale deployments

Yes I'm looking for exactly that and unfortunately haven't found a solution.

Tried garage, but they require running a proxy for CORS, which makes signed browser uploads a practical impossibility for the development machine. I had no idea that such a simple popular scenario is in fact too exotic.
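For context on why CORS is the blocker: in the presigned-upload flow, your backend signs a URL and the browser PUTs the file straight to the storage endpoint, which is a cross-origin request. A minimal stdlib sketch of the SigV4 query-string signing involved (endpoint, bucket, and keys below are placeholders; a real deployment should use an SDK):

```python
import hashlib
import hmac
import urllib.parse
from datetime import datetime, timezone

def presign_put(endpoint, bucket, key, access_key, secret_key,
                region="us-east-1", expires=3600, now=None):
    """Build an AWS SigV4 presigned PUT URL (query-string auth)."""
    now = now or datetime.now(timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    scope = f"{datestamp}/{region}/s3/aws4_request"
    host = urllib.parse.urlparse(endpoint).netloc
    canonical_uri = f"/{bucket}/{urllib.parse.quote(key)}"

    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    canonical_query = "&".join(
        f"{urllib.parse.quote(k, safe='')}={urllib.parse.quote(v, safe='')}"
        for k, v in sorted(params.items())
    )
    # Canonical request: method, URI, query, headers, signed headers,
    # and "UNSIGNED-PAYLOAD" since the body isn't known when signing.
    canonical_request = "\n".join([
        "PUT", canonical_uri, canonical_query,
        f"host:{host}\n", "host", "UNSIGNED-PAYLOAD",
    ])
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256", amz_date, scope,
        hashlib.sha256(canonical_request.encode()).hexdigest(),
    ])

    def h(key_bytes, msg):
        return hmac.new(key_bytes, msg.encode(), hashlib.sha256).digest()

    # Derive the signing key: secret -> date -> region -> service -> request.
    signing_key = h(h(h(h(b"AWS4" + secret_key.encode(), datestamp),
                        region), "s3"), "aws4_request")
    signature = hmac.new(signing_key, string_to_sign.encode(),
                         hashlib.sha256).hexdigest()
    return f"{endpoint}{canonical_uri}?{canonical_query}&X-Amz-Signature={signature}"
```

The browser then issues a PUT to that URL with the file body; the storage server has to answer the CORS preflight for your page's origin, which is exactly the part that needs a proxy with Garage.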

status_quo69 31 minutes ago [-]
I've been looking into rclone which can serve s3 in a basic way https://rclone.org/commands/rclone_serve_s3/
PunchyHamster 4 hours ago [-]
Minio started like that, but they migrated away from it. It's just hard to keep that up once you start implementing advanced S3 features (versioning/legal hold, metadata, etc.) and storage features (replication/erasure coding).
seddonm1 4 hours ago [-]
What about s3 stored in SQLite? https://github.com/seddonm1/s3ite

This was written to store many thousands of images for machine learning

magicalhippo 4 hours ago [-]
From what I can gather, S3Proxy[1] can do this, but it relies on a Java library that's no longer maintained[2], so it's not really much better.

I too think it would be great to have a simple project that can serve S3 from the filesystem, for local deployments that don't need balls-to-the-wall performance.

[1]: https://github.com/gaul/s3proxy

[2]: https://jclouds.apache.org/

memset 4 hours ago [-]
9dev 4 hours ago [-]
WAY too much. I just need a tiny service that translates common S3 ops into filesystem ops and back.
dijit 5 hours ago [-]
Would be cool to understand the tradeoffs of the various block storage implementations.

I'm using seaweedfs for a single-machine S3-compatible storage, and it works great. Though I'm missing out on a lot of administrative nice-to-haves (like easy access controls and a good understanding of capacity vs usage, error rates and so on... this could be a PEBKAC issue though).

Ceph I have also used, and it seems to care a lot more about being distributed. If you have fewer than 4 hosts for storage, it feels like it scoffs at you during setup. I was also unable to get it to perform amazingly, though to be fair I was doing it via K8S/Rook atop the Flannel CNI, which is an easy-to-use CNI for toy deployments, not performance-critical systems - so that could be my bad. I would trust a Ceph deployment with data integrity though; it just gives me that feel of "whoever worked on this really understood distributed systems"... but I can't put that feeling into any concrete data.

GeertJohan 5 hours ago [-]
That's a great list. I've just opened a pull request on the minio repository to add these to the list of alternatives.

https://github.com/minio/minio/pull/21746

augusto-moura 50 minutes ago [-]
I believe the Minio developers are aware of the alternatives; having only their own commercial solution listed as an alternative might be a deliberate decision. But there's no harm in trying to get the PR merged.
bluepuma77 2 hours ago [-]
The mentioned AIStor "alternative" is on the min.io website. It seems like a re-brand. I doubt they will link to competing products.
hinata08 5 hours ago [-]
While I do approve of that MR, doing it is ironic considering the topic was "MinIO repository is no longer maintained"

Let's hope the editor has second thoughts on some parts

GeertJohan 4 hours ago [-]
I'm well aware of the irony surrounding minio, adding a little bit more doesn't hurt :P
justincormack 5 hours ago [-]
Wrote a bit about differences between rustfs and garage here https://buttondown.com/justincormack/archive/ignore-previous... - since then rustfs fixed the issue I found. They are for very different use cases. Rustfs really is close to a minio rewrite.
PunchyHamster 4 hours ago [-]
there is one thing that worries me about rustfs: https://github.com/rustfs/rustfs/blob/main/rustfs/src/licens...

I expect rugpull in the future

_bobm 47 minutes ago [-]
hah, good on them.

nice catch.

dizhn 1 hours ago [-]
Both rustfs and seaweedfs are pretty pretty good based on my light testing.
courtcircuits 5 hours ago [-]
From my experience, Garage is the best replacement for MinIO *in a dev environment*. It provides a pretty good CLI that makes automated setup easier than MinIO. However, in a production environment, I guess Ceph is still the best because of how prominent it is.
egorfine 4 hours ago [-]
Garage doesn't support CORS, which makes it impossible to use for development scenarios where web site visitors PUT files to pre-signed URLs.
courtcircuits 2 hours ago [-]
Yep, I know. I had to build a proxy for S3 which supports custom pre-signed URLs. In my case it was worth it because my team needs to verify uploaded content for security reasons. But in most cases I guess you can't really be bothered deploying a proxy just for CORS.

https://github.com/beep-industries/content

0xUndefined 5 hours ago [-]
Had a great experience with Garage as an easy-to-set-up distributed S3 cluster for home lab use (connecting a bunch of labs run by friends in a shared cluster via tailscale/headscale). They offer an "eventual consistency" mode (consistency_mode = dangerous is the setting, so perhaps don't use it for your 7-nines SaaS offering) where your local S3 node will happily accept (and quickly process) requests and then replicate them to the other servers later.

Overall a great philosophy (targeted at self-hosting/independence) and clear, easy maintenance: nothing fancy, an easy-to-understand architecture and design, and good operation instructions.
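For reference, those knobs live in garage.toml. A minimal fragment (paths and the secret are placeholders, and key names here reflect recent Garage versions; check the reference manual for your release):

```toml
metadata_dir = "/var/lib/garage/meta"
data_dir     = "/var/lib/garage/data"

replication_factor = 3
# "consistent" is the default; "dangerous" trades read/write quorums
# for fast local acks that replicate to the other nodes later.
consistency_mode = "dangerous"

rpc_bind_addr = "[::]:3901"
rpc_secret = "<shared-hex-secret>"

[s3_api]
s3_region = "garage"
api_bind_addr = "[::]:3900"
```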

lhaussknecht 5 hours ago [-]
We are using rustfs for our simple use cases as a replacement for minio. Very slim footprint and very fast.
philipwhiuk 4 hours ago [-]
To be clear AIStor is built by the MinIO team so this is just an upsell.
ElDji 2 hours ago [-]
I successfully migrated from MinIO to Ceph, which I highly recommend. Along the way, I tested SeaweedFS, which looked promising. However, I ran into a strange bug, and after diagnosing it with the help of Claude, I realized the codebase was vibe-coded and riddled with a staggering number of structural errors. In my opinion, SeaweedFS should absolutely not be used for anything beyond testing — otherwise you're almost certain to lose data.
merpkz 5 hours ago [-]
I just bit the bullet last week and figured we are going to migrate our self-hosted minio servers to ceph instead. So far a 3-server ceph cluster has been set up with cephadm, and the last minio server is currently mirroring its ~120TB of buckets to the new cluster at a whopping 420MB/s - should finish any day now. The complexity of ceph and its cluster nature is of course a bit scary at first compared to minio - a single Go binary with minimal configuration - but after learning the basics it should be smooth sailing. What's neat is that ceph allows expanding clusters: just throw more storage servers at it, in theory at least; not sure where the ceiling is for that yet. Shame minio went that way, it had a really neat console before they cut it out. I also contemplated garage, but it seems elasticsearch is not happy with that S3 solution for snapshots, so ceph it is.
lima 4 hours ago [-]
It's complex, but Ceph's storage and consensus layer is battle-tested and a much more solid foundation for serious use. Just make sure that your nodes don't run full!
antongribok 2 hours ago [-]
Make sure you have solid Linux system monitoring in general. About 50% of running Ceph successfully at scale is just basic, solid system monitoring and alerting.
axegon_ 5 hours ago [-]
We all saw that coming. For quite some time they have been anything but transparent or open, vigorously removing even mild criticism of any decisions they were making from github with no further explanation, locking comments, etc. No one who's been following the development and has been somewhat reliant on min.io is surprised. Personally, the moment I saw the "maintenance" mode, I rushed to switch to garage. I have a few features ready that I need to pack into a PR, but I haven't had time to get to that. I should probably prioritize it.
3r7j6qzi9jvnve 6 hours ago [-]
See https://news.ycombinator.com/item?id=46136023 - MinIO is now in maintenance-mode

It was pretty clear they pivoted to their closed source repo back then.

paulkre 5 hours ago [-]
Maintenance-mode is very different from "THIS REPOSITORY IS NO LONGER MAINTAINED".
jychang 5 hours ago [-]
Yes, the difference is the latter means "it is no longer maintained", and the former is "they claim to be maintaining it but everyone knows it's not really being maintained".
black3r 4 hours ago [-]
in theory "maintenance mode" should mean that they still deal with security issues and "no longer maintained" that they don't even do that anymore...

unless a security issue is reported it does feel very much the same...

entuno 2 hours ago [-]
"Critical security fixes may be evaluated on a case-by-case basis" didn't exactly give much confidence that they'd even be doing that.
embedding-shape 5 hours ago [-]
Given the context is a for-profit company who is moving away from FOSS, I'm not sure the distinction matters so much, everyone understands what the first one means already.
spapas82 1 hours ago [-]
Has anybody actually tried AIStor? Is it possible to migrate/upgrade from a minio installation to AIStor? It seems to be very simple, just change the binary from minio to aistor: https://docs.min.io/enterprise/aistor-object-store/upgrade-a...

Is AIStor Free really free like they claim here https://www.min.io/pricing, i.e

  Free
  For developers, researchers, enthusiasts, small organizations, and anyone comfortable with a standalone deployment.
  Full-featured, single-node deployment architecture
  Self-service community Slack and documentation support
  Free of charge
I could use that if it didn't have hidden costs or obligations.
welcome_dragon 60 minutes ago [-]
Fool me once ...
valyala 4 hours ago [-]
If you are struggling with observability solutions that require object storage for production setups after such news (i.e. Thanos, Loki, Mimir, Tempo), then try alternatives without this requirement, such as VictoriaMetrics, VictoriaLogs and VictoriaTraces. They scale to petabytes of data on regular block storage, and they provide higher performance and availability than systems that depend on manually managed object storage such as MinIO.
lucideer 5 hours ago [-]
This is timely news for me - I was just standing up some Loki infrastructure yesterday & following Grafana's own guides on object storage (they recommend minio for non-cloud setups). I wasn't previously experienced with minio & would have completely missed the maintenance status if it wasn't for Checkov nagging me about using latest tags for images & having to go searching for release versions.

So far I've switched to Rustfs, which seems like a very nice project, though <24hrs is hardly an evaluation period.

valyala 4 hours ago [-]
Why do you need a non-trivial dependency on object storage for a logs database in the first place?

Object storage has advantages over regular block storage if it is managed by a cloud provider with a proven record on durability, availability and "infinite" storage space at low cost, such as S3 at Amazon or GCS at Google.

Object storage has zero advantages over regular block storage if you run it yourself:

- It doesn't provide "infinite" storage space - you need to regularly monitor and manually add new physical storage to the object storage.

- It doesn't provide high durability and availability. It has lower availability compared to regular locally attached block storage because of the complicated coordination of object storage state between storage nodes over the network. It usually has lower durability than object storage provided by cloud hosting. If some data is corrupted or lost on the underlying hardware storage, there is little chance it will be properly and automatically recovered by a DIY object storage.

- It is more expensive because of higher overhead (and, probably, half-baked replication) compared to locally attached block storage.

- It is slower than locally attached block storage because of much higher network latency compared to accessing local storage. The latency difference is 1000x: 100ms for object storage vs 0.1ms for local block storage.

- It is much harder to configure, operate and troubleshoot than block storage.

So I'd recommend taking a look at other databases for logs, which do not require object storage for large-scale production setups. For example, VictoriaLogs. It scales to hundreds of terabytes of logs on a single node, and it can scale to petabytes of logs in cluster mode. Both modes are open source and free to use.

Disclaimer: I'm the core developer of VictoriaLogs.

lucideer 1 hours ago [-]
> Object storage has zero advantages over regular block storage if you run it on yourself

Worth adding, this depends on what's using your block storage / object storage. For Loki specifically, there are known edge-cases with large object counts on block storage (this isn't related to object size or disk space) - this obviously isn't something I've encountered & I probably never will, but they are documented.

For an application I had written myself, I can see clearly that block storage is going to trump object storage for all self-hosted use cases, but for 3P software I'm merely administering, I have less control over its quirks & those pros vs cons are much less clear-cut.

lucideer 4 hours ago [-]
Initially I was just following recommendations blindly - I've never run Loki off-cloud before, so my typical approach to learning a system is to start with the defaults & tweak/add/remove components as I learn it. Grafana's docs use object storage everywhere, so it's a lot easier when you're aligned: you can rely more heavily on config parity.

While I try to avoid complexity, idiomatic approaches have their advantages; it's always a trade-off.

That said, my first instinct when I saw minio's status was to use file storage, but the rustfs setup has been pretty painless so far. I might still remove it, we'll see.

singularfutur 3 hours ago [-]
COSS companies want it both ways. Free community contributions and bug reports during the growth phase. Then closed source once they've captured enough users. The code you run today belongs to you. The roadmap belongs to their investors.
nephihaha 3 hours ago [-]
Duolingo used unpaid labour to build its resources. Now it charges money for premium
simonw 2 hours ago [-]
I wonder how many of the 504 contributors listed on GitHub would still have contributed their (presumably) free labor if they had known the company would eventually abandon the open source version like this while continuing to offer their paid upgraded versions.
ricardobeat 42 minutes ago [-]
It’s not the first time this happens, and won’t be the last.

If there is a real community around it, forking and maintaining an open edition will be a no-brainer.

danirod 5 hours ago [-]
AIstor. They just slap the word AI anywhere these days.
wiether 4 hours ago [-]
In French the adjective follows the noun, so AI is actually IA.

On AWS S3, you have a storage class called "Infrequent Access", shortened to IA everywhere.

A few weeks ago I had to spend way too much time explaining to a customer that, no, we weren't planning on feeding their data to an AI when, on my reports, I was talking about relying on S3 IA to reduce costs...

liviux 5 hours ago [-]
Is that an I (indigo) or l (lama)? I thought it was L, lol
wczekalski 60 minutes ago [-]
I was recently migrating a large amount of data off of MinIO and wrote some tools for it in case anybody needs that https://github.com/dialohq/minio-format-rs
piker 4 hours ago [-]
AGPL is dead as a copy-left measure. LLMs do not understand, and would not care anyway, about regurgitating code that you have published to the internet.
ozgrakkurt 4 hours ago [-]
Even having it as a private repo on github is a mistake at this point.

Self-hosting, or just using git itself, is the only solution

PunchyHamster 4 hours ago [-]
no, it's actually great: that just means all LLM code that included it now needs to be AGPL-licensed
mickael-kerjean 4 hours ago [-]
All LLMs I've tried are capable of writing plugins for my AGPL work: https://github.com/mickael-kerjean/filestash
jamiemallers 5 hours ago [-]
This is becoming a predictable pattern in infrastructure tooling: build community on open source, get adoption, then pivot to closed source once you need revenue. Elastic, Redis, Terraform, now MinIO.

The frustrating part isn't the business decision itself. It's that every pivot creates a massive migration burden on teams who bet on the "open" part. When your object storage layer suddenly needs replacing, that's not a weekend project. You're looking at weeks of testing, data migration, updating every service that touches S3-compatible APIs, and hoping nothing breaks in production.

For anyone evaluating infrastructure dependencies right now: the license matters, but the funding model matters more. Single-vendor open source projects backed by VC are essentially on a countdown timer. Either they find a sustainable model that doesn't require closing the source, or they eventually pull the rug.

Community-governed projects under foundations (Ceph under Linux Foundation, for example) tend to be more durable even if they're harder to set up initially. The operational complexity of Ceph vs MinIO was always the tradeoff - but at least you're not going to wake up one morning to a "THIS REPOSITORY IS NO LONGER MAINTAINED" commit.

mananaysiempre 15 minutes ago [-]
> Elastic, Redis, Terraform, now MinIO.

Redis is the odd one out here[1]: Garantia Data, later known as Redis Labs, now known as Redis, did not create Redis, nor did it maintain Redis for most of its rise to popularity (2009–2015) nor did it employ Redis’s creator and then-maintainer 'antirez at that time. (He objected; they hired him; some years later he left; then he returned. He is apparently OK with how things ended up.) What the company did do is develop OSS Redis addons, then pull the rug on them while saying that Redis proper would “always remain BSD”[2], then prove that that was a lie too[3]. As well as do various other shady (if legal) stuff with the trademarks[4] and credits[5] too.

[1] https://www.gomomento.com/blog/rip-redis-how-garantia-data-p...

[2] https://redis.io/blog/redis-license-bsd-will-remain-bsd/

[3] https://lwn.net/Articles/966133/

[4] https://github.com/redis-rs/redis-rs/issues/1419

[5] https://github.com/valkey-io/valkey/issues/544

apexalpha 5 hours ago [-]
I guess we need a new type of open source license: one that is very permissive, except that if you are a company with much larger revenue than the company funding the open source project, you have to pay.

While I loathe the moves to closed source, you also can't fault them; the hyperscalers just outcompete them with their own software.

jeroenhd 4 hours ago [-]
Various projects have invented licenses like that. Those licenses aren't free, so the FOSS crowd won't like them. Rather than inventing a new one, you're probably better off grabbing whatever the other not-free-but-close-enough projects are doing. Legal teams don't like bespoke licenses very much, which hurts adoption.

An alternative I've seen is "the code is proprietary for 1 year after it was written, after that it's MIT/GPL/etc.", which keeps the code entirely free(ish) but still prevents many businesses from getting rich off your product and leaving you in the dust.

You could also go for AGPL, which is to companies like Google what garlic is to vampires. That would hurt any open-core-style business you might want to build out of your project though, unless you don't accept external contributions.

Ekaros 5 hours ago [-]
That would be interesting to figure out. Say you are a single guy in some cheaper cost-of-living region, and some SV startup got, say, a million in funding. Surely that startup should give at least a couple thousand to your sole proprietorship if they use your stuff? But figuring out these thresholds gets complex.
igsomething 5 hours ago [-]
Server Side Public License? Since it demands that any company offering the project as a paid product/service also open source the related infrastructure, the bigger companies end up creating a maintained fork under a more permissive license. See ElasticSearch -> OpenSearch, Redis -> Valkey.
oblio 2 hours ago [-]
Inflicting pain is most likely worth it in the long run. Those internal projects now have to fight for budget and visibility and some won't make it past 2-5 years.
pabs3 3 hours ago [-]
The hyperscalers will just rewrite your stuff from scratch if it's popular enough, especially now with AI coding.
oblio 2 hours ago [-]
1. Completely giving up is worse.

2. You're forgetting bureaucracy and general big-company overhead. Hyperscalers have tried to kill a lot of smaller external stuff, and frequently they end up killing their own chat apps instead.

baq 4 hours ago [-]
you won't get VC funding with this license, which is the whole point of even starting a business in the wider area
einpoklum 4 hours ago [-]
I would say what we need is more of a push for software to become GPLed or AGPLed, so that it (mostly) can't be closed up in a 'betrayal' of the FOSS community around a project.
pjmlp 1 hours ago [-]
This is the newer generations re-discovering why various flavours of Shareware and trial demos existed since the 1980's, even though sharing code under various licenses is almost as old as computing.
PunchyHamster 4 hours ago [-]
> For anyone evaluating infrastructure dependencies right now: the license matters, but the funding model matters more. Single-vendor open source projects backed by VC are essentially on a countdown timer. Either they find a sustainable model that doesn't require closing the source, or they eventually pull the rug.

I struggle to even find an example of VC-backed OSS that didn't go "ok, closing down time". The only ones I remember (like GitLab) started with an open core model, not fully OSS.

wvh 4 hours ago [-]
I think the landscape has changed with those hyperscalers outcompeting open-source projects with alternative profit avenues for the money available in the market.

From my experience, Ceph works well, but requires a lot more hardware and dedicated cluster monitoring versus something simpler like Minio; in my eyes, they have somewhat different target audiences. I can throw Minio into some customer environments as a convenient add-on, which I don't think I could do with Ceph.

Hopefully one of the open-source alternatives to Minio will step in and fill that "lighter" object storage gap.

arkh 5 hours ago [-]
Well, anyone using the product of an open source project is free to fork it and then take on the maintenance. Or organize multiple users to handle the maintenance.

I don't expect free shit forever.

rd 5 hours ago [-]
ai
patrick4urcloud 9 minutes ago [-]
omg move to rustfs
ruhith 3 hours ago [-]
This is pretty predictable at this point. VC-backed open source with a single vendor always ends up here eventually. The operational tradeoff was always MinIO being dead simple versus Ceph being complex but foundation-governed. Turns out "easy to set up" doesn't matter much when you wake up to a repo going dark. The real lesson is funding model matters more than license. If there's no sustainable path that doesn't involve closing the source, you're just on a longer timeline to the same outcome.
merpkz 4 hours ago [-]
Tangentially related, since we are on the subject of Minio. Minio has, or rather had, an option to work as an FTP server! That is kind of neat, because CCTV cameras have an option to upload a picture to an FTP server when motion is detected, and having that server be a distributed minio cluster was really a neat option, since you could then generate an event on file upload, kick off a pipeline job, or whatever. Currently I instead use vsftpd and inotify to detect file uploads, but that is such a major pain in the ass to operate; it would be really great to find another FTP-to-S3 gateway.
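For what it's worth, the watch-and-upload part can be sketched with plain polling instead of inotify. This is a hypothetical Python sketch, not any particular gateway: the paths are placeholders and `upload` stands in for whatever S3 put you use (e.g. a boto3 call).

```python
import os
import time


def scan_once(watch_dir, seen, upload):
    """One polling pass over the FTP upload directory: hand every
    not-yet-seen file to `upload` (a stand-in for an S3 put) and
    remember its key so it is only uploaded once."""
    for dirpath, _, files in os.walk(watch_dir):
        for name in files:
            path = os.path.join(dirpath, name)
            # Use the path relative to the watch dir as the S3 key.
            key = os.path.relpath(path, watch_dir).replace(os.sep, "/")
            if key not in seen:
                upload(key, path)
                seen.add(key)


def watch(watch_dir, upload, interval=1.0):
    """Poll forever. Polling trades latency for simplicity versus
    inotify; fine for CCTV snapshots, not for high-churn dirs."""
    seen = set()
    while True:
        scan_once(watch_dir, seen, upload)
        time.sleep(interval)
```

It won't notice half-written files the way inotify's close_write event does, so in practice you'd want the camera to upload to a temp name and rename, or debounce on mtime.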
greut 4 hours ago [-]
arend321 3 hours ago [-]
I use three clustered Garage nodes in a multi cloud setup. No complaints.
karolist 4 hours ago [-]
I've moved my SaaS I'm developing to SeaweedFS, it was rather painless to do it. I should also move away from minio-go SDK to just use the generic AWS one, one day. No hard feelings from my side to MinIO team though.
olalonde 4 hours ago [-]
Looks like they pivoted to "AI storage", whatever that means.
egorfine 4 hours ago [-]
That means that any project without the letters "AI" in the name is dead in the eyes of investors.

Even plain terminals are now "agentic orchestrators": https://www.warp.dev

olalonde 3 hours ago [-]
Are investors really that gullible? Whenever I see "AI" slapped onto an obviously non-AI product, it's an instant red flag to me.
latexr 2 hours ago [-]
Nextgrid 2 hours ago [-]
As long as there's at least one gullible one in the pack, all the others will behave the same, because they now know there's one idiot who will happily hold the bag when it comes crashing down. They're all banking on passing the bag to someone else before the crash.

A Ponzi can be a good investment too (for a certain definition of "good") as long as you get out before it collapses. The whole tech market right now is a big Ponzi, with everyone hoping to get out before it crashes. Worse, dissent risks crashing it early, so no talk of AI limitations or the lack of actual, sustainable productivity improvements is allowed, even though those concerns absolutely do come up behind closed doors.

egorfine 1 hours ago [-]
Are you kidding me

"Long Island Iced Tea Corp [...] In 2017, the corporation rebranded as Long Blockchain Corp [...] Its stock price spiked as much as 380% after the announcement."

https://en.wikipedia.org/wiki/Long_Blockchain_Corp.

stuaxo 3 hours ago [-]
Fair enough.

I've used minio a lot but only as part of my local dev pipeline to emulate s3, and never paid.

adamcrow64 5 hours ago [-]
We moved to garage because minio let us down.
sschueller 5 hours ago [-]
So far for me garage seems to work quite well as an alternative although it does lack some of the features of minio.
allovertheworld 6 hours ago [-]
Any good alternatives for local development?
gardnr 5 hours ago [-]

  # Single-node Garage for local dev. Ports: 3900 S3 API, 3901 RPC,
  # 3902 web endpoint, 3903 admin API. Needs a garage.toml in /opt/garage.
  garaged:
    image: dxflrs/garage:v2.2.0
    ports:
      - "3900:3900"
      - "3901:3901"
      - "3902:3902"
      - "3903:3903"
    volumes:
      - /opt/garage/garage.toml:/etc/garage.toml:ro
      - /opt/garage/meta:/var/lib/garage/meta
      - /opt/garage/data:/var/lib/garage/data
espenb 5 hours ago [-]
I didn't find an alternative that I liked as much as MinIO and, unfortunately, ended up creating my own. It includes just the most basic features and can't be compared to the larger projects, but it is simple and efficient.

https://github.com/espebra/stupid-simple-s3

luke5441 4 hours ago [-]
The listing is perhaps in line with the first two "s". It seems it always iterates through all files, reads the "meta.json", then filters?
espenb 3 hours ago [-]
Yes, indeed. The list operation is expensive. The S3 spec says that the list output needs to be sorted.

1. All filenames are read. 2. All filenames are sorted. 3. Pagination applied.

It obviously doesn't scale, but it works ok-ish for a smaller data set. It is difficult to do this efficiently without introducing complexity. My applications don't use listing, so I prioritised simplicity over performance for the list operation.
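Those three steps can be sketched in a few lines of Python. This is an illustration of the approach, not the project's actual code; the function name and signature are made up.

```python
import os


def list_objects(root, start_after="", max_keys=1000):
    """Naive S3-style listing: collect every key, sort
    lexicographically (the S3 spec requires sorted output),
    then paginate. O(total objects) per call by design."""
    keys = []
    # Step 1: read all filenames.
    for dirpath, _, files in os.walk(root):
        for name in files:
            rel = os.path.relpath(os.path.join(dirpath, name), root)
            keys.append(rel.replace(os.sep, "/"))
    # Step 2: sort everything.
    keys.sort()
    # Step 3: paginate, resuming after the marker.
    keys = [k for k in keys if k > start_after]
    return keys[:max_keys]
```

The whole cost is in steps 1 and 2; doing better generally means keeping a sorted index on the side, which is exactly the complexity being avoided here.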

luke5441 3 hours ago [-]
Maybe mention it somewhere as a limitation, so it is not used for use-cases where listing is important and there are many objects?

Listing was IMO a problem with minio as well, but maybe it is not that important because it seems to have succeeded anyway.

courtcircuits 5 hours ago [-]
Go for Garage; you can check the docker-compose file and the "setup" crate of this project https://github.com/beep-industries/content. There are a few tricks to make it work locally so that it generates an API key and bucket declaratively, but in the end it does the job.
ahoka 2 hours ago [-]
S3 Ninja if you really just need something local to try your code with.
slooonz 5 hours ago [-]
versitygw is the simplest "just expose an S3-compatible API on top of a local folder" option.
pikachu0625 5 hours ago [-]
The OS's file system? Implementation cost has decreased significantly these days. We can just prompt "use S3 instead of the local file system" if we need an S3-like service.
daurnimator 4 hours ago [-]
seaweedfs: `weed server -s3` is enough to spin up a server locally
Scarjit 5 hours ago [-]
RustFS is dead simple to setup.
Havoc 5 hours ago [-]
It has unfortunately also had a fair bit of drama already for a pretty young project
WesolyKubeczek 3 hours ago [-]
Took them how many weeks to go from „maintenance mode” to unmaintained?

They could have just archived it there and then; at least that would have been honest. What a bunch of clowns.

rbbydotdev 5 hours ago [-]
Is there not a community fork? Even as is, is it still recommended for use?
franchb 5 hours ago [-]
I started a fork during the Christmas holidays https://github.com/kypello-io/kypello , but I’ve paused it for now.
adamcrow64 5 hours ago [-]
We moved to Garage. Minio let us down.
moralestapia 1 hours ago [-]
Lmao, that was fast.
mattbee 2 hours ago [-]
This has been on the cards for at least a year, with the increasingly doomy commits noted by HN.

Unfortunately I don't know of any other open projects that can obviously scale to the same degree. I built up around 100PiB of storage under minio with a former employer. It's very robust in the face of drive & server failure, and is simple to manage on bare hardware with Ansible. We got 180Gbps sustained writes out of it, with some part-time hardware maintenance.

Don't know if there's an opportunity here for larger users of minio to band together and fund some continued maintenance?

I definitely had a wishlist and some hardware management scripts around it that could be integrated into it.

kklimonda 2 hours ago [-]
Ceph can scale to pretty large numbers for storage, writes, and reads. I was running a 60PB+ cluster a few years back, and it was still growing when I left the company.
_joel 57 minutes ago [-]
Ceph, definitely.