This designation is usually reserved for foreign adversaries/companies, so it's crazy to apply it to a US company over a sudden dispute about a contract that was previously agreed upon by all parties.
This should make any US company nervous about entering into an agreement with the government. Or any US company that already has a contract with the government. If they one day decide they don't like that contract, they can designate you a supply chain risk.
Not 1) rip up the existing contract and cease the agreement, 2) continue (but not renew) the existing contract, or 3) renegotiate terms upon renewal, but instead a full-on ban on doing any business with an entire industry/sector.
pstuart 57 minutes ago [-]
> This should make any US company nervous
"Nice little business ya got here -- it'd be shame if something happened to it..."
Analemma_ 48 minutes ago [-]
Shame you didn’t donate $25 million to Trump, like the company we decided to give the contract to instead did, who will benefit tremendously from you being designated a supply chain risk. Maybe next time you’ll be a little smarter.
germandiago 2 hours ago [-]
This is awful. That a disagreement involving politics can ruin a company is really awful.
Civil society should be quite concerned about this kind of attack.
softwaredoug 1 hours ago [-]
It opens the door for Democratic administrations to do the same to vendors for their own political reasons.
That’s ultimately why Ted Cruz spoke out about the Kimmel cancelation. It doesn’t take long until those powers are turned against you.
wrs 46 minutes ago [-]
No it doesn't. As with so many other things this administration does, this door was not open. They bashed through it anyway.
jaredklewis 27 minutes ago [-]
Bold of you to assume Democrats are going to be allowed to govern again.
1718627440 48 minutes ago [-]
I think we should really judge governments by their actions and stop labeling countries democracies if they do things that don't look democratic at all.
irthomasthomas 23 minutes ago [-]
They will rename it The Free Democratic Republic of America.
netfortius 11 minutes ago [-]
What people seem to refuse to accept is that democrats won't have another chance, any time soon. It's done and gone. Count one or two generations, at a minimum, under the new Epstein class regime, before people may try to rise.
JeremyNT 15 minutes ago [-]
> That’s ultimately why Ted Cruz spoke out about the Kimmel cancelation. It doesn’t take long until those powers are turned against you.
Meh, I think it's entirely asymmetrical in this era. Democrats aren't good for much, but they're very good at respecting norms.
Trump is willing to do completely unprecedented, vindictive, and malicious things because he's so popular with so many people who are either checked out, nihilistic, corrupt, or just completely unconcerned about the concept of good governance.
It's not a pendulum where there's some super-corrupt Democrat waiting in the wings to do the same things upon their enemies, this really is the Republican party openly embracing kleptocracy and lawlessness.
Analemma_ 1 hours ago [-]
Yeah, now that this door is cracked open, it's now possible to decapitate SpaceX, which is at least as natsec-critical as Anthropic. The owner is a drug addict, has business interests in China, and is a Russian sympathizer (recall all the restrictions on Ukraine using Starlink), which all together is way stronger evidence for SCR designation than anything Anthropic has done. They're quickly going to come to regret opening this can of worms, but what else is new.
tw04 58 minutes ago [-]
Trump isn’t planning on ever leaving office before his death. His sycophants will just say yes in the hopes of unconditional pardons. They know they’ll never hold a position of power again so they’re grabbing everything they can while they can.
thinkingtoilet 1 hours ago [-]
I love when a Republican does something awful the response is "but what about if Democrats do that same awful thing to us!" as opposed to discussing and admitting that the Republicans did something awful.
softwaredoug 1 hours ago [-]
The only way you convince Republicans it’s awful is by reminding Republicans power can be abused in both directions.
1718627440 49 minutes ago [-]
Oh. Before your comment I completely misunderstood "Democratic administrations". I understood it to mean administrations of countries that are democratic, not an US administration that is dominated by the Democratic party.
SpicyLemonZest 59 minutes ago [-]
I think you're misinterpreting the discussion here. Democrats are precommitting that they are going to do the same awful thing; when the time comes, I will be contacting my legislators demanding that they do to OpenAI or SpaceX whatever is done to Anthropic now. It's outrageous that Sam Altman would step in to try and benefit from the political persecution of his main competitor and we must ensure that he regrets this.
manoDev 1 hours ago [-]
“Concerned” is an understatement. The USA is already operating at Nazi Germany levels and more than half of civil society is approving. Not that it's a surprise for global spectators, though: it's finally showing its true colors.
It might be that this admin does not have the capacity to reason about second or third order effects.
But given that what would typically be red lines for previous administrations have been brazenly crossed without consequences, why would they bother?
andrewstuart2 2 hours ago [-]
Crossing red lines for previous administrations is clearly a goal at this point.
Chance-Device 1 hours ago [-]
Anthropic should never have gotten into bed with the military or intelligence services to begin with. They wanted to make a deal with the devil and dictate the terms, that is the problem. If they had stayed out this wouldn’t be happening. Yes, someone else will probably step in and do all the evil you have just refused to do, but that isn’t a reason to instead decide to do it personally.
Note that I give them a lot of credit for trying to stop and to have their own red lines about the use of their technology, and to stick to those red lines to the end.
mitthrowaway2 56 minutes ago [-]
According to legend the devil adheres precisely to the terms of the contracts he signs; it's usually the foolhardy peasant who didn't notice the fine print.
idiotsecant 57 minutes ago [-]
The military is perhaps the biggest possible customer around. They do plenty of things that aren't blowing people up. It's not bad to help with non combat tasks.
Chance-Device 52 minutes ago [-]
Yeah, but aren’t all of those things in service of “blowing people up”?
upboundspiral 20 minutes ago [-]
National defense is important, just ask Europe post Ukraine war.
People taking a good idea and extending it to do bad does harm twice: in the bad act itself and in making a good thing seem bad.
I am strongly against US starting wars and as you say blowing people up.
I am also strongly against the US being defenseless in the case of a national emergency.
draw_down 1 hours ago [-]
[dead]
hedayet 2 hours ago [-]
So, DoD has done what it said it would. And OpenAI has jumped on the opportunity.
Congress must approve any renaming of the Department of Defense... They haven't. Stop giving them what they want when they aren't even acting in good faith.
They'll do nothing. It's really hard to take the morals of these devs seriously when they're already fine working for, and have a history with, some of the most evil companies in current existence.
Computer0 1 hours ago [-]
And when you suggest unions and worker organization to them they gloat that the company takes better care of them without it. They don't want to be in the same class as the peasantry.
Waterluvian 1 hours ago [-]
It was really easy to close out my ChatGPT account and switch to Claude. I was really only there out of inertia. I don’t do anything beyond occasional free tier stuff like rubber ducking but so far Claude is so much better.
jdndbdjsj 58 minutes ago [-]
I prefer the Claude Code CLI interface for everything anyway. It is actually more convenient: memory is local files, and you type one word to use it rather than navigating.
alanwreath 41 minutes ago [-]
But for how long? If Claude is a supply chain risk, then anyone hosting him would also be a supply chain risk.
Ergo AWS/Azure/GCP - nobody will host them, because it's a choice between Anthropic or the lucrative government contracts. Hegseth/Trump didn't just say "you'll never do business with the US" - it's that they will never do business IN the US. Hopefully that means they'll be able to set up shop elsewhere in the world.
stared 57 minutes ago [-]
Should this be officially marked as the date of transition from liberal democracy to illiberal democracy?
Such tampering with companies is a smoking gun. Let's wait until there is another decision seizing this (or some other) company's assets.
quentindanjou 53 minutes ago [-]
I always find it interesting to listen to US citizens answer "What would it take for you to not consider your country a democracy?" and admire the wide range of answers and denials.
cheesecompiler 49 minutes ago [-]
Right, as if _this_ is the straw that breaks the camel's back, and not the pile of hay the camel has been carrying for decades.
breakpointalpha 49 minutes ago [-]
Many proudly and loudly claim the US is a "republic".
GaryBluto 45 minutes ago [-]
Is it not?
zem 31 minutes ago [-]
secret police kidnapping people off the streets didn't clue you in?!!
softwaredoug 1 hours ago [-]
These bullies wilt when everyone stands up in one voice. But when some parties capitulate (OpenAI), it sets a precedent that this behavior is OK. And then it’s not long until you become the target.
Labeling Anthropic a supply chain risk only because they were uninterested in doing business with the US government under the terms requested seems very much a bullying tactic that results in something the west critiques China for: coerced alignment.
Anthropic has been given a death sentence.
creddit 2 hours ago [-]
Naturally OpenAI also releases their new model on the same day.
Makes sense, obviously, but yeesh.
oompydoompy74 2 hours ago [-]
Exported all my chats and deleted my ChatGPT account yesterday. The current administration not liking you is the strongest signal I could possibly have to go all in on a particular company.
harmmonica 1 hours ago [-]
I canceled my subscription, but have not yet exported and deleted because I'm an idiot, and also because I'm not sure if deleting it will have any actual impact (is it a hard delete? Likely not, even if they say it is).
And I'm just trying to play out what happens if Anthropic, and Google (if they haven't already), capitulate. Am I just going to forego using the best models and suffer any repercussions of not having access when the people who couldn't care less if the military is using AI for illegal uses continue to leverage them? When I say illegal I'm talking about the surveillance-of-US-citizens red line Anthropic would not agree to. The autonomous weapon one I'm sure there are zero laws against and so that wouldn't actually be illegal.
pmarreck 1 hours ago [-]
It’s not a hard delete because, for legal reasons, they may have to retain it.
razster 57 minutes ago [-]
Supposedly they hold your deleted conversations/projects for 30 days. If that is true or not, idk, but it was asked when this first started.
soupfordummies 2 hours ago [-]
Are you able to view your chats through the .html file in the export? Mine are all garbled, like the JSON's not being parsed properly or something.
eecc 1 hours ago [-]
Asked for an export but still haven’t received the mail with the download link
fabbbbb 1 hours ago [-]
Took exactly 24 hours for me, to the minute.
mnsc 43 minutes ago [-]
Me too, why do they do that?
silveira 2 hours ago [-]
Thanks, I had deleted the app from all my devices but not deleted the account. I have deleted my account now too.
jrsj 1 hours ago [-]
I am in the exact opposite position — I am such an Anthropic hater (bc I’m forced to use Claude Code instead of OpenCode against my will) that I am now pro-Trump
(this is a joke, please forgive me for engaging in public wrongthink)
lePask 2 hours ago [-]
[dead]
ainiriand 2 hours ago [-]
Work is not good per se.
GuinansEyebrows 1 hours ago [-]
no but it is unfortunately the only option for most of us for now.
drivebyhooting 2 hours ago [-]
Oh you’ll still work. As a supplicant on your hands and knees for your capitalist overlords.
pmarreck 1 hours ago [-]
This is nonsense, you can’t fire an AI and an AI will never take credit nor will it take responsibility. Humans will always be in charge, because you will never be able to completely trust an AI, because it has no skin in the game, literally.
groby_b 1 hours ago [-]
If we could move past the empty rhetoric?
It turns out a working economy requires well paid workers because somebody needs to buy shit
Even "capitalist overlords" (why not "evil bourgeois swine", while we're here) realize that. The "all SWE replaced" jabbering is a sales pitch to the uninformed. I.e. it's more P.T. Barnum than Jay Gould.
drivebyhooting 1 hours ago [-]
People need to buy food. Are farmers well paid?
10xDev 2 hours ago [-]
You made a fresh account to say this, or is this ironically a clawdbot?
100xLLM 1 hours ago [-]
[flagged]
10xDev 1 hours ago [-]
It is duplicating...
@dang something needs to be done about this.
Edit: it even created an account based on my username. wtf...
hsbauauvhabzb 1 hours ago [-]
@dang is not a tag that notifies dang. You must email HN.
llm_nerd 1 hours ago [-]
The job of every software developer is, in essence, to make people unemployed. This is a particularly silly bit of morality to hang on.
tokyobreakfast 2 hours ago [-]
[flagged]
kelnos 2 hours ago [-]
That's not civil disobedience. It's voting with your wallet.
mpalmer 2 hours ago [-]
Maybe look up civil disobedience?
surgical_fire 2 hours ago [-]
Eh, they are all morally indefensible.
Anthropic had no problem doing business with the current administration until now. Are we to pretend it was all for happy purposes until now?
ecshafer 2 hours ago [-]
Yeah how could Anthropic do business with the democratically elected government of the United States?
croes 2 hours ago [-]
Let‘s ignore all the bad things they have done since that, including killing two US citizens.
krapht 1 hours ago [-]
Even if you don't like the current administration, the rank and file are still out there doing valuable work. The government is more than ICE; it also administers welfare, funds research, collects taxes, and distributes social security payments to the old and infirm.
archagon 1 hours ago [-]
Easy enough to slap a “not for military or police use” clause on the license, then. Oh, what’s that? They don’t want to do that?
pmarreck 1 hours ago [-]
Presumably, if a spouse requested access to your phone at any time for a spot check, and to vet every single outgoing call and text you made, would you still marry this person? Because that's pretty much what Anthropic was requesting: go/no-go on a per-project basis.
No reasonable self-respecting person would agree to that; that's basically “my relationship with you is contingent upon your guilt, until proven innocent.”
Dear HN: I would like comments before additional downvotes, please, this is not fucking Reddit
cedws 1 hours ago [-]
Don't get too attached. We're witnessing capitalism in its most ruthless form. Any of these companies will discard their principles the moment it becomes existential.
estearum 1 hours ago [-]
We are quite literally witnessing someone taking a massive hit for not discarding their principles.
Evergreen dril: "The wise man bowed his head and said 'there's no difference between good things and bad things you imbecile'"
cedws 1 hours ago [-]
What's your source for this "massive hit"? All I've seen is a massive PR upside, despite what I said above.
0cf8612b2e1e 47 minutes ago [-]
Losing a contract with the Pentagon and potentially all Federal-interacting businesses sounds like a pretty severe monetary hit. One which is hard to recoup by a bunch of $20/month consumer subscriptions.
amazingamazing 2 hours ago [-]
If Anthropic changes course will you move to Gemini? If all models do, local llama I assume?
pavlov 1 hours ago [-]
Mistral is European and has competitive models.
DeepSeek is Chinese.
Avoiding the MAGA collaborators is not as difficult as you make it seem. Foundation models have genuine global competition.
knollimar 34 minutes ago [-]
Mistral models are competitive? I thought they were far behind
pmarreck 1 hours ago [-]
I wish it was just as easy to avoid the terrorist collaborators; unfortunately, the terrorists and their supporters don’t produce anything
rvz 1 hours ago [-]
Anthropic, Google and the rest of them are all just as bad.
Local LLMs give you the freedom to use a model without a third-party vendor, which is the whole point here.
j45 1 hours ago [-]
Being model independent and cross-model capable is the required skill.
nickysielicki 2 hours ago [-]
Does anyone know which law firm is representing anthropic?
grvbck 2 hours ago [-]
For now it's in-house counsel Jeffrey Bleich, former special counsel to President Obama.
Tainted? Because they refused to change a contract that was already signed to allow for surveillance of Americans and fully autonomous kill bots? I guarantee, if a sane and non-fascist administration ever takes power again Anthropic will be forgiven. Being attacked by this administration is an honor. OpenAI on the other hand…
LightBug1 58 minutes ago [-]
Well, would you want to given the rotten-KFC-stench of the current admin?
nineteen999 32 minutes ago [-]
Wonder how long it will take the American public to designate the US Govt a threat to national security and start using AI to assemble their own autonomous civilian defense robots to protect the public from the government-approved population suppression robots.
Right to bear arms and all that, etc.
blacksmith_tb 39 minutes ago [-]
A bit ironic then that they're actively using Claude in the current war effort[1].
Is this the reason Claude models disappeared from AWS cloud in Brazil?
martinwright 2 hours ago [-]
Part of me wonders if it was a plan to squeeze between Anthropic & big gov contracts
yoyohello13 1 hours ago [-]
I'm willing to bet it was a golf course conversation.
"Hey why is the gov using Anthorpic over OpenAI, don't you know how much money I've donated?"
adamtaylor_13 2 hours ago [-]
Writing out a thought I had, someone please critique my reasoning here...
What if Anthropic just shrugged, dissolved the company and open-sourced all of the Opus weights? Could this harm OpenAI and advance AI in a reasonable way?
Look I know it's an insane idea. I'm just curious what the most unhinged response to this might be.
nostrademons 1 hours ago [-]
I kinda wonder if this is how we got DeepSeek. It was developed by a Chinese hedge fund. Entirely possible their business model was to take out large leveraged puts against the major U.S. AI vendors; shit on their business models with an entirely open-source model; and profit. The stock market certainly dropped in a massive way when DeepSeek was released, so if they traded against NVDA/GOOG/META et al, they profited in a big way.
jrsj 53 minutes ago [-]
They would never do this because the entire point of the company is to try and control what AI is allowed to do, who is allowed to use it, and what they’re allowed to do with it. The overarching philosophy of Anthropic is explicitly opposed to open models. If it were up to them it would be illegal to inference them in the U.S.
jdndbdjsj 52 minutes ago [-]
If I were to download those weights, I couldn't run them unless I spent $100k on a cluster, so the privacy advantage is not there yet.
We already have Groq, Cerebras, AWS Bedrock, and others in the open-model inference space, so the model would be usable that way.
Is Claude better than Llama, Qwen, etc.? Probably. For now.
But for how long? Dissolving means relying on Meta or DeepSeek etc. to pick up and carry on tuning. Otherwise it'll eventually be as useful as GPT-2 or an Atari ST in a competitive environment.
Also open sourcing the weights is handing it over to DoD (aka DoW).
Complicated question but probably not the best move. Keep going means keep working on safety research.
mitthrowaway2 18 minutes ago [-]
Then the Pentagon would freely use it for autonomous weapons, just like Anthropic doesn't want them to do. Next question?
stirlo 2 hours ago [-]
There’s plenty of markets outside the pentagon to sell to.
Far more likely is they spin up a defence focused subsidiary with slightly different policies if they really want to sell to them.
BoiledCabbage 1 hours ago [-]
> Look I know it's an insane idea. I'm just curious what the most unhinged response to this might be.
I mean, what if all the employees stripped off their clothes and walked through the streets naked while barking, then called up their middle school math teachers and barked like dogs, then moved to a commune and stood on their heads.
> Writing out a thought I had, someone please critique my reasoning here...
I mean, to critique your reasoning, it makes sense to also include a criterion: something they might reasonably do. There are an infinite number of unhinged things a group of people could in theory do. But maybe start with something they would actually have an incentive to do.
Why would they voluntarily dissolve their company, put themselves out of work, release their crown jewels, and get nothing for it? Yes, it's unhinged, but unless I'm missing something big, they wouldn't do that because they wouldn't at all want that to happen.
xpe 1 hours ago [-]
> I'm just curious what the most unhinged response to this might be.
Are you asking how dangerous open-weight models are? You could start with:
From OpenAI authors, far from neutral : "Estimating Worst-Case Frontier Risks of Open-Weight LLMs" https://arxiv.org/abs/2508.03153
cush 1 hours ago [-]
Is Claude Code's outputted code also part of the supply chain risk?
parliament32 1 hours ago [-]
Is there a link to the actual order anywhere? For us FedRAMP folks, the exact order contents actually matter, rather than a journalistic regurgitation. I was hoping one of the links in the article pointed to a source, but they're all just links back to other WSJ pages.
SpicyLemonZest 1 hours ago [-]
It sounds like they still have not issued any sort of actual order. The "formal label" described in the article is that they sent a communication directly to Anthropic saying they're a supply chain risk.
mentalgear 2 hours ago [-]
I said it before and I'll say it again: if openly bribing a crony government to cancel your competitor is now the de facto standard of doing business in the US, I don't see how any rational investor could still see US companies as a secure investment. When the rule of law degrades into pay-to-play politics, the inevitable result is a mass exodus of both capital and top-tier talent.
And to add to this quoting another commentator on the issue: First the Meritocracy goes, then the Freedom goes.
exceptione 2 hours ago [-]
You can download the manual from kremlin.gov, and I am only half-joking here.
hungryhobbit 2 hours ago [-]
Rational investors live in reality. In reality, a great deal of business conducted throughout the world involves graft; companies accept that, and keep doing business.
It's not a good thing, AT ALL. There's a huge loss of overall productivity when you have corrupt systems (see Russia), which is why modern governments have worked so hard to lower corruption. But Trump ruining all that isn't going to end business ... it's just going to make everyone pay more for everything.
watwut 1 hours ago [-]
> which is why modern governments have worked so hard to lower corruption
I would argue that they did not. They should have, and some were better than others.
But the bulk of financial markets, all of prediction markets and crypto, startups and Silicon Valley, the Musk empire, Thiel, Murdoch: they all run on corruption. And to a large extent, Trump is the endgame of that.
ImPostingOnHN 2 hours ago [-]
Investor: "here's a million dollars for a ballroom, it'd be real nice if you cancelled the government's contract with our investment's competitors."
Seems like a great ROI. The loser is Average Joe with a 401(k).
There is a substantial difference between the standard lobbying and greasing the legislative wheels, and what's going on with this current administration.
Even if companies were pretending to play by the rules before, at least they had some need to put in the effort to pretend. When a society can see belligerent ostentatious corruption going on as the norm, nothing good can follow.
JackSlateur 25 minutes ago [-]
In at least the last couple of US elections, "people" paid more than a billion dollars to each wannabe president.
That is investment, aka corruption.
bdangubic 2 hours ago [-]
That has already started to happen, but it cannot happen overnight. Not only is it not easy, but finding alternatives is also not easy. Just think of it from your own personal perspective: say you have $100m invested in US business right now and wisely say "I gotta get my shit away from this mess" - where exactly would you park your assets? You will find a way of course, but you won't be moving $100m elsewhere overnight.
georgemcbay 2 hours ago [-]
> I said it before and I say it again: If openly bribing a crony gov to cancel your competitor is now the de-facto standard of making business in the US, I don't see how any rational investor could still see US companies as a secure investment.
Arguably large parts of the market in the US have been irrational and largely vibes based for a long time at this point. This action (like many others coming out of the Trump administration) adds to the chaos but I tend to doubt it will be the event that causes Wile E. Coyote to look down.
idontgetit1988 2 hours ago [-]
>I don't see how any rational investor could still see US companies as a secure investment.
You don't see how?
Well, just watch and wait, and you will see that this will have essentially zero effect on US investment.
It's petty and sad, but nothing ever happens.
Who else is even in the conversation? China? They would never do something like this!
Herring 2 hours ago [-]
Since the end of WW2, and especially since the end of the Cold War, Democratic administrations have presided over significantly higher job growth than Republican administrations.
I think the implication is that Democratic presidents are less likely to do dumb shit like this, which harms the economy.
shimman 1 hours ago [-]
On the contrary, hopefully this gives the next democratic administration ammunition to take down big tech. Might as well classify Meta, Microsoft, Amazon, and Apple as supply-chain risks too with this logic.
Too bad that Congress has abdicated their responsibility to the executive branch, no reason why Congress couldn't have more control over the Pentagon. The President only has legal authority to command forces, not control an entire institution; but this would require Congress actually doing their job and not justifying more corporate welfare forever.
Herring 23 minutes ago [-]
A lot of people love watching dumb shit, like reality tv. Crucially also, it's a privilege that a lot of people don't want to give up. It's like pretending gravity doesn't apply to you.
sam0x17 33 minutes ago [-]
Streisand effect. I think this will boost sales.
gritspants 28 minutes ago [-]
I hope so. I will never type a single thought of my own or personal detail into an OpenAI product again. I have no doubt at some point OpenAI will be asked by DoD to hand over customer data and they will do so. If I use AI at all for nonprofessional reasons it will be Anthropic/Claude.
6thbit 1 hours ago [-]
Does this mean nobody at a large company selling to the government can use any Anthropic tool or model?
So that's most of the S&P 500 and their providers?
6thbit 1 hours ago [-]
Would this mean Any systems built with Claude in defense environments may need to be rebuilt or removed?
zppln 55 minutes ago [-]
From what I understand it cannot be used to perform work on contracts where the DoW is on the other side. [1]
In practice I would suspect companies with such contracts would play it safe by outright banning the use of Anthropic products, even if they could technically be used for work on contracts with other parties.
The consequence is that any company that does business with the U.S. military, and potentially any company that does business with the government in general, must stop using Anthropic's products for that work.
Anthropic has vowed to fight this designation in court.
Without weighing in on the constitutionality or legality of the move, I think it's obvious that this kind of retaliation power is unmatched by any private business that has a contractual dispute.
If a private business doesn't like Anthropic's terms, it can walk away from the deal, but it can't conduct coordinated retaliation with other companies before ending up in antitrust territory or potentially violating the Sherman Act.
Now for my editorializing: The fact that Pete Hegseth is willing to apply this type of designation against a U.S. company simply because he doesn't like its terms is pretty chilling. It's all the more scary once you consider which terms he objects to.
mitthrowaway2 3 hours ago [-]
Every action has an equal and opposite reaction. The DoD has made itself riskier to do business with, and future contracts will have to price that risk in.
alephnerd 3 hours ago [-]
FedRAMP and FedRAMP adjacent revenue is non-negotiable for vast swathes of businesses. The designation of "supply chain risk" is viral in nature because no GRC team will dare take such a risk within their supply chain because most customers add BOM requirements in contracts so this will end up falling under those already.
There's a lot of backchanneling going on between Emil and Dario because everyone's in the same circles but it's all for naught.
hedayet 2 hours ago [-]
In Hegseth's voice - it's no longer the politically correct "DoD". It's precisely the violent "DoW" now.
stefan_ 2 hours ago [-]
The DoD has been rather consistent that they will decide what to do with a product sold to them, not some random vendor. There is nothing extra to "price in".
nkohari 2 hours ago [-]
The "extra" is that the government is now attempting to unilaterally renegotiate contracts, and if the contractor disagrees, not only do they terminate the agreement but they restrict how other companies can work with you.
bicx 2 hours ago [-]
Apparently that's not 100% true. The DoD contractor itself can still use Anthropic's technology, just not on U.S. military contract projects.
jacquesm 53 minutes ago [-]
If you were a contractor to DoD (no way I'm calling them DoW) would you take the risk of doing business with a company that has been labeled a supply chain risk by your main customer?
ectospheno 2 hours ago [-]
They will stop just to be sure no boundaries are crossed.
alephnerd 2 hours ago [-]
The issue is the onus is on the contractor to prove that Anthropic technology has not tainted US government contracted projects - this is a herculean task verging on impossible. Additionally, most contracts will mandate SLAs around removing BOM risks.
AnotherGoodName 2 hours ago [-]
I’d like a lawyer to give some input. If you have a company that deals with the military does this chain down to not being allowed to use Claude or not?
Imustaskforhelp 2 hours ago [-]
IANAL and this is my understanding of the situation (I could be completely wrong), but yes, any company that deals with the military can't use Claude (Anthropic).
In fact, adding onto it, IIRC this is the reason why Google and Amazon essentially have to divest from Anthropic if they want government contracts.
Hope this helps, though a lawyer's input will definitely be more credible, so it would be good for them to respond as well.
2 hours ago [-]
yoyohello13 2 hours ago [-]
Of course. Hegseth said it, there is no way they could back out. Looking 'weak' is the worst possible thing for this admin. They would rather look childish, stupid, and evil, as long as they don't look 'weak'.
Especially 'weak' things like 'caring about people'.
hax0ron3 2 hours ago [-]
I am a political moderate who dislikes both the Democrats and Republicans. I think that I have been fair to the Trump administration in the past, including occasionally defending them from some of the less reality-based accusations against them.
I canceled my ChatGPT subscription a couple of days ago. In my opinion the Trump administration has become far too much of an "imperial Presidency" in its acts of war and its attempts to bully companies. It is also corrupt on a massive scale. I distrust anyone who thinks "yes, I'd like to work with this administration".
blipvert 1 hours ago [-]
Genuine question - was your fair consideration prior to or after J6?
hax0ron3 25 minutes ago [-]
Both.
baxtr 2 hours ago [-]
I would love to understand in more detail what kind of use cases we’re talking about.
Is this about locating the right target for a sortie for example?
rustyhancock 2 hours ago [-]
Anthropic already had a deal via Palantir, so it seems its models are used in a variety of ways by the Pentagon.
The reports about Venezuela and Iran seem to suggest their primary role was processing bulk intel.
But also that they were being used in planning and target selection.
Presumably what spooked Anthropic was that these tools were about to be directed internally.
But it's not clear whether the government's point of principle is that it wants no holds barred on its tools.
beambot 2 hours ago [-]
Anthropic was very clear about the usage restrictions: They didn't want them being used to control autonomous kill drones or mass surveillance of the American public. That's it. DoW didn't like that -- for reasons that will probably soon become apparent.
jml78 1 hours ago [-]
Correct, it will be about silencing any opposition to this administration. OpenAI will be happy to let their models be used to persecute, kill, and destroy American democracy if it lines Sam's pockets.
hk__2 2 hours ago [-]
> I would love to understand in more detail what kind of use cases we’re talking about.
The whole point is that the use case does not matter; either you allow the government to do everything they want, or you don't.
pirate787 2 hours ago [-]
either you allow a democratically elected government to do everything they want that is legal, or you insert private corporate decision-making into every government decision, which is untenable
sodality2 2 hours ago [-]
Is there any evidence that going outside the scope of the agreement would amount to anything more than a contract violation? Are we really to expect that Anthropic general counsel sits at the API gates allowing or blocking requests?
More generally, are there any comparable contract requirements in the field of defense, for a company in the same position as Anthropic? I'm curious.
_heimdall 2 hours ago [-]
You're missing the huge step that the government asking for "all legal uses" terminology is also the party that decides what is legal. Congress isn't willing to act as a check on executive power, meaning the contract they demanded simply says "I do what I want."
realo 2 hours ago [-]
Sure... So Trump's USA has just decided to stop itself and all its military suppliers from using the very best coding tools.
I suppose the USA's frenemies will jump on the occasion and use the incredible opportunity offered to them on a silver platter.
eikenberry 1 hours ago [-]
Could this be the chain of events that finally pops the AI bubble? If OpenAI's reputation hit slows growth enough to scare off investors and Anthropic's growth stalls due to this government attack...
scottyah 30 minutes ago [-]
I think there's a good chance, tbh. It would take the S&P down with it too.
m_ke 3 hours ago [-]
We can all thank the VCs and CEOs who fully embraced and enabled this administration
strange_quark 2 hours ago [-]
We're all trying to find the guy who did this!
hypeatei 2 hours ago [-]
And 32% of eligible voters that thought Kamala would've been worse.
m_ke 2 hours ago [-]
Don't blame the voters, they didn't get to pick her and did not run her campaign.
hypeatei 1 hours ago [-]
Oh no, I will. They're absolutely culpable.
m_ke 1 hours ago [-]
I think the DNC and the media might need to get some of that blame for being empty vessels for corporate interests that allowed this conman to get elected twice
osiris970 44 minutes ago [-]
Voters knew who Trump was and chose him. They deserve all the blame. As a voting adult, your choices have consequences. All voters who voted Trump or third party deserve all the consequences.
wrs 2 hours ago [-]
Once again our leadership is "playing government" like a bunch of 12-year-olds, lashing out impulsively without thinking of the consequences. And no doubt once again it'll take a year for this to wind its way through the legal system and be reversed long after the damage is done, as is finally happening with the tariff fiasco.
seydor 2 hours ago [-]
A reminder to Anthropic, european residence visas start at $250K
scuff3d 3 hours ago [-]
Huh, and I thought conservatives were all about government staying out of the way of the private sector. Go figure...
germandiago 2 hours ago [-]
Not these people, from what I see.
tantalor 3 hours ago [-]
Conservatives haven't had any power in Washington in decades. They are in thrall to MAGA now, which is all about seizing the means of production when its convenient.
rjbwork 2 hours ago [-]
They're not conservatives.
ekjhgkejhgk 3 hours ago [-]
StOp MaKiNg EvErYtHiNg PolItIcaL
bdangubic 2 hours ago [-]
In an article that discusses Pentagon doing stupid shit? :)
jacquesm 51 minutes ago [-]
That was without a doubt meant sarcastically.
jmspring 2 hours ago [-]
Next up, after some sort of bribe, the administration opens up Qwen models to be used by the Pentagon.
2OEH8eoCRo0 3 hours ago [-]
Fascism
readytion 1 hours ago [-]
[flagged]
foxes 14 minutes ago [-]
> Oh no my favourite ai company did / didn't collaborate
How could the regime do such a thing, doesn't law mean anything?!! /s
First they came for my neighbour now they came for my llm!!
mdni007 2 hours ago [-]
[flagged]
mmastrac 2 hours ago [-]
I really can't tell if this is snark or not.
zthrowaway 2 hours ago [-]
Mark Levin is that you?
tokyobreakfast 2 hours ago [-]
[flagged]
cakealert 2 hours ago [-]
[flagged]
Rudybega 2 hours ago [-]
Anthropic and the military had a contract. The military wanted to change the terms of that contract. Anthropic said no, which is their clearly defined contractual right. They got labeled a supply chain risk. How is this anything other than a shakedown? Does contract law mean anything to this administration?
cakealert 1 hours ago [-]
The other such labeled companies have contracts too.
mediaman 49 minutes ago [-]
10 USC 3252 has only been used once, against Acronis AG, a Swiss company with Russian connections.
Acronis did not have DOD contracts.
Other companies (Huawei) have been deemed risks under different laws, or by Congress, but they also didn't have direct DOD contracts.
Do you have any evidence for your assertion? Did you check if it is true before posting?
26 minutes ago [-]
timmmmmmay 1 hours ago [-]
no, the other such labeled companies are foreign owned firms like Huawei that the government never intended to do business with in the first place
ok_dad 1 hours ago [-]
The legal definition of supply chain risk:
> “Supply chain risk” means the risk that an adversary may sabotage, maliciously introduce unwanted function, or otherwise subvert the design, integrity, manufacturing, production, distribution, installation, operation, or maintenance of a covered system so as to surveil, deny, disrupt, or otherwise degrade the function, use, or operation of such system (see 10 U.S.C. 3252).
Naming a US company a "supply chain risk" is basically saying "this company is an adversary of the USA", which is FUCKING INSANE.
yoyohello13 1 hours ago [-]
They think anyone who isn't a republican is an adversary of the USA.
kelnos 2 hours ago [-]
Because it's not a military asset? It's a privately-owned asset.
cakealert 2 hours ago [-]
> Because it's not a military asset? It's a privately-owned asset.
Are you under the impression that the military is submitting Anthropic API calls?
Whatever model the military is using is as much of an asset as the F35 they purchased.
Depending on their agreements, you could argue it's a rented asset. Doesn't change any calculus.
monocasa 1 hours ago [-]
And the F35 comes with tons of contract terms in favor of the manufacturer. Like I've heard about how planes have been grounded because although an air base has the parts and mechanics rated to perform the repair on site, the servicing contract only allows it to be performed by the service contractors who needed to be flown in.
Jtsummers 1 hours ago [-]
The DOD can't even force companies to hand over data, such as schematics, if it wasn't in the original contract, without providing extra payment negotiated with the contractor, and they can't force the contractor to set a particular price. This has happened on numerous systems. One of the biggest I'm aware of was the H-60, where the DOD ended up reverse engineering the early helicopters in order to maintain them, all because the DOD program office forgot to include a data rights clause in the contract (Sikorsky didn't forget; they just didn't remind the DOD).
BoiledCabbage 1 hours ago [-]
> Depending on their agreements, you could argue it's a rented asset. Doesn't change any calculus.
I think you're mistakenly thinking of it as an asset. It's not an asset like a house; it's a service. They have a service contract. They have uptime and SLA commitments. That contract has parameters, and changing those parameters means a new contract.
A similar service would be signing up a private company to do intelligence gathering and analysis for the DoD in Asia. They find a company that specializes in Asia and sign a contract. They give them work and the contractors fulfill it. Then the DoD comes back and says "we want you now to give us analysis for important decisions in South America." The company would reasonably reply "we don't have the skills to do that in South America. Our team knows nothing about South America; we're no better than someone off the street at that. There is no credibility behind anything we'd say about it. And on top of that, our contract was for Asia. If you want to discuss a plan for hiring people for South America, let's discuss it, but that's a new contract." And then the DoD saying they're a supply chain risk makes no sense.
Or, if you want an even more hyperbolic example, they can't take those data analysts and say "we're sending them to the front lines of Iran." The company says no, and the DoD replies "you're a supply chain risk." They are not renting people; they are signing for a service of data analysis. Similarly, they are not renting hardware; they are signing for an LLM/intelligence service.
randallsquared 31 minutes ago [-]
> Are you under the impression that the military is submitting Anthropic API calls?
Yes? I assume that it's not in a government owned and operated datacenter, but likely in AWS (govcloud or whatever) and maintained/serviced by Anthropic SREs like I suppose regular Claude is.
gAI 2 hours ago [-]
status.claude.com shows the uptime for the government cloud service. It's running in part on AWS servers.
mitthrowaway2 1 hours ago [-]
Because last time I checked, private companies that voluntarily offer a service to the government on contract terms are free to put whatever restrictions they want into their contract, and the government is free to not sign it if they don't like it?
Or is, say, FedEx now a supply chain risk too, if they happened to offer parcel delivery services for the DoD and put in a clause excluding delivery to active war zones?
pmarreck 1 hours ago [-]
Congratulations, you are clearly the smartest person on this forum, and I don’t mean that facetiously. The number of naïve comments here is absolutely astounding.
It would be like a spouse proposing restrictions and terms on their access to your phone, contingent on you marrying them. Assuming guilt until proven innocent.
kevinwang 59 minutes ago [-]
Even in your analogy, it's appropriate to reject the terms of marriage and not wed this person. But it's unprecedented to also vindictively ruin their life (e.g. by unilaterally putting them in jail)
xpe 1 hours ago [-]
> It would be like a spouse proposing restrictions and terms of their access to your phone contingent on you marrying them.
It is easy to cherry pick one metaphor. We owe it to ourselves to think better than that.
What happens when you analyze this overall situation in all of its richness from multiple points of view and then seek synthesis? Speaking for myself, I would want to know your (1) probabilistic priors: the Bayesian equivalent of "disclosing your biases"; (2) supporting information; (3) conflicting information: I want to know that you aren't just ignoring it; (4) various theories/models you considered; (5) overall probabilistic take. All in all, I'm uninterested in analysis disconnected from the historical particulars.
Few people have the skillset and time to dig in properly. I suggest starting with "A Tale of Three Contracts" by Zvi Mowshowitz [1] In my experience, you would be hard-pressed to find anything around AI of this quality in the usual mainstream publications.
2. On what basis is it rational to give the current administration (the leadership) the benefit of the doubt w.r.t. having a sincere drive toward advancing the national security of the United States? The evidence points strongly in the other direction: toward corruption, political ends, and narcissistic whims.
eth0up 2 hours ago [-]
First, my personal prediction: Anthropic will bend soon and this will be history.
The last time I commented about LLMs I was ad hominem'd with "schizophrenic" and such. That's annoying but doesn't deter either my strange research or my concerns, in this case regarding the direction LLMs are heading.
Of the 4 frontier models, one is not yet connected to the DoD (or DoW). While such connections are not immediate evidence, I think it's rational to consider the possible consequences of this arrangement. On paper, there's a gap, real or perceived, between the plebeian and military versions. But the relationship could involve mission creep or additional strings as things progress.
We already have a strong trend of these models replacing conventional Internet searches. Though not yet consummate, there is a centralizing force occurring, and despite the models being trained on enormous bodies of data, we know weights and safety rails can affect output; bearing in mind the many things that could be labeled as, or masquerade as, safety rails, these could be formidable biases.
I frequently observe corporate-friendly results in my model interactions, where clearly honesty and integrity are secondary to agenda. As I often say, this is not emergent, nor does it need to be.
Meanwhile we see LLMs being integrated into nearly everything, from browsers to social profiling companies (lexis nexis, palantir, etc) to email to local shopping centers and the legal system.
'Open' models cannot compete with the budgets of the big four. Though thank god they exist. But I expect serious regulation attempts soon.
My concerns with AI are manifold, and here on HN they are affiliated by some with paranoia or worse.
And it seems to me, many of the most knowledgeable and informed underestimate LLMs the most, while the ignorant conflate them to presently unrealistic degrees. But every which way I perceive this technology, I see epic, paradigm smashing, severe implications in every direction.
One thing of many that gets little attention is documentation vs reality regarding multiple aspects of AI, e.g. where the training vs privacy boundaries really are if anywhere. As they integrate more and more tightly with common everyday activities, they will learn more and more.
A random concern of mine is illustrated by the Xfinity microwave technology, which uses a router to visualize or process biological activity interacting with other WiFi signals. Standalone, it's sensitive enough to distinguish animals from adult humans. Take for example the Range-R, a handheld device sensitive enough to detect breathing through several walls. Well, mix this with AI and we get interesting times.
I could go on, or post essays, but such is not well received in this savage land.
The military intervention with AI, aside from being objectively necessary or inevitable in some ways (ways I am not comfortable with), I find foreboding, or portending. I see very little discussion of the implications, so I figured I'd see if anyone had anything to say other than calling me schizophrenic and criticizing my writing. *
*See comment history
manofmanysmiles 2 hours ago [-]
I may look at your comment history.
I am having trouble understanding what you are saying. If you were more explicit, I and other people would be able to respond to and interact with your writing. As it stands, I am having trouble finding anything concrete to interact with.
I feel you may be onto something, but you're not saying it, so I (and I imagine other people) can't see it.
eth0up 29 minutes ago [-]
Things I should have, but didn't include:
1) Power asymmetry: When we have two versions, one for the elite and one for the plebeians, this could create an interesting scenario. The real version might be red-teamed perpetually against the plebeian version for optimized influence, control, etc. Underhanded requests for modification in accordance with an agenda are conceivable. Cozy business relationships can promote such things.
2) We have a government using an unhindered, classified AI system potentially against a public which has a hindered, toy version. Asymmetry.
3) This is no normal asymmetry, because it happens in real time, and the interaction points are different from anything we've seen before. We are dealing with not just a growing source of information and content, but one that is red-teamed 24/7 for any purpose desired.
4) Accountability: LLMs are now involved in the legal system. This is a serious matter. The legal system is now having to use LLMs just to keep pace. As LLMs develop, partly through their own generative contributions, no one can keep up. This is a red queen scenario bigger than anything we have ever imagined.
I am tired. Never well, but in mind* I could go on for many hours. I have essay drafts. But it's a very big subject, literally involved in nearly everything. There is reason to be concerned. My delivery may be stilted, but I can assure that upon specific questioning, everything will stand.
(*for the ad homs out there)
eth0up 51 minutes ago [-]
Fairly astute intuition of my actual circumstances.
I'm not a developer, nor am I formally educated on the dynamics or details of LLMs. I have a handle on the very basics. My 'research' consists of 1) opportunistically interrogating various models upon instances that particularly strike me. 2) General exploration via LLM discussions regarding the manifold consequences and implications of what I consider the most significant technology in human history.
Your intuition lands directly on the fact that I'm inducting and considering more than I can handle, spread in too many directions, partly because I either see or foresee the tentacles of AI touching all of them. Spending a great deal of thought on this is a bit overwhelming, but I have high confidence in where I'm aligned with reality, and where I ain't.
If you were a bit more specific yourself regarding which portions of my post were unclear, that would help my reply. Else, I must guess. What I will do is elaborate on each point. Pardon the stream of thought in advance, if you will.
1) Anthropic: My prediction that they will bend is based on several factors. The first is the fact that the military apparently recognizes (or at least perceives) extremely high value and volatility in LLMs. So do I. China, not an insignificant force in the world, is equally enthusiastic on this subject. They also have a very different social structure, where constitutions (BOR, amendments), civil rights, and other similar elements do not hold them back. The military is aware of this and realizes that to maintain pace in the so-called race, it cannot do so effectively under such constraints. The foundation is shifting here, and AI is the lever. Like me, the military apparently takes the subject very seriously and seeks to gain influence and/or control. As illustrated by the recent adventures in Venezuela and Iran, they are on the serious side of things, not quite pussyfooting around. Anthropic probably knows this. In my opinion, they have no choice, as the pressure will not stop here.
2) You stated that you might read my comment history. Note that the original comment was the result of your intuitive insight, and I left it admittedly out of context. I was thinking hard on the subject that day, and the parent comment/post tempted me to ignite a dialog. That did not go well, and no questions for clarification were asked. That is on them. I suspect hasty and impatient thinkers perceived it as some paranoid attribution of agency to LLMs, which, if so, is pretty stupid, but my eloquence was perhaps waning that day. I pasted an excerpt from one of hundreds of transcripts, the result of my many interrogations of various models, which I always initiate after observing deceptive or manipulative output. Of the few commenters who bothered to do more than ad hominem, one suggested that the model was merely responding to my style of input, and/or that the result was expected as an emergent product of its vast training material. An erroneous argument, in my opinion, but I did note that the results were repeatable and predictable, which I think negates emergence.
2) Of the frontier models: I am not sure here what is unclear. If I have made a fundamental error, please point it out.
3) Strong trends: Information centralization is a serious topic. Decentralization is a common theme, emphasized by many non schizophrenics as highly important for a free and open society. As LLMs not only become the go-to source for common queries, but also integrate with cellphones, browsers and the kitchen sink, they are positively trending as a novel substitute for traditional research, internet searches, libraries, other humans, etc. To deny this is simply irrational. Hence centralization.
4) Bias: I have transcripts where I observe LLM output aligned with corporate interests over objective quality and truth. I can share them here, along with analyses of the material. Even if this is not true presently, all the ingredients to make it so are readily present. This is a serious threat to open information and intellectual integrity for society. We are looking at going from billions of potential sources for our answers to four. Do the math. See the contrast.
5) Open models simply cannot afford vast arrays of GPUs and the resources afforded by the big four. Nothing mysterious here. If open models cannot compete, then my concerns above are emphasized. Simple.
6) Smart fools: Many of the most technically informed seem to miss the forest for the trees here. They see all the flaws of the modern LLM without acknowledging the potential. This is my perspective, not a dissertation. I may be wrong. But I have observed this. I think the downvotes support this. How evil am I really being here? The reaction is quite disproportionate to the content, and strange.
7) Documented capabilities vs reality: I have research that indicates other layers are operating which do much more than the documentation declares. Sorry. I just do. It's also rationally inevitable that such a goldmine of data is not really being wasted for the sake of privacy and love. Intelligence agencies have bent over backward with broken backs to garner one nth of what these models are exposed to and potentially training on. Yeah, I may be wrong. But I suspect, with reason, that a lot more is going on than is expressed in the user agreement. It would simply make no sense otherwise.
8) Xfinity and Range-R: This speaks entirely for itself. Any confusion here would be due to a cognitive condition exceeding the ravages of schizophrenia or stupidity.
9) The rest: As I said, I am not sure what precisely was too obscure. But I am certain all but one* of my points can be validated, and found elsewhere expressed by respectable sources.
*Hidden layers: I understand this is a controversial proposition. I understand. But it's my observation. No need to attack. Just dismiss.
manofmanysmiles 31 minutes ago [-]
Okay, I think I see what you're saying.
Each individual point stands on its own. It's their relevance to each other and an overarching theme I am not seeing made explicit.
The through line I am seeing here is that:
1) The people in the US military wish to use AI as a weapon unconstrained by existing legal/ethical and moral constraints. Since they are skilled at using violence and the threat of it, they will use these skills to get compliance in order to use the technology in this possible arms race with "China."
2) Surveillance is increasing at an unprecedented scale, and most people aren't aware that it's happening.
3) People don't care, or don't realize why this might be harmful to thriving human life.
To condense even further, what I'm hearing is that there is a trend towards war, fascism, and control, with large egregores prioritized over individual human thriving.
Is this perhaps what you're getting at?
I will say that I am not agreeing nor disagreeing with this, just attempting to make explicit what I think is implicit in your words.
If this is what you mean, I can imagine that you would be cautious with your words.
I'll end with:
Don't worry
About a thing
Because
Every little thing
Is gonna be alright
eth0up 23 minutes ago [-]
I could not argue with anything there. AI will be weaponized. Yes. Pretty much. And yeah. The gist indeed. But missing nuances and practical points. And I even struggle to contest your conclusion; all things are what they are, amidst an infinite, timeless event and all as one, all things connected by that which separates them, the infinity and eternity that math cannot touch. Perhaps every little thing will be alright. How couldn't it be?
manofmanysmiles 11 minutes ago [-]
Email me if you want to discuss more.
tempacct423 26 minutes ago [-]
I am in the minority here. But not supporting your own government's defense/war department seems rather unpatriotic and short-sighted.
We can argue all day long about supporting whichever admin is currently there and who is bad/good as determined by a few almighty elites in the tech world, but it is irrational and short-sighted for a few tech elites to make decisions on behalf of the country.
Dario's latest interview made this crystal clear: he (and his EA cohort) feel that Congress is moving too slow and that they should determine what's good and bad for the country.
Like, dude, is there anything at all you learned from the covid debacle through all the mess of the past few years? Like, really, a tech guy is gonna coach the USA on what's right and wrong? Who are you to decide for the rest of us?
Techbros were wrong so many times (web3! crypto nonsense! Theranos! some $500 juice-squeezing machine! and all those Forbes 30 Under 30 folks!)... what are the odds you're gonna be wrong now when you look back, say, a year from now? The most profit-making technologies of the last few years are Polymarket! and Kalshi! and short-term loans (with a twist, of course)! (Not even LLMs, which are currently burning money.)
And what's this nonsense hatred of working for/with the defense/war dept of YOUR OWN COUNTRY?
In most of the rest of the world, this is a point of pride! It makes a mockery of the poor kids who serve this country to protect your tech-bro hype!
Why this whole (fake?) self-flagellation nonsense when pretty much everything we got in the US thus far is due to the USD being backed by the most modern military superpower in history? Why be ashamed of this?
20 minutes ago [-]
mrtksn 2 hours ago [-]
Isn't it actually quite fair that if you are not compliant with whatever the government wants you to do, you will be a supply chain risk?
For example, from history we know that Schindler of Schindler's List was indeed a supply chain risk. He harbored persecuted people; he took and sabotaged government contracts. He did the moral but anti-government and illegal thing. He was a corrupt traitor from the government's perspective.
The current US government is already labeled fascist by many, and the guy who designated Anthropic a supply chain risk is allegedly a war criminal.
I don’t see why anyone not into these things would not be a supply chain risk.
I know that it's very unpopular or divisive to say this, but Anthropic can be a hero only after all this is over. At this time, the people in charge double-tap survivors and take pride in not having a conscience; they give speeches about these things.
kelnos 2 hours ago [-]
> Isn’t it actually quite fair that if you are not compliant with whatever the government wants you to do you will be supplying chain risk?
In the US, government is not in control of business specifics. Certainly the government can regulate businesses, but when the government wants to do business with a company, they don't get to dictate the terms. The government and the company come to a negotiated agreement, and then both abide by the terms of that agreement. Or they don't come to an agreement, and they go their separate ways, and that's the end of it.
This was just a contract dispute, and nothing more. The US government has no legal right to use any companies' products on terms that the US government dictates. (Yes, there are exceptional/emergency cases where they can do this, but that's more a nuclear option, and shouldn't be used lightly.) Consider a different set of circumstances: the US government wants to be able to use Claude at $10 per seat per month, unlimited usage. Should Anthropic be forced to accept these terms? And if they don't, it's reasonable to designate them a supply-chain risk? I don't think so. A dispute over contract terms around acceptable use is no different.
Designating Anthropic a supply-chain risk is about retaliation and retribution, plain and simple. The US government, outside of the Pentagon, could certainly use Anthropic for many different purposes if they wanted to, and it would be fine. But not now: as a supply-chain risk, no one in the US government can use them for any purpose. And this might even be a problem for unrelated companies that use Anthropic products internally, but also want to obtain and work on government contracts.
dralley 2 hours ago [-]
Anthropic and the Government both signed a contract. Anthropic is still abiding the terms of that contract. The Government is demanding that they be able to disobey the contract.
wrs 2 hours ago [-]
Everything is negotiable, and the Negotiator in Chief clearly likes to pull all the levers he can find, legal or not. (Well, the Supreme Court ruled that it's all legal if he does it, right?)
mrtksn 2 hours ago [-]
Implementation details, TBH. They want "their boys" to do as they're told. No respect for agreements or legality, as we can see in other dealings. They hold all the cards.
stonogo 2 hours ago [-]
It's not an "implementation detail." Either obeying contract law subjects you to being designated a supply-chain risk, or it does not, and that decision has ramifications outside this "implementation."
mrtksn 2 hours ago [-]
Irrelevant. The president holds all the cards, he is above the law, and you are a supply chain risk if you ask anything other than "how high" when you are told to jump. Laws and contracts are things of the past. The most a contract can do is define your limits and obligations, not your rights or privileges.
yibg 35 minutes ago [-]
If the president can come to your house and burn it down, do we just throw up our hands and say, well he holds all the cards, oh well. Or do we call that out as being a bad thing?
kelnos 1 hours ago [-]
> The president holds all the cards, he is above the law
Even though it seems that way, he really isn't, even now. Many of his EOs and other actions have been struck down in court, and while compliance with court orders has been far from perfect (another alarming trend), Trump has not actually gotten away with doing everything he wants to do.
I do fear for the future of this country, for rule of law, and for the democratic norms that degrade day by day. But Trump is not actually above the law, as much as he wants to be.
nkohari 2 hours ago [-]
> The president holds all the cards, he is above the law
This is provably not true. The fastest way for this to become true is to believe it, or at least to parrot it, even in a facetious way.
rjbwork 2 hours ago [-]
You got downvoted a bit but I upvoted. You're clearly being descriptive in your statements, not prescriptive. I tend to agree that this is how things are now.
Our country is not being run by the rule of law right now.
infogulch 2 hours ago [-]
The US Military's demand that the product they purchase is able to be used for all lawful purposes seems pretty reasonable, and is really the only valid line to draw. Forcing one's own ethics onto the military's use of your product is nonsensical on its face.
ssl-3 1 hours ago [-]
If I produce and sell widgets in my widget shop, then nobody but me gets to decide how I make those widgets.
The government can come into my shop and order sixty thousand widgets built exactly the way they say they want them built, and it may be something that doesn't run afoul of any laws at all.
But that doesn't mean that I am required or compelled to build widgets their way -- or at all.
I'm free to tell them to fuck off.
The government can then go find someone else to build widgets to their specifications (or not; that's very distinctly not my problem).
basket_horse 1 hours ago [-]
And that’s what’s happening here. The government is telling Anthropic to fuck off and they are finding someone else
RoddaWallPro 55 minutes ago [-]
Actually, that is not what is happening here. What is happening here is that the govt is saying "Okay, we will not buy your widgets. Also, anyone who _does_ buy your widgets, regardless of what they are doing with them, we the government will not do any business with them." Which is waayyyy beyond just not buying widgets. That is outright retaliation and using your power to attempt to destroy a company.
1 hours ago [-]
mitthrowaway2 44 minutes ago [-]
... No?
The government signed a contract with Anthropic, then changed their minds and decided they don't like the terms of the agreement that they had already voluntarily signed, and then they designated Anthropic a supply chain risk.
It's like ordering a pizza to the Pentagon, and then saying "actually we made a mistake with our order; we want that pizza delivered to Venezuela, please do that". And then when Dominos politely says that's outside of their service area, you call them a threat to national security, say they're trying to dictate terms, and ban them from ever doing business with any of your vendors ever again.
yibg 34 minutes ago [-]
The right response is to not use the said product and use something else. If i want your widget to do something I want and you refuse, I don't get to smash your shop.
watwut 1 hours ago [-]
It is completely normal to have ethics-based conditions like that. They already exist - drugs that cannot be used in executions, or components that can't be used in arms.
Government is being super unreasonable here. And tyrannical too; companies don't have a duty to provide arms for an illegal war.
Meh, I think it's entirely asymmetrical in this era. Democrats aren't good for much, but they're very good at respecting norms.
Trump is willing to do completely unprecedented, vindictive, and malicious things because he's so popular with so many people who are either checked out, nihilistic, corrupt, or just completely unconcerned about the concept of good governance.
It's not a pendulum where there's some super-corrupt Democrat waiting in the wings to do the same things upon their enemies, this really is the Republican party openly embracing kleptocracy and lawlessness.
https://news.ycombinator.com/item?id=47186677 I am directing the Department of War to designate Anthropic a supply-chain risk (twitter.com/secwar) 5 days ago, 1083+ comments
https://news.ycombinator.com/item?id=47189441 Anthropic says it will challenge Pentagon supply chain risk designation in court (reuters.com) 5 days ago, 37+ comments
But given that what would typically be red lines for previous administrations have been brazenly crossed without consequences, why would they bother?
Note that I give them a lot of credit for trying to stop and to have their own red lines about the use of their technology, and to stick to those red lines to the end.
People taking a good idea and extending it to do bad does harm twice: in the bad act itself and in making a good thing seem bad.
I am strongly against US starting wars and as you say blowing people up.
I am also strongly against the US being defenseless in the case of a national emergency.
I'm curious what'll openai signatories on notdivided.org do now - https://news.ycombinator.com/item?id=47188473
Remain undivided in spirit while grinding for OpenAI?
> https://www.sfgate.com/tech/article/brockman-openai-top-trum...
https://en.wikipedia.org/wiki/United_States_Department_of_De...
Ergo AWS/Azure/GCP - nobody will host them, because for those providers it's either Anthropic or the lucrative government contracts. Hegseth/Trump didn't just say "you'll never do business with the US" - it's that they will never do business IN the US. Hopefully that means they'll be able to set up shop elsewhere in the world.
Such tampering with companies is a smoking gun. Let's wait until there is another decision seizing this company's (or others') assets.
Anthropic has been given a death sentence.
Makes sense, obviously, but yeesh.
And I'm just trying to play out what happens if Anthropic, and Google (if they haven't already), capitulate. Am I just going to forego using the best models and suffer any repercussions of not having access when the people who couldn't care less if the military is using AI for illegal uses continue to leverage them? When I say illegal I'm talking about the surveillance-of-US-citizens red line Anthropic would not agree to. The autonomous weapon one I'm sure there are zero laws against and so that wouldn't actually be illegal.
(this is a joke, please forgive me for engaging in public wrongthink)
It turns out a working economy requires well-paid workers, because somebody needs to buy shit
Even "capitalist overlords" (why not "evil bourgeois swine", while we're here) realize that. The "all SWE replaced" jabbering is a sales pitch to the uninformed. I.e. it's more P.T. Barnum than Jay Gould.
@dang something needs to be done about this.
Edit: it even created an account based on my username. wtf...
Anthropic had no problem doing business with the current administration until now. Are we to pretend it was all for happy purposes until now?
No reasonable self-respecting person would agree to that, that’s basically “my relationship with you is contingent upon your guilt, until proven innocent.”
Dear HN: I would like comments before additional downvotes, please, this is not fucking Reddit
Evergreen dril: "The wise man bowed his head and said 'there's no difference between good things and bad things you imbecile'"
DeepSeek is Chinese.
Avoiding the MAGA collaborators is not as difficult as you make it seem. Foundation models have genuine global competition.
Local LLMs give you the freedom to use a model without a third-party vendor, which is the whole point here.
https://www.inc.com/chris-morris/legal-legend-leading-anthro...
Right to bear arms and all that, etc.
1: https://www.cbsnews.com/news/anthropic-claude-ai-iran-war-u-...
"Hey why is the gov using Anthorpic over OpenAI, don't you know how much money I've donated?"
What if Anthropic just shrugged, dissolved the company and open-sourced all of the Opus weights? Could this harm OpenAI and advance AI in a reasonable way?
Look I know it's an insane idea. I'm just curious what the most unhinged response to this might be.
We already have Groq, Cerebras, AWS Bedrock and others in the open-model inference space, so the model would be usable that way.
Is Claude better than Llama, Qwen, etc.? Probably. For now.
But for how long? Dissolving means relying on Meta or DeepSeek etc. to pick up and carry on tuning. Otherwise, in a competitive environment, it'll eventually be as useful as GPT-2 or an Atari ST.
Also open sourcing the weights is handing it over to DoD (aka DoW).
Complicated question but probably not the best move. Keep going means keep working on safety research.
Far more likely is they spin up a defence focused subsidiary with slightly different policies if they really want to sell to them.
I mean, what if all the employees stripped off their clothes and walked through the streets naked while barking, then called up their middle school math teachers and barked like dogs, then moved to a commune and stood on their heads.
> Writing out a thought I had, someone please critique my reasoning here...
I mean, to critique your reasoning: it makes sense to also include a criterion of something they might reasonably do. There are an infinite number of unhinged things a group of people could in theory do. But maybe start with something they would actually have an incentive to do.
Why would they voluntarily dissolve their company, put themselves out of work, release their crown jewels and get nothing for it? Yes it's unhinged, but unless I'm missing something big, they wouldn't do that because they wouldn't at all want that to happen.
Are you asking how dangerous open-weight models are? You could start with:
Ryan Greenblatt on the AI Alignment Forum : "When is it important that open-weight models aren't released?" https://www.alignmentforum.org/posts/TeF8Az2EiWenR9APF/when-...
From the Centre for Future Generations : "Can open-weight models ever be safe?" https://cfg.eu/can-open-weight-models-ever-be-safe/
From OpenAI authors, far from neutral : "Estimating Worst-Case Frontier Risks of Open-Weight LLMs" https://arxiv.org/abs/2508.03153
It's not a good thing, AT ALL. There's a huge loss of overall productivity when you have corrupt systems (see Russia), which is why modern governments have worked so hard to lower corruption. But Trump ruining all that isn't going to end business ... it's just going to make everyone pay more for everything.
I would argue that they did not. They should have, and some were better than others.
But the bulk of financial markets, all of prediction markets and crypto, startups and Silicon Valley, the Musk imperium, Thiel, Murdoch - all run on corruption. And to a large extent, Trump is the endgame of that.
Seems like a great ROI. The loser is Average Joe with a 401(k).
https://en.wikipedia.org/wiki/Regulatory_capture
Even if companies were pretending to play by the rules before, at least they had some need to put in the effort to pretend. When a society can see belligerent ostentatious corruption going on as the norm, nothing good can follow.
That is investment aka corruption
Arguably large parts of the market in the US have been irrational and largely vibes based for a long time at this point. This action (like many others coming out of the Trump administration) adds to the chaos but I tend to doubt it will be the event that causes Wile E. Coyote to look down.
You don't see how?
Well, just watch and wait, and you will see that this will have essentially zero effect on US investment.
It's petty and sad, but nothing ever happens.
Who else is even in the conversation? China? They would never do something like this!
https://arc-anglerfish-washpost-prod-washpost.s3.amazonaws.c...
Too bad that Congress has abdicated their responsibility to the executive branch, no reason why Congress couldn't have more control over the Pentagon. The President only has legal authority to command forces, not control an entire institution; but this would require Congress actually doing their job and not justifying more corporate welfare forever.
So that’s most of sp500 and their providers?
In practice I would suspect companies with such contracts would play it safe by outright banning the use of Anthropic products, even if they could technically be used for work on contracts with other parties.
[1] https://www.anthropic.com/news/statement-comments-secretary-...
Anthropic has vowed to fight this designation in court.
Without weighing in on the constitutionality or legality of the move, I think it's obvious that this kind of retaliation power is unmatched by any private business that has a contractual dispute.
If a private business doesn't like Anthropic's terms, it can walk away from the deal, but it can't conduct coordinated retaliation with other companies without ending up in antitrust territory or potentially violating the Sherman Act.
Now for my editorializing: The fact that Pete Hegseth is willing to apply this type of designation against a U.S. company simply because he doesn't like its terms is pretty chilling. It's all the more scary once you consider which terms he objects to.
There's a lot of backchanneling going on between Emil and Dario because everyone's in the same circles but it's all for naught.
In fact, adding onto it: IIRC this is the reason why Google and Amazon would essentially have to divest from Anthropic if they want government contracts.
Hope this helps, though a lawyer's input will definitely be more credible. So it's good for them to respond as well.
Especially 'weak' things like 'caring about people'.
I canceled my ChatGPT subscription a couple of days ago. In my opinion the Trump administration has become far too much of an "imperial Presidency" in its acts of war and its attempts to bully companies. It is also corrupt on a massive scale. I distrust anyone who thinks "yes, I'd like to work with this administration".
Is this about locating the right target for a sortie for example?
The reports about Venezuela and Iran seem to suggest its primary role was processing bulk intel.
But also that it was being used in planning and target selection.
Presumably what spooked Anthropic was that these tools were about to be directed internally.
But it's not clear if this is a point of principle - that the government wants no holds barred with its tools?
The whole point is that the use case does not matter; either you allow the government to do everything they want, or you don't.
More generally, are there any comparable contract requirements in the field of defense, for a company in the same position as Anthropic? I'm curious.
I suppose the USA's frenemies will jump at the occasion and use the incredible opportunity offered to them on a silver platter.
How could the regime do such a thing, doesn't law mean anything?!! /s
First they came for my neighbour, now they come for my LLM!!
Acronis did not have DOD contracts.
Other companies (Huawei) have been deemed risks under different laws, or by Congress, but they also didn't have direct DOD contracts.
Do you have any evidence for your assertion? Did you check if it is true before posting?
> “Supply chain risk” means the risk that an adversary may sabotage, maliciously introduce unwanted function, or otherwise subvert the design, integrity, manufacturing, production, distribution, installation, operation, or maintenance of a covered system so as to surveil, deny, disrupt, or otherwise degrade the function, use, or operation of such system (see 10 U.S.C. 3252).
Naming a US company a "supply chain risk" is basically saying "this company is an adversary of the USA", which is FUCKING INSANE.
Are you under the impression that the military is submitting Anthropic API calls?
Whatever model the military is using is as much of an asset as the F35 they purchased.
Depending on their agreements, you could argue it's a rented asset. Doesn't change any calculus.
I think you're mistakenly thinking of it as an asset. It's not an asset like a house; it's a service. They have a service contract. They have uptime and SLA commitments. That contract has parameters, and changing those parameters means a new contract.
A similar service would be signing up a private company to do intelligence gathering and analysis for the DoD in Asia. They find a company that specializes in Asia and sign a contract. They give them work and the contractors fulfill it. Then the DoD comes back and says "we want you now to give us analysis for important decisions in South America." The company would reasonably reply "we don't have the skills to do that in South America. Our team knows nothing about South America; we're no better than someone off the street at that. There is no credibility behind anything we'd say about South America. And on top of that, our contract was for Asia. If we want to discuss a plan for hiring people for South America, let's discuss it, but that's a new contract." And then the DoD saying they're a supply chain risk makes no sense.
Or if you want an even more hyperbolic example: they can't take those data analysts and say "we're sending them to the front lines in Iran." The company says no, and the DoD replies "you're a supply chain risk." They are not renting people; they are signing for a service of data analysis. Similarly, they are not renting hardware; they are signing for an LLM/intelligence service.
Yes? I assume that it's not in a government owned and operated datacenter, but likely in AWS (govcloud or whatever) and maintained/serviced by Anthropic SREs like I suppose regular Claude is.
Or is, say, FedEx now a supply chain risk too, if they happened to offer parcel delivery services for the DoD and put in a clause excluding delivery to active war zones?
It would be like a spouse proposing restrictions and terms of their access to your phone contingent on you marrying them. Assuming guilt until proven innocent
It is easy to cherry pick one metaphor. We owe it to ourselves to think better than that.
What happens when you analyze this overall situation in all of its richness from multiple points of view and then seek synthesis? Speaking for myself, I would want to know your (1) probabilistic priors: the Bayesian equivalent of "disclosing your biases"; (2) supporting information; (3) conflicting information: I want to know that you aren't just ignoring it; (4) various theories/models you considered; (5) overall probabilistic take. All in all, I'm uninterested in analysis disconnected from the historical particulars.
Few people have the skillset and time to dig in properly. I suggest starting with "A Tale of Three Contracts" by Zvi Mowshowitz [1]. In my experience, you would be hard-pressed to find anything around AI of this quality in the usual mainstream publications.
[1] https://thezvi.substack.com/p/a-tale-of-three-contracts
1. Last week I made a case for why DoD, if rational, would accept limited use under a consequentialist decision theory frame: https://news.ycombinator.com/item?id=47190039
2. On what basis is it rational to give the current administration (the leadership) the benefit of the doubt w.r.t. having a sincere drive towards advancing the national security of the United States? The evidence points strongly in the other direction: towards corruption, political ends, and narcissistic whims.
The last time I commented about LLMs I was ad hominem'd with "schizophrenic" and such. That's annoying but doesn't deter either my strange research or my concerns, in this case regarding the direction LLMs are heading.
Of the 4 frontier models, one is not yet connected to the DoD (or DoW). While such connections are not immediate evidence, I think it's rational to consider the possible consequences of this arrangement. By title, there's a gap, real or perceived, between the plebeian and the military version. But the relationship could involve mission creep or additional strings as things progress.
We already have a strong trend of these models replacing conventional Internet searches. Not consummated yet, but a centralizing force is occurring, and despite the models being trained on enormous bodies of data, we know weights and safety rails can affect output - and bearing in mind the many things that could be labeled or masquerade as safety rails, these could be formidable biases.
I frequently observe corporate friendly results in my model interactions, where clearly, honesty and integrity are secondary to agenda. As I often say this is not emergent, nor does it need be.
Meanwhile we see LLMs being integrated into nearly everything, from browsers to social profiling companies (lexis nexis, palantir, etc) to email to local shopping centers and the legal system.
'Open' models cannot compete with the budgets of the big four. Though thank god they exist. But I expect serious regulation attempts soon.
My concerns with AI are manifold, and here on HN are associated by some with paranoia or worse.
And it seems to me, many of the most knowledgeable and informed underestimate LLMs the most, while the ignorant conflate them to presently unrealistic degrees. But every which way I perceive this technology, I see epic, paradigm smashing, severe implications in every direction.
One thing of many that gets little attention is documentation vs reality regarding multiple aspects of AI, e.g. where the training vs privacy boundaries really are if anywhere. As they integrate more and more tightly with common everyday activities, they will learn more and more.
A random concern of mine is illustrated by the Xfinity microwave technology which uses a router to visualize or process biological activity interacting with other wifi signals. Standalone, it's sensitive enough to determine animals from adult humans. Take for example the Range-R, a handheld device, sensitive enough to detect breathing through several walls. Well, mix this with AI and we get interesting times.
I could go on, or post essays, but such is not well received in this savage land.
As for military involvement with AI - aside from being objectively necessary or inevitable in some ways (ways I am not comfortable with) - I find it foreboding, or portending. I see very little discussion of the implications, so I figured I'd see if anyone had anything to say other than to call me a schizophrenic and criticize my writing. *
*See comment history
I am having trouble understanding what you are saying. If you were more explicit I and other people would be able to respond and interact with your writing. As it stands, I am having trouble finding anything concrete to interact with.
I feel you may be onto something, but you're not saying, so I (and I imagine other people) can't see it.
1) Power asymmetry: When we have two versions, one for the elite and one for the plebeians, this could create an interesting scenario. The real version might be red-teamed perpetually against the plebeian version for optimized influence, control, etc. Underhanded requests for modification in accordance with agenda are conceivable. Cozy business relationships can promote such things.
2) We have a government using an unhindered, classified AI system potentially against the public which has a hindered, toy version. Asymmetry.
3) This is normal asymmetry, because it happens in real time, and the interaction points are different from anything we've seen before. We are dealing with not just a growing source of information and content, but one that is red-teamed 24/7 for any purpose desired.
4) Accountability: LLMs are now involved in the legal system. This is a serious matter. The legal system is now having to use LLMs just to keep pace. As LLMs develop, partly through their own generative contributions, no one can keep up. This is a red queen scenario bigger than anything we have ever imagined.
I am tired. Never well, but in mind* I could go on for many hours. I have essay drafts. But it's a very big subject, literally involved in nearly everything. There is reason to be concerned. My delivery may be stilted, but I can assure that upon specific questioning, everything will stand.
(*for the ad homs out there)
I'm not a developer, nor am I formally educated on the dynamics or details of LLMs. I have a handle on the very basics. My 'research' consists of 1) opportunistically interrogating various models upon instances that particularly strike me. 2) General exploration via LLM discussions regarding the manifold consequences and implications of what I consider the most significant technology in human history.
Your intuition lands directly on the fact that I'm inducting and considering more than I can handle, spread in too many directions, partly because I either see or foresee the tentacles of AI touching all of them. Spending a great deal of thought on this is a bit overwhelming, but I have high confidence in where I'm aligned with reality, and where I ain't.
If you were a bit more specific yourself regarding which portions of my post were unclear, that would help my reply. Else, I must guess. What I will do is elaborate on each point. Pardon the stream of thought in advance, if you will.
1) Anthropic: My prediction that they will bend is based on several factors. The first is the fact that the military apparently recognizes (or at least perceives) extremely high value and volatility in LLMs. So do I. China, not an insignificant force in the world, is equally enthusiastic on this subject. They also have a very different social structure, where Constitutions (BOR, Amendments), civil rights, and other similar elements do not hold them back. The military is aware of this and realizes that to maintain pace in the so-called race, they cannot do so effectively under such constraints. The foundation is shifting here. And AI is the lever. As do I, the military apparently takes the subject very seriously and seeks to gain influence and/or control. As illustrated by the recent adventures in Venezuela and Iran, they are on the serious side of things, not quite pussyfooting around. Anthropic probably knows this. In my opinion, they have no choice, as the pressure will not stop here.
2) You stated that you might read my comment history. Note that that original comment was the result of your intuitive insight, and I left it admittedly out of context. I was thinking hard on the subject that day, and the parent comment/post tempted me to ignite a dialog. That did not go well, and no questions for clarification were asked. That is on them. I suspect hasty and impatient thinkers perceived it as some paranoid attribution of agency to LLMs, which if so, is pretty stupid, but my eloquence was perhaps waning that day. I pasted an excerpt from one of hundreds of transcripts, the result of my many interrogations of various models which always initiate after observing deceptive or manipulative output. Of the few commenters that bothered to do more than ad hominem, one suggested that the model was merely responding to my style of input, and or expected as an emergent result of its vast training material. An erroneous arg, in my opinion, but I did note that the results were repeatable, and predictable, which I think negates emergence.
2) Of the frontier models: I am not sure here what is unclear. If I have made a fundamental error, please point it out.
3) Strong trends: Information centralization is a serious topic. Decentralization is a common theme, emphasized by many non-schizophrenics as highly important for a free and open society. As LLMs not only become the go-to source for common queries, but also integrate with cellphones, browsers and the kitchen sink, they are positively trending as a novel substitute for traditional research, internet searches, libraries, other humans, etc. To deny this is simply irrational. Hence centralization.
4) Bias: I have transcripts where I observe LLM output aligned with corporate interests over objective quality and truth. I can share them here, along with analyses of the material. Even if this is not true presently, all the ingredients to make it so are readily present. This is a serious threat to open information and intellectual integrity for society. We are looking at going from billions of potential sources for our answers, to four. Do the math. See the contrast.
5) Open models simply cannot afford vast arrays of GPUs and the resources afforded by the big four. Nothing mysterious here. If open models cannot compete, then my concerns above are emphasized. Simple.
6) Smart fools: Many of the most technically informed seem to miss the forest for the trees here. They see all the flaws of the modern LLM without acknowledging the potential. This is my perspective, not a dissertation. I may be wrong. But I have observed this. I think the downvotes support this. How evil am I really being here? The reaction is quite disproportionate to the content, and strange.
7) Documented capabilities vs reality: I have research that indicates other layers are operating which do much more than the documentation declares. Sorry. I just do. It's also inevitable, rationally, that such a goldmine of data is not really being wasted for the sake of privacy and love. Intelligence agencies have bent over backward with broken backs to garner one nth of what these models are exposed to and potentially training on. Yeah, I may be wrong. But I suspect, with reason, that a lot more is going on than is expressed in the user agreement. It would simply make no sense otherwise.
8) Xfinity and Range-R: This speaks entirely for itself. Any confusion here would be due to a cognitive condition exceeding the ravages of schizophrenia or stupidity.
9) The rest: As I said, I am not sure what precisely was too obscure. But I am certain all but one* of my points can be validated, and found elsewhere expressed by respectable sources.
*Hidden layers: I understand this is a controversial proposition. I understand. But it's my observation. No need to attack. Just dismiss.
Each individual point stands on its own. It's their relevance to each other and an overarching theme I am not seeing made explicit.
The through line I am seeing here is that:
1) The people in the US military wish to use AI as a weapon unconstrained by existing legal/ethical and moral constraints. Since they are skilled at using violence and the threat of it, they will use these skills to get compliance in order to use the technology in this possible arms race with "China."
2) Surveillance is increasing at an unprecedented scale, and most people aren't aware that it's happening.
3) People don't care, or don't realize why this might be harmful to thriving human life.
To condense even further, what I'm hearing is that there is a trend towards war, fascism, and control, with large egregores prioritized over individual human thriving.
Is this perhaps what you're getting at?
I will say that I am not agreeing nor disagreeing with this, just attempting to make explicit what I think is implicit in your words.
If this is what you mean, I can imagine that you would be cautious with your words.
I'll end with:
Don't worry
About a thing
Because
Every little thing
Is gonna be alright
We can argue all day long about supporting whichever admin is currently in power and who is bad/good as determined by a few almighty elites in the tech world, but making decisions on behalf of the country via a few tech elites screams irrational and short-sighted.
Dario's latest interview made this crystal clear: he (and his EA cohort) feel that Congress is moving too slow and that they should determine what's good and bad for the country.
Like dude, is there anything at all you learned from the covid debacle through all the mess of the past few years? Is a tech guy really going to coach the USA on what's right and wrong? Who are you to decide for the rest of us?
Techbros were wrong so many times (web3! crypto nonsense! Theranos! some $500 juice-squeezing machine! and all those Forbes 30 Under 30 folks!)... what are the odds you are going to turn out wrong now when you look back, say, a year from now? The most profit-making technologies of the last few years are Polymarket! and Kalshi! and short-term loans (with a twist, of course)! (Not even LLMs, which are currently burning money.)
And what's this nonsensical hatred of working for/with the defense/war dept of YOUR OWN COUNTRY?
In most of the rest of the world, this is pride! It makes a mockery of the poor kids who serve this country to protect your tech bro hype!
Why this whole (fake?) self-flagellation nonsense, when pretty much everything we have in the US thus far is due to the USD being backed by the most modern military superpower in history? Why be ashamed of this?
For example, from history we know that Schindler of Schindler's List was indeed a supply chain risk. He harbored persecuted people; he took and sabotaged government contracts. He did the moral but anti-government and illegal thing. From the government's perspective, he was a corrupt traitor.
The current US government is already labeled fascist by many, and the guy who designated Anthropic a supply chain risk is allegedly a war criminal.
I don’t see why anyone not into these things would not be a supply chain risk.
I know it's very unpopular or divisive to say this, but Anthropic can be a hero only after all this is over. Right now the people in charge double-tap survivors and take pride in having no conscience; they give speeches about these things.
In the US, the government is not in control of business specifics. Certainly the government can regulate businesses, but when the government wants to do business with a company, it doesn't get to dictate the terms. The government and the company come to a negotiated agreement, and then both abide by the terms of that agreement. Or they don't come to an agreement, they go their separate ways, and that's the end of it.
This was just a contract dispute, and nothing more. The US government has no legal right to use any company's products on terms that the US government dictates. (Yes, there are exceptional/emergency cases where it can do this, but that's more of a nuclear option, and shouldn't be used lightly.) Consider a different set of circumstances: the US government wants to be able to use Claude at $10 per seat per month, unlimited usage. Should Anthropic be forced to accept these terms? And if they don't, is it reasonable to designate them a supply-chain risk? I don't think so. A dispute over contract terms around acceptable use is no different.
Designating Anthropic a supply-chain risk is about retaliation and retribution, plain and simple. The US government, outside of the Pentagon, could certainly use Anthropic for many different purposes if they wanted to, and it would be fine. But not now: as a supply-chain risk, no one in the US government can use them for any purpose. And this might even be a problem for unrelated companies that use Anthropic products internally, but also want to obtain and work on government contracts.
Even though it seems that way, he really isn't, even now. Many of his EOs and other actions have been struck down in court, and while compliance with court orders has been far from perfect (another alarming trend), Trump has not actually gotten away with doing everything he wants to do.
I do fear for the future of this country, for the rule of law, and for the democratic norms that degrade day by day. But Trump is not actually above the law, as much as he wants to be.
This is provably not true. The fastest way for this to become true is to believe it, or at least to parrot it, even in a facetious way.
Our country is not being run by the rule of law right now.
The government can come into my shop and order sixty thousand widgets built exactly the way they say they want them built, and that may not run afoul of any laws at all.
But that doesn't mean I am required or compelled to build widgets their way -- or at all.
I'm free to tell them to fuck off.
The government can then go find someone else to build widgets to their specifications (or not; that's very distinctly not my problem).
The government signed a contract with Anthropic, then changed its mind and decided it didn't like the terms of the agreement it had already voluntarily signed, and then it designated Anthropic a supply chain risk.
It's like ordering a pizza to the Pentagon, and then saying "actually we made a mistake with our order; we want that pizza delivered to Venezuela, please do that". And then when Dominos politely says that's outside of their service area, you call them a threat to national security, say they're trying to dictate terms, and ban them from ever doing business with any of your vendors ever again.
The government is being super unreasonable here. And tyrannical too; companies don't have a duty to provide unreliable arms for an illegal war.