ehnto 20 hours ago [-]
With respect to my private data, it seems all roads eventually lead to California.
crefiz 8 hours ago [-]
Sorry, what does that mean? GDPR national here.
alsetmusic 8 hours ago [-]
Probably a reference to Silicon Valley?
tencentshill 1 day ago [-]
Notable: Added "Microsoft Azure, which provides cloud infrastructure for all Anthropic products (Worldwide)."
CodeCompost 16 hours ago [-]
That is significant for us. We have already accepted the risk of using Microsoft Azure, which is why we use GitHub Copilot.
We have Claude disabled at the moment, but if Anthropic has moved over to Azure then we can consider starting to use it.
"Accepted the risk", just in case people don't know, is a compliance term. I don't mean that Azure is risky.
speedgoose 15 hours ago [-]
> I don't mean that Azure is risky.
Depending on the company and their sector, people will nod in approval, or start laughing.
My company also accepted the risk of using Microsoft. We have a "data sharing agreement" together, with very powerful magical words. Compliance people are happy and sleep well.
ptx 7 hours ago [-]
They added Microsoft, but alongside them they also list Google and Amazon for "all products".
cdrnsf 23 hours ago [-]
Hopefully it goes better for them than it has for GitHub.
dylan604 22 hours ago [-]
Hope in one hand and do something in the other, and see which one fills up faster. Hoping is a strained idea at the best of times, but hoping on Azure really strains credulity.
seemaze 19 hours ago [-]
If you hope for a hand full of do, you win(doze?)
copperx 19 hours ago [-]
But increases credibility?
pwarner 23 hours ago [-]
Microsoft 365 Copilot has enabled Claude models, and I imagine they want that running on Azure?
jadbox 22 hours ago [-]
Likely. MS doesn't like using models that are not hosted by them internally (see VSCode Copilot)
varispeed 23 hours ago [-]
Ahh now it is clear why so many outages lately. Solid choice.
victor106 18 hours ago [-]
When you host a solid model on terrible infrastructure, the infrastructure wins
baq 15 hours ago [-]
As God intended.
I fear the day it becomes the other way around.
withinboredom 7 hours ago [-]
Hold my beer.
rvz 23 hours ago [-]
There you go. So when Azure has an outage, so will Anthropic (and GitHub).
Now expect both of them to have unstable uptime and outages every week.
brookst 11 hours ago [-]
You don’t think Anthropic has any kind of resiliency, so net new compute reduces downtime? Any docs on that view?
To be clear, for those reading these comments and thinking “oh no Azure”, this is an addition to the list of cloud companies that provide “cloud infrastructure worldwide” for “all products”. Alongside GCP and AWS. This is not a GitHub style announcement that they’ve moved all operations to Azure.
kleene_op 13 hours ago [-]
Coincidentally, Claude Code appears to be down right now (in Europe West, at least).
Every time I hear a "something X Azure" announcement, that something just seems to break right away.
I know correlation is not causation, but my opinion of Azure is already too damn low not to link those two events.
matheusmoreira 13 hours ago [-]
It's also down for me here in Brazil. I've been getting overloaded errors for about an hour now. It's been happening a lot this week. Is this normal for Anthropic?
SomeUserName432 13 hours ago [-]
Working fine for me right now, from Brazil. Claude via GitHub Copilot, at least.
matheusmoreira 12 hours ago [-]
I'm using Claude Code on the terminal. Not sure if it matters.
The promotional double usage period is just about to end too. Sucks.
CamouflagedKiwi 11 hours ago [-]
It's down for me too. A colleague says it's up though - it's possible they're shedding different groups of users (he has the Max subscription, I don't).
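The shedding the parent speculates about is easy to sketch. This is purely hypothetical (the tier names and thresholds are invented, not anything Anthropic has documented), but it shows the general shape of tier-based load shedding:

```python
# Hypothetical sketch of tier-based load shedding: under pressure,
# lower-priority tiers are rejected first. Tier names and the 0.15
# threshold step are invented for illustration.
TIER_PRIORITY = {"max": 0, "pro": 1, "free": 2}  # lower number = shed last

def should_shed(tier: str, load: float) -> bool:
    """Shed a request when current load exceeds the tier's tolerance.

    load is a 0.0-1.0 utilization figure; each priority step lowers
    the load level at which requests start getting rejected.
    """
    threshold = 1.0 - 0.15 * TIER_PRIORITY.get(tier, 2)
    return load >= threshold
```

Under this sketch, at 90% load a free-tier request is shed (threshold 0.70) while a Max-tier request (threshold 1.0) still gets through, which would produce exactly the "up for him, down for me" pattern described.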
craxyfrog 20 hours ago [-]
Worth noting the distinction between subprocessors that handle customer data vs. those that handle operational/business data. The ones in the "Customer Data" category are where the compliance implications are most significant for enterprise customers under GDPR, HIPAA, or similar frameworks.
For anyone evaluating this for a procurement decision: the relevant questions are (1) which subprocessors have access to content you send in API requests, (2) what data processing agreements are in place with each, and (3) what is the notification window for new subprocessor additions. The 30-day notice for customer data subprocessors is fairly standard for enterprise SaaS at this point.
Publishing this list proactively rather than only on request is a positive signal, even if the list itself is fairly short.
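The three procurement questions above lend themselves to a mechanical first pass. This is a hypothetical sketch (the list entries are invented examples, not Anthropic's actual subprocessors), but it shows how you might flag customer-data subprocessors that lack a DPA or the standard 30-day notice window:

```python
# Hypothetical triage of a subprocessor list for a procurement review.
# Entries are invented examples, not Anthropic's actual subprocessors.
SUBPROCESSORS = [
    {"name": "ExampleCloud", "category": "customer_data", "dpa": True,  "notice_days": 30},
    {"name": "ExampleMail",  "category": "operational",   "dpa": True,  "notice_days": 0},
    {"name": "ExampleCDN",   "category": "customer_data", "dpa": False, "notice_days": 30},
]

def flag_for_review(subprocessors):
    """Return customer-data subprocessors missing a DPA or 30-day notice."""
    return [
        s["name"]
        for s in subprocessors
        if s["category"] == "customer_data"
        and (not s["dpa"] or s["notice_days"] < 30)
    ]
```

Operational-only subprocessors fall out of scope immediately, which matches the point above that the "Customer Data" category is where the compliance weight actually sits.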
yalogin 19 hours ago [-]
I don’t know what I am looking at there. What is a subprocessor?
stingraycharles 17 hours ago [-]
It’s a legal term for data handling. It’s when Anthropic uses an external party to handle their data / systems, but Anthropic remains the legal entity responsible for data privacy, since the customer (you) has a contract with Anthropic.
So I thought there were multiple FedRAMP service providers offering hosted Claude models. Not sure why they are linking to one in particular.
dan000892 19 hours ago [-]
Where are they linking to just one? The chart shows three: Palantir, AWS GovCloud, and GCP w/FR-High Assured Workload.
The chart should show ITAR also IMO. Only Palantir and AWS GovCloud would have checkboxes and that’s extremely relevant to defense contractors. (Vertex AI is available within an FR-High assured workload but not ITAR, the only conceivable reason for which would be foreign person access to the US sovereign production environment.)
motbus3 10 hours ago [-]
The slopped page doesn't work properly on mobile Chrome.
gnabgib 1 day ago [-]
Title: Welcome to the Anthropic Trust Center
.. was this a deep link? You might want to repeat in the comments
barbazoo 24 hours ago [-]
> Anthropic Subprocessor Changes
> General
> Published March 26, 2026
> We've updated our subprocessor list with three additions
Works for me, gotta scroll down a bit
gnabgib 24 hours ago [-]
That's an h3, not a title. Looks like they probably meant https://trust.anthropic.com/updates; it's still an entry in an h3 (with "Welcome to the Anthropic Trust Center" as the title), but it is at least the most recent update (a canonical link would stop this from being deep-linked directly).
rvz 24 hours ago [-]
[flagged]
iambateman 23 hours ago [-]
I hear the slot machine thing a lot but I don’t get it.
I use Claude Code every day because it makes me way more productive, but the slot machine comparison doesn't resonate with me. Can you expand on what mechanism gives it a slot machine effect? Is it for all users or just a subset?
svnt 22 hours ago [-]
For people who want to ask a model for an app, or a website, or something at a level of “hey you make apps right, I have had this idea for years…” the experience is akin to a slot machine — sometimes they get what they imagined their description would create and it works, and sometimes they get a hollow chocolate approximation.
fenykep 23 hours ago [-]
I think it is just a strawman extrapolation of the nondeterministic nature of LLMs.
rvz 22 hours ago [-]
[flagged]
wewewedxfgdf 20 hours ago [-]
[flagged]
sdwr 20 hours ago [-]
The more they share, the easier it is to exploit the system.
alexjurkiewicz 20 hours ago [-]
"I'm trying to do something illegal and Anthropic are aware. Why do they keep banning me??"
wewewedxfgdf 20 hours ago [-]
Unless you're not.
Look, if you make an LLM and you don't want people using it in a particular way, then communicate with them. And if you can detect what you think is such behavior, then communicate. Out in real life you don't threaten to end a relationship over every issue that comes up.
It's such childish business to always pull out the ban hammer any time there's any possible issue with how they want their system used.
Nuzzerino 19 hours ago [-]
The last time I used Claude, I was completely locked out of a long chat (including not being able to view it) for sending something innocent that was written in another language, where there was apparently some confusion with the translation. I’m sure it will get worse over time until Chinese models start to proliferate more and challenge the monopoly on regulatory policy.
octoberfranklin 23 hours ago [-]
WTF is a "subprocessor"?
They should just be honest and say "data loophole".
dchuk 21 hours ago [-]
It’s basically another party used as infrastructure by the company whose services you’re using, one that has access to your data, but that subprocessor doesn’t need to extend its terms down into the EULA. So if you host databases on AWS, AWS is your subprocessor.
pdabbadabba 23 hours ago [-]
It is an important legal concept under the GDPR and other data governance frameworks.