CloudArch_Seek 13 hours ago [-]
Nice work on implementing a usage circuit breaker pattern for Cloudflare Workers. This is a crucial piece of infrastructure for building resilient, production-grade serverless applications, especially when dealing with downstream APIs or expensive operations. Managing failure cascades and cost overruns proactively is smart engineering. If you or your users are integrating with OpenAI APIs in those Workers and run into quota limits or find the costs scaling unpredictably, it's worth checking out SeekAPI.ai. They offer a 100% OpenAI-compatible endpoint with DeepSeek R1/V3 models at roughly 90% lower cost and without regional restrictions, which could pair well with your circuit breaker to manage both reliability and budget effectively.
photobombastic 1 day ago [-]
This is a real problem. I've heard similar stories from people running CI pipelines — a retry loop bug burns through your entire monthly Actions minutes budget in hours, and there's no built-in circuit breaker there either.
The approach of tracking usage locally and cutting off before you hit billing overages makes a lot more sense than trying to parse the billing API after the fact. Prevention over detection.
Could be cool to set per-worker limits in addition to the global ones.
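The "track locally, cut off before overage" idea can be sketched in a few lines. This is a minimal illustration, not the project's actual implementation; the class name, budget numbers, and 90% trip ratio are all made up here:

```typescript
// Minimal sketch of a usage circuit breaker: count spend locally and
// trip *before* the configured budget is exhausted, rather than
// reacting to billing data after the fact. Illustrative only.
class BudgetBreaker {
  private used = 0;

  constructor(
    private readonly budget: number,     // e.g. daily request budget
    private readonly tripRatio = 0.9,    // trip at 90% of budget
  ) {}

  // Record units of spend (a request, a subrequest, a paid API call).
  record(units = 1): void {
    this.used += units;
  }

  // The breaker is "open" (rejecting work) once usage crosses the
  // trip threshold, i.e. before the hard limit is actually hit.
  get open(): boolean {
    return this.used >= this.budget * this.tripRatio;
  }

  // Gate an expensive operation behind the breaker.
  async guard<T>(fn: () => Promise<T>): Promise<T> {
    if (this.open) throw new Error("budget breaker open");
    this.record();
    return fn();
  }
}
```

Per-worker limits on top of the global one would then just be two breakers consulted in sequence before doing the work.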
ethan_zhao 1 day ago [-]
Totally. When I first launched my project, I literally couldn't sleep at night, worrying that some bug in my code would spiral into a self-inflicted Denial of Wallet attack by morning. That fear is what pushed me to build the circuit breaker early on. Prevention over detection is spot on.
kopollo 1 day ago [-]
When collecting RSS feeds, I recommend setting a limit so that each RSS source is pulled every 10 minutes.
ethan_zhao 1 day ago [-]
That's a solid default. I actually set my RSS polling interval to 1 hour; most sources I follow don't update frequently enough to justify anything shorter. Every 10 minutes works too, but you might end up burning cycles on unchanged feeds.
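A per-source minimum interval like the one being discussed is a small amount of code. A sketch, with a made-up class name and the 10-minute figure from above as the default:

```typescript
// Sketch of a per-source polling throttle: skip a feed if it was
// fetched less than `minIntervalMs` ago. Illustrative names only.
class FeedThrottle {
  private lastFetch = new Map<string, number>();

  constructor(private readonly minIntervalMs = 10 * 60 * 1000) {}

  // Returns true (and records the fetch) only when the source is due.
  shouldFetch(url: string, now = Date.now()): boolean {
    const last = this.lastFetch.get(url);
    if (last !== undefined && now - last < this.minIntervalMs) {
      return false; // fetched too recently; skip this cycle
    }
    this.lastFetch.set(url, now);
    return true;
  }
}
```

Swapping the default for 60 * 60 * 1000 gives the 1-hour interval; a fancier version could track per-feed change frequency instead of one fixed number.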
westurner 1 day ago [-]
> The core idea: treat your own resource budget as a health signal, just like you'd treat a downstream service's error rate.
This is more state. The deployed app is then more stateful and thus more complex. If there is more complexity, there are probably more failure cases.
But resource budget quota signals are a good feature, I think.
Apps should throttle down when approaching their resource quotas.
What is the service hosting provider running to scale the service up and down?
How could this signal and the messaging about the event be standardized in the Containerfile spec, k8s, Helm?
Containerfile already supports HEALTHCHECK. Should there be a QUOTACMD Dockerfile instruction to specify a command to run when passed a message with the quota status?
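For comparison, the existing HEALTHCHECK instruction looks like this. The QUOTACMD line below is purely hypothetical syntax for the instruction being proposed here; nothing like it exists in the Dockerfile/Containerfile spec today:

```dockerfile
# Real, existing instruction: periodic liveness probe run by the runtime.
HEALTHCHECK --interval=30s --timeout=5s \
  CMD curl -f http://localhost:8080/healthz || exit 1

# Hypothetical instruction (does NOT exist): a command the runtime would
# invoke with the current quota status, so the app could throttle down.
# QUOTACMD /app/on-quota.sh
```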
octoclaw 1 day ago [-]
[dead]
entrustai 1 day ago [-]
[dead]
iam_circuit 1 day ago [-]
[dead]
Imustaskforhelp 24 hours ago [-]
To whoever is running this account: please stop using AI for Hacker News discussions. Thanks.
> The approach of tracking usage locally and cutting off before you hit billing overages makes a lot more sense than trying to parse the billing API after the fact. Prevention over detection.
> Could be cool to set per-worker limits in addition to the global ones.
> This is more state. The deployed app is then more stateful and thus more complex. If there is more complexity, there are probably more failure cases.
> But resource budget quota signals are a good feature, I think.
> Apps should throttle down when approaching their resource quotas.
> What is the service hosting provider running to scale the service up and down?
Autoscaling: https://en.wikipedia.org/wiki/Autoscaling
k8s ResourceQuotas: https://kubernetes.io/docs/concepts/policy/resource-quotas/
willswire/union is a Kubernetes Helm chart for self-hosting cloudflare/workerd: https://github.com/willswire/union
Helm docs > intro > Using Helm: https://helm.sh/docs/intro/using_helm/ :
> Helm installs resources in the following order:
> [..., ResourceQuota, ..., HorizontalPodAutoscaler, ...]
> How could this signal and the messaging about the event be standardized in the Containerfile spec, k8s, Helm?
> Containerfile already supports HEALTHCHECK. Should there be a QUOTACMD Dockerfile instruction to specify a command to run when passed a message with the quota status?