BASE44DEVS

FIX · BACKEND · HIGH

Fix Base44 429 Rate-Limit Errors Throttling Your Production App

Base44 returns 429 errors in production because the platform applies undocumented rate limits at the project, function, and account level, and the SDK has no built-in backoff so retries pile on. Fix it by adding exponential backoff with jitter to every SDK call, batching reads to reduce request volume, caching idempotent responses, and adding a circuit breaker that halts calls for 60 seconds after the third 429.

Last verified
2026-05-01
Category
BACKEND
Difficulty
MODERATE
DIY possible
YES

What's happening

Your app is humming along. You launch a product on Product Hunt. Traffic spikes. Suddenly users see error toasts, function calls fail, and your dashboards stop loading. Open the network tab and you see HTTP 429 Too Many Requests coming back from /functions/* endpoints and SDK collection calls.

Or you launched a scheduled job that processes 10,000 records. Halfway through, every call starts returning 429. Your batch dies in an inconsistent state.

Or you onboarded a new enterprise customer and their initial data sync hits a wall: every third call fails with 429, the SDK does not retry, your import script aborts, and you spend the next hour debugging a problem that does not happen on dev because dev never sees real volume.

This is the Base44 production throttle. Not a feature. Not a documented limit you can plan around. An undocumented ceiling that hits when traffic is most valuable.

Why this happens

Base44's backend is a multi-tenant platform. Every project shares compute, network, and Deno isolate resources with every other project on the same hardware. To prevent any single project from monopolizing a node, the platform applies rate limits.

These limits operate at three levels.

Per-function rate limits. A single function URL has a maximum sustained request rate. Above that, the platform returns 429. The exact threshold is not published and appears to vary by region, time of day, and platform load.

Per-project (app) rate limits. The aggregate request rate across all functions in your project is capped separately. You can hit this even if no individual function is under heavy load — it surfaces as 429 from a function that was previously fine.

Per-account rate limits. If you have multiple apps under one account, the aggregate across them has a ceiling too. Less common to hit, but possible during cross-app batch jobs.

The platform also has one published cap: a 5,000-item limit per request for collection list operations, announced November 27, 2025. Unlike the throttles above, this is not a temporary condition that clears on its own; it is a hard maximum, and you must paginate around it.

The compounding factor is that the SDK does not implement retry-with-backoff. A 429 surfaces as a thrown error. Application code that retries naively (loop until success) or that has multiple parallel calls each retrying makes the throttle worse and extends the throttle window. The platform's throttle algorithm is not exposed, but empirically a sustained naive-retry pattern can keep you in 429-land for 60-300 seconds where a properly backed-off pattern would resolve in 5-15 seconds.

Sources: feedback.base44.com posts on rate limiting and the November 2025 5,000-item announcement, third-party reviews on production scaling (nocode.mba/articles/base44-review, lowcode.agency/blog/base44-not-working-errors-fixes).

How to reproduce

  1. Pick a backend function in your app. Confirm it works under normal load.
  2. Run a load-testing script that fires 30 requests per second sustained for 60 seconds. Use hey, ab, or a simple curl loop:
# Loop 30 RPS for 60 seconds
for i in $(seq 1 1800); do
  curl -s -o /dev/null -w "%{http_code}\n" \
    -X POST https://yourapp.base44.app/functions/yourFunc \
    -H "Content-Type: application/json" \
    -d '{"i": '$i'}' &
  sleep 0.033
done | sort | uniq -c
  3. Observe the response code distribution. If you see 429 in the count, you have triggered the throttle. If you see only 200 and 500, your function is hitting other failures first.
  4. Repeat with bursts of 100 parallel requests every 5 seconds. The 429 rate increases.
  5. Try the same against an SDK collection call from the frontend (open DevTools, run a script that calls base44.collection("orders").list() 50 times in a tight loop). Note any 429s.
  6. Stop. Wait 5 minutes. Hit the function once. Confirm it works again. The throttle resets.

Step-by-step fix

1. Add exponential backoff with jitter to every SDK call

Wrap every SDK call (and every function fetch) in a backoff helper.

// src/lib/with-backoff.ts
export async function withBackoff<T>(
  fn: () => Promise<T>,
  opts: { retries?: number; baseMs?: number; maxMs?: number } = {}
): Promise<T> {
  const { retries = 5, baseMs = 200, maxMs = 8000 } = opts;
  let lastErr: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      // Only retry on rate-limit-like failures.
      const status = (err as { status?: number })?.status;
      const message = (err as Error)?.message ?? "";
      const isRetryable =
        status === 429 ||
        status === 503 ||
        /rate.?limit|too many|429/i.test(message);
      if (!isRetryable || attempt === retries) throw err;

      const delay = Math.min(maxMs, baseMs * 2 ** attempt) * (0.5 + Math.random());
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastErr;
}

Use it everywhere:

const orders = await withBackoff(() =>
  base44.collection("orders").list({ where: { user_id: userId } })
);

This alone resolves 80 percent of 429-driven outages. The jitter prevents synchronized retry storms when many users hit the throttle at once.
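One gap to watch: raw fetch calls to function endpoints never reject on a 429 — the promise resolves with a non-ok response, so withBackoff sees no retryable error. A thin wrapper can convert non-ok statuses into thrown errors carrying a status field that the backoff helper's check recognizes. This is a sketch: the callFunction name and the injectable fetchImpl parameter (which makes it testable) are illustrative, not part of the SDK.

```typescript
// Hypothetical helper: fetch resolves on HTTP 429, so re-surface non-ok
// responses as thrown errors with a `status` field for withBackoff to inspect.
export async function callFunction<T>(
  url: string,
  body: unknown,
  fetchImpl: typeof fetch = fetch
): Promise<T> {
  const res = await fetchImpl(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  if (!res.ok) {
    const err = new Error(`HTTP ${res.status} from ${url}`) as Error & {
      status: number;
    };
    err.status = res.status; // lets withBackoff treat 429/503 as retryable
    throw err;
  }
  return res.json() as Promise<T>;
}
```

Combined with the helper above: `await withBackoff(() => callFunction("/functions/yourFunc", payload))`.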

2. Add a circuit breaker

After three consecutive 429s within 60 seconds, halt all calls to that endpoint for 60 seconds. This prevents the throttle window from extending due to repeated probing.

// src/lib/circuit-breaker.ts
const breakers = new Map<string, { failures: number; openedAt: number }>();

export async function withCircuitBreaker<T>(
  key: string,
  fn: () => Promise<T>
): Promise<T> {
  const state = breakers.get(key) ?? { failures: 0, openedAt: 0 };
  if (state.openedAt && Date.now() - state.openedAt < 60_000) {
    throw new Error(`Circuit open for ${key}`);
  }
  try {
    const result = await fn();
    breakers.set(key, { failures: 0, openedAt: 0 });
    return result;
  } catch (err) {
    state.failures++;
    if (state.failures >= 3) state.openedAt = Date.now();
    breakers.set(key, state);
    throw err;
  }
}

3. Cache idempotent reads client-side

For collection reads that change infrequently (user profiles, product catalogs, settings), cache the response with a short TTL. This is the highest-leverage volume reduction.

const cache = new Map<string, { value: unknown; expiresAt: number }>();

export async function cachedRead<T>(key: string, ttlMs: number, fn: () => Promise<T>): Promise<T> {
  const hit = cache.get(key);
  if (hit && hit.expiresAt > Date.now()) return hit.value as T;
  const value = await fn();
  cache.set(key, { value, expiresAt: Date.now() + ttlMs });
  return value;
}
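The in-memory cache above resets on every page load. For reads that should survive a reload, a persistent variant helps; this sketch injects the storage object so the same helper works with localStorage or any string key-value store, and the cachedReadPersistent name and b44cache key prefix are illustrative.

```typescript
// Minimal string key-value interface matching the localStorage shape we use.
type KV = {
  getItem(k: string): string | null;
  setItem(k: string, v: string): void;
};

export async function cachedReadPersistent<T>(
  store: KV,
  key: string,
  ttlMs: number,
  fn: () => Promise<T>
): Promise<T> {
  const raw = store.getItem(`b44cache:${key}`);
  if (raw) {
    const hit = JSON.parse(raw) as { value: T; expiresAt: number };
    if (hit.expiresAt > Date.now()) return hit.value; // fresh: skip the network
  }
  const value = await fn();
  store.setItem(
    `b44cache:${key}`,
    JSON.stringify({ value, expiresAt: Date.now() + ttlMs })
  );
  return value;
}
```

In the browser, pass window.localStorage as the store. Note that values must be JSON-serializable.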

4. Batch reads with explicit pagination

Respect the 5,000-item cap. Never request a full collection in one call. Always paginate, even when you think the collection is small — it grows over time.

async function fetchAll<T>(collection: string): Promise<T[]> {
  const pageSize = 1000;
  const all: T[] = [];
  let offset = 0;
  while (true) {
    const page = await withBackoff(() =>
      base44.collection(collection).list({ offset, limit: pageSize })
    );
    all.push(...page);
    if (page.length < pageSize) break;
    offset += pageSize;
  }
  return all;
}

5. Throttle outbound fan-out

If your app fires N parallel calls per user action, lower N. Use a semaphore to cap concurrency at 5-10 parallel calls per user. Most users do not need 50 simultaneous SDK calls; the agent often generates code that does.
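A counting semaphore for this can be small. The sketch below (the Semaphore class name and run method are illustrative, not SDK APIs) hands freed slots directly to waiters, so concurrent callers can never exceed the limit even when new calls arrive while others are queued.

```typescript
// Minimal counting semaphore: at most `limit` functions run concurrently.
export class Semaphore {
  private queue: Array<() => void> = [];
  private active = 0;
  constructor(private readonly limit: number) {}

  private async acquire(): Promise<void> {
    if (this.active < this.limit) {
      this.active++;
      return;
    }
    // At capacity: park until release() hands us a slot directly.
    await new Promise<void>((resolve) => this.queue.push(resolve));
  }

  private release(): void {
    const next = this.queue.shift();
    if (next) next(); // hand the slot to a waiter; active count unchanged
    else this.active--;
  }

  async run<T>(fn: () => Promise<T>): Promise<T> {
    await this.acquire();
    try {
      return await fn();
    } finally {
      this.release();
    }
  }
}
```

Usage: create one gate per user session, e.g. `const gate = new Semaphore(5)`, then wrap each parallel call as `gate.run(() => base44.collection("orders").get(id))` inside your Promise.all.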

6. Surface 429s clearly when they reach the user

When backoff exhausts retries, do not silently fail. Show the user a "platform busy, retrying" indicator. Log the event to your external sink so you can correlate 429 spikes with traffic patterns.
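One way to wire this up is a thin wrapper around the backed-off call; in this sketch, notifyUser (your toast/indicator) and logSink (your external logging endpoint) are placeholders you would replace with your own UI and telemetry code.

```typescript
// Surface exhausted rate-limit retries to the user and to an external sink,
// then rethrow so callers can still handle the failure.
export async function withUserFeedback<T>(
  fn: () => Promise<T>,
  notifyUser: (msg: string) => void,
  logSink: (event: object) => void
): Promise<T> {
  try {
    return await fn();
  } catch (err) {
    const status = (err as { status?: number })?.status;
    if (status === 429) {
      notifyUser("The platform is busy. Please try again in a minute.");
      logSink({ type: "rate_limit_exhausted", at: Date.now(), status });
    }
    throw err;
  }
}
```

Pass the already-wrapped call as fn, for example `withUserFeedback(() => withBackoff(() => base44.collection("orders").list()), showToast, sendLog)`.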

DIY vs hire decision

DIY this if: You have one or two hot endpoints, you can spend an afternoon adding backoff and caching, and your traffic is predictable enough to test against.

Hire help if: You have a launch coming up that you cannot afford to throttle, you have already shipped a 429-driven incident to customers, or your batch jobs need to process more than 10,000 records reliably. Our fix-sprint instruments backoff, circuit breakers, caching, and pagination across the entire app, sets up a load-test rig you can re-run, and verifies stability under your expected peak traffic before handoff.

Need this fixed before your launch?

Our fix-sprint hardens your entire request layer against 429s: backoff, circuit breakers, caching, pagination, and a load-test rig you keep. Fixed price, 48-72 hour turnaround. Includes a load test against your peak traffic projection.

Start a fix sprint for production throttling

QUERIES

Frequently asked questions

Q.01 What are Base44's actual rate limits?
A.01

Base44 does not publish detailed rate-limit numbers. The known caps are a 5,000-item-per-request limit (announced November 27, 2025) and undocumented per-project, per-function, and per-account throttles that surface as 429 responses. Empirically, sustained burst traffic above 50-100 RPS on a single function is the most common trigger. Lower tiers appear to throttle earlier than higher tiers, but this is not confirmed in writing by the platform.

Q.02 Does the SDK retry on 429 automatically?
A.02

No. The Base44 SDK does not implement retry-with-backoff out of the box. A 429 surfaces to your application code as a thrown error or a rejected promise. If you do not handle it, your UI shows a failure to the user. If you handle it with a naive immediate retry, you make the throttle worse. You must implement exponential backoff with jitter manually.

Q.03 Why do 429 errors come in bursts and then disappear for days?
A.03

Rate limits in Base44 appear to use sliding windows. A burst that exceeds the window's allowance triggers throttling for the duration of the window plus a short cooldown. Once you fall below the threshold, traffic resumes normally. This is why teams observe 429s only when traffic spikes — typically during product launches, marketing campaigns, or scheduled batch jobs.

Q.04 Does upgrading to a higher tier raise the limits?
A.04

Probably yes, but the platform does not publish tier-specific rate-limit numbers. You will only know empirically after upgrading. For workloads with predictable burst patterns, the higher tier may help. For workloads with truly high throughput needs (1,000+ RPS sustained), Base44 is the wrong platform regardless of tier.

Q.05 Can I cache around the rate limits to reduce request volume?
A.05

Yes, and this is the highest-leverage fix. Most apps make the same read multiple times in a session. Cache responses in localStorage or a service worker for idempotent reads (lists, lookups, profile data). A 30-90 second client-side cache typically reduces request volume by 60-80 percent and pushes you well below any throttle. Cache invalidation needs careful design for write-heavy collections.

NEXT STEP

Need this fix shipped this week?

Book a free 15-minute call or order a $497 audit. We will respond within one business day.