What's happening
You have an admin panel. You need to delete 5,000 expired sessions, or 20,000 abandoned cart records, or last quarter's archive of 50,000 log entries. You open base44's editor, look for a bulk-delete affordance, and there is none. You ask the AI agent to write one. The agent generates a function that fetches all matching records to the client, then loops through them. It runs out of memory at 5,000 rows, or hits a 429 rate limit, or simply hangs.
A user on the feedback board summarized the production impact: "Platform lacks server-side bulk delete functionality...creates performance degradation, scalability failure, and technical debt" — Chris Cotton. The same complaint surfaces under "Critical Bug/SDK Missing" with significant upvote support.
In practice, admin tasks that would take 10 seconds in any conventional backend either take hours of manual clicking or break outright past a few thousand records. Teams running marketplace platforms, fintech apps, or anything log-heavy hit this within their first month of real production usage.
Why this happens
Base44's SDK was architected for single-record CRUD. Each delete is one HTTP request with one record ID. There is no deleteMany(filter) primitive in the SDK and no equivalent UI affordance with filtering. The AI agent, when asked for "bulk delete", reaches for the only tool available: fetch all matching records, then loop and delete them one by one.
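For concreteness, the generated code usually has this shape. A minimal sketch, assuming the same YourEntity placeholder and SDK calls used in the fix below; this is the pattern to avoid:

import { base44 } from '@base44/sdk';

// Anti-pattern: fetch everything, then loop. All matching rows load into
// one invocation, and every record costs a separate delete request.
const allRecords = await base44.entities.YourEntity.list({
  filter: { archived: true }, // any single query is capped at 5,000 returned rows
});

for (const record of allRecords) {
  await base44.entities.YourEntity.delete(record.id); // thousands of calls in a row
}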
That approach hits three platform limits, in order:
- The 5,000-item-per-request limit. Introduced 2025-11-27, this caps any single SDK query at 5,000 returned rows. Datasets larger than that need pagination.
- The Deno runtime memory ceiling. Loading tens of thousands of records into a single function invocation runs out of memory and the function dies mid-loop, leaving the deletion partially complete.
- Rate limits. Hammering the SDK's per-record delete endpoint thousands of times in a row trips the platform's rate-limit circuit breaker (see "Rate-limit 429 production throttle" under Related problems). The function fails halfway through with a 429.
The combination means there is no naive solution. A correct bulk delete must paginate, batch, and pace itself. Writing that correctly requires knowing all three constraints — knowledge the AI agent does not reliably encode in its generated code.
The deeper architectural cause is that base44 prioritizes the AI agent's ability to write "obvious" code over giving developers production primitives. Bulk operations carry partial-failure semantics, transaction concerns, and idempotency requirements that the agent handles unreliably. Rather than ship a primitive the agent might misuse, the platform shipped nothing. The cost is pushed onto the operator.
Source: feedback.base44.com "Critical Bug/SDK Missing" thread; SDK reference at base44.com/docs; the 5,000-row cap announcement in base44's 2025-11-27 changelog.
How to reproduce
- Create a base44 entity with a boolean archived field.
- Generate or import 10,000 records, half with archived: true.
- Build an admin page with a "Delete all archived" button.
- Ask the AI agent to wire the button to delete all archived: true records.
- Click the button. Observe one of three failure modes: the function runs out of memory, hits 429s halfway through, or appears to succeed but only deletes ~5,000 records (the 5,000-row query cap).
- Refresh and count remaining archived: true records. Confirm the operation was partial or failed.
Step-by-step fix
The fix is a paginated, rate-limit-aware backend function. Five steps.
1. Add a Deno backend function
In the editor, create a new backend function called bulkDeleteArchived. Choose the Deno runtime, not a frontend handler. The function must run server-side to access the SDK with full quota.
2. Implement pagination + batching
// functions/bulkDeleteArchived.ts
import { base44 } from '@base44/sdk';

export async function bulkDeleteArchived(filter: Record<string, unknown>) {
  const BATCH_SIZE = 200;    // rows per page, well under the 5,000-row query cap
  const PAGE_DELAY_MS = 500; // pause between pages to stay clear of 429s

  let totalDeleted = 0;

  while (true) {
    // Fetch the next page of matching records. Deleted records no longer
    // match the filter, so re-querying page 1 naturally advances the work.
    const page = await base44.entities.YourEntity.list({
      filter,
      limit: BATCH_SIZE,
    });
    if (page.length === 0) break; // nothing left to delete

    // Delete one record per request, the only primitive the SDK offers.
    for (const record of page) {
      await base44.entities.YourEntity.delete(record.id);
    }

    totalDeleted += page.length;
    await new Promise((resolve) => setTimeout(resolve, PAGE_DELAY_MS));
  }

  return { deleted: totalDeleted };
}
Two notes. First, BATCH_SIZE = 200 keeps each query well under the 5,000-row cap and avoids loading too much into memory. Second, PAGE_DELAY_MS = 500 paces deletes to stay clear of the 429 rate-limit threshold. Tune both numbers to your dataset and rate-limit observations.
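If you still see intermittent 429s after tuning, wrap each delete in a retry with exponential backoff. A sketch, with one loud assumption: the status property on the caught error is hypothetical, so adjust the check to whatever error shape the SDK actually throws.

// Retries a single delete, backing off exponentially on 429 responses.
// `err.status` is an ASSUMED error shape -- verify against the real SDK error.
async function deleteWithBackoff(id: string, maxRetries = 5): Promise<void> {
  for (let attempt = 0; ; attempt++) {
    try {
      await base44.entities.YourEntity.delete(id);
      return;
    } catch (err) {
      const status = (err as { status?: number }).status;
      if (status !== 429 || attempt >= maxRetries) throw err;
      // Wait 1s, 2s, 4s, ... before retrying.
      await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** attempt));
    }
  }
}

Then swap the delete call inside the page loop for deleteWithBackoff(record.id).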
3. Wire the admin UI to the function
Replace any existing client-side bulk-delete logic with a single call to the new backend function. Show a progress indicator; the function may take minutes for large datasets.
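How the button reaches the function depends on how your app exposes backend functions; the endpoint path and response shape below are assumptions, not base44's documented API. A sketch of the handler:

// Hypothetical admin-UI handler. Adjust the URL (or swap in the SDK's own
// function-invocation call, if your setup provides one) before using.
async function onDeleteArchivedClick(setStatus: (msg: string) => void) {
  setStatus('Deleting archived records... this can take minutes.');
  const res = await fetch('/functions/bulkDeleteArchived', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ filter: { archived: true } }),
  });
  const { deleted } = await res.json();
  setStatus(`Done: deleted ${deleted} records.`);
}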
4. Add idempotency and resumability
If the function fails partway through, you want it safe to re-run. Because each iteration queries fresh records that match the filter, the function naturally resumes. But add a maximum iteration cap (e.g., 1,000 pages) to prevent runaway loops on bad filters.
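A minimal way to add that cap, shown as a variant of the step 2 loop; only the loop header changes, and the per-record deletes and pause stay the same:

// Bound the loop: at BATCH_SIZE = 200, MAX_PAGES = 1000 covers 200,000 records.
const MAX_PAGES = 1000;
let pagesProcessed = 0;

while (pagesProcessed < MAX_PAGES) {
  const page = await base44.entities.YourEntity.list({ filter, limit: BATCH_SIZE });
  if (page.length === 0) break; // nothing left: safe to re-run any time
  // ...same per-record deletes and PAGE_DELAY_MS pause as in step 2...
  pagesProcessed++;
}

if (pagesProcessed === MAX_PAGES) {
  console.warn('Hit MAX_PAGES; re-run deliberately instead of looping forever.');
}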
5. Test on a staging copy first
Deletes are irreversible. Run the function on a cloned dataset before pointing it at production. Validate the count of deleted records matches expectations.
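A quick validation pass, reusing the same list call: after the run, the filter should match nothing.

// Post-run check. Even one remaining match means the run was partial
// (useful because a raw count can mislead above the 5,000-row query cap).
const leftovers = await base44.entities.YourEntity.list({
  filter: { archived: true },
  limit: 1,
});
if (leftovers.length > 0) {
  throw new Error('Bulk delete incomplete: matching records remain.');
}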
DIY vs hire decision
DIY is realistic if you are comfortable in Deno/TypeScript and your dataset is under ~100,000 records. The function above is small. The hard parts are tuning batch size and delay against your specific rate-limit behavior, plus integrating cleanly with existing admin UX.
Hire if any of these apply:
- Dataset exceeds 100,000 records and you need it done overnight.
- Bulk delete must coordinate with other operations transactionally (delete archived orders + their line items + their payment refunds).
- Your team has no Deno experience and the AI agent's code keeps failing on rate limits.
We have shipped this exact function for ~30 base44 clients. We know the rate-limit thresholds and the failure modes that are not obvious from the docs.
Need this fix shipped this week?
Standard scope: production-grade bulk delete function, idempotent, rate-limit aware, with admin UI integration and a tested rollback plan. Fix-sprint pricing, 48-hour turnaround.
Book a fix sprint or order a $497 audit if you want to confirm scope first.
Related problems
- Data loss after returning to your app — partial-delete failures cause data state corruption symptomatic of this problem.
- Rate-limit 429 production throttle — the rate limit is what makes naive bulk delete fail.
- Vendor lock-in via SDK dependency — missing primitives like bulk delete are why teams eventually migrate.