Why this matters
Performance is not vanity. Core Web Vitals are a Google ranking signal: an INP above 200ms fails the threshold. An LCP above 2.5s correlates with 30%+ bounce-rate increases on mobile. A CLS above 0.1 makes mobile users tap the wrong button and abandon. Base44 apps fail all three thresholds by default, on real-user data, on real mobile devices. That has direct revenue and SEO consequences.
The good news: every bottleneck is mechanical. There is no "Base44 is slow, accept it" answer. There is "the AI agent didn't paginate, didn't code-split, didn't cache, and didn't lazy-load images, and you have to fix each one." This guide walks the fixes in priority order.
What good looks like (the targets)
Use Google's official Core Web Vitals thresholds, measured on the 75th percentile of real-user mobile traffic, not on your laptop in Chrome:
- LCP (Largest Contentful Paint) ≤ 2.5s — the moment the largest visible element finishes painting.
- INP (Interaction to Next Paint) ≤ 200ms — the worst input-response latency in the session.
- CLS (Cumulative Layout Shift) ≤ 0.1 — total unexpected layout movement across the page's lifetime, not just during load.
- TTFB (Time to First Byte) ≤ 800ms — server response time. Not a Core Web Vital itself, but a high TTFB puts a floor under LCP.
Measure with PageSpeed Insights (real-user CrUX data), Vercel Speed Insights, or SpeedCurve. Lab data (Lighthouse local) flatters; field data tells the truth.
The bottlenecks, in order of impact
1. Unpaginated entity queries (highest impact)
The Base44 AI agent routinely emits code like:
```jsx
// Bad: loads everything on mount.
useEffect(() => {
  base44.entities.Todo.list({ user_id: currentUser.id }).then(setTodos);
}, []);
```
If Todo has 2,000 rows, this fetches 2,000 records, serializes 2,000 JSON objects, and renders 2,000 list items. The fetch alone is 500–1500ms. The render is another few hundred. LCP is destroyed.
Fix: paginate.
```jsx
const PAGE_SIZE = 50;

useEffect(() => {
  base44.entities.Todo.list(
    { user_id: currentUser.id },
    "-created_date",
    PAGE_SIZE
  ).then(setTodos);
}, []);
```
For pages that need more than the first page, add a load-more button or infinite scroll using a cursor on created_date.
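The load-more handler can reuse the same three-argument `list(filter, sort, limit)` call with a cursor filter. A minimal sketch of building that filter (assumptions on my part: the filter syntax accepts a Mongo-style `$lt` operator, consistent with the `$in` usage later in this guide, and records expose a sortable `created_date`):

```typescript
interface Todo {
  id: string;
  created_date: string;
}

// Build the filter for the page after `lastItem` (first page when null).
function nextPageFilter(userId: string, lastItem: Todo | null) {
  const filter: Record<string, unknown> = { user_id: userId };
  if (lastItem) {
    // Cursor: only records older than the last one already rendered.
    filter.created_date = { $lt: lastItem.created_date };
  }
  return filter;
}

// Hypothetical usage with the list call shown above:
// const nextPage = await base44.entities.Todo.list(
//   nextPageFilter(currentUser.id, todos[todos.length - 1] ?? null),
//   "-created_date",
//   PAGE_SIZE
// );
```

A `created_date` cursor avoids the drifting results you get from offset-based pagination when new records are inserted between page loads.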
Expected improvement: 40–60% LCP reduction on entity-list-heavy pages.
2. No code splitting
Base44's AI agent emits monolithic page bundles. Every route's JS is loaded on initial page load, even routes the user will never visit. A typical Base44 app ships 600–900KB gzipped initial JS, of which 60–80% is unused on the current route.
Fix: dynamic imports per route.
```jsx
import { lazy, Suspense } from "react";

const SettingsPage = lazy(() => import("./pages/Settings"));
const AdminPage = lazy(() => import("./pages/Admin"));

function App() {
  return (
    <Suspense fallback={<Loader />}>
      <Routes>
        <Route path="/" element={<HomePage />} />
        <Route path="/settings" element={<SettingsPage />} />
        <Route path="/admin" element={<AdminPage />} />
      </Routes>
    </Suspense>
  );
}
```
The Base44 IDE supports React's lazy and dynamic imports. The bundler will split per import. Expect 200–400KB initial bundle reduction on a midsize app.
Expected improvement: 30–50% LCP reduction on cold loads, 100–300ms TTI improvement.
3. Synchronous integration calls in render paths
The agent sometimes emits code like:
```jsx
// Very bad: blocks render on a 4-second LLM call.
const summary = await base44.integrations.invokeLLM({ prompt: "..." });
return <div>{summary}</div>;
```
This makes the page wait for the LLM response before any content renders; when the call runs on an interaction, INP hits 4+ seconds.
Fix: defer to a useEffect with optimistic UI.
```tsx
const [summary, setSummary] = useState<string | null>(null);

useEffect(() => {
  base44.integrations.invokeLLM({ prompt: "..." }).then(setSummary);
}, []);

return summary === null
  ? <Skeleton lines={3} />
  : <div>{summary}</div>;
```
Better still, cache the result in an entity so subsequent loads are instant:
```ts
async function getSummary(promptHash: string) {
  const cached = await base44.entities.LLMCache.list({ hash: promptHash }, null, 1);
  if (cached.length > 0) return cached[0].response;
  const response = await base44.integrations.invokeLLM({ prompt: "..." });
  await base44.entities.LLMCache.create({ hash: promptHash, response });
  return response;
}
```
Expected improvement: INP drops from seconds to under 100ms; credit costs reduced by 50–90% depending on cache hit rate.
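For the `promptHash` key, any deterministic string hash works. A minimal sketch using FNV-1a (my choice, not part of the Base44 SDK; prefer a cryptographic hash if prompts contain data that must not be guessable from the key):

```typescript
// FNV-1a: a small, fast, deterministic (non-cryptographic) string hash.
function promptHashOf(prompt: string): string {
  let h = 0x811c9dc5; // FNV offset basis (32-bit)
  for (let i = 0; i < prompt.length; i++) {
    h ^= prompt.charCodeAt(i);
    h = Math.imul(h, 0x01000193); // multiply by the FNV prime, mod 2^32
  }
  // Force unsigned, render as hex for a compact entity key.
  return (h >>> 0).toString(16);
}

// Usage with the caching function above:
// const summary = await getSummary(promptHashOf(prompt));
```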
4. N+1 queries on entity relations
Another agent-emitted pattern:
```ts
// Loads todos, then for each todo loads its project.
const todos = await base44.entities.Todo.list({ user_id });
const todosWithProjects = await Promise.all(
  todos.map(async t => ({
    ...t,
    project: await base44.entities.Project.get(t.project_id),
  }))
);
```
For 50 todos that's 51 sequential or near-sequential API calls.
Fix: batch with `$in`.
```ts
const todos = await base44.entities.Todo.list({ user_id });
const projectIds = [...new Set(todos.map(t => t.project_id))];
const projects = await base44.entities.Project.list({ id: { $in: projectIds } });
const byId = Object.fromEntries(projects.map(p => [p.id, p]));
const todosWithProjects = todos.map(t => ({ ...t, project: byId[t.project_id] }));
```
Two API calls instead of 51. Latency drops proportionally.
Expected improvement: 5–20x speedup on relation-heavy lists.
5. Images without explicit dimensions and lazy loading
The agent rarely sets `width`, `height`, and `loading` on `<img>` tags. Result: layout shift on every image load (CLS), and every off-screen image downloads on initial load (LCP).
Fix:
```html
<img
  src="/avatar.jpg"
  alt="User avatar"
  width="48"
  height="48"
  loading="lazy"
  decoding="async"
/>
```
For above-the-fold images, use `loading="eager"` and `fetchpriority="high"`. Below the fold, always lazy-load. Always set explicit dimensions to reserve layout space.
For user-uploaded images, generate a thumbnail variant in a backend function and serve it for list views; only load the full image on detail.
Expected improvement: CLS drops from 0.2–0.4 to under 0.05; LCP improves 200–600ms on image-heavy pages.
6. No HTTP caching
Base44 serves all assets with conservative cache headers, and dynamic responses with no caching at all. Every page load re-downloads assets the browser has from yesterday.
Fix: put Cloudflare in front of your custom domain, and add a Worker that:
- Sets `Cache-Control: public, max-age=31536000, immutable` on hashed asset URLs (JS, CSS, fonts).
- Adds `Cache-Control: public, max-age=300, stale-while-revalidate=86400` to public marketing pages.
- Strips cache headers entirely from authenticated routes (they should not be cached at the edge).
Cloudflare's free tier handles this. The Worker code is roughly 30 lines.
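The decision logic inside that Worker reduces to one pure function. A sketch (assumptions on my part: hashed build assets are recognizable by extension, and authenticated requests carry a session cookie — adjust both checks to your app):

```typescript
// Pick a Cache-Control value for a response based on its route.
function cacheControlFor(pathname: string, hasSessionCookie: boolean): string {
  // Hashed, immutable build assets: cache for a year.
  if (/\.(js|css|woff2?)$/.test(pathname)) {
    return "public, max-age=31536000, immutable";
  }
  // Authenticated responses must never be cached at the edge.
  if (hasSessionCookie) {
    return "private, no-store";
  }
  // Public marketing pages: short TTL, long stale-while-revalidate window.
  return "public, max-age=300, stale-while-revalidate=86400";
}
```

In the Worker's fetch handler, clone the origin response and overwrite its `Cache-Control` header with this value before returning it.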
Expected improvement: repeat-visit LCP drops 60–80%; bandwidth cost drops similarly.
7. Heavy work on the main thread
The AI agent emits synchronous filter/sort/aggregate code in render paths:
```ts
// Blocks main thread for hundreds of ms on large lists.
const sorted = todos
  .filter(t => !t.archived)
  .sort((a, b) => b.priority - a.priority)
  .map(t => ({ ...t, slug: slugify(t.title) }));
```
For 50 todos this is fine. For 5,000, it locks the UI.
Fix: do the work server-side via a backend function, or push to a worker.
```ts
// Backend function: backend/functions/listSortedTodos.ts

// Minimal slug helper (assumed here; swap in your own implementation).
const slugify = (s: string) => s.toLowerCase().trim().replace(/[^a-z0-9]+/g, "-");

export default async function handler(req: Request) {
  const { user_id } = await req.json();
  const todos = await base44.entities.Todo.list({ user_id, archived: false });
  todos.sort((a, b) => b.priority - a.priority);
  return new Response(JSON.stringify(todos.map(t => ({ ...t, slug: slugify(t.title) }))), {
    status: 200,
    headers: { "content-type": "application/json" },
  });
}
```
This keeps the main thread free for paint and input.
Expected improvement: INP drops dramatically on data-heavy pages.
8. Render-blocking third-party scripts
Stripe.js, Intercom, and analytics scripts often land in `<head>` synchronously. Each one delays first paint.
Fix: load every third-party script with async or defer, ideally only on the page that needs it. Stripe.js, for instance, only needs to load on the checkout page.
```tsx
import { loadStripe } from "@stripe/stripe-js";
import { Elements } from "@stripe/react-stripe-js";

// Load Stripe.js only when the user is about to check out.
const stripePromise = loadStripe(STRIPE_PUBLISHABLE_KEY);

function CheckoutPage() {
  return (
    <Elements stripe={stripePromise}>
      <CheckoutForm />
    </Elements>
  );
}
```
For analytics, prefer Plausible (3KB, async) over Google Analytics (50+KB, multiple round trips).
Expected improvement: 100–400ms LCP improvement; INP improvement on input-heavy flows.
CSR and SEO interaction
Base44's default client-side rendering is the single biggest performance and SEO problem for marketing pages. Google's crawler renders JS but does so on a budget; if your LCP is over 4 seconds, you may not be indexed at all. We cover the SEO consequences in why Base44 apps are invisible to Google and the Base44 SEO best practices article.
For marketing pages, the right answer is to bypass Base44's default client-side rendering and serve those routes from a static or SSR layer in front. A simple Cloudflare Worker that fetches the public-facing data and returns server-rendered HTML covers the marketing case. The logged-in app continues to run client-side as the platform expects.
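The render half of such a Worker is "fetch data, emit HTML". A sketch of the HTML-building step (the page shape and field names here are hypothetical — your public entities will differ, and a real version would also emit meta tags and styles):

```typescript
// Hypothetical shape of the public data one marketing page needs.
interface PageData {
  title: string;
  heading: string;
  body: string;
}

// Escape user-visible strings before interpolating into HTML.
const esc = (s: string) =>
  s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");

// Build a complete, crawlable HTML document for one marketing route.
function renderMarketingHTML(page: PageData): string {
  return (
    "<!doctype html>" +
    `<html lang="en"><head><meta charset="utf-8"><title>${esc(page.title)}</title></head>` +
    `<body><h1>${esc(page.heading)}</h1><p>${esc(page.body)}</p></body></html>`
  );
}
```

The Worker then returns `new Response(renderMarketingHTML(data), { headers: { "content-type": "text/html" } })`, so crawlers see full HTML without executing any JS.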
Measurement methodology
Without measurement, every "optimization" is theater. Wire these in before you make changes:
- PageSpeed Insights for the homepage and top three landing pages. Bookmark and check weekly. Watch field metrics, not lab.
- Vercel Speed Insights or SpeedCurve for production. Real-user data segmented by route, device, country.
- Web Vitals JS in your app shell, sending to Plausible or your analytics. Catches regressions between PSI runs.
- A weekly Lighthouse run via WebPageTest with the "median of 9 runs" config, on a Moto G4 throttled to 4G.
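Wiring the Web Vitals JS item takes a few lines with the `web-vitals` npm package. A sketch (assumptions: a `/api/vitals` endpoint on your side; the `onCLS`/`onINP`/`onLCP` callback API is the web-vitals v3+ interface):

```typescript
// Fields we forward from each web-vitals metric object.
type MetricLike = { name: string; value: number; id: string };

// Compact beacon payload; round to 4 decimals so small CLS values survive.
function vitalsPayload(m: MetricLike): string {
  return JSON.stringify({
    name: m.name,
    value: Number(m.value.toFixed(4)),
    id: m.id,
  });
}

// Browser-only wiring in the app shell (sketch):
// import { onCLS, onINP, onLCP } from "web-vitals";
// const report = (m: MetricLike) =>
//   navigator.sendBeacon("/api/vitals", vitalsPayload(m));
// onCLS(report); onINP(report); onLCP(report);
```

`sendBeacon` survives page unload, so metrics finalized at the end of the session (CLS and INP in particular) still get reported.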
Set a budget: any commit that regresses LCP by more than 200ms or INP by more than 50ms is a bug to fix before the next deploy.
Common performance mistakes
Optimizing on a fast laptop in Chrome. The benchmark is a mid-tier Android on 3G. Test there.
Trusting Lighthouse over CrUX. Lighthouse is lab; CrUX is field. They diverge significantly. Real users are what matters.
Treating one page's metrics as the whole app's. Marketing pages and authenticated app shells have different bottlenecks. Measure both separately.
Skipping the cache pass. HTTP caching is the highest-leverage change after pagination, and most teams never do it.
Adding tools without removing any. Each new analytics, monitoring, or chat tool costs you 50–200ms. Audit your script tags monthly and delete what you don't use.
Treating performance as one-time work. Every new feature can regress. Bake budgets into CI.
Optimization checklist
| # | Change | Effort | Expected gain |
|---|---|---|---|
| 1 | Paginate every entity list to 50 records | 30 min/page | 40–60% LCP |
| 2 | Code-split routes via dynamic imports | 1–2 hours | 30–50% LCP |
| 3 | Cache LLM and integration calls in entities | 2–4 hours | INP under 100ms |
| 4 | Batch N+1 entity queries with $in | 30 min/site | 5–20x speedup |
| 5 | Set width, height, loading on every img | 1–2 hours | CLS under 0.05 |
| 6 | Add Cloudflare Workers for HTTP caching | 2–4 hours | 60–80% repeat-visit LCP |
| 7 | Move heavy data work to backend functions | 1–3 hours | INP under 200ms |
| 8 | Defer non-critical third-party scripts | 1 hour | 100–400ms LCP |
| 9 | Pre-render marketing routes via Worker | 4–8 hours | LCP 1.5–2.5s |
| 10 | Wire Web Vitals to analytics, set budget | 2 hours | Detection of regressions |
A focused 2–3 day pass through this list typically takes a Base44 app from "fails Core Web Vitals" to "passes all three" on the 75th percentile.
Want us to optimize your Base44 app?
Our $497 audit measures all three Core Web Vitals on real-user data, identifies the top 5–10 bottlenecks, and delivers a prioritized fix plan with expected gains per change. If you want us to execute the fixes, that's a 48-hour fix sprint at $1,500. Order an audit or book a free 15-minute call.
Related reading
- Base44 Database Best Practices — schema and query patterns that compose with the performance work above.
- Base44 Deployment Checklist — per-deploy performance checks.
- Base44 Credit System Explained — caching integrations is also the biggest credit-saver.