What's happening
You wired Base44 up to Supabase because Supabase has the database guarantees, the auth primitives, and the RLS engine that Base44's data layer lacks. Sensible call. Now your two systems are quietly disagreeing with each other, and your users are paying for it. A new signup lands in `auth.users` on Supabase, but Base44's user list never sees them. A Stripe subscription upgrades from Pro to Team, the Supabase `profiles.plan` field updates, and Base44's UI keeps showing Pro for another twelve hours. A webhook fires twice, creates two user rows, and the second login resolves against the wrong one.
Base44 plus Supabase is a dual source-of-truth architecture, and dual-source-of-truth systems fail at the boundary every time you don't explicitly engineer the boundary.
We have audited eight of these stacks in the last fourteen months. Every single one had at least one of: missing Supabase database trigger, non-idempotent webhook handler, or RLS context mismatch on cross-system reads. Six had all three. The symptoms vary — disappearing users, stuck subscriptions, ghost accounts — but the architecture is the same and the fix template is the same.
This page is the fix template. It is not a quick patch, and it is not a thing you should DIY if you have paying customers. It is the playbook we run on engagement.
The architecture problem nobody warns you about
The Base44-plus-Supabase stack looks clean on a whiteboard. Supabase owns auth and data. Base44 owns the UI and SDK calls. They communicate through webhooks. Done.
In production, four structural problems tear that diagram apart.
Source-of-truth ambiguity. Both systems maintain a user table. Supabase has auth.users and usually a public.profiles row keyed to it. Base44 has its own internal user record created the first time a user logs in through the SDK. Nothing in the platform's documentation tells you which one is canonical. Most teams pick neither explicitly and end up writing user data to both, on different events, with different shapes. Drift is then inevitable; the only question is how fast.
RLS context propagation. Supabase RLS evaluates auth.uid() against the JWT on the request. When a Base44 frontend talks to Supabase directly, it must pass the user's Supabase JWT — not the Base44 session token, not the anon key. We see teams routinely call Supabase from Base44 with the anon key, RLS denies everything, the SDK returns an empty array, and the UI renders "no data" without an error. Half of the "users not syncing" tickets in our queue are actually this bug in disguise.
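The RLS-context problem above comes down to which token sits in the `Authorization` header. A minimal sketch of the correct header shape for a direct call to Supabase's REST (PostgREST) endpoint, assuming a placeholder project URL and table:

```ts
// Headers for a Supabase REST call that runs under the USER's RLS context.
// `apikey` identifies the project; `Authorization` carries the user's
// Supabase JWT, which is what auth.uid() resolves from. Putting the anon
// key in Authorization instead is the classic bug: auth.uid() is null,
// RLS filters every row, and the client sees an empty array with no error.
function supabaseUserHeaders(anonKey: string, userJwt: string): Record<string, string> {
  return {
    apikey: anonKey,
    Authorization: `Bearer ${userJwt}`,
    "Content-Type": "application/json",
  };
}

// Usage (project URL and table are illustrative):
async function fetchOwnProfile(anonKey: string, userJwt: string): Promise<Response> {
  return fetch("https://YOUR-PROJECT.supabase.co/rest/v1/profiles?select=*", {
    headers: supabaseUserHeaders(anonKey, userJwt),
  });
}
```

The same split applies when using the `supabase-js` client: the anon key is the client's `supabaseKey`, and the user session supplies the bearer token.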
Webhook ordering. Stripe fires customer.created and customer.subscription.created in close succession. Supabase Auth fires INSERT on auth.users then a separate event on public.profiles. There is no ordering guarantee. If your Base44 handler for subscription.created runs before the user record exists, the foreign key fails and the handler returns 500. Stripe retries — but only six times. After that, the subscription is silently abandoned.
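One defensive shape for the ordering problem, sketched as a pure decision function under the assumption that your handler can count delivery attempts: when the dependent user row is missing, return a non-2xx so Stripe redelivers later, and on the final attempt accept and park the event rather than letting it vanish.

```ts
// Decide how to respond to a subscription event that may arrive before
// the user row it depends on. A 503 tells Stripe to retry with backoff,
// which usually gives the user-created path time to land. On the last
// attempt, accept the event and park it for a reconciler instead of
// exhausting retries and losing the subscription silently.
type OrderingDecision =
  | { status: 200; action: "process" }
  | { status: 503; action: "retry-later" }
  | { status: 200; action: "park-for-reconciler" };

function handleOutOfOrderEvent(
  userExists: boolean,
  attempt: number,
  maxRetries = 6, // matches the retry budget described above
): OrderingDecision {
  if (userExists) return { status: 200, action: "process" };
  if (attempt < maxRetries) return { status: 503, action: "retry-later" };
  return { status: 200, action: "park-for-reconciler" };
}
```

The parked events become reconciler input, so even a pathological ordering delay degrades to a delayed sync rather than a dropped one.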
Race conditions on duplicate creates. Webhooks retry on non-2xx. If your handler is slow or partially crashes, the same event fires twice. Without idempotency, you get two user rows, two subscription rows, two of everything. We saw one client where 3.4% of all users had a duplicate after ninety days of the bug running unfixed.
These four problems compound. Fix one, the others surface louder. The reason this stack is hard is not any individual problem — it's that all four interact and a partial fix makes the bug intermittent rather than gone.
Diagnostic checklist
Run these in order. Stop when one fails — that's the layer to fix first.
1. Run `select count(*) from auth.users` in Supabase. Compare to Base44's user table count. If they differ by more than 2%, the sync layer is broken at the create path.
2. Pick one user who exists in Supabase but not Base44. Open Supabase logs and search for that user's email in the last 24 hours. If you find an INSERT on `auth.users` but no outbound webhook log, your trigger is missing or failing silently.
3. In Supabase, run `select * from auth.hooks` (or check your Database Webhooks tab). Confirm there is a webhook on `auth.users` INSERT pointing at your Base44 ingestion endpoint. If absent, that's your bug.
4. Hit your Base44 webhook endpoint manually with a synthetic payload using `curl`. Verify it returns 2xx and that a row appears in Base44 within 5 seconds. If it doesn't, the endpoint itself is broken.
5. Open browser devtools, log in as a test user, and inspect every Supabase request. Confirm the `Authorization: Bearer <jwt>` header is the user's JWT, not the anon key. If it's the anon key, your RLS context is wrong.
6. In Stripe's dashboard, find a recent `customer.subscription.updated` event. Check delivery status to every endpoint. If you see retries, your handler is failing or timing out.
7. Query your dedupe table (if it exists): `select request_id, count(*) from webhook_log group by 1 having count(*) > 1`. Any rows mean retries are creating duplicates.
8. Run `select id, created_at, email from auth.users where email in (select email from auth.users group by email having count(*) > 1)`. Any results mean you have duplicate users — usually from non-idempotent signup flows.
9. Check Stripe customer IDs against Supabase: `select stripe_customer_id, count(*) from profiles group by 1 having count(*) > 1`. Duplicates here mean the Stripe-to-Supabase sync wrote twice.
10. Inspect Base44's environment variables. Confirm `SUPABASE_SERVICE_ROLE_KEY` is set if your handlers need to bypass RLS, and confirm it is not exposed to the frontend bundle.
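For the manual endpoint check (hitting your Base44 ingestion route with a synthetic payload), the payload can be built and signed like this. A sketch under one assumption: your endpoint verifies an HMAC-SHA-256 hex signature over the user id, the same scheme as the trigger shown later on this page; adjust both header names and the signed message to whatever your endpoint actually checks.

```ts
import { createHmac } from "node:crypto";

// Build a signed synthetic user-created event for poking the ingestion
// endpoint by hand. The signature covers the user id (forwarded as
// x-request-id), mirroring the database trigger's signing scheme.
function buildSyntheticEvent(secret: string, userId: string, email: string) {
  const body = JSON.stringify({
    supabase_user_id: userId,
    email,
    created_at: new Date().toISOString(),
  });
  const signature = createHmac("sha256", secret).update(userId).digest("hex");
  return {
    body,
    headers: {
      "content-type": "application/json",
      "x-signature": signature,
      "x-request-id": userId,
    },
  };
}

// Usage (endpoint URL is a placeholder):
// const ev = buildSyntheticEvent(process.env.WEBHOOK_SECRET!, "test-user-1", "probe@example.com");
// await fetch("https://your-base44-app.base44.app/api/webhooks/user-created", {
//   method: "POST", headers: ev.headers, body: ev.body,
// });
```

Send the same event twice in a row: the first should create a row, the second should short-circuit on the dedupe table. If both create rows, you have found the idempotency bug without waiting for a real retry.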
If steps 1, 7, or 8 fail, you have data corruption that needs cleanup before any new code ships. Don't skip the cleanup; new sync code on top of corrupt state produces worse corrupt state.
The fix — pick a source of truth, then build the bridge
There are three viable architectures. Pick one based on which system owns the most write traffic. Do not mix.
| Choice | Use when | Strengths | Weaknesses |
|---|---|---|---|
| Supabase as source, Base44 as view | You own the schema and run Stripe through Supabase | Strong consistency, real FKs, RLS is canonical | Base44 UI can lag by up to the webhook latency |
| Base44 as source, Supabase as audit log | Base44 SDK is your write path; Supabase is for analytics | Simple write path | Loses Supabase's transactional guarantees on writes |
| Eventual consistency with reconciler | Neither system can be primary (legacy reasons) | Survives outages on either side | Most operational overhead, slowest to debug |
Pattern 1: Supabase as source, Base44 as view
Recommended default. Every write goes to Supabase first. Base44 receives webhook-driven projections.
```sql
-- Trigger: when an auth.users row is created, post to Base44 ingestion.
-- Requires the pg_net extension (for net.http_post) and pgcrypto (for hmac),
-- plus a secret configured as the app.webhook_secret database setting.
create or replace function notify_base44_user_created()
returns trigger
language plpgsql
security definer
as $$
begin
  perform net.http_post(
    url := 'https://your-base44-app.base44.app/api/webhooks/user-created',
    headers := jsonb_build_object(
      'content-type', 'application/json',
      'x-signature', encode(hmac(new.id::text, current_setting('app.webhook_secret'), 'sha256'), 'hex'),
      'x-request-id', new.id::text
    ),
    body := jsonb_build_object(
      'supabase_user_id', new.id,
      'email', new.email,
      'created_at', new.created_at
    )
  );
  return new;
end;
$$;

create trigger on_auth_user_created
after insert on auth.users
for each row execute function notify_base44_user_created();
```
The Base44 handler must be idempotent:
```ts
// Base44 webhook handler — idempotent on x-request-id
export async function POST(req: Request) {
  const requestId = req.headers.get("x-request-id");
  const signature = req.headers.get("x-signature");
  const raw = await req.text();
  if (!requestId || !signature) {
    return new Response("missing headers", { status: 400 });
  }
  // The trigger signs new.id (forwarded as x-request-id), so verify the
  // signature against that value, not the raw body. verifyHmac recomputes
  // the HMAC-SHA-256 hex digest and compares it in constant time.
  if (!verifyHmac(requestId, signature, process.env.WEBHOOK_SECRET!)) {
    return new Response("invalid signature", { status: 401 });
  }
  // Dedupe table check — short-circuit with 200 if already processed,
  // so the sender stops retrying.
  const seen = await base44.collection("webhook_log").get(requestId);
  if (seen) return new Response("ok", { status: 200 });
  const payload = JSON.parse(raw);
  await base44.collection("users").create({
    id: payload.supabase_user_id,
    email: payload.email,
    created_at: payload.created_at,
  });
  // The webhook_log id must be unique-constrained so two concurrent
  // retries cannot both slip past the check above.
  await base44.collection("webhook_log").create({ id: requestId, processed_at: Date.now() });
  return new Response("ok", { status: 200 });
}
```
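The handler assumes a `verifyHmac` helper. One possible implementation, using Node's built-in crypto module; `timingSafeEqual` avoids leaking the correct signature byte by byte through response timing:

```ts
import { createHmac, timingSafeEqual } from "node:crypto";

// Recompute the HMAC-SHA-256 hex digest of the signed message and
// compare it to the presented signature in constant time.
function verifyHmac(message: string, signature: string | null, secret: string): boolean {
  if (!signature) return false;
  const expected = Buffer.from(
    createHmac("sha256", secret).update(message).digest("hex"),
    "hex",
  );
  const presented = Buffer.from(signature, "hex");
  // timingSafeEqual throws on length mismatch, so check first.
  if (expected.length !== presented.length) return false;
  return timingSafeEqual(expected, presented);
}
```

Signing only the request id keeps the trigger and handler trivially consistent; signing the full body is stronger, but then both sides must produce byte-identical serializations, which jsonb-to-text conversion does not guarantee out of the box.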
Pattern 2: Base44 as source, Supabase as audit log
Use this only if Base44 owns billing and Supabase is downstream. Base44 SDK writes are forwarded to a Supabase Edge Function via webhook, using the same idempotency primitive as Pattern 1. The Edge Function writes to Supabase with the service role key, bypassing RLS, because the call is server-trusted.
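A sketch of the server-trusted write that pattern relies on, assuming Supabase's REST endpoint and a hypothetical `users_audit` table; the project URL is a placeholder. Note the service role key appears in both `apikey` and `Authorization`, which is what bypasses RLS, so this key must never reach a browser bundle:

```ts
// Build an idempotent upsert request against Supabase's REST (PostgREST)
// endpoint using the service role key. The Prefer: resolution=merge-duplicates
// header plus the on_conflict column turn the POST into an upsert, so a
// webhook retry overwrites the same row instead of failing or duplicating.
function buildServiceRoleUpsert(
  projectUrl: string,
  serviceRoleKey: string,
  row: { id: string; email: string },
) {
  return {
    url: `${projectUrl}/rest/v1/users_audit?on_conflict=id`, // hypothetical audit table
    init: {
      method: "POST",
      headers: {
        apikey: serviceRoleKey,
        Authorization: `Bearer ${serviceRoleKey}`, // service role bypasses RLS
        "Content-Type": "application/json",
        Prefer: "resolution=merge-duplicates",
      },
      body: JSON.stringify(row),
    },
  };
}

// Usage from inside the Edge Function (not executed here):
// const { url, init } = buildServiceRoleUpsert(supabaseUrl, serviceRoleKey, payload);
// await fetch(url, init);
```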
Pattern 3: Eventual consistency
Both systems write independently. A nightly reconciler job (Supabase scheduled function or a Vercel cron) reads both user tables, finds drift, and either creates the missing row or flags it for human review. Operationally expensive and only worth it when patterns 1 and 2 are blocked. See the reconciler skeleton in our Stripe integration fix.
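The core of such a reconciler is a set diff over the two user tables. A minimal sketch, assuming each side can be reduced to an id-to-email map; the classification names are illustrative:

```ts
// Classify drift between the two user tables. Rows missing on one side
// get created there by the reconciler; rows present on both sides with
// disagreeing data get flagged for human review rather than auto-resolved,
// since in Pattern 3 neither system is authoritative.
interface DriftReport {
  missingInBase44: string[];   // Supabase ids with no Base44 row
  missingInSupabase: string[]; // Base44 ids with no Supabase row
  conflicting: string[];       // ids on both sides with different emails
}

function diffUsers(supabase: Map<string, string>, base44: Map<string, string>): DriftReport {
  const report: DriftReport = { missingInBase44: [], missingInSupabase: [], conflicting: [] };
  for (const [id, email] of supabase) {
    if (!base44.has(id)) report.missingInBase44.push(id);
    else if (base44.get(id) !== email) report.conflicting.push(id);
  }
  for (const id of base44.keys()) {
    if (!supabase.has(id)) report.missingInSupabase.push(id);
  }
  return report;
}
```

An empty report is the reconciler's success condition; anything else is logged, and only the two "missing" buckets are safe to repair automatically.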
What we've seen across 8 client engagements
Concrete patterns from real fixes. No names; the drift is the same everywhere.
- In 5 of 8 engagements, the root cause was webhook idempotency. The handler ran twice on a Stripe retry and created duplicate subscription rows. The fix was always the same: a `webhook_log` dedupe table keyed on the inbound event ID, checked before any write. Median fix time: 4 hours.
- In 6 of 8, the team had no Supabase database trigger and was syncing users from the frontend after `signUp`. We replaced every client-side sync with `pg_net`-driven triggers. User-not-appearing complaints dropped to zero within 48 hours.
- In 4 of 8, RLS was filtering legitimate reads because the Base44 frontend was passing the anon key instead of the user's Supabase JWT. Fix: mint a Supabase JWT server-side on Base44 login and forward it on every Supabase call.
- In 3 of 8, Stripe was webhooking into both systems simultaneously and they fought over the `plan` column. We picked one receiver (a Supabase Edge Function, in every case) and made Base44 read through it.
- In 2 of 8, the user table had hard duplicates — same email, two `id`s, both with their own subscription history. Cleanup required a manual merge script and a 6-hour maintenance window.
- In 1 of 8, the webhooks-require-active-users bug was suppressing Base44 outbound calls during off-hours. We migrated those handlers off Base44 entirely.
The median engagement was 9 working days. Nothing took less than 5. The longest was 18 days, on a stack that had been drifting for two years and required a full data audit before we could safely change behavior.
When to abandon the dual-system architecture
Honest take: Base44 plus Supabase is the wrong stack for a non-trivial product, and a meaningful share of our engagements end with us recommending consolidation rather than continued integration.
The reason is structural. Base44's value is fast UI and code generation. Supabase's value is a real database with real auth. Putting them together means you are paying the operational cost of integrating two opinionated systems while losing the simplicity that drew you to either. The teams that succeed long-term either use Base44 alone (accepting its data layer as-is) or use Supabase plus a custom frontend (Next.js, usually).
If you are spending more than 20% of your engineering time on sync bugs, drift cleanup, or webhook debugging, that is the signal. The right next move is migration, not more patches. We do this as a Base44-to-Next.js-plus-Supabase migration and the median timeline is 4-6 weeks for a mid-sized SaaS.
If you are not at that point yet but the symptoms in this page sound familiar, the fix template above will buy you 6-12 months of stability. Use that runway to evaluate whether you are still on the right stack.
Need this fixed urgently?
Sync drift between Base44 and Supabase is a complex-fix engagement. We audit both schemas, instrument every webhook with idempotency and signature verification, replace client-side syncs with database triggers, and run a 7-day soak with synthetic traffic before handoff. Fixed price.
Start a complex-fix engagement for Base44 + Supabase sync — or browse our Base44 debugging help page for related engagements.
Related problems
- Stripe integration breaks on update — the subscription-sync half of this same architecture problem.
- Webhooks require active users — Base44's outbound webhooks have hidden activity requirements that compound sync drift.
- Data loss after returning to your app — the data-layer fragility this fix template assumes you've already addressed.