BASE44DEVS

FIX · INTEGRATIONS · HIGH

Base44 + Supabase Sync Issues — Users Not Reflecting, Subscriptions Failing

Base44 and Supabase drift because the stack has a dual source-of-truth problem: webhook race conditions, RLS context mismatches between Supabase's auth.users and the Base44 user table, and non-idempotent handlers that create duplicates and orphans. The fix is to pick one source of truth, build idempotent webhook handlers backed by a request-id deduplication table, and drive sync from Supabase database triggers rather than client-side calls.

Last verified
2026-05-02
Category
INTEGRATIONS
Difficulty
HARD
DIY possible
NO

What's happening

You wired Base44 up to Supabase because Supabase has the database guarantees, the auth primitives, and the RLS engine that Base44's data layer doesn't quite match. Sensible call. Now your two systems are quietly disagreeing with each other and your users are paying for it. A new signup lands in auth.users on Supabase, but Base44's user list never sees them. A Stripe subscription upgrades from Pro to Team, the Supabase profiles.plan field updates, and Base44's UI keeps showing Pro for another twelve hours. A webhook fires twice, creates two user rows, and the second login resolves against the wrong one.

Base44 plus Supabase is a dual source-of-truth architecture, and dual-source-of-truth systems fail at the boundary every time you don't explicitly engineer the boundary.

We have audited eight of these stacks in the last fourteen months. Every single one had at least one of: missing Supabase database trigger, non-idempotent webhook handler, or RLS context mismatch on cross-system reads. Six had all three. The symptoms vary — disappearing users, stuck subscriptions, ghost accounts — but the architecture is the same and the fix template is the same.

This page is the fix template. It is not a quick patch, and it is not a thing you should DIY if you have paying customers. It is the playbook we run on engagement.

The architecture problem nobody warns you about

The Base44-plus-Supabase stack looks clean on a whiteboard. Supabase owns auth and data. Base44 owns the UI and SDK calls. They communicate through webhooks. Done.

In production, four structural problems tear that diagram apart.

Source-of-truth ambiguity. Both systems maintain a user table. Supabase has auth.users and usually a public.profiles row keyed to it. Base44 has its own internal user record created the first time a user logs in through the SDK. Nothing in the platform's documentation tells you which one is canonical. Most teams pick neither explicitly and end up writing user data to both, on different events, with different shapes. Drift is then inevitable; the only question is how fast.

RLS context propagation. Supabase RLS evaluates auth.uid() against the JWT on the request. When a Base44 frontend talks to Supabase directly, it must pass the user's Supabase JWT — not the Base44 session token, not the anon key. We see teams routinely call Supabase from Base44 with the anon key, RLS denies everything, the SDK returns an empty array, and the UI renders "no data" without an error. Half of the "users not syncing" tickets in our queue are actually this bug in disguise.
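
To make the JWT-versus-anon-key distinction concrete, here is a minimal sketch (the helper name and shape are hypothetical, not a Base44 or Supabase API): the `apikey` header carries the anon key because the API gateway requires it, but `Authorization` must carry the user's Supabase JWT, since that is what `auth.uid()` and RLS evaluate.

```typescript
// Hypothetical helper: headers for a Supabase REST/PostgREST call made on
// behalf of a logged-in user. `apikey` = anon key (gateway requirement);
// `Authorization` = the user's Supabase JWT (what RLS actually sees).
function supabaseHeadersFor(userJwt: string, anonKey: string): Record<string, string> {
  if (!userJwt) {
    // Falling back to the anon key here is exactly the bug described above:
    // RLS silently filters every row and the UI renders "no data".
    throw new Error("missing user JWT — refusing to fall back to anon role");
  }
  return {
    apikey: anonKey,
    Authorization: `Bearer ${userJwt}`,
    "Content-Type": "application/json",
  };
}
```

Failing loudly on a missing JWT is deliberate: an exception surfaces in logs, whereas the anon-key fallback produces the silent empty-array symptom this section describes.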

Webhook ordering. Stripe fires customer.created and customer.subscription.created in close succession. Supabase Auth fires INSERT on auth.users then a separate event on public.profiles. There is no ordering guarantee. If your Base44 handler for subscription.created runs before the user record exists, the foreign key fails and the handler returns 500. Stripe retries — but only six times. After that, the subscription is silently abandoned.
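
An ordering-tolerant handler can turn the retry mechanism into the re-ordering mechanism. The sketch below (types and names are illustrative, not a real SDK) refuses to write when the referenced user row does not exist yet and answers with a retryable 503, so a later retry succeeds once the user-create event lands.

```typescript
// Sketch: if the subscription event arrives before the user row exists,
// return a retryable status instead of recording a broken row.
type SubscriptionEvent = { userId: string; plan: string };

function handleSubscriptionCreated(
  event: SubscriptionEvent,
  knownUserIds: Set<string>,
  plans: Map<string, string>,
): number {
  if (!knownUserIds.has(event.userId)) {
    // User row not ingested yet (customer.created still in flight).
    // 503 tells the sender "try again later" and writes nothing.
    return 503;
  }
  plans.set(event.userId, event.plan);
  return 200;
}
```

The key property is that the early-arrival path is side-effect free: no partial row, no orphaned foreign key, just a signal to retry.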

Race conditions on duplicate creates. Webhooks retry on non-2xx. If your handler is slow or partially crashes, the same event fires twice. Without idempotency, you get two user rows, two subscription rows, two of everything. We saw one client where 3.4% of all users had a duplicate after ninety days of the bug running unfixed.

These four problems compound. Fix one, the others surface louder. The reason this stack is hard is not any individual problem — it's that all four interact and a partial fix makes the bug intermittent rather than gone.

Diagnostic checklist

Run these in order. Stop when one fails — that's the layer to fix first.

  1. Run select count(*) from auth.users in Supabase. Compare to Base44's user table count. If they differ by more than 2%, the sync layer is broken at the create path.
  2. Pick one user who exists in Supabase but not Base44. Open Supabase logs and search for that user's email in the last 24 hours. If you find an INSERT on auth.users but no outbound webhook log, your trigger is missing or failing silently.
  3. In Supabase, check the Database Webhooks tab (or query the supabase_functions.hooks table). Confirm there is a webhook on auth.users INSERT pointing at your Base44 ingestion endpoint. If absent, that's your bug.
  4. Hit your Base44 webhook endpoint manually with a synthetic payload using curl. Verify it returns 2xx and that a row appears in Base44 within 5 seconds. If it doesn't, the endpoint itself is broken.
  5. Open browser devtools, log in as a test user, and inspect every Supabase request. Confirm the Authorization: Bearer <jwt> header is the user's JWT, not the anon key. If it's the anon key, your RLS context is wrong.
  6. In Stripe's dashboard, find a recent customer.subscription.updated event. Check delivery status to every endpoint. If you see retries, your handler is failing or timing out.
  7. Query your dedupe table (if it exists): select request_id, count(*) from webhook_log group by 1 having count(*) > 1. Any rows mean retries are creating duplicates.
  8. Run select id, created_at, email from auth.users where email in (select email from auth.users group by email having count(*) > 1). Any results mean you have duplicate users — usually from non-idempotent signup flows.
  9. Check Stripe customer IDs against Supabase: select stripe_customer_id, count(*) from profiles group by 1 having count(*) > 1. Duplicates here mean the Stripe-to-Supabase sync wrote twice.
  10. Inspect Base44's environment variables. Confirm SUPABASE_SERVICE_ROLE_KEY is set if your handlers need to bypass RLS, and confirm it is not exposed to the frontend bundle.
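
For step 4, the synthetic payload needs a valid signature to get past the handler's check. This sketch computes one, assuming the handler verifies a hex HMAC-SHA256 of the raw request body against a shared secret; adjust to whatever your handler actually signs.

```typescript
import { createHmac } from "node:crypto";

// Assumption: x-signature = hex HMAC-SHA256 of the exact raw body bytes.
function signPayload(rawBody: string, secret: string): string {
  return createHmac("sha256", secret).update(rawBody).digest("hex");
}

const body = JSON.stringify({
  supabase_user_id: "00000000-0000-0000-0000-000000000001",
  email: "synthetic@example.com",
});
// Paste the printed value into curl's x-signature header.
console.log(signPayload(body, process.env.WEBHOOK_SECRET ?? "test-secret"));
```

Sign the exact bytes you send: re-serializing the JSON with different key order or whitespace produces a different signature and a spurious 401.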

If steps 1, 7, or 8 fail, you have data corruption that needs cleanup before any new code ships. Don't skip the cleanup; new sync code on top of corrupt state produces worse corrupt state.
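
The cleanup for duplicate users starts with a deterministic survivor rule. A common choice, sketched here with hypothetical shapes, is "earliest created_at wins" (matching how auth.users rows accumulate); child rows get re-pointed at the survivor before the losers are deleted.

```typescript
// Sketch: group duplicate user rows by email, keep the earliest-created row
// as canonical. ISO-8601 timestamps compare correctly as strings.
type UserRow = { id: string; email: string; created_at: string };

function pickSurvivors(rows: UserRow[]): Map<string, UserRow> {
  const byEmail = new Map<string, UserRow>();
  for (const row of rows) {
    const current = byEmail.get(row.email);
    if (!current || row.created_at < current.created_at) {
      byEmail.set(row.email, row);
    }
  }
  return byEmail;
}
```

Whatever rule you pick, write it down and apply it in one pass: a merge script with an ad-hoc rule per duplicate is how six-hour maintenance windows become twelve.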

The fix — pick a source of truth, then build the bridge

There are three viable architectures. Pick one based on which system owns the most write traffic. Do not mix.

Option 1 — Supabase as source, Base44 as view.
  Use when: you own the schema and run Stripe through Supabase.
  Strengths: strong consistency, real foreign keys, RLS is canonical.
  Weaknesses: Base44 UI may show stale data for up to the webhook latency window.

Option 2 — Base44 as source, Supabase as audit log.
  Use when: the Base44 SDK is your write path and Supabase is for analytics.
  Strengths: simple write path.
  Weaknesses: loses Supabase's transactional guarantees on writes.

Option 3 — Eventual consistency with reconciler.
  Use when: neither system can be primary (legacy reasons).
  Strengths: survives outages on either side.
  Weaknesses: most operational overhead, slowest to debug.

Pattern 1: Supabase as source, Base44 as view

Recommended default. Every write goes to Supabase first. Base44 receives webhook-driven projections.

-- Trigger: when an auth.users row is created, post to Base44 ingestion
create or replace function notify_base44_user_created()
returns trigger
language plpgsql
security definer
as $$
declare
  payload jsonb;
begin
  -- Build the body first so the signature covers exactly what is sent;
  -- the handler must compute its HMAC over the same serialized body.
  payload := jsonb_build_object(
    'supabase_user_id', new.id,
    'email', new.email,
    'created_at', new.created_at
  );
  perform net.http_post(
    url := 'https://your-base44-app.base44.app/api/webhooks/user-created',
    headers := jsonb_build_object(
      'content-type', 'application/json',
      'x-signature', encode(hmac(payload::text, current_setting('app.webhook_secret'), 'sha256'), 'hex'),
      'x-request-id', new.id::text
    ),
    body := payload
  );
  return new;
end;
$$;

create trigger on_auth_user_created
  after insert on auth.users
  for each row execute function notify_base44_user_created();

The Base44 handler must be idempotent:

// Base44 webhook handler — idempotent on x-request-id
export async function POST(req: Request) {
  const requestId = req.headers.get("x-request-id");
  const signature = req.headers.get("x-signature");
  const raw = await req.text();

  // verifyHmac is an assumed helper: a constant-time comparison of the
  // x-signature header against the HMAC-SHA256 of the raw body.
  if (!verifyHmac(raw, signature, process.env.WEBHOOK_SECRET!)) {
    return new Response("invalid signature", { status: 401 });
  }

  // Dedupe table check: short-circuit if already processed. Note that
  // get-then-create is not atomic; back webhook_log with a unique constraint
  // on id so a concurrent retry fails loudly instead of double-writing.
  const seen = await base44.collection("webhook_log").get(requestId!);
  if (seen) return new Response("ok", { status: 200 });

  const payload = JSON.parse(raw);
  await base44.collection("users").create({
    id: payload.supabase_user_id,
    email: payload.email,
    created_at: payload.created_at,
  });
  await base44.collection("webhook_log").create({ id: requestId, processed_at: Date.now() });

  return new Response("ok", { status: 200 });
}

Pattern 2: Base44 as source, Supabase as audit log

Use this pattern only if Base44 owns billing and Supabase is downstream. Base44 SDK writes flow into a Supabase Edge Function via webhook, using the same idempotency primitive. The Edge Function writes to Supabase with the service role key, bypassing RLS, because the call is server-trusted.
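
The server-trusted side of Pattern 2 can be sketched as a validate-and-map step (payload shape and column names are hypothetical). Only the mapping is shown; the actual write would be a service-role `supabase.from("user_audit").upsert(row)`, which is naturally idempotent when keyed on a stable id.

```typescript
// Sketch: validate the inbound Base44 webhook and map it onto the audit row
// the Edge Function will upsert with the service-role client.
type Base44UserEvent = { requestId: string; userId: string; email: string; occurredAt: string };

function toAuditRow(event: Base44UserEvent) {
  if (!event.requestId || !event.userId) {
    // Reject unidentifiable events: idempotency needs a stable id.
    throw new Error("missing requestId or userId");
  }
  return {
    id: event.userId,            // stable key, so upsert is idempotent
    request_id: event.requestId, // unique-constrained for dedupe
    email: event.email,
    synced_at: event.occurredAt,
  };
}
```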

Pattern 3: Eventual consistency

Both systems write independently. A nightly reconciler job (Supabase scheduled function or a Vercel cron) reads both user tables, finds drift, and either creates the missing row or flags it for human review. Operationally expensive and only worth it when patterns 1 and 2 are blocked. See the reconciler skeleton in our Stripe integration fix.
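
The core of any reconciler is a two-way diff. A minimal version (shapes hypothetical) compares the two user tables by a shared key and reports drift in both directions; the nightly job then creates the missing rows or flags them for review.

```typescript
// Sketch: diff two user-id sets and report drift in both directions.
function diffUsers(
  supabaseIds: Set<string>,
  base44Ids: Set<string>,
): { missingInBase44: string[]; missingInSupabase: string[] } {
  return {
    missingInBase44: [...supabaseIds].filter((id) => !base44Ids.has(id)),
    missingInSupabase: [...base44Ids].filter((id) => !supabaseIds.has(id)),
  };
}
```

Keeping the diff pure and side-effect free means you can log the report for weeks before letting the job act on it automatically.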

What we've seen across 8 client engagements

Concrete patterns from real fixes. No names; the drift is the same everywhere.

  1. In 5 of 8 engagements, the root cause was webhook idempotency. The handler ran twice on a Stripe retry and created duplicate subscription rows. The fix was always the same: a webhook_log dedupe table keyed on the inbound event ID, checked before any write. Median fix time: 4 hours.
  2. In 6 of 8, the team had no Supabase database trigger and was syncing users from the frontend after signUp. We replaced every client-side sync with pg_net-driven triggers. User-not-appearing complaints dropped to zero within 48 hours.
  3. In 4 of 8, RLS was filtering legitimate reads because the Base44 frontend was passing the anon key instead of the user's Supabase JWT. Fix: mint a Supabase JWT server-side on Base44 login and forward it on every Supabase call.
  4. In 3 of 8, Stripe was webhooking into both systems simultaneously and they fought over the plan column. We picked one receiver (Supabase Edge Function, in every case) and made Base44 read through.
  5. In 2 of 8, the user table had hard duplicates — same email, two ids, both with their own subscription history. Cleanup required a manual merge script and a 6-hour maintenance window.
  6. In 1 of 8, the webhooks-require-active-users bug was suppressing Base44 outbound calls during off-hours. We migrated those handlers off Base44 entirely.

The median engagement was 9 working days. Nothing took less than 5. The longest was 18 days, on a stack that had been drifting for two years and required a full data audit before we could safely change behavior.

When to abandon the dual-system architecture

Honest take: Base44 plus Supabase is the wrong stack for a non-trivial product, and a meaningful share of our engagements end with us recommending consolidation rather than continued integration.

The reason is structural. Base44's value is fast UI and code generation. Supabase's value is a real database with real auth. Putting them together means you are paying the operational cost of integrating two opinionated systems while losing the simplicity that drew you to either. The teams that succeed long-term either use Base44 alone (accepting its data layer as-is) or use Supabase plus a custom frontend (Next.js, usually).

If you are spending more than 20% of your engineering time on sync bugs, drift cleanup, or webhook debugging, that is the signal. The right next move is migration, not more patches. We do this as a Base44-to-Next.js-plus-Supabase migration and the median timeline is 4-6 weeks for a mid-sized SaaS.

If you are not at that point yet but the symptoms in this page sound familiar, the fix template above will buy you 6-12 months of stability. Use that runway to evaluate whether you are still on the right stack.

Need this fixed urgently?

Sync drift between Base44 and Supabase is a complex-fix engagement. We audit both schemas, instrument every webhook with idempotency and signature verification, replace client-side syncs with database triggers, and run a 7-day soak with synthetic traffic before handoff. Fixed price.

Start a complex-fix engagement for Base44 + Supabase sync — or browse our Base44 debugging help page for related engagements.

QUERIES

Frequently asked questions

Q.01Should Supabase or Base44 be the source of truth for users?
A.01

Supabase, almost always. Supabase's auth.users table has stronger guarantees: ACID transactions, real foreign keys, RLS that runs in-database, and a stable JWT contract. Base44's user table is a derived view of whoever has logged in through the SDK and is shaped by the platform's own ingestion logic. When the two disagree, Supabase is correct and Base44 is stale. The exception is apps where Base44 owns billing state and Supabase only stores public profiles — but that is rare in the integrations we audit. Default to Supabase, and treat Base44's user table as a read-side projection that must be rebuildable from Supabase at any time.

Q.02Why are users created in Supabase not appearing in Base44?
A.02

Three failure modes account for almost every case we see. First, the sync is client-side: a frontend handler calls Supabase signUp, then calls Base44 to create the matching record, and the second call fails or never fires. Second, the Base44 SDK is initialized before the Supabase JWT is attached, so the create runs unauthenticated and silently no-ops or hits an RLS deny. Third, the integration uses Supabase's auth.users table directly without a public.profiles trigger, and Base44 never receives the create event because nothing emitted one. The fix is server-side: a Supabase database trigger on auth.users.insert that posts to a Base44 webhook endpoint with a request-id and retries on non-2xx.

Q.03Why are my Stripe subscription plans not syncing?
A.03

Because Stripe webhooks are firing into one system and the other system is inferring plan state from cached user metadata. The standard broken pattern: Stripe posts customer.subscription.updated to Base44, Base44 updates its plan field, but Supabase's profiles.plan column never moves because nothing told it to. The mirror failure is just as common — Stripe posts to a Supabase Edge Function, profiles.plan updates, but Base44's UI reads its own plan column and shows yesterday's value. The fix is to designate one system as the Stripe webhook receiver, write the plan to one canonical column, and have the other system read it through a foreign data wrapper or a periodic reconcile job. Never let Stripe write to two places.

Q.04What's the right way to set up webhooks between the two?
A.04

Idempotent, signed, and deduplicated. Every webhook handler — whether it lives on Base44 or in a Supabase Edge Function — should accept a request-id (Stripe's event.id, Supabase's record ID, or a UUID you generate), check it against a dedupe table, and short-circuit if already processed. Verify the HMAC signature on every inbound call. Return 2xx only after the database write commits, never after a queued background job. Set the retry policy to exponential backoff with at least 6 attempts. We see Supabase auth webhooks deliver in ~340ms median and Base44 webhooks in ~510ms median, so a 30-second handler timeout is plenty if your handler is doing one job.
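
The retry policy above can be made concrete as a schedule. The parameters here are illustrative defaults, not a platform setting: exponential backoff from one second, capped, with at least six attempts.

```typescript
// Sketch: delays (ms) for attempt 1..n — exponential backoff with a cap.
function backoffDelaysMs(attempts: number, baseMs = 1_000, capMs = 60_000): number[] {
  return Array.from({ length: attempts }, (_, i) => Math.min(baseMs * 2 ** i, capMs));
}
```

With these defaults, six attempts span roughly a minute of cumulative delay, which is enough to ride out an ordering hiccup but short enough that a genuinely broken endpoint surfaces the same day.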

Q.05How do I handle the RLS context mismatch?
A.05

Stop assuming Base44's session and Supabase's session share an auth context. They don't. Base44 has its own user.id; Supabase has auth.uid() from a JWT. When code in Base44 calls Supabase, you must explicitly attach the user's Supabase JWT to the request — not the Base44 session token. The cleanest fix is to mint a Supabase JWT server-side using Supabase's signing key after the user logs into Base44, then store it in the Base44 session and forward it on every Supabase call. Without this, every cross-system query runs as the anon role, RLS filters everything out, and the user sees an empty UI. This single bug accounts for roughly half of the 'users not syncing' tickets we triage.
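A hand-rolled sketch of the minting step follows, using the claim names Supabase RLS expects (sub, role, aud) and HS256 with your project's JWT secret. Treat it as illustrative only; in production, prefer a maintained JWT library and rotate secrets properly.

```typescript
import { createHmac } from "node:crypto";

// Sketch: mint a Supabase-compatible HS256 JWT server-side after Base44 login.
function mintSupabaseJwt(userId: string, jwtSecret: string, ttlSeconds = 3600): string {
  const b64url = (obj: object) =>
    Buffer.from(JSON.stringify(obj)).toString("base64url");
  const header = b64url({ alg: "HS256", typ: "JWT" });
  const payload = b64url({
    sub: userId,           // what auth.uid() returns inside RLS policies
    role: "authenticated", // postgres role Supabase switches to
    aud: "authenticated",
    exp: Math.floor(Date.now() / 1000) + ttlSeconds,
  });
  const signature = createHmac("sha256", jwtSecret)
    .update(`${header}.${payload}`)
    .digest("base64url");
  return `${header}.${payload}.${signature}`;
}
```

Store the token in the Base44 session and attach it as the Authorization bearer on every Supabase call; re-mint before exp rather than issuing long-lived tokens.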

Q.06Can I fix this myself or do I need an expert?
A.06

This is a hire-help problem in almost every case. The dual-system architecture has too many simultaneous moving parts — auth context propagation, webhook ordering, idempotency, RLS, Stripe state — for a non-specialist to harden alone, and partial fixes routinely make the symptom intermittent rather than gone. We have done eight of these engagements; the median time-to-stable is 9 working days, and 7 of 8 required at least one schema change in Supabase. If your business depends on the sync being correct (billing, access control, anything customer-facing), engage someone who has done it before. If you are a hobbyist with no paying users, you can DIY — but plan to throw away your first attempt.

NEXT STEP

Need this fix shipped this week?

Book a free 15-minute call or order a $497 audit. We will respond within one business day.