From prototype to production.
The current portal stores files as session metadata — enough for a demo. To deploy this for real staff with actual sensitive documents (I-9s, IDs, bank info), you need proper backend storage, encryption, and access controls. Here's the shortest path there.
The architecture, at a glance.
You need four things talking to each other: the portal (what the staff member sees), an API (to authenticate and coordinate), object storage (where the actual files live), and a database (to track status, ownership, and metadata). The portal never touches the storage bucket directly — it asks the API for a time-limited upload link, and the file goes straight from the user's browser to the bucket.
Files never pass through your API server, which keeps it cheap and fast. The API only hands out short-lived signed URLs (typically 5–15 minutes) that grant temporary write access to a specific path in the bucket. This is the same pattern Dropbox, Google Drive, and most file-upload SaaS products use.
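As an illustration only, a signed URL boils down to an expiry plus a keyed signature over the method and path. Here's a hypothetical HMAC scheme in TypeScript — the domain, secret, and query-parameter names are all assumptions, and a real deployment should use the storage provider's own signing (e.g. S3 presigned PUTs or Supabase's signed upload URLs) rather than rolling its own:

```typescript
import { createHmac } from "node:crypto";

// Hypothetical signing secret — in reality this lives server-side only,
// e.g. in your secrets manager, never in the portal.
const SIGNING_SECRET = "replace-me";

// Returns a time-limited URL granting a single PUT to one storage path.
function signUploadUrl(storageKey: string, ttlSeconds = 600): string {
  const expires = Math.floor(Date.now() / 1000) + ttlSeconds; // 10-minute default
  const payload = `PUT:${storageKey}:${expires}`;
  const token = createHmac("sha256", SIGNING_SECRET).update(payload).digest("hex");
  // Assumption: the storage front end recomputes this HMAC and rejects the
  // PUT if the token doesn't match or the expiry has passed.
  return `https://storage.example.com/${storageKey}?expires=${expires}&token=${token}`;
}
```

Because the token commits to the method, the path, and the expiry, a leaked URL can't be reused for a different file or after the window closes.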
Storage options, matched to your stack.
Since you're already on Google Workspace with Zapier in the mix, you have a few natural choices. Here's how they compare:
Google Drive
- Already in your stack — no new vendor
- Native Zapier triggers for new files
- Shared drives for team access
- Admin audit log out of the box
- Folder-per-hire is a clean pattern
AWS S3
- Industry standard for this exact use case
- Bucket policies + KMS encryption
- Lifecycle rules (auto-archive old hires)
- Requires AWS account & IAM setup
- Zapier integration available
Supabase Storage
- Storage + Postgres + Auth in one
- Row-level security for access control
- Signed URLs built in
- Fastest path if starting from scratch
- Generous free tier
If you already live in Google Workspace, start with Drive. It's the lowest-friction option and your team already has access controls figured out. Move to S3 or Supabase only if you outgrow Drive's API limits or need per-file audit trails.
The upload flow, step by step.
Here's what happens when a staff member clicks "Upload" on their W-4:
1. Request a signed URL
The portal sends the doc type and filename to your API. The API verifies the hire's session, generates a unique storage path, and returns a time-limited signed upload URL.
```
// Portal → API
POST /api/uploads/request
{
  "hire_id": "h_8f3k2",
  "doc_type": "w4",
  "filename": "w4-2026.pdf",
  "mime": "application/pdf",
  "size": 284511
}

// API → Portal
{
  "upload_url": "https://storage.../signed?token=...",
  "storage_key": "hires/h_8f3k2/w4/2026-04-14-w4.pdf",
  "expires_at": "2026-04-14T15:32:00Z",
  "upload_id": "up_a9b2c"
}
```
2. Upload directly to storage
The browser PUTs the file straight to the signed URL. This is the heavy part of the transfer, and it never touches your server.
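That direct PUT can be sketched as a small client helper. The `fetchImpl` parameter is injectable so the function can be unit-tested; in the browser it defaults to the global `fetch`:

```typescript
// Step 2 sketch: PUT the file bytes straight to the signed URL from step 1.
async function putToSignedUrl(
  uploadUrl: string,
  file: Blob,
  fetchImpl: typeof fetch = fetch
): Promise<void> {
  const res = await fetchImpl(uploadUrl, {
    method: "PUT",
    headers: { "Content-Type": file.type || "application/octet-stream" },
    body: file,
  });
  if (!res.ok) throw new Error(`upload failed: ${res.status}`);
}
```

Note there's no auth header here: the token baked into the signed URL is the credential.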
3. Confirm the upload
Once done, the portal tells the API the upload finished. The API verifies the file exists at the expected path, runs a virus scan if you have one wired up, and writes the metadata row to the database.
```
// Portal → API
POST /api/uploads/confirm
{
  "upload_id": "up_a9b2c",
  "checksum": "sha256:4f2a..."
}

// API writes to DB
INSERT INTO documents (
  id, hire_id, doc_type, storage_key, original_filename,
  mime, size, uploaded_at, status
) VALUES (...);
```
4. Fire the nudge-system event
After a successful upload, your API publishes an event like document.uploaded. That's what the nudge rules (previous page) subscribe to — closing the loop between upload activity and downstream notifications.
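A minimal sketch of that event, as it might leave your API. The payload shape and field names below are assumptions — match whatever format your nudge rules already consume:

```typescript
// Assumed event shape for the nudge system's event bus.
interface DocumentUploadedEvent {
  type: "document.uploaded";
  hire_id: string;
  doc_type: string;
  upload_id: string;
  occurred_at: string; // ISO 8601
}

function buildDocumentUploadedEvent(
  hireId: string,
  docType: string,
  uploadId: string
): DocumentUploadedEvent {
  return {
    type: "document.uploaded",
    hire_id: hireId,
    doc_type: docType,
    upload_id: uploadId,
    occurred_at: new Date().toISOString(),
  };
}

// Publishing is then one POST to your event endpoint or a Zapier catch hook:
// await fetch(EVENT_BUS_URL, { method: "POST", body: JSON.stringify(event) });
```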
API endpoints you'll need.
Minimum viable set. You can build these in anything — Next.js API routes, Node/Express, Python/FastAPI, or even a no-code layer like Xano if you're moving fast.
- POST /api/uploads/request — generate signed URL
- POST /api/uploads/confirm — finalize upload, write metadata
- GET /api/hires/:id/documents — list uploaded docs for a hire
- GET /api/documents/:id/download — signed download URL (admins only)
- DELETE /api/documents/:id — soft delete (keep audit trail)
- GET /api/hires/:id/status — tasks, stage, progress (powers the portal)
- PATCH /api/hires/:id/tasks/:task_id — mark task complete
- POST /api/webhooks/events — internal event bus for nudges
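To show how thin this API can be, here's the first endpoint sketched on Node's built-in `http` module — no framework, with session verification and the real URL signer stubbed out, and every name an assumption:

```typescript
import { createServer } from "node:http";

// Sketch of POST /api/uploads/request. Swap in Express, Next.js API routes,
// or FastAPI as you prefer; the shape of the work is the same.
const server = createServer((req, res) => {
  if (req.method === "POST" && req.url === "/api/uploads/request") {
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", () => {
      const { hire_id, doc_type, filename } = JSON.parse(body);
      // TODO: verify the hire's session before signing anything.
      const storage_key = `hires/${hire_id}/${doc_type}/${filename}`;
      res.writeHead(200, { "Content-Type": "application/json", Connection: "close" });
      res.end(
        JSON.stringify({
          upload_url: `https://storage.example.com/${storage_key}?token=stub`, // stubbed signer
          storage_key,
          expires_at: new Date(Date.now() + 10 * 60_000).toISOString(), // 10-minute window
        })
      );
    });
  } else {
    res.writeHead(404).end();
  }
});

server.listen(0); // ephemeral port for local testing
```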
Supabase gives you 80% of this for free via auto-generated REST APIs on top of your Postgres schema. You'd only need to write the signed-URL endpoint and the event publisher.
Security & compliance essentials.
You're handling I-9s, government IDs, and bank details. This isn't a blog comment form. These are the non-negotiables:
Encryption
Encryption at rest (server-side encryption on the bucket) and in transit (TLS only). All three storage options above handle this by default. Don't disable it.
Access control
Staff can only access their own documents. Admins can access all. Nobody on the engineering side should have read access to the bucket by default — grant it temporarily when debugging, revoke when done.
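The staff/admin rule above is small enough to write down directly. A sketch, with role and field names assumed — map them to your real schema:

```typescript
type Role = "staff" | "admin";

interface User {
  id: string;
  role: Role;
}

interface DocumentMeta {
  id: string;
  owner_user_id: string; // the staff member the document belongs to
}

// Staff read only their own documents; admins read everything.
function canReadDocument(user: User, doc: DocumentMeta): boolean {
  if (user.role === "admin") return true;
  return doc.owner_user_id === user.id;
}
```

Enforce this in the API layer (or via Supabase row-level security), never in the portal — client-side checks are cosmetic.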
Retention policy
I-9s must be retained for 3 years after hire date or 1 year after termination, whichever is later. W-4s for 4 years. Build a lifecycle rule that archives to cold storage at those boundaries rather than deleting.
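The "whichever is later" rule for I-9s is easy to get wrong in a lifecycle config, so it's worth encoding once. A sketch (function name assumed):

```typescript
// I-9 retention: 3 years after hire date or 1 year after termination,
// whichever is later. Returns the earliest date the file may move to
// cold storage. Pass null while the person is still employed.
function i9RetentionEnd(hireDate: Date, terminationDate: Date | null): Date {
  const threeYearsAfterHire = new Date(hireDate);
  threeYearsAfterHire.setUTCFullYear(threeYearsAfterHire.getUTCFullYear() + 3);
  if (!terminationDate) return threeYearsAfterHire; // still employed: 3-year floor
  const oneYearAfterTermination = new Date(terminationDate);
  oneYearAfterTermination.setUTCFullYear(oneYearAfterTermination.getUTCFullYear() + 1);
  return threeYearsAfterHire > oneYearAfterTermination
    ? threeYearsAfterHire
    : oneYearAfterTermination;
}
```

Run this nightly (or on termination events) to tag documents with their archive date, then let the bucket's lifecycle rule act on the tag.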
Audit log
Every document access — views, downloads, deletes — gets logged with who, what, when. This saves you during any employment dispute or audit.
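A minimal shape for those records, with assumed field names:

```typescript
type AuditAction = "view" | "download" | "delete";

// Who / what / when, captured for every document access.
interface AuditEntry {
  actor_id: string;
  action: AuditAction;
  document_id: string;
  at: string; // ISO 8601
}

function auditEntry(actorId: string, action: AuditAction, documentId: string): AuditEntry {
  return {
    actor_id: actorId,
    action,
    document_id: documentId,
    at: new Date().toISOString(),
  };
}

// Write entries to an append-only store: a DB table with no UPDATE/DELETE
// grants, or lean on the bucket provider's own access logs.
```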
Virus scanning
Staff upload from personal devices. ClamAV in a Lambda trigger (for S3) or a Drive API scan hook is cheap insurance.
If your team grows past ~50 people or you take on healthcare clients, you'll need SOC 2 readiness. Document these controls now — it's much easier to build the audit trail from day one than retrofit it later.
Wiring it to Zapier & Twilio.
You already have Zapier and Twilio in your stack. Here's how they slot into this system:
Zapier as the nudge executor
Your API's event bus posts to a Zapier catch hook. Zapier branches on the event type (document.uploaded, stage.stalled, etc.) and fans out to the right channel — Gmail for emails, Twilio for SMS, Slack for team pings. The nudge-rules page (previous build) already outputs the exact payload format Zapier expects.
Twilio for SMS
Skip Zapier as the middleman for SMS if you're sending at volume — Twilio's API is cheap and reliable, and you can call it directly from your webhook handler. Use Twilio Studio for anything conversational (e.g., SMS-based "reply YES to confirm your start date").
```
// Direct Twilio call from your webhook handler
// (Twilio's REST API expects form-encoded params, not JSON)
POST https://api.twilio.com/2010-04-01/Accounts/{SID}/Messages.json
To=+15551234567
From=+15559876543
Body=Hi Anthony, your I-9 is still pending...
```
Notion sync (optional but useful)
Since your team runs on Notion, a nightly Zap can mirror the hires table into a Notion database so the team can comment, assign owners, or triage from inside the tool they already use.
Launch checklist.
When you're ready to flip this from prototype to production, walk through these in order:
- Replace the portal's window.storage calls with fetch() calls to your endpoints.

If you're the only engineer on this, a realistic build timeline is 2 weeks for an MVP (Drive + a thin Next.js API + basic auth), and 4–6 weeks if you want polished admin tooling, retention automation, and the full Zapier flow. Start small and ship.