
Workspaces, Projects, and Passwordless Auth

August – September 2025

The third problem: laying down the infrastructure that everything else in CineLog would have to stand on.

By September 2025, CineLog could parse a screenplay and generate a starter shot list, both running entirely on the writer’s machine. That was fine for one person — but the moment a director said “I want my AD to see this,” there was no way forward without real infrastructure underneath.

Every SaaS has a backend. That’s not the interesting part. The interesting part is which backend, which auth flow, which data model, and which trade-offs you make when you sit down to design the layer that everything else is going to depend on for years.

You can buy most of this. There are auth-as-a-service products and managed backends specifically for “I just want users and data.” We seriously considered that route. It would have shipped faster. We didn’t take it, and this article is about the choices that follow from that decision.

The data model that fits how film teams work

Watch how a film crew uses software for a week and you notice two things that don’t fit a generic backend cleanly:

  1. A “project” is shared, but loosely. A director might own the project, but the AD, the DP, the production coordinator, and twenty department heads all need different slices of it. Generic “this account owns this thing” models don’t really capture that — you need per-project membership, role-aware permissions, and a workspace concept that holds multiple projects together for a production company.
  2. Crew turnover is constant. A camera assistant joins for three days. A second AD is on for a week. Asking each of them to create an account, set a password, and recover it from a password manager they don’t have on their phone — that’s a path to abandonment.

A generic SaaS backend isn’t wrong for those needs — it just doesn’t fit them naturally. You end up writing a lot of glue code to make rigid SaaS primitives feel right for a crew that works the way crews work.

We decided to write our own server and shape it around how productions actually behave. The cost was real (we’ll get to that). The benefit was that every architectural choice could be made for filmmakers instead of for generic users.

Email codes, not passwords

The first concrete decision that fell out of the crew-turnover constraint was authentication. We made an opinionated choice immediately: no passwords.

When someone signs in, we send a verification code to their email. They type it in. They’re in. The system either creates an account on the fly or recognizes the existing one. If their email is on an invitation list, registration is automatic.
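As a sketch, the whole flow fits in two functions. Everything named here (sendMail, findUserByEmail, acceptPendingInvitations, and so on) is a hypothetical stand-in for the sketch, not CineLog’s actual code; a real system would also persist codes instead of holding them in memory:

```ts
import { randomInt } from "node:crypto";

// Hypothetical helpers and types, stubbed for the sketch:
declare function sendMail(to: string, body: string): Promise<void>;
declare function findUserByEmail(email: string): Promise<User | null>;
declare function createUser(email: string): Promise<User>;
declare function acceptPendingInvitations(user: User): Promise<void>;
declare function createSession(user: User): Promise<Session>;
type User = { id: string; email: string };
type Session = { token: string };

const CODE_TTL_MS = 10 * 60 * 1000; // codes expire after ten minutes
// In-memory for the sketch only; real systems store pending codes durably.
const pendingCodes = new Map<string, { code: string; expiresAt: number }>();

// Step 1: the user gives us an email; we send a one-time code.
export async function requestCode(email: string): Promise<void> {
  const code = String(randomInt(0, 1_000_000)).padStart(6, "0");
  pendingCodes.set(email, { code, expiresAt: Date.now() + CODE_TTL_MS });
  await sendMail(email, `Your CineLog sign-in code is ${code}`);
}

// Step 2: they type the code in; we sign them in or register on the fly.
export async function verifyCode(email: string, submitted: string): Promise<Session> {
  const entry = pendingCodes.get(email);
  if (!entry || entry.expiresAt < Date.now() || entry.code !== submitted) {
    throw new Error("Invalid or expired code");
  }
  pendingCodes.delete(email); // codes are single use
  const user = (await findUserByEmail(email)) ?? (await createUser(email));
  await acceptPendingInvitations(user); // invitation list → automatic registration
  return createSession(user);
}
```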

It’s not novel — plenty of consumer apps do this. It was novel for our context. And it removed an entire category of “how do I log in” support questions before we ever had them.

Workspaces and projects, from day one

Most apps grow into multi-tenancy painfully, after they realize their users have multiple “things.” We made the call upfront: a user belongs to a workspace, a workspace has projects, a project has all the production data.

That separation lets us do things later that would have been excruciating to retrofit (see the type sketch after this list):

  • A production company workspace with multiple films inside it.
  • Per-project membership, so an editor invited to one project doesn’t see another.
  • A reusable Rolodex of contacts at the workspace level — so when you hire the same gaffer twice, you don’t re-enter their phone number.
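Here is the shape as illustrative TypeScript types. The field names and role list are assumptions for the sketch; the real schema lives in PostgreSQL:

```ts
// Illustrative types for the tenancy model, not CineLog's actual schema.
interface Workspace {
  id: string;
  name: string;          // e.g. a production company
  memberIds: string[];   // users belong to a workspace
}

interface Contact {
  id: string;
  workspaceId: string;   // the reusable Rolodex lives at workspace level
  name: string;
  phone?: string;        // hire the same gaffer twice, keep their number
}

interface Project {
  id: string;
  workspaceId: string;   // a workspace holds multiple films
  name: string;          // one film, with all its production data
}

// Membership is granted per project, not per account, so an editor invited
// to one film sees nothing of another in the same workspace.
interface ProjectMembership {
  projectId: string;
  userId: string;
  role: "owner" | "ad" | "dp" | "coordinator" | "department_head";
}
```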

The first version of all of this was bare-bones: create a project, list projects, switch between them. But the data model assumed multi-project from the start, which made everything we built on top of it cheaper.

Cache, not source of truth

It’s worth being explicit about what wasn’t yet true at this stage. The app kept a local copy of project data, but mostly as a cache. The server was the canonical state. On every app launch, CineLog pulled the full payload down from the server and rehydrated the UI from it. Writes hit the server first and the cache afterwards.
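In sketch form, with hypothetical api and localCache helpers standing in for the real client layers, the whole model was roughly this:

```ts
// Hypothetical client-side pieces for the sketch:
declare const api: {
  fetchProject(id: string): Promise<ProjectPayload>;
  saveShot(projectId: string, shot: Shot): Promise<Shot>;
};
declare const localCache: {
  replace(projectId: string, payload: ProjectPayload): void;
  upsertShot(projectId: string, shot: Shot): void;
};
declare function renderFromCache(projectId: string): void;
type Shot = { id: string; description: string };
type ProjectPayload = { shots: Shot[] };

// On every launch: pull the canonical state down, rehydrate the UI from it.
export async function onAppLaunch(projectId: string): Promise<void> {
  const payload = await api.fetchProject(projectId);
  localCache.replace(projectId, payload);
  renderFromCache(projectId);
}

// Writes hit the server first; the cache is updated only on success.
export async function saveShot(projectId: string, shot: Shot): Promise<void> {
  const saved = await api.saveShot(projectId, shot);
  localCache.upsertShot(projectId, saved);
}
```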

That worked because the use case was still mostly one person, mostly online. It would stop working later: first when multi-device editing arrived (the next article’s problem), and then again when filmmakers tried to use the app on set with no signal (an architectural rebuild that comes further down the series). Neither of those was the bottleneck yet. The foundation was. So that’s what we built.

The honest part: doing your own auth is expensive

Building your own authentication is a tax that doesn’t show up in a feature list. Most of the obvious work — issuing email codes, verifying them, rotating tokens, cleaning up old sessions — is well-understood and isn’t actually the hard part. We did that, and it worked.

The expensive part came a few months later, and at this stage in the project we didn’t see it coming. Once we added a persistent WebSocket connection for real-time sync (the subject of the next article in this series), the auth system had to learn to cooperate with a connection that was supposed to never drop. Refreshing a token mid-session is one thing when every request is its own round-trip. It’s another when there’s a long-lived channel broadcasting state changes to every connected device, and it has to stay alive across token rotations without anyone noticing.

That orchestration — token refresh, WebSocket lifecycle, sync state, multiple sessions per user — is where most of the actual time went. The basic auth flow itself was quick to build. Getting it to coexist cleanly with a persistent connection took months.

We went through several iterations of it. Early versions had a dedicated layer for session refresh that the WebSocket layer would consult before each operation. It worked, but bugs in it caused subtle, hard-to-reproduce sync issues. Later versions collapsed the refresh logic into the same path as every other network call, with the connection layer subscribing to token-change events directly. The general lesson: every clever abstraction we added for “robustness” we eventually removed in favor of something simpler that fewer things could break.
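A condensed sketch of that simpler final shape, with illustrative names (refreshToken, the reauth message) rather than our real protocol:

```ts
import { EventEmitter } from "node:events";

// Hypothetical pieces for the sketch:
declare function refreshToken(): Promise<string>;
declare const socket: { send(data: string): void }; // the long-lived WebSocket
let accessToken = "";

const tokenEvents = new EventEmitter();

// Refresh lives in the same shared path as every other network call.
export async function authedFetch(url: string): Promise<Response> {
  const attempt = () =>
    fetch(url, { headers: { authorization: `Bearer ${accessToken}` } });
  let res = await attempt();
  if (res.status === 401) {
    accessToken = await refreshToken();     // rotate once, here
    tokenEvents.emit("token", accessToken); // observers react; nobody polls
    res = await attempt();                  // retry with the fresh token
  }
  return res;
}

// The connection layer subscribes to token-change events directly and
// re-authenticates in place, so the channel survives rotation unnoticed.
tokenEvents.on("token", (token: string) => {
  socket.send(JSON.stringify({ type: "reauth", token }));
});
```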

You also have to handle the smaller things, none of them glamorous: silent session renewal in the background so users don’t get logged out mid-edit, a clear path back in when renewal fails (instead of silently dropping their work), cleanup of old sessions on logout and on device switch, and per-user session limits. We settled on 15: enough for a power user with multiple devices, low enough that nothing accumulates forever.
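The cap itself is the easy part. A sketch of enforcing it at session creation, assuming a sessions table and a generic db helper (both hypothetical names):

```ts
// Hypothetical Postgres client for the sketch:
declare const db: { query(sql: string, params: unknown[]): Promise<unknown> };

const MAX_SESSIONS_PER_USER = 15;

// After inserting a fresh session, evict the oldest ones beyond the cap
// so nothing accumulates forever.
export async function pruneSessions(userId: string): Promise<void> {
  await db.query(
    `DELETE FROM sessions
      WHERE user_id = $1
        AND id NOT IN (
          SELECT id FROM sessions
          WHERE user_id = $1
          ORDER BY created_at DESC
          LIMIT $2
        )`,
    [userId, MAX_SESSIONS_PER_USER],
  );
}
```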

If you’re choosing this for a future product, go in with eyes open. You will spend more weeks on authentication than you think — and most of those weeks will be where auth meets long-lived connections, not where it meets login forms.

What it looks like today

A user signs up with email — no password. They land directly in a workspace with a first project already created for them. They invite collaborators by email. The collaborator gets a link, taps it, and lands inside the project, on whatever device they happen to be using.

Behind the scenes, the foundation tracks workspaces, projects, members, invitations, and the data tied to each project. It runs on a Node-based stack with a PostgreSQL database, deployed on Google Cloud. We exposed both a REST API and a GraphQL endpoint at the same time — not because we had a strong opinion about which was the right answer, but because we wanted to try both and see which one fit our patterns as we built features on top of them. It was honest exploration, not a final decision. The answer evolved over the next several months, but the foundation could carry both styles while we learned which one to keep.
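Serving both styles from one Node process is mechanically cheap, which is what made the experiment affordable. A minimal sketch with express and graphql-js, using an illustrative getProject accessor rather than our real data layer:

```ts
import express from "express";
import { buildSchema, graphql } from "graphql";

// Hypothetical data accessor for the sketch:
declare function getProject(id: string): Promise<{ id: string; name: string } | null>;

const app = express();
app.use(express.json());

// REST: fixed, resource-shaped payloads, one route per noun.
app.get("/api/projects/:id", async (req, res) => {
  const project = await getProject(req.params.id);
  res.json(project);
});

// GraphQL: a single endpoint where the client shapes the payload.
const schema = buildSchema(`
  type Project { id: ID! name: String! }
  type Query { project(id: ID!): Project }
`);

app.post("/graphql", async (req, res) => {
  const result = await graphql({
    schema,
    source: req.body.query,
    variableValues: req.body.variables,
    rootValue: { project: ({ id }: { id: string }) => getProject(id) },
  });
  res.json(result);
});

app.listen(3000);
```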

That open question of how the client shapes and batches its payloads became the seed of the next architecture problem. The foundation worked: it was reliable, secure, and fast enough for a single user. But the moment two devices touched the same project at once, “fetch from server, render, send changes back” stopped being a workable model. Now the foundation had to hold up something much harder: real-time sync.


Next: We Rebuilt Our Sync Engine Three Times. Here’s What Each Version Taught Us. — how we tried to solve real-time collaboration for film teams.