This document explains why the Interactive curriculum is structured the way it is, how it connects to the other two orientations (Brand and Motion), and how the micro-studio team model works across the 12-week cohort. It is written for anyone who needs to understand the full logic of the program: Tyden Rickard (Executive Director and Cohort Lead), the orientation leads, program evaluators, and potential Year 2 funders.
The cohort runs 12 weeks of client work, bookended by two in-person experiences and preceded by a structured pre-work package:
The opening sets the craft and the standard. The closing builds the path forward. The 12 weeks in between are where the work happens.
The cohort is 12 students plus 3 teachers. Across the program the cohort rotates between three group formats, each with a distinct purpose:
Each studio delivers a complete creative overhaul in every client engagement: brand identity, motion content, and digital experience ship together as one integrated system.
This structure is intentional. It mirrors how professional creative studios operate: strategy drives identity, identity drives experience, and experience drives content. The cohort members learn to work as a functioning creative team, not as individuals producing isolated deliverables.
In Cycle 1, the lead instructor serves as team lead and has final authority on what gets locked. This protects the quality of the first engagement while the team builds trust and working rhythm.
Starting in Cycle 2, each team votes to select a team lead from among their three members. If the team cannot reach a decision, the lead instructor selects one. The team lead coordinates internal deadlines, runs team-level syncs, and ensures the three orientations converge into a unified delivery.
There is no client kickoff meeting. The Discovery Workbook is the structured client intake. It is sent to the client on Monday of Week 1 and Week 7, and returned by end of that week. While the client works on the workbook, all three orientations run independent research in parallel: category, competitors, audience, and digital landscape.
The Interactive director does not wait for the workbook to start thinking. They run research in parallel: audience behavioral patterns, devices, digital touchpoints, competitive site patterns. Insights from the interactive lens often surface faster than from the brand lens and feed directly into the team's Monday strategic positioning meeting in Week 2 / Week 8.
The distinction is that Brand locks the official strategy at the Friday Office Hours of Week 2 and Week 8. Once strategy is locked, Interactive uses it as the anchor for every experience design decision through the remainder of the cycle.
The weekly cadence is identical across all 12 weeks. Three touchpoints: Mon Studio Convergence (studios meet among themselves), Wed Video Update (1-minute async progress update), and Fri Office Hours (15-minute mentor check-ins, as needed, for up to 8 cohort members). Heavy synchronous load is reserved for Presentation Weeks (Wks 4 and 10) and Delivery Weeks (Wks 6 and 12). Mentor commitment outside of Boot Camp is light: weekly Friday office hours as needed.
The orientations are collaborative throughout, but the official output flows in one direction:
SHARED DISCOVERY (all three orientations receive the same client input)
        |
        v
BRAND locks strategy and positioning (end of Week 2 / Week 8)
        |
        +-----> BRAND develops visual identity system
        |
        +-----> INTERACTIVE begins digital experience design
        |       (informed by brand strategy + identity)
        |
        +-----> MOTION begins animation and video content
        |       (informed by brand strategy + identity)
        |
        v
ALL THREE converge in integrated client delivery (Week 6 / Week 12)
Interactive is downstream of Brand for its anchor assets, but upstream of Motion for placement: motion assets ship inside the experience Interactive builds. The Interactive director does not invent a visual language. They extend the one the Brand director established into a working, navigable digital experience.
Brand to Interactive:
Interactive to Brand:
Interactive to Motion:
Motion to Interactive:
This coordination happens organically within the team. It is not led by the orientation leads.
Within each 6-week engagement cycle:
| Cycle Week | Phase | What Happens |
|---|---|---|
| Week 1 | Client Discovery + Independent Research | No kickoff meeting. The Discovery Workbook is sent to the client Monday and returned by end of week. All three orientations run independent research in parallel. Interactive audits the digital landscape and gathers reference sites. End-of-week state: completed workbook + research dossier in hand. |
| Week 2 | Strategic Positioning + Strategy Lock | Monday Studio Convergence: team reviews workbook + research and decides strategic positioning together. Brand drafts the official strategy and presents at Friday Office Hours, where the mentor confirms the lock. Interactive locks information architecture and wireframes the same week. |
| Week 3 | Exploration (Divergence) | All three orientations explore in parallel. Brand explores color, type, and logo directions. Interactive generates multiple Figma directions. Motion drafts content concepts. Standard cadence: Mon Studio Convergence, Wed Video Update, Fri Office Hours. |
| Week 4 | Presentation Week · Convergence | Each orientation narrows to its strongest direction. Interactive locks the Figma design and begins Claude Code build. Wed Mentor Presentation to Ben/Tyden over Zoom. Fri Client Presentation directly to the client. |
| Week 5 | Final Convergence | All three orientations incorporate Wk 4 client feedback and finalize. Interactive completes the build, integrates motion, runs systematic QA. Pre-delivery rehearsal. |
| Week 6 | Final Delivery | One integrated presentation to the client. Interactive demonstrates the live website with brand identity and motion assets in context. The three orientations present as one vision. Debrief and reflection close the cycle. |
The 12 weeks are organized as two 6-week client cycles, bookended by two in-person experiences. The Opening Experience on Sat Jul 18 is full-cohort onboarding with no client work — the only window to establish the tool workflow (Figma to Claude Code to deployment) as end-to-end muscle memory before real stakes arrive.
Every engagement ends with a live website deployed to a clean URL on Netlify or Vercel. That deployment is the non-negotiable outcome. A beautiful Figma file that did not ship is not a deliverable in this program.
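The deployment gate can even be made mechanical. A minimal sketch, assuming a static build output in a `dist/` folder — the folder name, the `check_build` helper, and the CLI invocations are illustrative assumptions, not program requirements:

```shell
# check_build: a hypothetical pre-deploy gate. It refuses to ship unless the
# static build output actually exists -- the "a Figma file is not a deliverable" rule.
check_build() {
  local dir="$1"
  if [ ! -f "$dir/index.html" ]; then
    echo "no build output in $dir -- nothing to ship" >&2
    return 1
  fi
  echo "build found: $dir/index.html"
}

# Typical use before shipping (Netlify CLI shown; `vercel --prod` is the equivalent):
#   check_build dist && netlify deploy --prod --dir=dist
```

The point of a gate like this is not automation for its own sake; it encodes the program's standard that the deliverable is a live URL, not a design file.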
Each engagement uses the same six-phase arc: Client Discovery, Strategy Lock, Exploration (Divergence), Convergence, Final Production, Delivery. This is deliberate. Repetition is how non-professionals build professional instincts. The phases do not change. The standard does.
By the second time an interactive director runs through the arc, the mechanics should be automatic. Discovery is not a process they follow because the checklist says to. It is something they do because they understand that the quality of everything downstream depends on the quality of the experience strategy.
The orientation lead's involvement decreases across the two cycles:
| Cycle | Lead's Role | Director's State |
|---|---|---|
| Cycle 1: Calibration (Wks 1–6) | Heavily present. Models the experience thinking. Reviews Figma before development begins. Walks through QA side by side. Nothing deploys to the client without review. | Learning the tools. Uncertain about design decisions. First time using AI to build. |
| Cycle 2: Studio (Wks 7–12) | Oversight and coaching. Reviews designs but does not direct. Responds to self-critique. Pushes on QA rigor. Advisory only by Week 12 — intervenes only when the standard drops. | Building fluency, then emerging independence. Faster in Figma. More precise with Claude Code. Self-QA becomes the default. By Week 12 designs, builds, QAs, and delivers a site with minimal guidance. |
This arc is the core design principle of the program. If the lead is still running critiques the same way in Week 12 as they were in Week 1, the transfer has not happened. The goal is not for directors to produce work the lead approves of. It is for directors to produce work they can defend and ship without the lead in the room.
Cycle 1 uses a nonprofit client at no fee for the same reasons as Brand: it protects the program's reputation while cohort members make their biggest mistakes, and it protects cohort members by removing the revenue pressure on a first engagement.
For Interactive specifically, there is a second reason: shipping a real live website is hard the first time. Domain issues, deployment errors, responsive breakpoints that only reveal themselves on a real device, content that does not fit the layout the way the Figma file predicted. A nonprofit engagement gives the director room to hit these walls and learn from them without risking a paying client relationship.
Cycle 2 activates revenue share because the quality bar needs to be real. A site shipped at Cycle 2 quality has to hold up outside the program context. It has to load fast. It has to work on mobile. It has to feel like the client's brand.
An interactive director who cannot get a site live is not yet an interactive director. The deployment is the final test of every skill in the chain: can you get what you designed, built, and QA'd into the world as a working URL that the client can share with anyone? Two live deployments across 12 weeks is the minimum evidence of competence.
The Interactive Curriculum Conceptual Framework introduces two layers of first principles that run through every week of the program:
Layer 1: Experience First Principles
Layer 2: Design First Principles
These are diagnostic tools, not style guides. When something feels wrong about a website, the diagnosis starts at the center of the systems map (strategy) and works outward. A site that feels off is usually an experience strategy problem that has not been solved yet, not a color or layout problem.
The Interactive Systems Map (Brand Strategy > Experience Strategy > Information Architecture > Visual Design > Development and Launch) is the shared diagnostic framework across the team. When the Brand director asks whether a visual choice serves positioning, they are using the same diagnostic mode. When a Motion director asks whether an animation serves the experience, same mode. The shared map is what makes a three-person micro-studio feel like one vision.
Interactive directors work AI-first in two distinct phases: design and development.
In design (Figma): AI can generate layouts, suggest type pairings, and produce visual explorations quickly. The danger is the same as in brand work: polished and intentional are not the same thing. A generated layout that looks professional but does not serve the experience strategy is worse than a rough sketch that does.
In development (Claude Code): AI writes the code. This is the single biggest leverage point in the program. Cohort members do not need to become software engineers. They need to become directors who can describe exactly what they want, evaluate whether the output matches their design, and systematically fix what does not match.
The measure of an interactive director is not whether they can write code. It is whether they can get the code written right. That means knowing what "right" looks like, identifying when it does not look right, and communicating the difference clearly enough that the tool can fix it.
This is why QA is treated as a core skill in the Interactive curriculum, not an afterthought. Without disciplined QA, AI-assisted development is a sketch generator, not a production tool.
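Part of that QA discipline can be scripted. A toy sketch of the mechanical end of a pre-delivery pass, assuming the build output is plain HTML — the `qa_check` name and the specific checks are illustrative, not the program's actual rubric:

```shell
# qa_check: toy mechanical checks of the kind a pre-delivery QA pass automates.
# Human QA (does it match the Figma, does it serve the strategy) still comes first.
qa_check() {
  local file="$1" fail=0
  # A page without a <title> is unfinished work
  grep -q "<title>" "$file" || { echo "FAIL: missing <title> in $file"; fail=1; }
  # Without a viewport meta tag, "works on mobile" is unlikely
  grep -q 'name="viewport"' "$file" || { echo "FAIL: missing viewport meta in $file"; fail=1; }
  return $fail
}
```

Checks like these catch the mistakes that AI-generated code makes silently; the director's judgment covers everything a grep cannot.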
Across all 12 weeks, every cohort member produces a required weekly vlog. The vlogs serve three purposes at once:
The assessment rubric evaluates six categories: Experience Strategy, Visual and Structural Design, Build and QA Rigor, AI Fluency, Client Communication, and Team Integration. Each is scored at two checkpoints (end of each cycle, Week 6 and Week 12) on a 1–4 scale.
The assessment is designed to measure growth, not absolute quality. A director who moves from Level 1 to Level 3 across two cycles has demonstrated more growth than a director who starts at Level 3 and stays there. The narrative assessment at the end of Cycle 2 is required because numbers alone do not capture what changed in how someone thinks.
The question at the end of the program is not "did they produce good websites?" The question is: "can they walk into a room, diagnose an experience problem, design a site that solves it, get it built with AI tools, QA it to a professional standard, and ship it, without someone telling them how?"
At the public showcase on Sat Oct 17, each interactive director should be able to articulate three things:
The evidence is two live websites. The outcome is the clarity.