This is your calibration cycle. The nonprofit client receives a real, live, professional-quality website, but the stakes are intentionally lower so you can find your rhythm. Your orientation lead asks the experience questions out loud so you learn the language. Every critique is also a teaching moment.
Your lead will review your Figma design before you touch Claude Code, walk through QA side by side, and review the staging deployment before client delivery. Your job is to absorb the framework, ask questions, and start building the muscle memory you will rely on in Cycle 2.
Cycle 1 timeline: opens after the in-person Opening Experience on Sat Jul 18. Cycle 1 work runs Mon Jul 20 through Sun Aug 30. Full-cohort regroup the weekend of Aug 29–30, leading into Cycle 2.
Brand sends the Discovery Workbook to the client on Monday. Confirm it has gone out and that the client has a clear return-by date. There is no kickoff meeting — the workbook is the structured intake.
Independent research while the client works on the workbook. Visit the websites of likely competitors and adjacent organizations. Document the patterns: page structures, navigation styles, content density, interaction patterns, performance. Your job is not to copy. It is to build vocabulary for what is possible, and to find where the experience can feel different.
Interactive-Skills / interactive-discovery
Find sites that do something relevant well. An interaction pattern, a content structure, a visual treatment. Document what you are borrowing and why. References build your vocabulary before you design.
Without waiting for the workbook, research how this kind of audience uses the internet: typical devices, expected user jobs, common drop-off points in the category. By end of week, you should have a research dossier you would feel comfortable defending alone.
The client returns the workbook by end of week. Read it once on receipt to confirm completeness. Do not begin synthesizing yet — team-level synthesis with Brand and Motion happens Monday in Week 2.
Once the workbook is in hand, feed Claude the completed workbook and your Week 1 research. Ask it to pull out what is relevant to the digital experience: audience behaviors online, specific user jobs, competitor website patterns, and potential content structures. Claude surfaces raw material. You still decide what matters in Week 2.
Required weekly. Document what you learned from your independent research before the workbook landed, and what stood out on first read of the client's answers. Vlogs serve three purposes: internal progress reporting, accountability, and feeding the program's public content engine.
The team meets Monday with Brand and Motion. Read the completed Discovery Workbook together alongside everyone's Week 1 research. Surface the audience and behavior signals that should drive both positioning and experience strategy. Strategic decisions get made together — not handed down by Brand alone.
Brand presents and locks strategy at the Friday QC review. Pull the positioning, value proposition, and audience definition as soon as it lands. Your experience strategy extends the brand strategy. Highlight the language you will reuse on the site.
Write these down clearly: (1) Who is coming to this site, and what do they need to do or understand when they get there? (2) What should the user feel during the experience, and how does that feeling connect to the brand's strategic position? (3) What is the single most important action the user should take, and does every element on the page support or distract from it? These answers are your design brief.
Interactive-Skills / interactive-experience-strategy
Decide what pages the site needs, what content goes on each page, and how navigation works. Cycle 1 scope is 1–3 pages maximum. Every page on the site must be justifiable against the three experience questions. If a page does not serve one of the answers, it does not need to exist.
Layout-only wireframes: content blocks, type hierarchy placeholders, basic flow. No color, no imagery, no visual treatment. The wireframe tests whether the information architecture actually serves the audience. Design desktop and mobile wireframes.
Interactive-Skills / interactive-wireframing
Drop the experience answers and wireframes-in-progress in your team channel. Your lead reviews and responds asynchronously. Use the feedback to sharpen before Friday's formal review.
Formal end-of-week quality control review. Walk your lead through the three experience answers and the wireframe layout. Your lead evaluates whether the site structure actually serves the audience before visual design begins. If the strategy is unclear or the architecture does not hold, they send you back to refine before Monday.
Once wireframes are approved at the Friday QC, lock them. Visual exploration in Figma begins from this locked structure. Do not restructure pages mid-design without re-reviewing with your lead. The goal is to fail at wireframes, not at full mockups.
Document the strategy lock. What did the wireframes reveal that the workbook did not? What did your lead push back on?
You need the logo files, color palette (with hex values), typography selections (Google Fonts URLs or typeface files), and imagery direction before you can begin full exploration. If anything is missing, flag it to your team immediately.
Volume over precision. Take the locked wireframes and produce 3–5 distinctly different visual directions for the home view: different type treatments, different hero treatments, different image strategies. The directions should feel different, not be variations of one idea.
Interactive-Skills / interactive-visual-design
Begin building the type system, color samples, button styles, and primary hero treatments as Figma components. These are throwaway-quality at this stage, but they will accelerate convergence in Week 4.
Share each direction with Claude alongside the experience strategy. Ask which direction best serves which of the three experience answers, and where each one breaks down. Claude flags weak hierarchy or buried CTAs across directions. You decide what survives.
End-of-week studio session across the cohort. You present your divergent directions out loud. The cohort sees your range. You see theirs. The session is calibration, not critique — you have not converged yet.
Baseline how you are using AI tools so far. Are your prompts specific? Are you using Claude as input or output? How many iterations did it take to get the directions you wanted?
Document the exploration. What surprised you in the divergent directions? What was hardest about generating volume?
Choose the direction that best serves the three experience answers. Defend the choice in writing. The other directions are not deleted — they become reference material if convergence drifts.
Apply the brand identity to the chosen direction. Full-fidelity mockups: real content, real type sizes, real spacing, real colors. Design desktop and mobile layouts for every page. The design should look like the built site will look, not like a loose concept.
For every interactive element (buttons, links, form fields, nav items), show the default, hover, active, and disabled states where relevant. Document transitions briefly (what animates, how long, what easing). Claude Code builds what you document. If it is not in Figma, it will not be in the build.
Before your lead review, share your Figma with Claude and ask it to evaluate whether the design serves the three experience questions. Claude flags places where the hierarchy is weak, the CTA is buried, or the page structure drifts from the stated strategy. You decide what to fix.
Present the full design before development begins. Your lead evaluates design first principles: Is the type hierarchy clear? Is the layout on a grid? Does the visual treatment serve the brand positioning? Does the design serve the experience strategy? If you cannot defend every decision, the design goes back.
Once your lead approves, lock the design. Every development decision must trace back to this file. Changing the design after development begins is expensive. If you need to change something, change it in Figma first, then describe the change to Claude Code.
Create the project repo on GitHub. Connect it to Netlify or Vercel for continuous deployment. Now every commit auto-deploys to a staging URL your team and lead can check throughout the build.
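The repo-and-deploy setup can be sketched as a short terminal session. The project name, remote URL, and identity values below are placeholders, and the Netlify CLI is one option (the Vercel CLI works similarly); the network steps require you to be logged in, so they are shown as comments:

```shell
# Sketch of the Week 4 repo setup. "client-site" is a placeholder name.
mkdir client-site && cd client-site
git init -b main
git config user.email "you@example.com"   # identity config for a fresh machine
git config user.name "Your Name"
echo "<h1>Staging placeholder</h1>" > index.html
git add index.html
git commit -m "Initial commit: staging placeholder"

# After creating an empty repo on GitHub (placeholder URL):
#   git remote add origin git@github.com:your-org/client-site.git
#   git push -u origin main
# Connect continuous deployment with the Netlify CLI (requires login):
#   npx netlify init      # links the repo; every push now auto-deploys
#   npx netlify status    # shows the linked site and its staging URL
```

Once the repo is linked, every commit you push produces a fresh staging deploy for your team and lead to check.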
Work one section at a time, not the whole page at once. Describe what you want with specificity: exact font sizes, weights, line heights, hex codes, spacing values. After each section, compare the built output to the Figma before moving on. Section-by-section QA is faster than end-of-build QA.
End of week, share a staging link or Figma preview with the client. Confirm the direction feels right before Final Production. This is a temperature check, not a review — you do not want to surface major changes in Week 5.
Document the convergence. Which direction won and why? What did Critique #1 surface that you missed?
Your lead sits with you for the first focused Claude Code session of Week 5. They watch how you prompt, how you describe the design, and how you QA each output. This is the most important coaching moment in the build. Take notes on how they describe precision.
Every page built, every interaction working, responsive behavior implemented for desktop, tablet, and mobile breakpoints. Deploy to the staging URL continuously so your team can see progress.
Receive final motion files in the correct dimensions and formats. Place them in the live site where you specified. Test that they load, perform, and do not break responsive behavior on mobile. If there is a problem, coordinate with Motion directly, not through your lead.
Open Figma on one screen, the staging URL on the other. Go section by section, element by element. Check: type sizes, font weights, line heights, letter spacing, spacing between elements, color hex values, image sizing, hover states, responsive behavior at every breakpoint, load performance. Document every discrepancy.
Interactive-Skills / interactive-qa-delivery
Describe each discrepancy to Claude specifically enough that it can fix it in one pass. "The hero heading is 48px in Figma, 44px on the built site — change to 48px." Not "the hero looks off." Specificity is the whole skill.

Walk your lead through the staging URL alongside the Figma design. Be ready to explain every discrepancy you found and fixed, and every one that remains. Your lead walks through each design first principle live. If the build has drifted from the design in meaningful ways, you will be sent back.
Final internal pass. Performance audit (Lighthouse or equivalent), accessibility quick check, broken link check, cross-browser smoke test. Document the results.
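The internal pass can be wrapped in a small reusable helper so you run the same checks every time. This is a sketch, not a required setup: it assumes Node is installed, uses Lighthouse and the `broken-link-checker` package via npx, and needs the staging URL to be live when you run it.

```shell
# Sketch: run the Week 5 internal audit against a staging URL.
# Both tools are fetched on demand via npx; neither is pre-installed.
audit() {
  local url="$1"
  # Performance + accessibility scores (writes lighthouse.json)
  npx lighthouse "$url" --quiet --chrome-flags="--headless" \
    --output json --output-path lighthouse.json
  # Recursive, ordered broken-link check
  npx blc "$url" -ro
}

# Usage (placeholder URL):
#   audit https://client-site-staging.netlify.app
```

Cross-browser smoke testing still has to be done by hand; the script only covers the automatable checks.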
Present the site as if delivering to the client. Your lead plays the client and asks questions. If you cannot defend a design or experience decision here, you will not be able to defend it in the room.
Document the production. What was the gap between Figma and build? What did Critique #2 catch?
Final deployment on Netlify or Vercel with the client's domain or a clean production URL. Verify HTTPS, verify the site loads on multiple devices, verify the URL can be shared and opened by anyone.
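The launch checks are worth scripting so they are repeatable from any machine. A minimal sketch with a placeholder domain; it only covers what curl can see (HTTPS and the status code), so still open the URL on real devices:

```shell
# Sketch: verify a production URL responds over HTTPS with a 200.
verify_launch() {
  local url="$1"
  local code
  code=$(curl -sS -o /dev/null -w '%{http_code}' "$url")
  if [ "$code" = "200" ]; then
    echo "OK: $url returned 200"
  else
    echo "FAIL: $url returned $code"
    return 1
  fi
}

# Usage (placeholder domain):
#   verify_launch https://www.client-domain.org
```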
Build your presentation deck walking through the experience strategy, the design rationale, and a live walkthrough of the deployed site. The presentation should tell a story. The client should understand the "why" behind the experience, not just see pages.
Walk the client through the deployed site on a live URL. Your lead observes but does not rescue. Show the experience, not the mockups. Answer questions with rationale grounded in strategy. If something is below standard, that is the lesson.
Deliver everything the client needs to own the site going forward: production URL, GitHub repository access (or file archive), deployment credentials, a brief guide on how to update content. The handoff should be self-explanatory to someone who was not in the project.
Within a few days of delivery, capture the client's reactions and any early performance signals. This is part of the deliverable trail and part of the Cycle 1 case study.
Compile a clean version of your QA findings from Week 5: what you caught, how you fixed it, final state verified. This report is part of your deliverable and also part of your learning record. What did you catch the first time? What did your lead catch that you missed?
What worked? What did not? Where did the design hold up in development and where did it break down? What was hardest about QA? How did the client respond to the live site? This debrief is as important as any session in the cycle.
One page: what you understand about interactive design now that you did not on Day 1. Where did your thinking change? Where do you still have questions? This is for you, not for show.
Document the delivery. How did the client respond? What did you learn about your own process across the full cycle?
Full cohort regroups in person or remotely the weekend after delivery. Share the work and the lessons. Reset expectations for Cycle 2. You start Cycle 2 the next day, Mon Aug 31.