A paying client expects work that holds up outside the program context. Your lead is an attending observer, not a co-designer. You initiate the critique questions. You self-critique before presenting. Vague experience rationale gets sent back without coaching.
The scope grows: 2–4 pages, real interaction states, more polished responsive behavior. The standard rises. You should be catching more QA issues than your lead does. By the end of the cycle, you should be shipping work that would not look out of place on a professional agency's portfolio page.
Cycle 2 timeline: work runs Mon Aug 31 through Sun Oct 11. Closing Experience and public showcase Sat Oct 17.
Brand sends the Discovery Workbook to the client on Monday. Confirm it has gone out with a return-by date. There is no kickoff meeting.
Interactive-Skills / interactive-discovery
Direct Claude to pull audience behavior patterns, competitive site patterns, and content structure references. Your prompts should be specific to the client's industry and audience, not generic. If Claude produces generic output, your prompt was generic.
Sharper than Cycle 1. Visit competitor sites with a targeted question: where is the audience underserved? What patterns are overplayed in the category? Identify the white space the experience can move into.
The client returns the completed workbook by end of week. Read it once on receipt. Synthesis happens with the team on Monday in Week 8.
Document Cycle 2's discovery. What is different about a paying engagement? What did you sharpen in your research approach from Cycle 1?
The team meets Monday with Brand and Motion. Read the completed Discovery Workbook together alongside everyone's Week 7 research. Faster, more focused review than Cycle 1. Strategic positioning gets decided together — you do not wait on Brand to hand it down.
Brand locks at Friday Sep 11 QC. Pull positioning and identity as soon as it lands. No waiting. Your experience strategy can begin before every brand detail is resolved, as long as the positioning is clear.
Answer the three experience questions without scaffolding. Write out your rationale and defend it to yourself before anyone else sees it. If you cannot defend it in one pass, rewrite it. Your lead is not going to coach you through vague rationale this cycle.
Interactive-Skills / interactive-experience-strategy
Cycle 2 scope is 2–4 pages. The architecture should be more resolved than Cycle 1: clearer navigation, more developed content hierarchy, interaction patterns defined at the structural level. Every page must earn its place against the experience strategy.
Tighter, faster, more resolved than Cycle 1. Layout, type hierarchy placeholders, basic interaction indicators. Desktop, tablet, and mobile. The wireframe should convince your lead before a single visual decision is made.
Interactive-Skills / interactive-wireframing
Drop strategy and wireframes-in-progress in your team channel. Your lead reviews asynchronously. Use the feedback to sharpen, not to start over.
Your lead reviews how you are using AI tools so far in Cycle 2. Are you prompting more specifically than Cycle 1? Are you reaching your vision in fewer iterations? Are you still generating options and picking one, or are you directing toward a specific outcome?
You initiate. You anticipate the questions. If your lead has to originate the critique, that is the coaching note. Strategy and IA must be defensible end-to-end before lock.
Once approved at Friday QC, lock it. Do not restructure mid-design. Your rate of iteration should be faster than Cycle 1 because you are making fewer foundational changes.
Document the strategy lock. How is your strategy thinking different from Cycle 1?
3–4 distinctly different visual directions. Cycle 2 standard: each direction must look like a real production-grade home view, not a quick sketch. The bar is what would survive a portfolio page.
Interactive-Skills / interactive-visual-design
Type system, color, components, and key interaction patterns built as proper Figma components. These survive into Convergence.
Tighter prompts than Cycle 1. Direct Claude with the strategy and ask for specific structural critiques per direction. Decide what survives, fast.
Mid-week sync. Does the digital experience extend the identity? Does the motion content fit the experience? Flag integration gaps now, not on delivery day.
Cohort-wide presentation of divergent directions. Your role is to articulate why each direction is different, not to pre-converge.
Document the exploration. How is your divergence different in Cycle 2?
Choose with conviction. Defend the choice in two written paragraphs. Cycle 2 convergence should be faster than Cycle 1 because the divergent directions were sharper.
Apply the brand identity. Design every page at every breakpoint: desktop, tablet, mobile. Document every interaction state: default, hover, active, disabled, focus. Transitions defined: what moves, how long, what easing. If it is not in Figma, it will not be in the build.
Present the design to your lead, but start with your own critique. Walk through each first principle of the design: type hierarchy, grid, visual hierarchy, interaction, responsive integrity, performance. Identify what is working and what you are unsure about. Your lead responds to your analysis, not just your design.
Once approved, lock. Every dev decision traces back to this file. Changes require re-review. At Cycle 2 standards, this should be the last major design change of the cycle.
Same process as Cycle 1, faster. Repo created, auto-deploy connected, staging URL ready. Muscle memory this time.
Your lead does not sit with you for the first build session. You run it. Prompts should reference exact Figma values: font sizes in px, colors as hex, spacing as rem or px, interaction timing in ms. Specificity is the difference between 1 iteration and 5.
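One way to make that specificity routine is to generate build prompts directly from your design tokens rather than typing adjectives. A minimal sketch, assuming a hypothetical token list — the names and values below are illustrative, not from any real Figma file:

```python
# Hypothetical design tokens pulled from the locked Figma file.
tokens = {
    "heading_size": "32px",
    "body_size": "16px",
    "accent_color": "#1A73E8",
    "section_gap": "4rem",
    "hover_transition": "200ms ease-out",
}

def build_prompt(component: str, tokens: dict) -> str:
    """Format a build prompt that cites exact values instead of adjectives."""
    spec = ", ".join(f"{name.replace('_', ' ')}: {value}" for name, value in tokens.items())
    return f"Update the {component} to match the Figma spec exactly ({spec})."

prompt = build_prompt("hero section", tokens)
# prompt now names every value, so the first iteration can be the last one.
```

The point of the sketch is the habit: a prompt assembled from the spec cannot drift into "make it a bit bigger" territory.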
Brief touchpoint with the client to confirm direction before final polish. Share the staging URL or Figma preview. Gather feedback. Use this to catch anything the client was hoping to see that is missing from the design.
Document the convergence. What got cut and why?
Every page, every breakpoint. Continuous deployment to staging means every commit updates the staging site automatically. Share the URL with your team early so Motion can test assets in context.
Receive motion files. Place them per the specifications you defined in Week 9. Test that they load fast, do not break on mobile, and feel embedded in the experience rather than tacked on.
No walk-through with your lead this cycle. You open Figma and the staging URL, go section by section, and document every discrepancy. Type sizes, line heights, letter spacing, color exactness, spacing values, responsive breakpoints, load performance. Close each gap with Claude Code.
Interactive-Skills / interactive-qa-delivery
Specifically describe each discrepancy. Cycle 2 standard: most gaps closed in one prompt. Track how often you have to iterate — that is your fluency metric.
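A discrepancy log can double as the fluency metric if each entry records how many prompts it took to close. A minimal sketch, with hypothetical field names and example gaps:

```python
from dataclasses import dataclass

@dataclass
class Discrepancy:
    """One Figma-vs-staging gap, described specifically enough to fix in one prompt."""
    section: str
    prop: str
    figma_value: str
    staging_value: str
    iterations: int  # prompts it took Claude Code to close the gap

# Illustrative entries, not real QA data.
gaps = [
    Discrepancy("hero", "font-size", "32px", "28px", iterations=1),
    Discrepancy("nav", "letter-spacing", "0.02em", "normal", iterations=2),
]

# Fluency metric: average prompts per closed gap; the Cycle 2 target is close to 1.
fluency = sum(g.iterations for g in gaps) / len(gaps)
```

Anything averaging well above one prompt per gap is a signal that the discrepancy descriptions, not the tool, are the bottleneck.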
Performance audit, accessibility check, broken links, cross-browser smoke test. Higher standard than Cycle 1.
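The broken-links pass is easy to script rather than click through. A minimal sketch using only the standard library: collect every `href` on a page and flag internal links that do not resolve against your known page list (the sample HTML and page set are hypothetical):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect every non-empty href so internal links can be checked against the sitemap."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Illustrative page fragment; in practice, feed the fetched staging HTML.
page = '<nav><a href="/work">Work</a><a href="/about">About</a><a href="">empty</a></nav>'
collector = LinkCollector()
collector.feed(page)

known_pages = {"/work", "/about", "/contact"}  # the pages you actually shipped
broken = [href for href in collector.links if href.startswith("/") and href not in known_pages]
```

Run it against every page at every breakpoint-relevant template; an empty `broken` list is one QA line item you can check off mechanically.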
You go first. Walk your lead through what you found, how you fixed it, and what is still open. Your lead identifies what you missed. The question is whether you caught your own gaps before someone told you.
How many Claude Code iterations to match the design? Are you describing issues specifically enough that Claude can fix them in one pass? What have you learned about how to direct AI tools that you did not know in Cycle 1?
Full rehearsal. You should anticipate the hard questions before they are asked.
Document the production. What did your independent QA catch that nobody else would have?
Final deployment on the client's domain or a clean production URL. Verify HTTPS, load performance, cross-device behavior. Staging and production should be indistinguishable.
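The basic production checks above can be partially automated before you ever open a browser. A minimal sketch, assuming a hypothetical helper that inspects only the URL itself (scheme, host, leftover staging hostnames); load performance and cross-device behavior still need real testing:

```python
from urllib.parse import urlparse

def production_ready(url: str) -> list[str]:
    """Return the reasons a URL fails the basic production check, empty if it passes."""
    problems = []
    parsed = urlparse(url)
    if parsed.scheme != "https":
        problems.append("not served over HTTPS")
    if not parsed.netloc:
        problems.append("missing host")
    if "staging" in parsed.netloc:
        problems.append("still pointing at a staging host")
    return problems

# Illustrative URLs, not a real client domain.
assert_clean = production_ready("https://client-site.example")      # expect []
flagged = production_ready("http://staging.client-site.example")    # expect two problems
```

Staging and production being indistinguishable is the goal for the experience, not the URL: the production address should carry no staging fingerprints at all.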
No templates handed to you. You build the deck. Structure: experience strategy, design rationale, live site walkthrough, key decisions defended. The client should leave understanding why the site is what it is.
You lead entirely. Your lead observes. Walk the client through the deployed site, answer questions with rationale grounded in discovery and strategy, handle pushback by referencing the work. If the presentation feels like a read-through of slides, the prep was wrong.
Production URL, GitHub repo, deployment credentials, content update guide. Walk the client through it so they feel confident owning the site without you.
Within a few days of delivery, capture client reactions and any early performance signals. This becomes part of the showcase narrative on Oct 17.
Your lead re-enters here. Harder questions than Cycle 1: how did the client respond? Did the experience feel like it belonged to the brand? Where did QA fall short? What was different about running a paying engagement?
Start preparing your case study presentation for the public showcase on Sat Oct 17. Two engagements, the growth in your thinking, the live sites. The showcase is your monetization springboard.
Compare yourself to Cycle 1. Are you faster? Are you more precise? Are you thinking differently, or just producing differently? Be honest about where you grew and where you plateaued.
Final cycle vlog. Document the delivery and the arc from Week 1 to here.