AlabamaCreates Studio
Interactive Orientation Assessment Rubric
Cohort 1 · Both Project Cycles
12 Weeks · 2 Project Cycles · Opens Sat Jul 18, 2026 · Closes Sat Oct 17, 2026
How This Rubric Works
This rubric is used by the orientation lead to assess each interactive director at two checkpoints: end of Cycle 1 (Week 6, Sun Aug 30, 2026) and end of Cycle 2 (Week 12, Sun Oct 11, 2026). The same categories are evaluated each time. The standard rises between cycles.
Each category is scored on a four-level scale:
| Level | Label | Meaning |
|---|---|---|
| 1 | Not Yet | Has not demonstrated this capability. Needs direct instruction. |
| 2 | Emerging | Shows early signs but is inconsistent. Needs regular coaching. |
| 3 | Developing | Demonstrates capability with occasional gaps. Needs light guidance. |
| 4 | Independent | Demonstrates capability consistently without prompting. Ready for professional practice. |
The goal is not for every interactive director to reach Level 4 in every category by Week 12. The goal is visible, documentable progression from wherever they started. A director who moves from 1 to 3 across two cycles has demonstrated significant growth. A director who stays at 2 across both cycles has not.
Category 1: Experience Strategy
Can they define who a site is for, what it should feel like, and what it should accomplish, grounded in the brand positioning?
| Level | Cycle 1 (End of Wk 6 · Sun Aug 30) | Cycle 2 (End of Wk 12 · Sun Oct 11) |
|---|---|---|
| 1 | Cannot answer the three experience questions. Jumps to layout before strategy. | Still dependent on lead for experience direction. No meaningful progression from Cycle 1. |
| 2 | Can answer the questions with coaching. Answers are vague or borrowed from the brand strategy verbatim. | Experience rationale is adequate but not directional. Still occasionally vague. |
| 3 | Answers are defensible and grounded in the brand strategy and discovery findings. Can explain why the experience structure matters. | Experience rationale is specific, ties directly to positioning, and drives every layout decision. |
| 4 | Experience strategy is clear, specific, and directly informs all downstream design decisions. Can defend it under pressure. | Anticipates every question. Defends experience decisions with evidence and conviction. Needs no coaching. |
What to look for
- Do they start with the three experience questions or jump to Figma?
- Can they tie the information architecture back to the audience and the positioning?
- Is their experience rationale a defensible stance, or is it generic best-practices language?
Category 2: Visual and Structural Design (Figma)
Does the Figma design serve the experience strategy, and does it apply the six design first principles consistently?
| Level | Cycle 1 (End of Wk 6 · Sun Aug 30) | Cycle 2 (End of Wk 12 · Sun Oct 11) |
|---|---|---|
| 1 | Visual choices are arbitrary. No grid, inconsistent type, broken hierarchy. Cannot explain why any element is placed where it is. | Visual work does not demonstrate growth from Cycle 1. |
| 2 | Some choices connect to the experience rationale, but the design lacks coherence. Type hierarchy is unclear. Grid is applied inconsistently. | Design is competent but lacks the precision that distinguishes professional work. |
| 3 | Most choices are traceable to strategy. Type hierarchy is clear. Grid is intentional. Interaction states are defined. Can apply visual first principles when prompted. | Strong design with clear rationale. Self-critiques effectively before presenting. Design is portfolio-quality. |
| 4 | Every choice serves the experience strategy. All six first principles applied without prompting. Design is clearly a system, not a collection of pages. | Work is indistinguishable from agency-produced design. Defends every decision. The design speaks for itself. |
What to look for
- Can they walk through type hierarchy, grid, visual hierarchy, interaction, responsive, and performance in their own design?
- Does the design feel like a system across pages, or a set of unrelated mockups?
- Is every spacing value, font size, and color traceable to a decision, or is anything arbitrary?
Category 3: Build and QA Rigor
Does the built site match the Figma design, and can they systematically close the gap between what they designed and what got built?
| Level | Cycle 1 (End of Wk 6 · Sun Aug 30) | Cycle 2 (End of Wk 12 · Sun Oct 11) |
|---|---|---|
| 1 | Built site diverges significantly from Figma. Cannot describe discrepancies specifically. QA is absent or surface-level. | QA has not developed as a skill. |
| 2 | Built site mostly matches Figma at desktop but drifts at tablet/mobile. QA catches the obvious issues and misses the subtle ones (spacing, line heights, letter spacing). | QA is consistent but not yet comprehensive. Lead still finds issues they missed. |
| 3 | Built site matches Figma closely at every breakpoint. QA is systematic. Director catches most discrepancies before lead review. | QA is comprehensive and self-directed. Lead finds few or no issues the director missed. |
| 4 | Built site is a precise match to Figma. QA process is documented, systematic, and internalized. Director catches everything before lead reviews. | QA is indistinguishable from a senior practitioner's. Could teach the process to someone else. |
What to look for
- When they find a discrepancy, can they describe it specifically enough for Claude Code to fix it?
- Does the built site match the Figma at every breakpoint, or only on their own laptop?
- Do they run QA as a continuous practice during the build, or only at the end?
- How many issues does the lead find that the director missed?
Category 4: AI Fluency
Are they directing AI tools to execute their vision with precision, or curating output and hoping something works?
| Level | Cycle 1 (End of Wk 6 · Sun Aug 30) | Cycle 2 (End of Wk 12 · Sun Oct 11) |
|---|---|---|
| 1 | Prompts are vague. Claude Code output requires many iterations to approximate the Figma. Accepts output that does not match the design. | No meaningful progression from Cycle 1. |
| 2 | Prompts are more specific but still broad. Takes 5+ iterations to reach a design match on each section. Does not consistently reference Figma specs in prompts. | Prompts are specific but not consistently vision-led. |
| 3 | Prompts reference Figma values (font sizes, spacing, hex codes, interaction specs). Reaches design match in 2–3 iterations per section. Knows when to redirect. | Precise, iterative, vision-led prompting. Uses AI as a tool, not a creative partner. |
| 4 | Directs Claude Code with the specificity of a senior developer brief. Output matches Figma on the first or second iteration. Knows what they are describing before touching any tool. | Full directorial control. Could teach someone else how to prompt Claude Code for interactive work. |
What to look for
- How many iterations does it take to match the Figma design in a given section?
- Do they describe what they want before they start prompting?
- Are they referencing exact values (12px, #1a1a1a, 24px line-height), or vague directions ("make the text bigger")?
- Can they diagnose why Claude Code produced the wrong output and describe the correction clearly?
Category 5: Client Communication and Presentation
Can they present work to a client, defend their decisions, and walk through a live site with confidence?
| Level | Cycle 1 (End of Wk 6 · Sun Aug 30) | Cycle 2 (End of Wk 12 · Sun Oct 11) |
|---|---|---|
| 1 | Cannot present work clearly. Does not connect the live site to the experience strategy. Reads from slides. | Presentation skills have not improved. |
| 2 | Presents but does not tell a story. Connects deliverables to strategy when prompted. Struggles with client questions. | Competent presenter but lacks conviction on difficult questions. |
| 3 | Presents clearly with a strategy-to-site connection. Walks the client through the live site with confidence. Handles most questions with rationale. | Strong presenter. Leads the room. Defends decisions with evidence. |
| 4 | Commands the room. Every part of the live site is framed in terms of what the audience needs and why. Handles pushback with ease. | Could present to any client in any context. The program's proof of concept. |
What to look for
- Do they show mockups, or do they walk through the live site with intention?
- When the client pushes back, do they cave, defer, or defend with rationale grounded in discovery?
- By Cycle 2, does the presentation feel like a professional agency handoff?
Category 6: Team Integration
Does the interactive work connect to brand and motion, or is it operating in isolation?
| Level | Cycle 1 (End of Wk 6 · Sun Aug 30) | Cycle 2 (End of Wk 12 · Sun Oct 11) |
|---|---|---|
| 1 | No coordination with teammates. Digital experience exists in isolation. Brand assets not correctly applied. | Team integration has not developed. |
| 2 | Applies the brand identity but does not coordinate on how it extends. Adds motion assets late with no specification handoff. | Coordinates but does not drive integration. |
| 3 | Receives brand strategy early. Defines motion specs (where, when, how long) for the Motion director. Deploys a staging URL so teammates can test in context. | Full integration. The team output feels like one vision, not three tracks. |
| 4 | Interactive is the connective tissue between Brand and Motion. Proactively ensures the three outputs feel like one system. | The team is indistinguishable from a professional creative team. |
What to look for
- Is the brand identity correctly applied in the built site, or are there visible breakdowns?
- Does the motion content feel embedded in the experience, or tacked on?
- Does the interactive director see themselves as the delivery point for the whole team's work?
Assessment Record Template
Director Name: ____________________
| Category | Max | Cycle 1 (End Wk 6 · Aug 30) | Cycle 2 (End Wk 12 · Oct 11) |
|---|---|---|---|
| Experience Strategy | /4 | | |
| Visual and Structural Design | /4 | | |
| Build and QA Rigor | /4 | | |
| AI Fluency | /4 | | |
| Client Communication | /4 | | |
| Team Integration | /4 | | |
| Total | /24 | | |
Narrative assessment (required at end of Cycle 2)
- Where did this person start?
- What changed?
- What is their realistic 90–180 day path forward as a working creative?