AlabamaCreates Studio

Interactive Orientation Assessment Rubric

Cohort 1 · Both Project Cycles
12 Weeks · 2 Project Cycles · Opens Sat Jul 18, 2026 · Closes Sat Oct 17, 2026

How This Rubric Works

This rubric is used by the orientation lead to assess each interactive director at two checkpoints: end of Cycle 1 (Week 6, Sun Aug 30, 2026) and end of Cycle 2 (Week 12, Sun Oct 11, 2026). The same categories are evaluated each time. The standard rises between cycles.

Each category is scored on a four-level scale:

Level 1 · Not Yet: Has not demonstrated this capability. Needs direct instruction.
Level 2 · Emerging: Shows early signs but is inconsistent. Needs regular coaching.
Level 3 · Developing: Demonstrates capability with occasional gaps. Needs light guidance.
Level 4 · Independent: Demonstrates capability consistently without prompting. Ready to practice.

The goal is not for every interactive director to reach Level 4 in every category by Week 12. The goal is visible, documentable progression from wherever they started. A director who moves from 1 to 3 across two cycles has demonstrated significant growth. A director who stays at 2 across both has not.


Category 1: Experience Strategy

Can they define who a site is for, what it should feel like, and what it should accomplish, grounded in the brand positioning?
Cycle 1 is assessed at the end of Wk 6 (Sun Aug 30); Cycle 2 at the end of Wk 12 (Sun Oct 11).

Level 1
  Cycle 1: Cannot answer the three experience questions. Jumps to layout before strategy.
  Cycle 2: Still dependent on the lead for experience direction. No meaningful progression from Cycle 1.
Level 2
  Cycle 1: Can answer the questions with coaching. Answers are vague or borrowed verbatim from the brand strategy.
  Cycle 2: Experience rationale is adequate but not directional. Still occasionally vague.
Level 3
  Cycle 1: Answers are defensible and grounded in the brand strategy and discovery findings. Can explain why the experience structure matters.
  Cycle 2: Experience rationale is specific, ties directly to positioning, and drives every layout decision.
Level 4
  Cycle 1: Experience strategy is clear, specific, and directly informs all downstream design decisions. Can defend it under pressure.
  Cycle 2: Anticipates every question. Defends experience decisions with evidence and conviction. Needs no coaching.

Category 2: Visual and Structural Design (Figma)

Does the Figma design serve the experience strategy, and does it apply the six design first principles consistently?
Cycle 1 is assessed at the end of Wk 6 (Sun Aug 30); Cycle 2 at the end of Wk 12 (Sun Oct 11).

Level 1
  Cycle 1: Visual choices are arbitrary. No grid, inconsistent type, broken hierarchy. Cannot explain why any element is placed where it is.
  Cycle 2: Visual work does not demonstrate growth from Cycle 1.
Level 2
  Cycle 1: Some choices connect to the experience rationale, but the design lacks coherence. Type hierarchy is unclear. Grid is applied inconsistently.
  Cycle 2: Design is competent but lacks the precision that distinguishes professional work.
Level 3
  Cycle 1: Most choices are traceable to strategy. Type hierarchy is clear. Grid is intentional. Interaction states are defined. Can apply the visual first principles when prompted.
  Cycle 2: Strong design with clear rationale. Self-critiques effectively before presenting. Design is portfolio-quality.
Level 4
  Cycle 1: Every choice serves the experience strategy. All six first principles applied without prompting. Design is clearly a system, not a collection of pages.
  Cycle 2: Work is indistinguishable from agency-produced design. Defends every decision. The design speaks for itself.

Category 3: Build and QA Rigor

Does the built site match the Figma design, and can they systematically close the gap between what they designed and what got built?
Cycle 1 is assessed at the end of Wk 6 (Sun Aug 30); Cycle 2 at the end of Wk 12 (Sun Oct 11).

Level 1
  Cycle 1: Built site diverges significantly from Figma. Cannot describe discrepancies specifically. QA is absent or surface-level.
  Cycle 2: QA has not developed as a skill.
Level 2
  Cycle 1: Built site mostly matches Figma at desktop but drifts at tablet and mobile. QA catches the obvious issues and misses the subtle ones (spacing, line heights, letter spacing).
  Cycle 2: QA is consistent but not yet comprehensive. The lead still finds issues the director missed.
Level 3
  Cycle 1: Built site matches Figma closely at every breakpoint. QA is systematic. Director catches most discrepancies before lead review.
  Cycle 2: QA is comprehensive and self-directed. The lead finds few or no issues the director missed.
Level 4
  Cycle 1: Built site is a precise match to Figma. QA process is documented, systematic, and internalized. Director catches everything before the lead reviews.
  Cycle 2: QA is indistinguishable from a senior practitioner's. Could teach the process to someone else.
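One way to make the systematic QA expected at Level 3 concrete is a spec diff: record the Figma values per breakpoint, then compare them against what was actually built. A minimal sketch, assuming the director keeps spec values in a simple mapping; every breakpoint name, property, and pixel value below is hypothetical, not a real spec:

```python
# Hypothetical Figma spec values, recorded per breakpoint.
FIGMA_SPEC = {
    "desktop": {"h1-size": "64px", "h1-line-height": "72px", "section-gap": "120px"},
    "mobile": {"h1-size": "40px", "h1-line-height": "48px", "section-gap": "64px"},
}

def qa_diff(spec, built):
    """Return one line per property where the build drifts from the Figma spec."""
    issues = []
    for breakpoint, props in spec.items():
        for prop, expected in props.items():
            actual = built.get(breakpoint, {}).get(prop)
            if actual != expected:
                issues.append(f"{breakpoint} · {prop}: spec {expected}, built {actual}")
    return issues

# Values measured on the built site (e.g. via browser devtools);
# the mobile line height has drifted.
built = {
    "desktop": {"h1-size": "64px", "h1-line-height": "72px", "section-gap": "120px"},
    "mobile": {"h1-size": "40px", "h1-line-height": "44px", "section-gap": "64px"},
}
print(qa_diff(FIGMA_SPEC, built))
# → ['mobile · h1-line-height: spec 48px, built 44px']
```

The point is not the tooling but the habit: a Level 3 director checks against recorded values rather than eyeballing, which is exactly the subtle-drift category (spacing, line heights, letter spacing) that Level 2 QA misses.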

Category 4: AI Fluency

Are they directing AI tools to execute their vision with precision, or curating output and hoping something works?
Cycle 1 is assessed at the end of Wk 6 (Sun Aug 30); Cycle 2 at the end of Wk 12 (Sun Oct 11).

Level 1
  Cycle 1: Prompts are vague. Claude Code output requires many iterations to approximate the Figma design. Accepts output that does not match the design.
  Cycle 2: No meaningful progression from Cycle 1.
Level 2
  Cycle 1: Prompts are more specific but still broad. Takes five or more iterations to reach a design match on each section. Does not consistently reference Figma specs in prompts.
  Cycle 2: Prompts are specific but not consistently vision-led.
Level 3
  Cycle 1: Prompts reference Figma values (font sizes, spacing, hex codes, interaction specs). Reaches a design match in 2–3 iterations per section. Knows when to redirect.
  Cycle 2: Precise, iterative, vision-led prompting. Uses AI as a tool, not a creative partner.
Level 4
  Cycle 1: Directs Claude Code with the specificity of a senior developer brief. Output matches Figma on the first or second iteration. Knows what they are describing before touching any tool.
  Cycle 2: Full directorial control. Could teach someone else how to prompt Claude Code for interactive work.
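The gap between Level 1 and Level 3 prompting is mostly specificity. A hypothetical sketch of the difference, assembling a Claude Code brief from recorded Figma values rather than from memory; every value and name below is illustrative, not a real spec:

```python
# Hypothetical Figma values for one section, recorded before prompting.
hero_spec = {
    "heading_size": "64px/72px",        # font-size / line-height
    "heading_color": "#1A1A1A",
    "section_padding": "120px 0",
    "cta_hover": "background 0.2s ease",
}

# Level 1: vague, forces many corrective iterations.
vague = "Make the hero section look like the Figma design."

# Level 3: references exact Figma values and interaction specs.
specific = (
    "Build the hero section to match Figma exactly. "
    f"Heading: {hero_spec['heading_size']}, color {hero_spec['heading_color']}. "
    f"Section padding: {hero_spec['section_padding']}. "
    f"CTA hover transition: {hero_spec['cta_hover']}. "
    "Do not substitute defaults for any of these values."
)
```

The second prompt is assessable: the lead can check the output against the same recorded values the director prompted with, which is what "vision-led" means at Level 3 and above.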

Category 5: Client Communication and Presentation

Can they present work to a client, defend their decisions, and walk through a live site with confidence?
Cycle 1 is assessed at the end of Wk 6 (Sun Aug 30); Cycle 2 at the end of Wk 12 (Sun Oct 11).

Level 1
  Cycle 1: Cannot present work clearly. Does not connect the live site to the experience strategy. Reads from slides.
  Cycle 2: Presentation skills have not improved.
Level 2
  Cycle 1: Presents but does not tell a story. Connects deliverables to strategy only when prompted. Struggles with client questions.
  Cycle 2: Competent presenter but lacks conviction on difficult questions.
Level 3
  Cycle 1: Presents clearly with a strategy-to-site connection. Walks the client through the live site with confidence. Handles most questions with rationale.
  Cycle 2: Strong presenter. Leads the room. Defends decisions with evidence.
Level 4
  Cycle 1: Commands the room. Every part of the live site is framed in terms of what the audience needs and why. Handles pushback with ease.
  Cycle 2: Could present to any client in any context. The program's proof of concept.

Category 6: Team Integration

Does the interactive work connect to brand and motion, or is it operating in isolation?
Cycle 1 is assessed at the end of Wk 6 (Sun Aug 30); Cycle 2 at the end of Wk 12 (Sun Oct 11).

Level 1
  Cycle 1: No coordination with teammates. Digital experience exists in isolation. Brand assets not correctly applied.
  Cycle 2: Team integration has not developed.
Level 2
  Cycle 1: Applies the brand identity but does not coordinate on how it extends. Adds motion assets late with no specification handoff.
  Cycle 2: Coordinates but does not drive integration.
Level 3
  Cycle 1: Receives the brand strategy early. Defines motion specs (where, when, how long) for the Motion director. Deploys a staging URL so teammates can test in context.
  Cycle 2: Full integration. The team's output feels like one vision, not three tracks.
Level 4
  Cycle 1: Interactive is the connective tissue between Brand and Motion. Proactively ensures the three outputs feel like one system.
  Cycle 2: The team is indistinguishable from a professional creative team.

Assessment Record Template

Director Name: ____________________

Category | Max | Cycle 1 (End Wk 6 · Aug 30) | Cycle 2 (End Wk 12 · Oct 11)
Experience Strategy | /4 | ____ | ____
Visual and Structural Design | /4 | ____ | ____
Build and QA Rigor | /4 | ____ | ____
AI Fluency | /4 | ____ | ____
Client Communication | /4 | ____ | ____
Team Integration | /4 | ____ | ____
Total | /24 | ____ | ____

Narrative assessment (required at end of Cycle 2):

____________________________________________

____________________________________________