Six Characters, Six Developmental Stages: How We Designed the Dragonflies
Each Dragonfly wasn’t a personality we invented. It was a portrait of a real child at a specific developmental moment — and AI tools transformed how we could paint that portrait.
There’s a design decision that looks simple from the outside and turns out to be one of the most consequential you can make in an educational product: who does the child meet first?
In Dragonfly, an AI-driven literacy system for K–5 learners, that answer was a cast of six characters — one per grade level, each named, each designed with a specific emotional register, each built to be a mirror for the child looking at the screen. Sky, Twilight, Ripple, Meadow, Sunny, Ember. Not mascots. Not decorations. Developmental portraits.
The principle guiding the whole system: each Dragonfly represents a developmental stage, not just a personality. The design work wasn’t to invent charming characters. It was to accurately render what it feels like to be a specific age — and then build a character that makes a child that age feel recognized.
This is the story of how we designed that cast, and how the AI tools we used to bring them to life transformed so dramatically over 18 months that the process we finished with would be almost unrecognizable to the team that started.
The Cast
Six characters. Six grades. Each one a specific answer to the question: what does it feel like to be this child, right now?
Sky — Kindergarten
Sky is sky blue with huge, wide-open innocent eyes and a striped shirt. The design brief for Sky was a single phrase: pure wonder. Kindergarteners exist at the very beginning of everything — school, reading, the entire structured world — and what defines that moment isn’t readiness, it’s curiosity. Sky blows bubbles. Sky kicks a soccer ball just to see where it goes. Sky’s defining developmental sentence is “I can do it myself” — not as bravado, but as genuine, thrilling discovery. The feeling Sky is designed to evoke isn’t excitement or energy. It’s the particular wonder of a child who has just realized the world is enormous and navigable and interesting, and who hasn’t yet been told what they can’t do.
Twilight — Grade 1
Twilight is purple, wears a pink hoodie, and holds a glowing star. Where Sky is wide open, Twilight is quietly intent. First graders have passed the threshold — school is no longer entirely new — and something more specific begins to emerge: a habit of noticing. Twilight gazes at constellations. Twilight is the child who hugs their teacher every single morning, not from shyness, but from genuine warmth. Inquisitive, observant, affectionate. The feeling Twilight evokes is quiet confidence — the particular security of a child who has a routine now and finds comfort in it, who is starting to understand that the world has patterns, and that noticing them is a form of power.
Ripple — Grade 2
Ripple is cerulean blue with an orange zip hoodie and a wide, enthusiastic stance. The body language was intentional from the start: feet apart, arms open, ready to go. Second grade is the first time peer learning really kicks in — children this age have started to understand that other kids are resources, that watching someone else figure something out is one of the best ways to learn. Ripple’s core message is “get back up and try again,” and the delivery is never stern — it’s contagiously energetic. The feeling Ripple evokes is enthusiasm as a shared resource. Not just “I’m excited” but “I’m excited and so are you and let’s go.”
Meadow — Grade 3
Meadow is green, holds a book, and wears a pink hoodie. The book is not an accident — Meadow reads for pleasure. Third grade is when children cross into a different kind of competence: they stop learning to read and start reading to learn, and it changes everything about how they move through the world. Meadow does a cannonball off the diving board when parents are watching. Meadow fits three playdates into a single Saturday. Meadow sees a new student eating alone and invites them over without being asked. The feeling Meadow evokes is confident capability — not showing off, just operating from a genuine sense that problems have solutions and you’re the kind of person who can find them.
Sunny — Grade 4
Sunny is golden orange with a green plaid jacket. Fourth grade is when children start asking “why” in a more serious way — not just to understand, but to debate. Sunny argues ideas at dinner. Sunny organizes a backyard campout with a real plan. On a rainy day when nothing works out, Sunny invents a new indoor game. The design challenge with Sunny was distinguishing inquiry from restlessness: Sunny isn’t scattered, Sunny is focused curiosity directed outward. The feeling Sunny evokes is collaborative intelligence — the pleasure of thinking hard about something together, the sense that good questions are more valuable than quick answers.
Ember — Grade 5
Ember is crimson red with a teal hoodie, red sneakers, and red dragonfly veins that pulse through the whole design. Ember is the oldest Dragonfly, and the design reflects it: this is a character who has accumulated a self. Ember feeds the dog without being asked. Ember waits for a little brother even when it’s inconvenient. Ember practices the same free-throw shot for hours until it lands. And Ember’s clothes show personality — specific choices, a developing point of view. The feeling Ember evokes is earned identity. Not performance, not rebellion — the quieter confidence of a child who is starting to know who they are, and who chooses to be reliable because it matters to them.
The Pairs
The six characters weren’t designed in isolation. They were designed in pairs, because developmental stages don’t have clean edges — children at the boundary of one stage and the next share qualities, and the character transitions needed to feel like growth, not replacement.
Sky and Twilight (K–1) share the register of curiosity and wonder, but Twilight adds reflection. A kindergartener meeting Sky sees themselves; a first grader meeting Twilight sees who they’re becoming. Ripple and Meadow (Grades 2–3) share energy, but Meadow channels it differently — less “let’s go” and more “here’s how.” Sunny and Ember (Grades 4–5) both operate from internal motivation, but Sunny’s curiosity is outward-facing while Ember’s is turning inward toward identity.
The transition from one character to the next isn’t a level-up. It’s an introduction. A child meeting Twilight after Sky isn’t being told they cleared a threshold — they’re meeting someone new who feels like a version of themselves they recognize.
Progress experienced as identity rather than metrics doesn’t threaten. It invites.
The Tools Problem: End of 2024
We had six fully developed character personalities. We had precise color palettes, specific clothing choices, body language specifications, emotional registers. What we didn’t have, at the end of 2024, was a reliable way to generate those characters consistently at scale across the volume of content Dragonfly needed.
The problem wasn’t making one good image of Sky blowing bubbles. The problem was making Sky look like Sky across hundreds of poses, scenarios, emotional states, and formats — video segments, classroom activity cards, app UI illustrations, animated sequences — in a way that a kindergartener would recognize as the same character every time.
Character consistency at scale is genuinely hard. Human illustrators solve it through a style guide, reference sheets, and accumulated muscle memory across a team. In early 2025, the AI tools available didn’t have a robust equivalent. We spent the first months of the year working around that gap.
February 2025: Midjourney and the 80-Word Prompt
By February 2025, Midjourney had become the primary generation engine for character assets. We quickly learned that getting consistent results required encoding everything — every physical detail, every stylistic parameter — directly into every single prompt. The character's entire identity lived in the text you fed the model.
A typical Twilight prompt looked something like this:
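The production prompt itself didn't survive into this draft, so what follows is a plausible reconstruction of roughly eighty words, assembled from Twilight's design spec above. Every detail in it is illustrative, not the actual production wording, and the trailing parameter flags are the kind of Midjourney switches a prompt like this would typically carry:

```text
A small purple dragonfly character named Twilight, friendly cartoon style
for a children's literacy app, large expressive eyes, gentle quiet smile,
calm and observant expression, wearing a pink hoodie, holding a small
glowing golden star, four translucent wings with soft lavender veining,
gazing upward at a night sky full of constellations, soft purple and pink
color palette, consistent character design, clean vector-style
illustration, flat shading, full body, front-facing pose, plain
background, no text --ar 1:1 --style raw
```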
Every prompt carried the entire physical specification. Every. Single. One. And even then, consistency wasn’t guaranteed — the wing texture might shift, the eye size might vary, the color saturation would drift across a generation batch. We developed a review process that was more editorial than technical: a human reviewer comparing each output against a reference sheet, flagging deviations, regenerating. It worked. It was slow. And it was clearly a transitional state.
May 2025: Video Gets Better
The next significant shift came in May 2025 with improvements to AI video generation. This mattered for Dragonfly because the characters needed to move — animation was a primary content format, not an edge case — and the gap between a still illustration and a coherent animated character is enormous.
Better AI video meant we could start building out animated sequences without the full traditional animation pipeline. Ripple doing the enthusiastic stance. Sky blowing bubbles. Ember waiting patiently. The characters began to feel alive in a way that still images couldn’t fully achieve. But the consistency problem hadn’t been solved — we were still generating from elaborate text descriptions, and the model still had to reconstruct the character from scratch with each prompt.
September 2025: The Leap
September 2025 was the inflection point. A new class of tools — we were using Nano Banana as part of the pipeline — introduced a fundamentally different model of character consistency. Instead of encoding the character into every prompt, you could establish the character as a persistent reference and then direct behavior from there.
The change in prompting practice was almost comical in how dramatic it was. An early 2025 Midjourney generation required an 80-word base description plus pose plus URL variables plus parameter flags. A September 2025 Nano Banana generation for the same character might look like this:
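The exact directive isn't preserved in this draft; a plausible stand-in, built from Twilight's description above, would be something like:

```text
Twilight gazes at the constellations
```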
Five words. The character’s identity — the exact wing texture, the hoodie, the eye size, the color palette — was preserved in the model’s understanding of that character, not reconstructed from text each time. Consistency that had required elaborate prompt engineering and manual review now emerged from a minimal directive.
This wasn’t just a workflow improvement. It changed what was possible. When you’re spending most of your cognitive energy ensuring fidelity, you have less left for directing feeling. When consistency becomes cheap, you can spend the prompt on what actually matters: what is this character doing, and how do they feel about it?
November 2025: The Full Pipeline
By November 2025, the tools had evolved into a genuinely interconnected generation system. Character image generation, face animation, and audio sync had become a pipeline rather than a series of disconnected manual steps.
This is what it meant in practice for Dragonfly: a character like Sky could be generated in a specific pose and emotional state, animated with facial movement and natural expression, and synced to voice audio — in a fraction of the time any of those individual steps had required a year earlier. And because consistency was now handled at the model level rather than the prompt level, the Sky in that animated segment was the same Sky in every other asset in the system.
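The shape of that pipeline can be sketched in a few lines. This is a minimal illustrative stub, not any real tool's API: every function name and interface here is hypothetical, standing in for the three stages the text describes (image generation from a persistent character reference, face animation, audio sync) chained into one call.

```python
from dataclasses import dataclass


@dataclass
class Asset:
    """A generated artifact moving through the pipeline."""
    character: str
    stage: str      # which pipeline stage produced it
    notes: str      # accumulated directives/voice line, for illustration


def generate_image(character: str, directive: str) -> Asset:
    # Stage 1 (hypothetical): pose/emotion image from a persistent
    # character reference -- the minimal-directive model of Sept 2025.
    return Asset(character, "image", directive)


def animate_face(asset: Asset) -> Asset:
    # Stage 2 (hypothetical): facial movement and natural expression
    # applied to the generated image.
    return Asset(asset.character, "animated", asset.notes)


def sync_audio(asset: Asset, voice_line: str) -> Asset:
    # Stage 3 (hypothetical): sync the animation to a voice track.
    return Asset(asset.character, "synced", f"{asset.notes} | {voice_line}")


def render_segment(character: str, directive: str, voice_line: str) -> Asset:
    # The Nov-2025 difference: the stages compose into one pipeline
    # instead of being disconnected manual steps.
    return sync_audio(animate_face(generate_image(character, directive)), voice_line)


segment = render_segment("Sky", "blowing bubbles, delighted", "Look how big!")
print(segment.stage)  # -> synced
```

The point of the sketch is the composition in `render_segment`: consistency is handled inside stage 1 (at the model level), so the downstream stages only ever direct behavior and feeling.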
The character system we had designed — six distinct developmental portraits, each with precise emotional registers and physical identities — could finally be generated with the fidelity those designs deserved. The creative constraints we had set in 2024, when the tools barely existed, turned out to be exactly the right constraints for a 2025 pipeline to execute.
The creative constraints we set before the tools existed turned out to be exactly right for the tools that followed.
What the Characters Are Actually For
The AI tools story is genuinely interesting, and the pace of change was remarkable to live through. But it’s worth remembering what all of it was in service of.
A kindergartener opening Dragonfly meets Sky. In a few seconds — before a single word of instruction has been delivered — that child is either looking at something that feels like them, or they’re not. The huge innocent eyes, the wonder-forward energy, the sense that everything is interesting and anything is possible: that’s not decoration. That’s the emotional contract of the product. Sky says: you belong here, this is made for you, it’s okay to not know things yet.
The same is true at every grade level. Twilight’s quiet observation is a mirror for the first grader who notices things but isn’t always loud about it. Ripple’s enthusiasm validates the second grader who wants to participate and sometimes falls down and gets back up. Meadow’s competence shows the third grader that confidence is available to them. Sunny’s inquiry tells the fourth grader that asking hard questions is the point. Ember’s earned reliability reflects back to the fifth grader that who they’re becoming is worth taking seriously.
The tools kept changing. The question stayed the same: does this child see themselves?
That question — and the rigorous developmental thinking behind it — is what makes a cast of six cartoon dragonflies into something that actually works.
Linda Brown
Systems Architect building intelligent structures for creative teams — at the intersection of design systems, AI infrastructure, and the stubbornly human parts of creative practice.