People keep asking whether AI can tell stories, as if the whole argument turns on output.
It does not.
The harder question is whether AI can exercise taste. Whether it can tell the difference between a scene that merely functions and a scene that wounds. Whether it can feel when a line is too clean, when a reveal arrives too early, when a character’s silence carries more truth than the paragraph that would explain it.
That is why Kathleen Kennedy’s recent comments landed with more force than the usual industry handwringing. She did not frame the problem as “technology versus artists.” She framed it around the thing people keep trying to skip past.
“Taste is fundamental to the process of creating things.”
Kathleen Kennedy at Runway’s AI Summit, as reported by The Hollywood Reporter
Coming from a nervous gatekeeper, that line would read like theater. Coming from Kathleen Kennedy, it reads more like diagnosis. This is someone who started on E.T., went on to produce or executive produce more than seventy films, and has spent decades making decisions in the space where story, commerce, craft, and culture all collide. Earlier in 2026, she also made room for responsible AI use, suggesting that “augmented intelligence” may be a better frame if the tools accelerate work without displacing the human point of view. (AFI)
So when Kennedy draws a line, she is not drawing it from purity. She is drawing it from experience.
Taste is the layer that does the choosing
The AI debate gets blurry the moment people start treating taste like polish.
They talk as if taste arrives near the end, after the “real” work is done. First the plot. Then the structure. Then the pages. Then, if there is time, the stylish part. But storytelling has never worked that way. Taste is not the garnish. It is the faculty that tells a writer which conflict deserves emphasis, which image belongs in this story and not another, which emotional turn should stay raw, and which should be withheld until it can actually land.
Paul Graham once described taste as the difference between simply being technically competent and being able to make something beautiful. That matters here because storytelling is full of technically competent dead things. A draft can be coherent, legible, even well-paced and still feel strangely absent. The missing layer is often not intelligence in the abstract. It is judgment. It is proportion. It is the ability to recognize what matters inside the available options. (Paul Graham)
Research in neuroaesthetics points in the same direction. Aesthetic judgment is not random decoration floating above cognition. It is learned. It is shaped by memory, exposure, context, training, and experience over time. Taste gets built. Which means it also carries biography. It carries the strange accumulation of what a person has survived, studied, admired, rejected, and learned to notice. (PMC)
That is exactly why two writers can start from the same premise and produce stories with completely different gravitational centers. One writer sees a betrayal. Another sees a liberation. One notices the Objective Story pressure first. Another is pulled toward the shame inside the Main Character’s private justifications. One builds toward spectacle. Another toward inevitability. The structure may still be diagnosable in both cases. Taste is what decides what the structure is there to reveal.
Storytelling is not a generation problem
This is where a lot of current AI discourse still feels unserious.
Generative systems are astonishing at permutation. Give them enough examples and they will turn out endless plausible candidates. They can draft, imitate, summarize, continue, remix, and surface variations faster than any writer. But storytelling is not difficult because human beings struggle to produce options. Storytelling is difficult because good stories require someone to keep making meaningful selections under pressure.
Researchers working on human-centered creative collaboration at Stanford make this point more directly than most product pages do. Original work requires opinions. It requires choices. Their critique of current generative systems is not that the outputs are unimpressive. It is that the systems are often weak collaborators because creators still cannot direct them with the nuance real projects demand. The bottleneck is not supply. The bottleneck is judgment. (Stanford News)
“Understanding, context, and emotional connection are still human terrain.”
David Droga, via Wharton Human-AI Research
That line gets closer to the actual fault line than most of the “AI can never create” rhetoric does. The problem is not that models can never produce a compelling sentence. They obviously can. The problem is that a compelling sentence is not yet a committed point of view. It is not yet authorship. The work still depends on someone deciding why this sentence belongs here, why this conflict and not its cleaner cousin, why this scene should remain unresolved long enough to matter.
Dramatica makes the distinction harder to ignore
For anyone working with Dramatica, this should sound familiar.
A Storyform is not a story. It is the underlying structural argument. It tells you how the inequity is being explored across the Objective Story, Main Character, Influence Character, and Relationship Story Throughlines. It can reveal whether the argument holds. It can show you where a draft is lopsided, where a thematic relationship is drifting, where the pressure of a Signpost is getting blurred by the Storytelling.
What it cannot do is decide what expression of that argument is worth committing to.
That distinction matters because people routinely confuse structural clarity with authorship. They assume that if a model, template, or framework can identify the right Storypoints, then the creative problem is basically solved. But the real work starts once the structure becomes visible. Which image carries the inequity? Which scene order preserves the reveal? Which line should sharpen the Influence Character’s pressure, and which one would over-explain it? Which turn earns silence? Those are taste questions.
This is one reason Dramatica becomes more valuable, not less, in an AI-saturated culture. The clearer the structure gets, the more obvious the role of human judgment becomes. A machine may help a writer see the shape of the argument. It still cannot be the one who decides what the argument should feel like in this particular story, for this particular audience, through this particular set of chosen expressions.
Without taste, structure flattens into template. With taste, structure becomes expression.
The industry’s labor fights were really about authorship
This is also why the Hollywood labor response to AI has been more precise than the loudest online arguments.
The WGA’s 2023 MBA did not treat the issue as simple machine panic. It established that AI is not a writer, that AI-generated material is not literary material, that companies must disclose AI-generated source material when relevant, and that writers cannot be required to use AI software. SAG-AFTRA’s agreements around digital replicas and synthetic performers followed the same underlying instinct: human authorship, consent, and accountability cannot be treated as optional implementation details. (WGA West)
Steven Spielberg made a similar point at SXSW this year when he said he is for AI in many disciplines, but not when it replaces a creative individual. That is the cleaner formulation. The issue is not whether tools can participate. The issue is where responsibility remains once the tool has spoken. (Los Angeles Times)
Kennedy’s comments belong in that same line of thought. Use systems where they extend the process. Use them where they preserve room for selection, revision, framing, and refusal. But do not confuse acceleration with authorship. Do not confuse generation with taste.
And do not confuse predictability with story sense.
Kennedy also reportedly pointed to the “beautiful unpredictability” of the creative process and suggested that AI has trouble preserving it. That observation matters because the best story turns are not arbitrary. They feel surprising on the surface and inevitable underneath. They emerge from shame, longing, contradiction, memory, ideology, desire, and all the other human distortions that make one choice feel truer than another. AI can approximate the outer pattern. The inner necessity is still much harder to counterfeit.
The future belongs to the people who can choose
One recent paper on AI and aesthetic judgment argues that as aesthetic production scales up, taste re-emerges as a kind of precaution. That feels exactly right. The cheaper generation gets, the more valuable discernment becomes. The more options a system can produce, the more precious it becomes to recognize which option means anything. (Northwestern EECS)
That is why Kennedy’s point matters beyond one summit quote or one executive’s stance on AI. She is naming the layer most people keep trying to automate past.
Storytelling does not break at the moment a model produces a paragraph. It breaks when nobody remains willing or able to decide what matters inside the flood of plausible outputs. It breaks when the writer’s role gets reduced to approving near-misses. It breaks when a story starts sounding complete before anyone has risked enough to make it true.
For a Dramatica audience, the implication is practical. Structure still matters. Storyform still matters. Throughlines, Signposts, and Storypoints still matter. But those things do not replace the storyteller. They clarify the space in which the storyteller has to exercise taste.
That is the part AI still cannot automate.
And if we are serious about protecting art rather than merely romanticizing it, that is the part we should keep under human authority.
Sources
- The Hollywood Reporter on Kathleen Kennedy at Runway’s AI Summit
- AFI: Kathleen Kennedy on producing and AI
- Paul Graham, “Taste for Makers”
- PMC article on neuroaesthetics and aesthetic experience
- Stanford News on generative AI and creative collaboration
- Wharton Human-AI Research with David Droga
- Writers Guild of America West AI guidance
- Los Angeles Times on Steven Spielberg at SXSW 2026
- AI and Aesthetic Judgment paper