Are We Sacrificing Creativity and Critical Thinking for Speedy GenAI?
- Brittany Luckham
- Jun 14
- 5 min read
Part 3: Examining Generative AI
Over the last few days, I’ve been writing about generative AI: first, its environmental impact; second, how it’s trained; and finally, how it impacts creativity and critical thinking skills.
How Does Creativity Actually Work?
Before jumping into AI, I wanted to first understand how creativity works and which parts of the brain are involved.
In an American Psychological Association article, first published in 2022, John Kounios, PhD, explained that “There are different routes to a creative spark.” Two systems, to be specific. System 1 is “quick, unconscious thoughts — aha moments — that burst into consciousness.” System 2 is about thinking slowly and deliberately. When it comes to creativity, you can use one or the other, or both systems.
Adam Green, PhD, a cognitive neuroscientist at Georgetown University, adds that “creativity often involves coordination between the cognitive control network (executive functioning, planning, and problem-solving), and the default mode network (mind-wandering or daydreaming).” According to him, “these two systems are usually antagonistic.” Yet creativity is one instance in which they work well together.
Creativity and art are core parts of humanity. There are cave paintings dating back thousands of years, and tools that have withstood the test of time to teach us how humans and civilization evolved. Jonathan Schooler, PhD, observes that “Creativity is at the core of innovation. We rely on innovation for advancing humanity. Creativity underlies so much of what humans value.”
I couldn’t agree more. As a creative person, as a writer, a daydreamer, a storyteller, I thrive in creative environments. I look forward to those moments, which can turn into hours, of pure flow where I get lost in the project, where the words feel like they write themselves. It is when I am most at peace and living in joy. So, what happens to those words and those stories when GenAI enters the picture?
How Does GenAI Influence Creative Writing?
Anil R. Doshi and Oliver P. Hauser conducted a study, published on Science.org, to learn more about how people are using or would use GenAI in creative pursuits. The study focused on short stories (about eight sentences long), with participants divided into three groups. Group 1 was not allowed to use GenAI, group 2 had the option of using GenAI once, and group 3 had the option of using GenAI up to five times. Participants were only allowed to use the AI for ideas.
The results?
Doshi and Hauser found that “access to generative AI ideas causes stories to be evaluated as more creative, better written, and more enjoyable.” However, this was true “especially among less creative writers.”
Moreover, GenAI-assisted stories “are more similar to each other than stories by humans alone.” This means we would see less diverse and innovative storytelling in the creative writing field. We risk “losing collective novelty.”
After the conclusion of the study, the evaluators addressed ethical and financial concerns:
- They imposed an ownership penalty of at least 25% on writers who received generative AI ideas.
- They indicated that the content creators whose work the models were based on should be compensated.
- They indicated that disclosure of the use of AI, or of the underlying text from AI, should be part of publications that use such tools.
This is what many writers and authors have been saying from the beginning. Companies should seek permission and consent to use the copyrighted works of authors, and they should be compensated. If you happen to use AI, that should be disclosed in the text. It’s also worth including an AI clause on the copyright page to deter anyone from using your work to train AI models. I did.
How Does GenAI Impact Critical Thinking?
Lastly, I’m going to examine how critical thinking is impacted by the rise of AI. If it’s anything like AI’s impact on writing, we will likely see a decrease in diverse and innovative ideas and skills.
Microsoft found that “a user’s task-specific self-confidence and confidence in GenAI are predictive of whether critical thinking is enacted.” Specifically, higher confidence in GenAI is associated with less critical thinking, while higher self-confidence is associated with more critical thinking. Trusting GenAI implicitly has the potential to lower critical thinking skills, a worrying issue given AI’s numerous inaccuracies.
This Microsoft study surveyed 319 knowledge workers who reported using AI tools such as ChatGPT and Copilot at least once a week, and the researchers analyzed 936 real-world examples of AI-assisted tasks. They found that “knowledge workers engage in critical thinking primarily to ensure the quality of their work — verifying outputs against external sources.” Despite improvements to task efficiency, AI “can inhibit critical engagement with work and can lead to long-term overreliance on the tool and diminished skill for independent problem solving.”
What I find interesting about this is that while the output is quick, it can be inaccurate. I spent a week or more researching and finding credible sources for this series. I had to verify the information before I could write anything. If I’d used AI, I would still have had to make sure the information was accurate; the verification would just have happened after the article was written instead of before.
AI, I think, poses a risk to accountability in that people are not going to hold themselves accountable for what AI decides to generate. This is worrisome given the current lack of accountability we already see without GenAI.
This is further supported by a study published in Smart Learning Environments. Authors Chunpeng Zhai, Santoso Wibowo, and Lily D. Li suggest that using AI “affects [students’] critical cognitive capabilities, including decision-making, critical thinking, and analytical reasoning.” They found that “over-reliance stemming from ethical issues of AI impacts cognitive abilities, as individuals increasingly favour fast and optimal solutions over slow ones constrained by practicality.”
As discussed above, this overreliance occurs when “users accept AI-generated recommendations without question,” which leads to “errors in task performance in the context of decision-making.” It’s the difference between implicit confidence in AI and confidence in yourself to carry out a task effectively. A reliance on AI has the potential to diminish one’s self-esteem. It raises the question: how is someone supposed to feel good and confident about their skills and capabilities if they cannot reliably use those skills without the help of AI?
Final Thoughts
Part of the reason I am against AI, in any form, is that I am a writer. Half of the writing process is ideation: brainstorming and playing around with different scenarios. Half the joy I get from writing has less to do with actually putting words down on the page and more to do with daydreaming and imagination. To use AI as an idea generator means you’re not using your imagination, and if you’re not using your imagination, it will atrophy, along with your creativity and critical thinking skills.
Planning and outlining projects with AI, whether for a podcast episode, a newsletter, or a blog post, actually leads to less unique and diverse content. As we’ve seen, we risk losing collective novelty because the AI models are trained on work that’s already been completed. There’s no room for play, for the messy first drafts, for the weird experimentation with language and style and voice. The outputs run the risk of becoming derivative, of spitting out the same overused content ideas that we already see without it.
I will not say that AI has no place in the world or in our future. It’s here and it’s here to stay (whether I like it or not). However, I think it’s best used in moderation and by scientists and experts only. I don’t believe the general population needs access to AI or GenAI, not yet, at the very least. It’s too energy-intensive, and it operates too far outside any legal framework. Until AI is greener and can help us combat climate change, until the future of the planet we all live on is secure, we need to significantly reduce our AI usage. It’s no longer just the big corporations contributing the majority of greenhouse gas emissions; it’s each of us as individuals.