AI can make you more creative—but it has limits [MIT Tech Review]

Generative AI models have made it simpler and quicker to produce everything from text passages and images to video clips and audio tracks. Texts and media that might have taken years for humans to create can now be generated in seconds.

But while AI’s output can certainly seem creative, do these models actually boost human creativity?  

That’s what two researchers set out to explore in a study published today in Science Advances, which examined how people used OpenAI’s large language model GPT-4 to write short stories.

The model was helpful—but only to an extent. They found that while AI improved the output of less creative writers, it made little difference to the quality of the stories produced by writers who were already creative. The stories in which AI had played a part were also more similar to each other than those dreamed up entirely by humans. 

The research adds to the growing body of work investigating how generative AI affects human creativity, suggesting that although access to AI can offer a creative boost to an individual, it reduces creativity in the aggregate. 

To understand generative AI’s effect on human creativity, we first need to look at how creativity is measured. This study used two metrics: novelty and usefulness. Novelty refers to a story’s originality, while usefulness here reflects how readily each short story could be developed into a book or other publishable work.

First, the authors recruited 293 people through the research platform Prolific to complete a task designed to measure their inherent creativity. Participants were instructed to provide 10 words that were as different from each other as possible.
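The article doesn’t say how these word lists were scored. Word-divergence tasks of this kind (for example, the Divergent Association Task) are typically scored by measuring how semantically far apart the words are using pretrained word embeddings. The sketch below illustrates that idea only; the `divergence_score` function and the `embeddings` lookup are hypothetical, not details from the study.

```python
# Hypothetical scoring sketch for the "10 maximally different words" task.
# Assumption: the score is the average pairwise cosine distance between
# pretrained word embeddings (as in the Divergent Association Task);
# the article does not specify how the task was actually scored.
from itertools import combinations

import numpy as np


def divergence_score(words: list[str], embeddings: dict[str, np.ndarray]) -> float:
    """Return the mean pairwise cosine distance between the words' vectors.

    `embeddings` is a hypothetical lookup mapping each word to a vector,
    e.g. loaded from GloVe or another pretrained embedding model.
    """
    distances = []
    for w1, w2 in combinations(words, 2):
        v1, v2 = embeddings[w1], embeddings[w2]
        cosine_similarity = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        distances.append(1.0 - cosine_similarity)  # cosine distance
    return float(np.mean(distances))
```

Under this kind of scoring, a higher average distance would indicate a more semantically spread-out, and by that proxy more creative, list of words.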

Next, the participants were asked to write an eight-sentence story for young adults on one of three topics: an adventure in the jungle, on the open seas, or on a different planet. First, though, they were randomly sorted into three groups. The first group had to rely solely on their own ideas, while the second group could opt to receive a single story idea from GPT-4. The third group could elect to receive up to five story ideas from the AI model.

Of the participants with the option of AI assistance, the vast majority—88.4%—took advantage of it. They were then asked to evaluate how creative they thought their stories were, before a separate group of 600 recruits reviewed their efforts. Each reviewer was shown six stories and asked to give feedback on each story’s stylistic characteristics, novelty, and usefulness.

The researchers found that the writers with the greatest level of access to the AI model were evaluated as showing the most creativity. Of these, the writers who had scored as less creative on the first test benefited the most. 

However, the stories produced by writers who were already creative didn’t get the same boost. “We see this leveling effect where the least creative writers get the biggest benefit,” says Anil Doshi, an assistant professor at the UCL School of Management in the UK, who coauthored the paper. “But we don’t see any kind of respective benefit to be gained from the people who are already inherently creative.”

The findings make sense, given that people who are already creative don’t really need to use AI to be creative, says Tuhin Chakrabarty, a computer science researcher at Columbia University, who specializes in AI and creativity but wasn’t involved in the study. 

There are some potential drawbacks to taking advantage of the model’s help, too. AI-generated stories across the board are similar in terms of semantics and content, Chakrabarty says, and AI-generated writing is full of telltale giveaways, such as very long, exposition-heavy sentences that contain lots of stereotypes.   

“These kinds of idiosyncrasies probably also reduce the overall creativity,” he says. “Good writing is all about showing, not telling. AI is always telling.”

Because stories generated by AI models can only draw from the data that those models have been trained on, those produced in the study were less distinctive than the ideas the human participants came up with entirely on their own. If the publishing industry were to embrace generative AI, the books we read could become more homogenous, because they would all be produced by models trained on the same corpus.

This is why it’s essential to study what AI models can and, crucially, can’t do well as we grapple with what the rapidly evolving technology means for society and the economy, says Oliver Hauser, a professor at the University of Exeter Business School, another coauthor of the study. “Just because technology can be transformative, it doesn’t mean it will be,” he says.


