Morrissey Technology – New research reveals that GPT-4, OpenAI's latest AI model, outperformed 151 human participants on three tests designed to measure divergent thinking, a commonly used indicator of creativity. Divergent thinking, the thought process behind generating creative ideas, is characterized by the ability to produce unique answers to questions that have no single expected solution.
For example, “What’s the best way to avoid talking about politics with your parents?”
In the University of Arkansas study, GPT-4 provided more original and more elaborate answers than the human participants. The study, titled ‘The current state of artificial intelligence generative language models is more creative than humans on divergent thinking tasks,’ was published in Scientific Reports. The first of the three tests was the Alternative Uses Task, which asks participants to come up with creative uses for everyday objects such as a rope or a fork.
Second, the Consequences Task, which asks participants to imagine the possible outcomes of a hypothetical situation, such as “What if humans no longer needed sleep?”
Third, the Divergent Associations Task, which asks participants to name 10 nouns that are as semantically distant from one another as possible.
For example, ‘dog’ and ‘cat’ have little semantic distance, while ‘cat’ and ‘ontology’ are far apart. Answers were evaluated on the number of responses, their length, and the semantic differences between the words. On these measures, the authors found that GPT-4’s responses were more elaborate than those of the human participants.
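Semantic-distance scoring of this kind is typically done by comparing word embeddings, most often with cosine distance averaged over every pair of words. The sketch below illustrates the idea only; the three-dimensional vectors are made up for the example (real scoring pipelines use pretrained embeddings such as GloVe), and the function name `dat_style_score` is my own label, not from the study.

```python
from itertools import combinations
import math

def cosine_distance(u, v):
    # 1 minus the cosine similarity of two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(a * a for a in v))
    return 1 - dot / (norm_u * norm_v)

def dat_style_score(embeddings):
    # Average pairwise cosine distance over all word pairs,
    # scaled by 100 so more distant word sets score higher.
    words = list(embeddings)
    pairs = list(combinations(words, 2))
    total = sum(cosine_distance(embeddings[a], embeddings[b])
                for a, b in pairs)
    return 100 * total / len(pairs)

# Toy, hand-made 3-D embeddings for illustration only.
toy = {
    "dog": [0.9, 0.1, 0.0],
    "cat": [0.8, 0.2, 0.1],
    "ontology": [0.1, 0.2, 0.9],
}
print(round(dat_style_score(toy), 1))
```

With these toy vectors, ‘dog’ and ‘cat’ sit close together while ‘ontology’ is far from both, so swapping ‘ontology’ in for a second animal word raises the average distance and thus the score.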
“Overall, GPT-4 was more original and elaborate than humans on each of the divergent thinking tasks, even when controlling for fluency of responses,” according to the study authors, as quoted by ScienceDaily.
“In other words, GPT-4 showed higher creative potential across a sequence of divergent thinking tasks,” the researchers concluded.
However, these findings come with some caveats. First, the study measures creative potential, not actual creative achievement.
“It is important to note that the measures used in this study are all measures of creative potential, but engagement in creative activities or accomplishments is another aspect of measuring a person’s creativity.”
Kent F. Hubert and Kim N. Awa, the Ph.D. students in psychology behind the research, also noted that AI still depends on humans and cannot act on its own.
“AI, unlike humans, has no agency,” the authors say, “relying on the help of human users. Therefore, AI’s creative potential remains in a stagnant state unless called upon.”
In other words, there will be no Terminator-style ‘doomsday’ triggered by the creativity of Skynet-like computers launching nuclear missiles around the world. The research also did not evaluate whether GPT-4’s responses were grounded in reality. So while the AI may have given more responses, and more original ones, the human participants may have felt constrained by the need to base their answers on the real world. Awa also acknowledged that humans may not have been highly motivated to write elaborate answers. On top of that, there is the additional question of “how do you operationalize creativity? Can we say that using this test for humans can be generalized to different people? Does it assess a wide range of creative thinking?”
“So I think this makes us have to critically examine what measures of divergent thinking are most popular.”
The authors also say that whether AI can replace human creativity requires further study. For now, the researchers see “the future possibility of AI to act as a tool of inspiration, as an aid in a person’s creative process, or to overcome fixedness as promising.”