Posted March 6, 2019
By Mark Runco

I have written many times that there is no such thing as a creativity test. That may sound odd, given that much of my academic career, spanning over 30 years, has been devoted to developing and validating assessments so that we can better understand creativity. But it is exactly that experience which led me to conclude that there is no such thing as a creativity test.

There are reliable and valid estimates of creative potential, and equally reliable and valid estimates of past creative performance. Tests of divergent thinking, including those that are part of the rCAB, are examples of the former, and the Creative Activity and Accomplishment Check List, developed by Holland in the early 1960s and refined by many since then (Milgram, Hocevar, Kogan, Dollinger, and most recently Paek & Runco, 2017), is an example of the latter.
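For readers who have not seen how such estimates of potential are quantified, the sketch below shows one common convention for scoring divergent-thinking responses: fluency as the sheer number of ideas, and originality as statistical infrequency (here, uniqueness within the sample). The function, the uniqueness rule, and the toy data are illustrative assumptions, not the rCAB's actual scoring procedure.

```python
from collections import Counter

def score_divergent_thinking(responses_by_person):
    # Normalize responses, then pool them to see how common each idea is in this sample.
    norm = lambda r: r.lower().strip()
    counts = Counter(norm(r) for ideas in responses_by_person.values() for r in ideas)

    scores = {}
    for person, ideas in responses_by_person.items():
        fluency = len(ideas)                                         # sheer number of ideas
        originality = sum(1 for r in ideas if counts[norm(r)] == 1)  # ideas no one else gave
        scores[person] = {"fluency": fluency, "originality": originality}
    return scores

# Hypothetical responses to a prompt such as "list uses for a brick"
sample = {
    "p1": ["build a wall", "doorstop", "paperweight", "exfoliate feet"],
    "p2": ["build a wall", "doorstop", "garden border"],
}
print(score_divergent_thinking(sample))
# {'p1': {'fluency': 4, 'originality': 2}, 'p2': {'fluency': 3, 'originality': 1}}
```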

Even that conclusion, that there are reliable and valid estimates of potential and of past creative performance, must be further qualified. Reliability and validity, for instance, are each a matter of degree rather than all-or-nothing attributes. So the key conclusion above should probably read “adequately reliable and valid.” Even this is a bit of a risk, because reliability and validity vary from sample to sample, and even from administration to administration. This is why every rCAB test is checked each time it is given: CTS assesses reliability and validity for that particular administration, so the degree of each is known for that set of scores.
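Because the degree of reliability is re-established for each administration, it may help to see what such a check could look like. The sketch below computes Cronbach's alpha, one standard index of internal-consistency reliability, for a single hypothetical administration; the data and the choice of index are assumptions for illustration, not a description of CTS's actual procedure.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Internal-consistency reliability for one administration.

    item_scores: 2-D array, rows = examinees, columns = items or tasks.
    Cronbach's alpha is only one standard index (an assumption here);
    the post does not specify which checks CTS actually runs.
    """
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]                              # number of items
    item_vars = x.var(axis=0, ddof=1)           # variance of each item
    total_var = x.sum(axis=1).var(ddof=1)       # variance of examinees' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical administration: 5 examinees x 4 divergent-thinking tasks
admin = [[12, 10, 11, 13],
         [ 7,  6,  8,  7],
         [15, 14, 16, 15],
         [ 9,  8,  9, 10],
         [11, 12, 10, 11]]
print(round(cronbach_alpha(admin), 3))  # recomputed for every new administration
```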

True, the meaning of validity has changed over the years; it has become more a matter of usefulness than a purely statistical attribute. I would say that the statistical basis is still vital, but that usefulness must be included in the definitions of reliability and validity. Usefulness certainly has not replaced them.

Other changes have occurred as well. The Creative Activity and Accomplishment Check List, for example, now has subscales for Technological Creativity, Everyday Creativity, and, most recently, Moral and Political Creativity. These last two won’t surprise anyone, given the questionable governing that has been on display in Washington (and elsewhere) since November 2016. A number of non-psychometric discussions of the current Administration’s attacks on the U.S. Constitution have been published, including Sternberg’s articles on “Active Ethical Leadership,” but assessments have been developed so that empirical data can be collected. No doubt the science deniers in the current Administration in D.C. will misunderstand and misrepresent any empirical results, as is their habit, but in the long run science will regain its respect. Hopefully the damage done to the climate, to education, to the economy, and to the Rule of Law is reversible, and the USA won’t make the same mistake twice. It should be obvious that this is not really a tangent. The point is that the assessment of creativity continues to evolve, as is evidenced by new methods, new scoring (e.g., semantic networks used with the rCAB), and new scales (e.g., political, moral, everyday, and technological creativity).
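The mention of semantic networks can be made concrete. One way such scoring can work, in very rough outline, is to treat a response as more original the further it sits from the prompt in a semantic space. The sketch below uses hand-made vectors and cosine distance purely as a stand-in; the vectors, the distance rule, and the examples are assumptions for illustration, not the rCAB's actual scoring method.

```python
import numpy as np

def semantic_distance(prompt_vec, response_vec):
    """1 - cosine similarity: larger values mean the response strays further from the prompt."""
    cos = np.dot(prompt_vec, response_vec) / (
        np.linalg.norm(prompt_vec) * np.linalg.norm(response_vec))
    return 1.0 - cos

# Toy, hand-made vectors standing in for real word embeddings (assumption);
# in practice the vectors would come from a trained semantic model.
vectors = {
    "brick":     np.array([0.9, 0.1, 0.0]),
    "wall":      np.array([0.8, 0.2, 0.1]),   # close to "brick" -> a common use
    "exfoliate": np.array([0.1, 0.2, 0.9]),   # far from "brick" -> a more original use
}

for response in ("wall", "exfoliate"):
    d = semantic_distance(vectors["brick"], vectors[response])
    print(f"brick -> {response}: distance {d:.2f}")
```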

The progress being made with assessments of creative potential and past performance is intertwined with the problems. Progress is rarely linear; there are starts and stops, and divergence such that both good and bad options are explored. This takes us right back to the conclusion that there is no such thing as a creativity test. Progress has been made in many areas, including the neurosciences, but there the data are often collected in a laboratory, and that is where the conclusion is most applicable. We have known for 100 years, give or take a couple of decades, that laboratory research is high on internal validity but low on external validity. In other words, there is great experimental control in a lab, but results may not generalize all that well to the natural environment, where similar control is absent. So what of the “creativity” tested in the lab? Very likely it is reliable, but only in the lab. This may be true of creativity research even more than of research on other human behaviors tested in the lab, given that creativity often involves spontaneity and intrinsic motivation, things which are probably excluded by experimental control.

Such lab research is limited, and findings about creativity are really about creative potential, that is, what people are capable of doing. But they are also a step in the right direction and can be tested at a later date, in the natural environment. That would represent real progress.

Suggested Readings

Paek, S.-H., & Runco, M. A. (2017). Dealing With the Criterion Problem by Measuring the Quality and Quantity of Creative Activity and Accomplishment. Creativity Research Journal, 29(2), 167-173.

Runco, M. A. (in press). Political Examples of a Dark Side of Creativity and the Impact on Education. In C. Mullen (Ed.), Education Under Duress. New York: Springer.

Sternberg, R. J. (2017). ACCEL: A New Model for Identifying the Gifted. Roeper Review, 39(3), 152-169. DOI: 10.1080/02783193.2017.1318658