“Changes are happening—it is not like last year. Each year is more burdensome than its fellow.”
– The complaints of Khakheperraseneb, circa 2000-1800 BCE
Apocalypse now?
Creative disruption and cultural decay in the recent explosion of GenAI
It’s a sign of our times that a leading newsmagazine, referring to the recent explosion of artificial intelligence tools and technologies, recently asked on its cover whether we were witnessing “the end of humanity?” A lot about this moment feels unprecedented, from the pace at which new Generative AI tools are being released to the fact that, less than a year ago, so much of what is possible today through readily available tools seemed the exclusive realm of science fiction or advanced computer graphics.
With so much of this technological change affecting core aspects of our culture—with material implications for the production of media and art, for the practice of creativity and communication—the human stakes can feel unique and existential. When faced with different forms of technological disruption, a very human reaction is to reach for apocalyptic narratives that prematurely announce the end of things. However, when has technological disruption not felt unprecedented?
It’s comforting to think that the current speed at which things are changing around us has no point of comparison in history. A sense that our times are more challenging, complex, and chaotic than anything humanity has faced before us helps us cope with how overwhelmed we may currently feel. Yet history tells us that technological disruption of creative mediums is not a new phenomenon.
Consider, for example, how Socrates, resisting the written word in favor of the spoken, does not hesitate to announce the end of memory. In Plato’s dialogue Phaedrus, Socrates objects to the advent of writing by telling Phaedrus that “this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them.”
Travel forward in time to the end of knowledge, when Italian monks denounce the printing press for allowing ideas to be distributed to the masses, removing the vetting power of the church and ending the monopoly of scribes. Filippo de Strata, a 15th-century monk and scribe, distills the sentiment of the elite toward the new technology when he complains that printers “shamelessly print, at a negligible price, material which may, alas, inflame impressionable youths, while a true scribe dies of hunger.” Far from being the end of knowledge, the printing press spread knowledge and literacy farther and faster than ever before and spawned the new bookseller industry.
More recently, an overly zealous Charles Baudelaire seems to announce the end of art when facing the emergence of photography as a creative medium. Distressed by the rupture it posed to the achievements of painters who had mastered their techniques through years of training, Baudelaire declares himself “convinced that the badly applied advances of photography, like all purely material progress for that matter, have greatly contributed to the impoverishment of French artistic genius, rare enough in all conscience.”
Even more compelling for its recency, the widely spread idea that smartphones mark the end of our attention spans is particularly illuminating. A misquoted market research study¹ gave birth to the catchy, but now debunked, idea that humans have shorter attention spans than goldfish. Widely quoted in media and books, the idea spread like wildfire and gave us a shorthand to illustrate the undeniable transformations that ubiquitous connectivity through handheld devices has brought to all aspects of human activity.
Perhaps we should learn from history to be skeptical of the simplicity and absoluteness with which we tend to react to great uncertainty and accelerated change. Today, we find ourselves trying to assimilate the capabilities of AI and make sense of how it fits into our culture. As we focus on how Generative AI works and the aspects of our daily life it will disrupt, the question of “what do we do with this?” has not fully been answered.
Or, the next renaissance?
Disruption and creative amplification
How can we understand the current moment and what its impact may be? Once again, history offers direction: New technology that once caused great cultural concerns ultimately sparked creative expression, communication, and community in bold new ways.
Some experts in the AI industry project that 90% of online content will be AI-generated by 2025². As every abundance creates a new scarcity, the value of human-generated content could become even more precious. And as Generative AI applications make it easier to realize our creative visions and democratize access to creative production and consumption, those precious human origins may in fact have an even greater platform from which to contribute to culture.
Good governance and purposeful intention are required to help protect our humanity and ensure that the technology is used responsibly. By keeping humans at the center, and using the technology to augment rather than replace our capabilities, Generative AI functions as another medium for human expression and communication.
Leaders in this new era have an incredible opportunity and responsibility. It’s up to us to find the applications that elevate the human experience within the boundaries of safety, ethics, and responsibility. The actions we take will write new chapters of our cultural history. Let’s be reminded of our past chapters to help write the next.
For more Deloitte Digital research and perspectives on GenAI, visit our Insights page.
1. BBC News, "Busting the Attention Span Myth," March 10, 2017.
2. Yahoo! News, "90% of online content could be 'generated by AI by 2025,' expert says," January 13, 2023.