
Not for now, according to George Mikros of the College of Humanities and Social Sciences. As things stand, relevant technologies are unable to reflect the depth and humanity that underpin the work of great writers. And when they eventually can, doing so will present as many dilemmas as possibilities.
With his terse prose and understated elegance, Ernest Hemingway sits at the opposite end of the literary spectrum from Mary Shelley, whose gothic imagination created Frankenstein’s monster. Advances in Large Language Models (LLMs) have nevertheless prompted academics, readers, and creators to ask whether Generative Artificial Intelligence (GAI) can truly replicate the unique voices of our most iconic authors. In response, researchers from Hamad Bin Khalifa University (HBKU) tested whether GPT-4o could authentically imitate the distinctive styles of these two writers. While intriguing, the results reveal both the power and the limitations of today’s GAI.
The Good, the Bad
For this experiment, GPT-4o was tasked with imitating Hemingway’s and Shelley’s distinctive styles using shared narrative themes, such as isolation, ambition, and man’s struggle against nature. Employing strategies like zero-shot generation, stylistic imitation, and in-context learning, researchers explored whether the GAI could not only mimic vocabulary and syntax but also capture the ineffable essence of an author’s voice. Their experiment involved generating texts constrained to overlapping themes in Hemingway’s The Old Man and the Sea and Shelley’s Frankenstein. The aim was to create a level playing field where stylistic differences could shine without being muddled by divergent subject matter.
From short stories to extended narratives, GPT-4o produced outputs that were subsequently subjected to rigorous stylometric analysis, comparing them to the original works. In many ways, the results were impressive. The GAI excelled at adopting surface-level stylistic elements: short sentences and minimalist dialogue for Hemingway, lush descriptions and gothic overtones for Shelley. These traits gave the generated texts an air of authenticity, enough to pass as plausible approximations of the originals for casual readers.
The in-context learning approach, where the model analyzed excerpts of an author’s work before generating new material, showed particular promise. This technique enhanced the stylistic alignment of GPT-4o’s outputs, demonstrating the GAI’s ability to adapt its “voice” when provided with examples.
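In practice, such a prompt is simple to assemble. The sketch below shows one way it might look using the OpenAI Python client; the prompt wording, excerpt placeholder, and theme are illustrative assumptions, not the researchers’ actual materials.

```python
# Illustrative sketch of in-context stylistic imitation with GPT-4o.
# The excerpt, theme, and prompt wording are hypothetical, not the study's setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

hemingway_excerpt = "..."  # a short passage from The Old Man and the Sea would go here
theme = "an old fisherman's struggle against the sea"

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "You are a literary stylist. Study the excerpt and imitate its style."},
        {"role": "user",
         "content": f"Excerpt:\n{hemingway_excerpt}\n\n"
                    f"Write a 300-word scene about {theme} in the same voice."},
    ],
)

print(response.choices[0].message.content)
```

Providing the excerpt in the prompt is what distinguishes in-context learning from the zero-shot approach, where the model is simply asked to “write like Hemingway” with no example to anchor it.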
Despite its strengths, GPT-4o struggled to replicate the deeper elements of Hemingway’s and Shelley’s literary styles. Stylometric analyses—tools that quantify linguistic patterns—revealed that the GAI imitations often lacked the authors’ distinct narrative cadence and emotional detail. For instance, while Hemingway’s prose is renowned for its simplicity, its rhythm and thematic weight are deceptively complex. Shelley’s gothic flair, meanwhile, intertwines elaborate imagery with profound philosophical musings, a combination that eluded GPT-4o.
Hierarchical clustering and visualization techniques like t-SNE (a method for dimensionality reduction) underscored these gaps. Original works formed distinct clusters, clearly separated from their GAI-generated counterparts. Even with improved alignment from in-context learning, the AI’s imitations occupied a stylistic middle ground—not quite Hemingway, not quite Shelley, and clearly machine-made.
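For readers curious what such an analysis looks like in miniature, the toy sketch below computes a few simple stylometric features (average sentence length, average word length, type-token ratio), clusters the texts hierarchically, and projects them with t-SNE. It is an illustration under assumed inputs; the study’s actual feature set and pipeline were far richer.

```python
# Toy stylometric comparison: hand-picked features, hierarchical clustering, t-SNE.
# The texts dictionary is a placeholder for real novel chunks and GPT-4o imitations.
import numpy as np
from scipy.cluster.hierarchy import linkage
from sklearn.manifold import TSNE

def stylometric_features(text: str) -> list[float]:
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    words = text.split()
    avg_sentence_len = len(words) / max(len(sentences), 1)   # words per sentence
    avg_word_len = sum(len(w) for w in words) / max(len(words), 1)
    type_token_ratio = len({w.lower() for w in words}) / max(len(words), 1)
    return [avg_sentence_len, avg_word_len, type_token_ratio]

texts = {
    "hemingway_original": "...",   # chunks of the original novels and the
    "shelley_original": "...",     # GPT-4o imitations would be loaded here
    "gpt4o_hemingway": "...",
    "gpt4o_shelley": "...",
}

X = np.array([stylometric_features(t) for t in texts.values()])

# Hierarchical clustering: originals should merge together before joining imitations.
Z = linkage(X, method="ward")

# t-SNE projection to 2-D for visual inspection (perplexity must stay below sample count).
embedding = TSNE(n_components=2, perplexity=2, random_state=0).fit_transform(X)
for label, point in zip(texts, embedding):
    print(label, point)
```

If the imitations truly matched the originals, their points would fall inside the authors’ clusters; in the study, they instead formed their own machine-made grouping.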
What’s Holding GAI Back?
The answer lies in the complexity of human creativity. True stylistic imitation goes beyond word choice and syntax; it demands an understanding of context, intent, and the interplay between form and meaning. While GPT-4o can parse and reproduce patterns, it lacks the lived experience and intuitive grasp of details that define human authorship.
Additionally, GPT-4o’s reliance on pre-trained knowledge poses a limitation. Its understanding of Hemingway and Shelley is derived from a corpus of text that may be unevenly representative of their works. Unlike a human scholar, the GAI cannot discern which elements are foundational to an author’s style versus incidental quirks.
Beyond technical challenges, GAI's ability to convincingly imitate literary styles raises pressing ethical concerns. What happens when a machine can generate texts indistinguishable from those of a living or deceased author? Issues of intellectual property, authenticity, and creative ownership loom large. Could we see a future where “new” Hemingway or Shelley works flood the market, challenging our notions of authorship and originality?
There’s also the risk of misuse. Persuasive texts crafted in the style of public figures could be weaponized for misinformation campaigns or deceptive marketing. As GAI grows more adept at stylistic imitation, safeguards must evolve to prevent exploitation.
A Glimpse into the Future
Despite its current limitations, GPT-4o’s stylistic experiments hint at an exciting future for GAI in the creative arts. Imagine a collaborative tool that helps writers refine their craft by offering stylistic suggestions or generating plot outlines in the voice of their favorite author. In education, such models could provide insights into literary techniques, making classic works more accessible to students.
Yet, to achieve these possibilities responsibly, researchers and developers must address the technical and ethical gaps. Improved algorithms, richer training datasets, and robust transparency measures will be essential. Equally important is an ongoing dialogue with authors, educators, and ethicists to navigate the societal implications of AI’s creative potential.
So, the answer to the question “can GPT-4o write like Hemingway or Shelley?” is, for now, a qualified “not quite.” While the GAI impressively mimics surface-level stylistic traits, it falls short of capturing the depth and humanity that define great literature. But as technology evolves, so will its ability to engage with our creative traditions.
For readers and writers, this moment offers an opportunity to reflect on what makes literature truly human. Is it the words on the page, or the soul behind them? In a world where machines increasingly challenge our creative boundaries, this question has never been more urgent—or more fascinating.
Dr. George Mikros is a professor at Hamad Bin Khalifa University’s (HBKU) College of Humanities and Social Sciences (CHSS).
This piece has been submitted by HBKU’s Communications Directorate on behalf of its author. The thoughts and views expressed are the author’s own and do not necessarily reflect an official University stance.