
When Is a Photo Not a Photo? The Looming Specter of Artificially Generated Photographs

In 1984, when photographers were still using film, I began exploring the early use of computers to undetectably modify photographs. In an article in The New York Times Magazine I wrote that “in the not-too-distant future, realistic-looking images will probably have to be labeled, like words, as either fiction or nonfiction, because it may be impossible to tell them apart. We may have to rely on the image maker, and not the image, to tell us into which category certain pictures fall.”

This was two years after National Geographic, at the dawn of the digital image revolution, had modified a photograph of the pyramids of Giza so that it would better fit on its cover, using a computer to shift one pyramid closer to the other. The magazine’s editor defended the alteration, viewing it not as a falsification but, as I wrote then, “merely the establishment of a new point of view, as if the photographer had been retroactively moved a few feet to one side.” I was astonished. It seemed to me that the magazine had just introduced to photography a concept from science fiction—virtual time travel—as if revisiting a scene and photographing it again.

In a few short years, image manipulation software such as Photoshop began to transform the photograph from being essentially a visual record made at a specific moment in time into a malleable medium that could be modified at any time. Critic Susan Sontag’s earlier characterization of the photograph as “not only an image (as a painting is an image), an interpretation of the real; it is also a trace, something directly stenciled off the real, like a footprint or a death mask” would soon become nostalgic. And today, rather than advertisements promoting Kodak’s film-era slogan “let the memories begin,” the Google Pixel camera touts itself as having both a “magic eraser” and a “face unblur.”

We are now experiencing another quantum leap in visual technology. While much has been written about artificial intelligence (AI) systems such as ChatGPT, which process massive data sets to simulate writing “in the style of,” say, Shakespeare, or lyrics “in the manner of” Lizzo, the potential impact of synthetic imaging has received less attention. Systems like OpenAI’s DALL-E, Stable Diffusion, and Midjourney make it possible to use text prompts to produce images, in seconds, that closely resemble photographs of people and places that never existed, or of events that never happened. Rather than search online for “actual” photographs of people and events, one might soon be encouraged to have similar images made to one’s specifications.

Anyone, without the use of a camera, can now create images inspired by the work of famous photographers or, for that matter, of painters, musicians, philosophers, scientists, and so on. This has provoked concerns over whether those, particularly artists and designers, who have provided, largely without their consent, some of the hundreds of millions of images used to train such systems should be recognized and compensated, and whether they should be able to opt out of future involvement. (Others, such as fashion models and stylists, might also find their livelihoods at risk.) In one of several lawsuits recently brought against companies with AI image generators, including some filed by artists, Getty Images this month sued Stability AI for a whopping $1.8 trillion, contending a “brazen infringement” of its “intellectual property on a staggering scale”: the unauthorized use, Getty claims, of more than 12 million of its photos for training purposes “as part of its efforts to build a competing business.” A spokesman for Stability AI told Vanity Fair: “Please note that we take these matters seriously. We are reviewing the documents and will respond accordingly.”

Photographer Stephen Shore asked one AI text-to-image system to “Photograph like Stephen Shore.” The result had “a kind of deadpan blankness that I liked,” he said. Courtesy of the artist.

ARTISTS EXPERIMENTING

In this period of limbo, between high-tech invention and cultural acceptance, I’ve been experimenting with OpenAI’s DALL-E, discovering that such systems can be extraordinarily inventive on their own. When I asked the program, for instance, to come up with an “iconic photograph that is so horrible it would cause wars to stop” in the style of war photographer Robert Capa, DALL-E devised an image of a woman pointing a camera and a young girl huddled against her, looking fearful and distraught, the camera itself bent as if impacted by the scene in front of it. Rather than exposing the viewer to a potentially traumatizing scene, the image conveyed its horror through the response of the child, the onlooker; it was left to the viewer to imagine what had occurred. Similarly, I was surprised when I solicited “a photograph of the greatest mothers in the world” and was provided with a photorealistic image of a verdant setting in which an ape-like animal tenderly holds her baby, not the human mother and child that I had expected.

Such experiments make one aware of how photography has been used to confirm the expected (vacations are fun, celebrities are glamorous), typically relying on stereotypes. Whereas a caption, for all its merits, can often limit the meaning of a photograph, text prompts can at times lead to imagery that provokes a rethinking of one’s own biases. (That said, these systems can also come up with misogynistic or racist responses, given the preponderance of such images in the online data sets upon which they are based.)
