
Zuckerman Scientists Merge Art and AI in Venice

At the Venice Biennale, Zenna Tavares meshes interactive storytelling and artificial intelligence to explore the counterfactual reasoning he studies as a Zuckerman Innovation Scholar

Scenes from Djali, a Venice Biennale installation, were recorded inside the Greene Science Center, home of the Zuckerman Institute (Photo courtesy of Zenna Tavares)

NEW YORK, NY — Scientist Zenna Tavares, PhD, wants to know precisely how our minds consider ways the world could be different from what it is. We all engage in counterfactual reasoning like this when we weigh restaurant options or replay regrets about paths not taken.

“I want to develop a mathematical understanding of counterfactual reasoning with a goal of describing it precisely enough that machines can do it,” said Dr. Tavares, the Zuckerman Institute’s Innovation Scholar, whose training ranges from robotics to electronic engineering to cognitive science to philosophy. Jointly sponsored by Columbia’s Data Science Institute and the Zuckerman Institute, Dr. Tavares works to develop machines that can reason in a more human way.
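To give a flavor of what that means computationally, the minimal sketch below walks through the classic three-step recipe for a counterfactual query (abduction, intervention, prediction) on a toy model of the restaurant example. The model, its variables and the numbers are invented for illustration and are not drawn from Dr. Tavares’s own formalism.

```python
# A minimal, invented illustration of a counterfactual query on a toy
# structural causal model (not Dr. Tavares's actual formalism). The question:
# having chosen the nearby restaurant and left unsatisfied, would we have been
# satisfied if we had gone somewhere else?

def restaurant_model(was_hungry, nearby_was_busy, chose_nearby):
    """Toy causal model: satisfaction depends on hunger, crowding and choice."""
    wait_minutes = 30 if (chose_nearby and nearby_was_busy) else 10
    return was_hungry and wait_minutes <= 15   # satisfied?

observed_choice = True  # what actually happened: we picked the nearby spot

# Step 1, abduction: keep only the background conditions (hunger, crowding)
# consistent with what we observed -- choosing the nearby spot, unsatisfied.
consistent_worlds = [
    (hungry, busy)
    for hungry in (True, False)
    for busy in (True, False)
    if not restaurant_model(hungry, busy, observed_choice)
]

# Steps 2 and 3, intervention and prediction: replay those same background
# conditions under the counterfactual choice of going somewhere else.
counterfactual_outcomes = [
    restaurant_model(hungry, busy, chose_nearby=False)
    for hungry, busy in consistent_worlds
]

satisfied_fraction = sum(counterfactual_outcomes) / len(counterfactual_outcomes)
print(f"In {satisfied_fraction:.0%} of the consistent worlds, "
      "a different choice would have left us satisfied.")
```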

 

Zenna Tavares caught in the act of his human-reasoning research, which involves robotics, at the Zuckerman Institute (Credit: John Abbott)

He’s also one of the minds behind a counterfactual-infused art installation currently on display at the Biennale Architettura 2023, a component of the famed Venice Biennale. This interactive video piece, named Djali after West African storytellers called jalis, explores an AI’s view of possible futures. Filmed in part at Columbia’s Greene Science Center (with the help of the Zuckerman Institute’s facilities staff), it runs until later this month at the Central Pavilion of the Giardini della Biennale, as part of the exhibition The Laboratory of the Future.

 

The Laboratory of the Future, part of Biennale Architettura 2023 (Photo courtesy of Zenna Tavares)

Djali is a work of some 20 collaborators: writers, AI developers, illustrators, designers, roboticists, musicians, scientists, costume makers, actors, operations managers and filmmakers. 

Joining Dr. Tavares as the project’s principals are his brothers — filmmaker and architect Kibwe Tavares and writer, artist and musician Gaika Tavares — as well as computer scientist and software developer Eli Bingham and Emily Mackevicius, PhD, a postdoctoral scientist in the Center for Theoretical Neuroscience and the Zuckerman Institute lab of Dmitriy Aronov, PhD, who studies the neurobiology of foraging in birds. Along with Dr. Tavares, Mr. Bingham and Dr. Mackevicius are cofounders of the nonprofit AI startup Basis, a partner organization in the project.

 

Djali display at the Venice Biennale (Photo courtesy of Zenna Tavares)

One of the installation’s two related experiences centers on a nine-minute, computer-augmented video in a dark theater setting. 

“The video is an investigation by an artificial intelligence, one of the characters, that is trying to reason out what has happened to a woman who has gone missing,” Dr. Tavares explains. “The AI considers different characters in a futuristic world and assesses how they might or might not be involved in the woman’s disappearance.” 

As one artificial intelligence writes the narrated story of these characters in the video, another helps to offer viewers an unusual dynamic visual experience: the characters remain still in each scene, but the perspective on them and their settings changes smoothly as if the camera were on a roving drone. The effect was created from still images fed into a neural network that fused the images into a digital 3D model of the entire space.

 

Neural networks allowed the team, including Zamzam Warsame, to create 3D models of scenes viewable from any angle (Photo courtesy of Zenna Tavares)

“This enabled us to view each scene from any perspective at all, even ones the camera did not capture,” Dr. Tavares explained. “When you watch the video, you do not see recorded footage, but rather a reconstruction from the 3D models.”
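For the technically curious, a rough sketch of that novel-view idea might look like the code below: a learned 3D scene representation, fit to still photographs, can render frames from camera poses no physical camera ever occupied. The SceneModel class, its methods and the simple pose blend are hypothetical placeholders, not the team’s actual pipeline.

```python
# A hypothetical sketch of rendering novel viewpoints from a 3D scene model
# learned from still photographs. SceneModel and its methods are placeholders;
# the production pipeline behind Djali is not described in this level of detail.

import numpy as np

class SceneModel:
    """Stand-in for a neural 3D scene representation (for example, a
    radiance-field-style model optimized to reproduce input photos)."""

    def fit(self, images: list, camera_poses: list) -> None:
        ...  # optimize the representation against the captured still images

    def render(self, camera_pose: np.ndarray) -> np.ndarray:
        ...  # synthesize an image of the scene from an arbitrary viewpoint

def drone_like_flythrough(model, start_pose, end_pose, n_frames=240):
    """Render a smooth virtual camera move between two poses: the 'roving
    drone' effect, even though every actor stood perfectly still."""
    frames = []
    for t in np.linspace(0.0, 1.0, n_frames):
        # Naive linear blend of the two camera poses (real systems interpolate
        # rotations more carefully).
        pose = (1.0 - t) * start_pose + t * end_pose
        frames.append(model.render(pose))
    return frames
```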

In the installation’s second experience, an individual audience member can interact with single scenes from the video on a large display. As a viewer moves around and looks at the monitor from different places, cameras track the locations of their eyes and feed this data into a computer. An algorithm updates the viewer’s gaze position in the space 90 times per second, and for each cycle the neural network re-renders, in real time, the 3D perspective shown on the screen.

“It makes you feel like you are in the scene and can move about in it,” said Dr. Tavares. 
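A simplified sketch of that interactive loop might look like the code below: read the tracked eye position, turn it into a virtual camera pose, and ask the scene model for a fresh frame roughly 90 times per second. The tracker, the pose mapping and the display calls are hypothetical placeholders, not the installation’s software.

```python
# A simplified, hypothetical sketch of the display loop: track the viewer's
# eyes, derive a camera pose, and re-render the scene about 90 times per second.

import time

TARGET_HZ = 90
FRAME_BUDGET = 1.0 / TARGET_HZ  # roughly 11 milliseconds per cycle

def pose_from_eye_position(eye_position):
    """Hypothetical mapping from tracked 3D eye coordinates to a virtual
    camera pose looking into the scene from that vantage point."""
    ...

def run_display_loop(eye_tracker, scene_model, screen):
    """Re-render the scene for wherever the viewer's eyes are right now."""
    while screen.is_open():
        cycle_start = time.perf_counter()

        eye_position = eye_tracker.read()        # where the cameras say the eyes are
        camera_pose = pose_from_eye_position(eye_position)
        frame = scene_model.render(camera_pose)  # fresh 3D perspective for this gaze
        screen.show(frame)

        # Sleep off whatever remains of this cycle's time budget.
        elapsed = time.perf_counter() - cycle_start
        if elapsed < FRAME_BUDGET:
            time.sleep(FRAME_BUDGET - elapsed)
```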

Counterfactuals come into both parts of Djali, Dr. Tavares says. The video’s storyline examines how people use observations to make correct inferences or counterfactual ones, in this case about what might have happened to the character who seems to have vanished. The narrative also examines how race, class and preconceptions shape observations and inferences, each one being a potential counterfactual to the reality of the situation. 

Counterfactuals also run continuously through the display-based experience: each momentary 3D position of a viewer’s gaze at the monitor generates an image from just one of the infinitude of perspectives the viewer could have at that instant. All of the unchosen views are counterfactual views.

Dr. Mackevicius, who helped coordinate the photoshoots, was enthralled with the project’s artistic vision. 

“Scientifically, I am interested in how the flow of information influences group behavior, including groups of humans and AIs,” said Dr. Mackevicius. “Djali addresses similar themes, artistically, with a complexity and depth of vision and imagination not yet scientifically possible.”

 

Gaika Tavares (left) and Dr. Mackevicius (right) during the making of Djali (Photo courtesy of Zenna Tavares)

Drs. Tavares and Mackevicius and their collaborators are already working toward a second version of the exhibit in which viewers will be able to navigate between perspectives in the video, not just in individual scenes. 

The collaborators’ long-term vision transcends art and entertainment. Through their new nonprofit research startup, Basis, they hope to develop experiential, counterfactual-based simulation tools called participatory models. These could enable, say, city leaders to more fully appreciate how policy decisions might affect a citizenry in both the near and long term. Another project aims to build AI-powered models of cells and tissues that draw on biology’s vast and growing databases. The goal is to deliver tools scientists could use to probe possible outcomes of experiments that currently are technically impossible.

“We learn, discover and make decisions by considering what is not actual but could be,” said Dr. Tavares. “What if we could build AIs that can do that?”

 
