
A 17-year-old Danville student is tackling an emerging threat with detection models designed to spot AI-manipulated media known as deepfakes.
Vaishnav Anand, a student at The Athenian School, first started researching deepfake detection in order to differentiate between real photos of celebrities and AI-manipulated ones. After learning more about Geographic Information Systems (GIS), he began to combine his earlier work on AI detection with a new interest in geospatial mapping.
“Every day, there’s about a hundred terabytes of satellite information that commercial companies use,” Anand told DanvilleSanRamon.com. “Even if 1% of that massive amount of satellite data that is consumed by companies that produce information for the rest of the world is falsely AI generated, it can lead to pretty massive repercussions.”
AI-manipulated satellite photos, or "geospatial deepfakes," are an emerging problem with global political and economic stakes, according to Anand. He explained how AI-manipulated satellite photos could be used to hide military installations, fabricate oil and mineral deposits, and alter the apparent scale of environmental disasters to move commodity markets. If geospatial deepfakes were to become more prominent, these changes could be used to destabilize countries and economies.
Anand trains AI models to find these deepfakes. His research is pushing the boundaries of geospatial deepfake detection largely because few tools or labeled datasets currently exist for the task. As Anand explained, facial deepfake detection models are comparatively easy to train because there are large collections of both real photos of people and digitally manipulated ones.
“With geospatial deepfakes there’s no predisposed data set with this separated ‘real and fake’,” Anand said.

In order to train his detection model, Anand used a Generative Adversarial Network (GAN) to take real satellite imagery from the SpaceNet 7 dataset and manipulate it. A GAN is composed of two parts: a generator, which produces synthetic satellite photos, and a discriminator, which learns to tell whether an image is real or fake. Anand's research is primarily built around the discriminator. He is now planning to use diffusion models to improve his AI detector.
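The discriminator half of a GAN is, at its core, a binary classifier trained to separate real samples from generated ones. The following sketch (not Anand's actual code, and with toy 1-D feature vectors standing in for satellite image tiles) illustrates that idea with a simple logistic-regression "discriminator" trained on synthetic real-vs-fake data:

```python
# Conceptual sketch of a GAN-style discriminator: a binary classifier that
# learns to separate "real" samples from "generated" ones. Toy feature
# vectors stand in for satellite image tiles; in a real pipeline these
# would be image features or raw pixels fed to a deep network.
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for real satellite tiles and GAN-generated tiles (8 features each).
real = rng.normal(loc=1.0, scale=0.5, size=(200, 8))
fake = rng.normal(loc=-1.0, scale=0.5, size=(200, 8))

X = np.vstack([real, fake])
y = np.concatenate([np.ones(200), np.zeros(200)])  # 1 = real, 0 = fake

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train a logistic-regression discriminator by gradient descent
# on the binary cross-entropy loss.
w = np.zeros(8)
b = 0.0
lr = 0.1
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

preds = sigmoid(X @ w + b) > 0.5
accuracy = np.mean(preds == y)
print(f"discriminator accuracy: {accuracy:.2f}")
```

In an actual GAN the generator and discriminator are trained against each other, so the discriminator must keep improving as the fakes become more convincing; here the "fake" distribution is fixed purely for illustration.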
Anand presented his findings at the Esri International User Conference in San Diego, which is the world’s largest GIS conference. As a Teen Project Leader for the National 4-H GIS Leadership Team, representing California, he also personally met with Jack Dangermond, the founder and president of Esri and a global leader in GIS technologies. Anand was also invited to present at the CCIR Student Research Symposium of King’s College, University of Cambridge, on July 29.

“I love seeing people acknowledge the information that I’m giving,” Anand said. “Over the past few years deepfakes have proliferated at an exponentially growing rate. And still there is a subset of people (that is growing smaller and smaller) that have no idea about what deepfakes are and [are] consuming all this media thinking it’s authentic when it’s not.”
The ethics of AI is one of Anand's concerns for the future and an area he is working to improve. After he finishes his current work on geospatial deepfake detection with diffusion models, Anand ultimately plans to build a user interface (UI) extension that can notify people about altered text, audio, or images as they encounter them.
By helping verify the media people consume, Anand hopes to help protect against unethical uses of AI.
“As exciting as innovation in AI is, it’s important to recognize the threats that AI poses and how to protect ourselves from those threats,” Anand said.
While Anand’s immediate next project is building deepfake detection for audio files, he is also writing a series of books on cybersecurity. The books are intended to “demystify” technology so that the average person can understand how to protect themselves in the modern age.



