You Were Never the Player: How Pokémon Go Built a $3.5 Billion Surveillance Map
You thought you were catching Pokémon. You were building the most detailed map of the physical world ever created.
In the summer of 2016, something unprecedented happened. Hundreds of millions of people walked out of their homes, phones raised, and began systematically photographing and mapping every park bench, every storefront, every street corner, every statue and mural and mailbox in their cities. They did it willingly. Eagerly. They thought they were playing a game.
They were. But the game was playing them, too.
On March 10, 2026, Niantic Spatial — the AI company left standing after Niantic sold off its games — announced a partnership with Coco Robotics to power autonomous delivery robots using something called a Visual Positioning System. That system was trained on 30 billion images captured by Pokémon Go, Pikmin Bloom, and Monster Hunter Now players over the course of nearly a decade.
Those delivery robots are now rolling through Los Angeles, Chicago, Miami, Jersey City, and Helsinki. They navigate using a spatial map that you — or someone you know — almost certainly helped build. For free.
And that’s just the beginning.
From Pokéstops to Positioning Systems
To understand how we got here, you need to understand what Niantic was actually building while everyone was chasing Pikachu.
Pokémon Go launched in July 2016 and immediately became the most downloaded mobile app in history, hitting 500 million installations in its first 60 days. Players flocked to real-world locations — parks, landmarks, businesses — that served as Pokéstops and Gyms. Every visit generated GPS data. Every AR photo captured spatial information. Every step logged speed, direction, and device orientation.
But the real data play came later.
In late 2020, Niantic introduced AR Mapping tasks — a feature that asked players to walk around real-world objects while their phone cameras captured detailed scans. In exchange, players received in-game rewards: items, experience points, virtual goods worth fractions of a penny in real-money terms. The community reaction was telling. Players widely regarded the rewards as insultingly low, and many attempted to skip or disable the scanning tasks entirely.
What they didn’t realize was that each scan uploaded far more than a simple photo. Every AR mapping submission contained GPS coordinates, camera angle, phone orientation, accelerometer data, speed, direction, and other detailed sensor readings. All of it was captured from a pedestrian perspective, at ground level, from millions of angles across millions of locations: exactly the kind of data that satellites and street-view cars cannot capture.
Over the years, players contributed scans from more than 10 million locations globally, with approximately 1 million new scans submitted every week. The result was a three-dimensional reconstruction of the walkable world at a level of detail that no mapping company had ever achieved.
Niantic wasn’t building a game engine. It was building a Large Geospatial Model.
The $3.5 Billion Split
On March 12, 2025, Niantic announced that it would sell its entire gaming division — Pokémon Go, Pikmin Bloom, Monster Hunter Now, and all associated apps — to Scopely, a mobile gaming company owned by Saudi Arabia’s Savvy Games Group, for $3.5 billion.
The deal closed on May 29, 2025. Scopely got the games. Niantic kept the data.
The company’s remaining technology division was reborn as Niantic Spatial Inc., a standalone AI company headquartered in San Francisco and capitalized with $250 million — $200 million from Niantic’s balance sheet and a $50 million investment from Scopely itself. Niantic Spatial retained founding CEO John Hanke and brought in Brian McClendon as CTO and Thomas Gewecke as COO.
Read that again: the buyer of the games also invested in the data company. The $3.5 billion wasn’t just a purchase price — it was the market’s valuation of what nearly a decade of crowdsourced spatial mapping was worth.
What Niantic Spatial retained was arguably the most valuable part: the Visual Positioning System, the Large Geospatial Model, the neural networks, and the 30 billion training images. The games were the vehicle. The spatial data was always the destination.
What the Visual Positioning System Actually Does
Niantic Spatial’s crown jewel is the Visual Positioning System (VPS) — and to understand why it matters, you need to understand why GPS fails.
In a dense urban environment, GPS signals bounce off buildings, creating what engineers call “urban canyon” interference. Your phone’s GPS might place you on one side of the street when you’re actually 50 meters away, inside a building across the road. For human navigation, that’s an annoyance. For an autonomous robot navigating a crowded sidewalk, it’s a showstopper.
VPS solves this problem by matching what a camera sees in real time against Niantic’s database of 30 billion images. Instead of relying on satellite signals, the system recognizes physical features — the corner of a building, the texture of a sidewalk, the shape of a lamppost — and triangulates position from visual data.
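To make the core idea concrete, here is a deliberately simplified Python sketch of visual positioning: observed features are matched against a database of landmarks with known world positions, and the camera's location is recovered from those matches. Everything here is a toy assumption — the descriptors, the landmark list, and the flat two-dimensional geometry — whereas the real VPS matches learned features against billions of images and solves full 3D camera pose.

```python
import math

# Toy visual-positioning sketch (illustrative only).
# Each landmark: (feature descriptor, known world position (x, y)).
LANDMARKS = [
    ((0.9, 0.1, 0.3), (10.0, 5.0)),   # e.g. a building corner
    ((0.2, 0.8, 0.5), (12.0, 9.0)),   # e.g. a lamppost
    ((0.4, 0.4, 0.9), (7.0, 11.0)),   # e.g. a mural
]

def nearest_landmark(descriptor):
    """Match an observed feature to the closest database landmark."""
    return min(LANDMARKS, key=lambda lm: math.dist(descriptor, lm[0]))

def estimate_position(observations):
    """Each observation: (descriptor, offset of the feature relative to
    the camera). Camera position = landmark world position minus that
    offset; averaging over several matches suppresses per-feature noise."""
    xs, ys = [], []
    for desc, (dx, dy) in observations:
        _, (lx, ly) = nearest_landmark(desc)
        xs.append(lx - dx)
        ys.append(ly - dy)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# The camera truly sits at (9, 7); descriptors are slightly noisy but
# still match the right landmarks.
obs = [
    ((0.88, 0.12, 0.31), (1.0, -2.0)),   # building corner at (10, 5)
    ((0.21, 0.79, 0.52), (3.0, 2.0)),    # lamppost at (12, 9)
    ((0.41, 0.38, 0.91), (-2.0, 4.0)),   # mural at (7, 11)
]
print(estimate_position(obs))  # → (9.0, 7.0)
```

The key property the sketch shows is that no satellite signal is involved: position falls out of recognizing known physical features, which is why the approach keeps working in urban canyons where GPS fails.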
The result, as Niantic Spatial CTO Brian McClendon put it: “We know where you’re standing within several centimeters of accuracy, and, most importantly, where you’re looking.”
That last part is critical. VPS doesn’t just know where something is. It knows what direction it’s facing, what it can see, and how its surroundings have changed over time. Niantic describes this as a “living map” — a digital twin of the physical world that updates as new data comes in.
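As a rough sketch of what a “living map” implies as a data structure — the class and field names below are hypothetical, not Niantic's API — each mapped location can carry its latest observed features, with every incoming scan both refreshing the entry and surfacing what changed in the physical world:

```python
from dataclasses import dataclass

# Toy "living map" sketch (illustrative only; names are invented).

@dataclass
class MapEntry:
    features: set      # e.g. {"statue", "bench", "mural"}
    last_updated: int  # timestamp of the most recent scan

class LivingMap:
    def __init__(self):
        self.entries = {}

    def ingest_scan(self, location, features, timestamp):
        """Merge a new scan and report what appeared or disappeared."""
        old = self.entries.get(location)
        added = features - old.features if old else features
        removed = old.features - features if old else set()
        self.entries[location] = MapEntry(features, timestamp)
        return added, removed

world = LivingMap()
world.ingest_scan("central_park_statue", {"statue", "bench"}, 1)
added, removed = world.ingest_scan("central_park_statue", {"statue", "mural"}, 2)
print(added, removed)  # → {'mural'} {'bench'}
```

The point of the sketch is the update-and-diff step: a map that merely stores scans is a snapshot, while one that diffs each new scan against the last knows not just where things are but when they changed.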
To make this work, Niantic Spatial has trained more than 50 million neural networks with over 150 trillion parameters, enabling precise positioning in more than 1 million locations worldwide.
Delivery Robots Are Just the Start
The Coco Robotics partnership, announced March 10, 2026, is the most visible commercial application of this technology — but it’s far from the only one.
Coco Robotics operates approximately 1,000 flight-case-sized delivery robots across Los Angeles, Chicago, Miami, Jersey City, and Helsinki. Each robot can carry up to eight extra-large pizzas or four grocery bags. In February 2026, Coco launched the Coco 2, designed to transition from human-assisted delivery to fully autonomous operation.
VPS is what makes that autonomy possible. Where GPS drifts by 50 meters, VPS pinpoints to centimeters — the difference between a robot that delivers your pizza and one that drives into traffic.
But here’s where the story gets darker.
The Military Contract
On December 16, 2025, Niantic Spatial announced a partnership with Vantor — the Earth intelligence firm formerly known as Maxar Intelligence — to develop GPS-denied navigation for military platforms.
The collaboration combines Vantor’s visual navigation software for aerial platforms with Niantic Spatial’s ground-level VPS. Together, they’re building a system that allows drones, ground vehicles, and dismounted soldiers to share a common coordinate framework when GPS is unavailable — whether due to jamming, spoofing, or deliberate denial by adversaries.
In early testing, the combined system achieved a 70% reduction in positioning error, down to roughly 1.5 meters in many scenarios. Field testing was planned for early 2026.
Let that sink in: a map built by people catching Pokémon is now being adapted for military positioning in contested environments. The same data that helped you find a Snorlax in Central Park is helping drones navigate without satellites in GPS-denied battlefields.
Players scanning a statue in their local park for 50 experience points were, in effect, contributing to a dual-use geospatial intelligence platform. They were never asked. They were never told.
The Large Geospatial Model: Building a Digital Twin of Earth
Niantic Spatial’s ultimate ambition goes beyond delivery robots and military contracts. The company is building what it calls a Large Geospatial Model (LGM) — think of it as an LLM, but for physical space.
Where a Large Language Model predicts the next word in a sentence, a Large Geospatial Model predicts the structure and features of the physical world. It reconstructs, localizes, and understands spaces using data from ground-level cameras, overhead sensors, and partner datasets.
CEO John Hanke has described the vision as “a virtual simulation of the world that changes as the world does.” By the end of 2026, according to Niantic Spatial’s own blog, the most capable AI systems “will no longer be trapped behind screens. They will navigate our streets, factories, and homes using a shared understanding of space.”
The LGM serves three functions:
- Reconstruct — building 3D maps and digital twins from sensor data
- Localize — pinpointing position to centimeter accuracy using visual matching
- Understand — providing real-time contextual awareness through semantic analysis, detecting and tracking over 200 types of objects
Niantic Spatial also announced a multi-year strategic partnership with Snap Inc. to build AI-powered maps, signaling that the LGM is being positioned as foundational infrastructure — not just for robots, but for AR glasses, autonomous vehicles, smart city systems, and any application that needs to understand physical space.
The company’s stated goal is to become the spatial intelligence layer beneath every machine that moves through the real world. And the training data? Contributed, for free, by hundreds of millions of gamers who thought they were playing a game.
The Consent Problem
Here’s the core privacy question: Did players consent to this?
Niantic has consistently maintained that AR scanning was opt-in. Players had to visit a specific publicly accessible location and actively tap to begin a scan. Regular gameplay — walking around, catching Pokémon, battling at gyms — did not contribute to the scanning dataset. This is technically true.
But “opt-in” and “informed consent” are not the same thing.
When players tapped that scan button, they were told they were helping improve the game experience. The framing was always about Pokémon Go — better AR features, more immersive gameplay, enhanced Pokéstop interactions. At no point were players told:
- Your scans will be used to train a commercial positioning system
- That system will be sold to robotics companies
- The same data will power military navigation technology
- Niantic will spin off the data into a separate AI company
- That company will be valued at hundreds of millions of dollars based largely on your contributions
This is a textbook violation of purpose limitation — a core principle of data protection laws like the GDPR. Data collected for one stated purpose (game improvement) cannot be repurposed for a materially different purpose (commercial robotics, military positioning) without fresh consent.
Niantic Spatial has stated that future data collection will involve an “opt-in third party service” — an implicit acknowledgment that the original consent framework was insufficient for the current use cases.
But what about the 30 billion images already collected? That horse has left the barn. The neural networks are trained. The VPS is operational. The military partnership is in field testing. Retroactive consent is not consent.
The Unpaid Labor Question
There’s another dimension to this story that goes beyond privacy into basic economic fairness.
Pokémon Go players who participated in AR mapping performed work. They walked to specific locations, pointed their cameras at specific objects, and systematically captured multi-angle visual data. This is precisely the kind of work that professional mapping companies pay surveyors to do. Google pays drivers. Apple pays contractors. Niantic paid its contributors in virtual Poké Balls.
The value of their collective labor is now embedded in a company worth hundreds of millions of dollars, powering products and partnerships worth billions. The players received nothing. No compensation. No equity. No revenue share. Not even a notification that their contributions had been repurposed.
As multiple commentators have noted, this makes Pokémon Go one of the most successful unpaid labor extraction operations in tech history — not through deception, but through the simple alchemy of turning work into play and data into dollars.
What This Means for Everyone — Even Non-Players
You don’t have to have played Pokémon Go to be affected by this.
The 30 billion images in Niantic’s database don’t just contain pictures of Pokéstops. They contain everything the camera could see: pedestrians, license plates, building interiors visible through windows, private homes adjacent to public spaces, people sitting in cafés, children in playgrounds. The images capture the world as it existed at the moment of scanning — including everyone and everything in frame.
Niantic states that it anonymizes and processes data to strip identifying information. But the VPS doesn’t need to identify individuals to raise privacy concerns. It creates a persistent, updatable, centimeter-accurate map of the physical world that any paying customer can access. That map knows what your neighborhood looks like. It knows when things change. It knows the spatial layout of public spaces you move through daily.
The Coco Robotics partnership means delivery robots equipped with cameras are now navigating your sidewalk using this map — and potentially contributing new data back into the system. Niantic’s “living map” concept explicitly relies on continuous updates. Today it’s delivery robots. Tomorrow it could be security drones, retail analytics systems, or insurance assessors.
The question isn’t whether Niantic’s map will be misused. It’s whether a centimeter-accurate, continuously updated digital twin of the physical world can exist without being misused.
The Regulatory Vacuum
As of March 2026, no major regulator has taken action against Niantic Spatial over its data repurposing practices. This is not entirely surprising — the practice falls into a gray area that existing privacy frameworks struggle to address:
- GDPR requires purpose limitation and informed consent, but enforcement against US-based companies using data collected globally over many years is complex
- CCPA/CPRA gives California residents the right to know what data is collected and to opt out of its sale, but Niantic Spatial states it does not “sell” personal data as defined by state privacy laws — a careful legal distinction
- No US federal privacy law exists that would comprehensively address the repurposing of crowdsourced spatial data for commercial and military use
Niantic Spatial has published both consumer and business privacy policies, along with an SDK-specific data privacy framework and a law enforcement information page. The company appears to be positioning itself carefully for an eventual regulatory reckoning.
But the structural problem remains: the data was already collected, the models were already trained, and the commercial partnerships are already operational. Even if regulators act, unwinding the use of data already baked into 50 million neural networks is practically impossible.
This is the new playbook for data extraction: collect now, repackage later, and let the lawyers sort it out after the models are trained.
What Niantic’s Playbook Teaches Us
The Pokémon Go story is not unique. It’s a template.
Every app that asks you to take a photo, scan a QR code, enable your camera, or share your location is potentially building a dataset that will outlive the app’s original purpose. The pattern is always the same:
- Offer a free, entertaining service that requires data-rich interactions
- Collect far more data than the service requires, justified by vague “improvement” language in the terms of service
- Build proprietary models on the collected data
- Spin off or license the models to commercial and government customers
- Claim the data was voluntarily provided and point to the terms of service
Pokémon Go is simply the most dramatic example because the scale was unprecedented and the gap between the stated purpose (a fun AR game) and the actual use (military positioning systems) is so vast.
But the same logic applies to:
- Fitness apps that map your running routes (and sell aggregate movement data)
- Smart home cameras that “improve AI” (and train facial recognition models)
- Navigation apps that record your driving (and build traffic prediction products)
- Social media platforms that process your photos (and train object recognition systems)
In every case, the consumer-facing product is the collection mechanism. The data is the product.
What You Can Do
The Pokémon Go spatial data is already collected and cannot be recalled. But you can take steps to limit your exposure to the same playbook going forward:
For Current Pokémon Go Players
- Disable AR Mapping tasks in your game settings. Multiple guides exist for opting out of scanning features while still playing the game normally
- Revoke camera permissions for the app when not actively using AR features. The game functions without them
- Review what data Niantic/Scopely holds on you by submitting a data access request through the game’s privacy settings
- Delete your scanning contributions if the option is available through your account privacy controls
For Everyone
- Audit app permissions regularly. Any app with camera and location access is a potential spatial data collector. Revoke permissions you don’t actively need
- Be skeptical of “gamified” data collection. When an app rewards you for scanning, photographing, or mapping real-world locations, ask yourself: who benefits from this data in five years?
- Use a privacy-focused DNS like NextDNS or a local Pi-hole to block telemetry and data collection endpoints from apps
- Read privacy policies for data-sharing clauses — specifically look for language about “partners,” “affiliates,” “service improvement,” and “anonymized data sharing.” These are the loopholes
- Support privacy legislation. The US still lacks a comprehensive federal privacy law. Contact your representatives and support organizations like the EFF, EPIC, and Access Now that advocate for stronger data protection
- Teach your kids. Pokémon Go’s most active demographic is young people. Help them understand that when something is free, the data is the price
For Policymakers
- Close the purpose limitation gap. Data collected for one purpose should not be repurposeable for materially different commercial or military applications without fresh, informed consent
- Mandate data provenance disclosures. Companies selling AI models should be required to disclose the sources and consent status of their training data
- Establish crowdsourced data protections. When millions of individuals collectively create a valuable dataset, they should have collective rights over its use
The Bigger Picture
In 2016, Pokémon Go felt like magic — a moment when technology brought people together, got them outside, made them explore their neighborhoods. For many, it genuinely was that.
But behind the Pikachu, behind the Pokéstops, behind the gym battles and community days, a machine was learning. Every scan, every photo, every walk around the block was training a system that would eventually power delivery robots on sidewalks and military drones in GPS-denied environments.
Niantic didn’t hide what it was doing. It just never told anyone what it was doing it for.
The 30 billion images are captured. The neural networks are trained. The commercial partnerships are signed. The delivery robots are rolling. The military field tests are underway. And somewhere, right now, a Pokémon Go player is scanning a Pokéstop for 50 experience points, still feeding the machine.
You were never the player. You were always the product.
If you found this article valuable, share it with someone who plays Pokémon Go. They deserve to know what they’ve been building.