People are taking photos and videos all over major cities, all the time, from every angle. Theoretically, with enough of them, you could map every street and building — wait, did I say theoretically? I meant in practice, as the VarCity project has demonstrated with Zurich, Switzerland.
This multi-year effort has taken images from numerous online sources — social media, public webcams, transit cameras, aerial shots — and analyzed them to create a 3D map of the city. It’s kind of like the inverse of Google Street View: the photos aren’t illustrating the map, they’re the source of the map itself.
Because that’s the case, the VarCity data is extra rich. Over time, webcams pointed down streets show which direction traffic flows, when people walk on them, and when lights tend to go out. Pictures taken from different angles of the same building provide dimensional data like how big windows are and the surface area of walls.
The algorithms created and tuned over years by the team at ETH Zurich can also tell the difference between sidewalk and road, pavement and grass, and so on. The resulting model looks rough, but those blobby edges and shaggy cars can easily be interpreted and refit with more precision.
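To give a rough sense of what this kind of per-pixel scene labeling involves, here is a minimal sketch using an off-the-shelf segmentation network. To be clear, this is not VarCity's pipeline; the model choice, label set, and the file name "street.jpg" are illustrative assumptions.

```python
# Illustrative sketch only: per-pixel labeling of a street photo with an
# off-the-shelf segmentation model (DeepLabV3), NOT the VarCity algorithms.
# Assumes torchvision >= 0.13 and a local image "street.jpg" (hypothetical).
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights="DEFAULT").eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("street.jpg").convert("RGB")
batch = preprocess(img).unsqueeze(0)          # shape: (1, 3, H, W)

with torch.no_grad():
    logits = model(batch)["out"]              # (1, num_classes, H, W)

# Each pixel gets the class with the highest score. A city-scale system
# would be trained on street-scene classes such as road, sidewalk,
# building and vegetation (e.g. the Cityscapes label set) rather than
# this model's default categories.
labels = logits.argmax(dim=1).squeeze(0)      # (H, W) tensor of class ids
print(labels.unique())
```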
The idea is that you could set these algorithms loose on other large collections of imagery and automatically produce a similarly rich dataset without having to do the collection yourself.
“The more images and videos the platform can evaluate, the more precise the model becomes,” said a postdoc working on the project, Kenneth Vanhoey, in an ETH Zurich news release. “The aim of our project was to develop the algorithms for such 3D city models, assuming that the volume of available images and videos will also increase dramatically in the years ahead.”
Several startups have already emerged from the project: Spectando and Casalva offer virtual building inspections and damage analysis. Parquery monitors parking spaces in real time through its 3D knowledge of the city. UniqFEED (in a rather different vein) monitors broadcast games to tell advertisers and players how long they’re featured in the feed.
The video above summarizes the research, but a longer one going deeper into the data and showing off the resulting model will appear next week.