Not long ago, maps were a static, printed means of showing a person how to get from point A to point B. Today, they are a “digital canvas” to make data more useful and more transformative for people, organizations and machines.
“We were all a group of developers trying to help these organizations be more data driven and drive change,” Gundersen says in an interview. In doing so, Gundersen and his colleagues discovered that “a lot of that data was geo in nature. So we had to get good at making maps. The map became a canvas for us to make data more operational,” he added.
If you think digital maps at their inception were mind-blowing, today’s and tomorrow’s mapping capabilities are otherworldly, thanks to vast amounts of sensor and user data that far surpass the relatively simple embedded road sensors of the early days of digital mapping. Gundersen will offer a vision for how this will play out – and transform our world – in his keynote, “Building a Living Map of the World Updated from Billions of Sensors,” at 11 a.m. today.
The journey so far
Mapbox was founded in 2010 with the notion that developers needed a “Photoshop for maps,” Gundersen said.
“We wanted to take a dense set of data – for instance from doctors reporting malaria outbreaks – and put it on a map that was highly interactive, super-fast, and worked in low-bandwidth environments,” he said.
Two years after its founding, the company offered a maps SDK for iOS, followed in 2014 by a maps SDK for Android developers. By 2015, developers could take advantage of Mapbox Studio to design custom maps, along with a host of other tools, as the company expanded its core concept of a platform for data and mapping.
Mapping the mappers
Of the four global mapping companies – Mapbox, Google, HERE, and TomTom – Mapbox and Google are the leaders in sensor data and AI map data generation. The sensor data generated across its platform gives Mapbox a comprehensive traffic network and excellent estimated-time-of-arrival (ETA) accuracy. On top of the map data asset, Mapbox differentiates by allowing users to upload and distribute their own data and customize the navigation and user experience, according to Gundersen.
In embedded navigation, Mapbox traffic and ETA models are built by actual drivers and refined with every drive. The company benchmarks control routes in key regions to test competitive and “ground-truth” ETAs against real conditions. “We tune our ETAs based on our customers’ areas of interest,” Gundersen said.
“Every app that installs Mapbox sends back longitude, time stamp, directionality information,” Gundersen says. “The cool part is developers love us because we only collect anonymous data. But we collect all this data to make a road network. What I want to focus on in our talk is that what was pretty revolutionary just two years ago has now become a feature in which we can take 300 million miles of data a day and turn it into a map that updates and adjusts based on how people are moving.”
Gundersen likes to call this a “living feedback loop.”
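To make the idea concrete, here is a minimal, hypothetical sketch of such a feedback loop. The data shapes and segment names are invented for illustration and are not Mapbox’s actual telemetry format; the point is simply that anonymous probe points can be aggregated into live, per-road-segment speed estimates that update the map as people move.

```python
from collections import defaultdict

# Hypothetical probe data: (road segment id, unix timestamp, speed in km/h).
# Real telemetry would carry coordinates and headings; this is a toy stand-in.
probes = [
    ("seg_a", 1600000000, 52.0),
    ("seg_a", 1600000010, 48.0),
    ("seg_b", 1600000005, 30.0),
    ("seg_b", 1600000015, 34.0),
    ("seg_b", 1600000025, 29.0),
]

def update_segment_speeds(probes):
    """Aggregate anonymous probe speeds into per-segment average speeds."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for segment, _ts, speed in probes:
        sums[segment] += speed
        counts[segment] += 1
    return {seg: sums[seg] / counts[seg] for seg in sums}

# Each batch of incoming probes refreshes the live speed estimates,
# closing the loop between drivers and the map they drive on.
speeds = update_segment_speeds(probes)
print(speeds)  # {'seg_a': 50.0, 'seg_b': 31.0}
```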
Data’s transformative potential clearly gets Gundersen excited.
“The real magic, and I think what’s important for your audiences, is thinking about just the power of harnessing data coming off sensors,” he said. “The variety of sensors in the marketplace now and in the future creates a need for a platform approach. By designing for a multitude of sensor types and use cases, for example in our Vision SDK product, we’re preparing a toolbox with maximum flexibility and customization in mind.”
Enabling new applications
“We’re now making maps not just from GPS data, but we’re able to start using vision, front-facing cameras, and using a level of artificial intelligence directly on the chipset, on detection, classification, all running locally, in battery-powered environments,” he said. “This makes this, I think, especially relevant to many of the folks working with Arm on pretty cutting-edge stuff.”
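The architecture Gundersen describes can be sketched roughly as follows. This is a hypothetical illustration, not the Vision SDK’s actual API: a stub stands in for the on-chip neural network, and the key property being shown is that inference runs locally, so only compact observations (not raw imagery) ever leave the device.

```python
# Hypothetical sketch: detection and classification run locally on each
# camera frame; the device uploads only small, structured observations.

def detect_objects(frame):
    """Stand-in for an on-device neural network returning (label, confidence).
    A real system would run a quantized model on the device's NPU."""
    return [("speed_limit_sign", 0.92)] if frame.get("has_sign") else []

def process_frame(frame, threshold=0.5):
    """Run local inference and keep only high-confidence observations."""
    return [
        {"label": label, "confidence": conf}
        for label, conf in detect_objects(frame)
        if conf >= threshold
    ]

# Frames with nothing recognizable produce no network traffic at all --
# the filtering happens before anything is sent, which is what makes
# this viable in low-bandwidth, battery-powered environments.
print(process_frame({"has_sign": True}))
```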
Mapbox technology is not just for drivers. In fact, its data-centric platform approach has value in untold sectors, including real estate, shipping and logistics, and healthcare. And as AI and machine learning evolve and 5G emerges, there are applications that can’t even be imagined today, especially as compute power expands at the edge.
“Our maps running on Sprint’s mobile edge computing in the Curiosity IoT Network Cores connect to the Vision SDK — launched with Arm — to detect, categorize and incorporate changes with super low latency,” Gundersen said. “Deployment at the network edge means our map APIs and the neural networks used for analyzing imagery from front-facing cameras run on Sprint’s endpoints, bringing live location data within 10ms of any user on the network.”
With the emergence of 5G networks, sub-10 ms latency is not only possible but critical for autonomous vehicles and robots. “Sprint’s mapping platform allows smart machines not only to know what is along their path but also to react to changes in that path on the spot,” he said.
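Some back-of-the-envelope arithmetic (not from the article; the speeds and the 100 ms cloud figure are illustrative assumptions) shows why that latency budget matters: it bounds how far a vehicle travels before new map information can reach it.

```python
# Illustrative latency arithmetic: meters traveled while waiting on a
# network round trip. Speeds and the cloud latency figure are assumptions.

def distance_during_latency(speed_kmh, latency_ms):
    """Meters a vehicle covers during a given latency at a given speed."""
    speed_ms = speed_kmh / 3.6          # convert km/h to m/s
    return speed_ms * (latency_ms / 1000.0)

highway_speed = 108.0  # km/h, i.e. 30 m/s

# With a sub-10 ms edge round trip, the vehicle moves about 0.3 m
# before an update arrives; a 100 ms cloud round trip costs ~3 m.
print(distance_during_latency(highway_speed, 10))   # 0.3
print(distance_during_latency(highway_speed, 100))  # 3.0
```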
“That rethinking involves repositioning our AI closer to the edge,” Gundersen said. “The Vision SDK, now in private beta, runs neural networks for object detection and segmentation directly on the mobile device, understanding the roadside environment practically at the side of the road.”
Gundersen will deliver his keynote at Arm TechCon 2019 at the San Jose Convention Center today, Wednesday, Oct. 9, at 11 a.m.