This article may reference legacy company names: Continental Mapping, GISinc, or TSG Solutions. These three companies merged in January 2021 to form a new geospatial leader: Axim Geospatial.

The geospatial industry is a prime example of how artificial intelligence can be leveraged to improve products and services. Not all AI is created equal, however, and that can be a problem.

The “AI Arms Race” is on within the geospatial industry to develop algorithms that perform a variety of capabilities, ranging from feature identification to analytics. Software companies are building AI-driven functionality into their products, open competitions are spurring the development of algorithms to meet particular needs, and organizations such as mine are developing tools to help them assimilate, process, and evaluate geospatial data in new and more efficient ways.

One example expediting the development of publicly available algorithms within the geospatial industry is IARPA’s Functional Map of the World (fMoW) challenge (info here). fMoW is an exciting venture focused on accelerating the development of deep learning (DL) algorithms that can identify common features in satellite imagery. The latest round of the competition has been running for some time, with results to be announced any day, so check out the website.
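To make that task concrete, the sketch below shows what "identifying a feature in an image chip" looks like in practice: a single classification pass over one chip with a pretrained model. The model, weights, and file name here are placeholders (a generic ImageNet-pretrained ResNet via torchvision 0.13+), not an fMoW baseline; an actual entrant would use a model trained on the challenge's own labeled categories.

```python
# Illustrative only: classify one image chip with a pretrained CNN.
# The model and the file "image_chip.jpg" are placeholders, not fMoW assets.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)  # requires torchvision >= 0.13
model.eval()

chip = Image.open("image_chip.jpg").convert("RGB")  # hypothetical satellite image chip
with torch.no_grad():
    logits = model(preprocess(chip).unsqueeze(0))

predicted_class = logits.argmax(dim=1).item()  # index into the model's label set
print(predicted_class)
```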

A common challenge when integrating AI into an organization is ensuring that subject matter experts trust the algorithm and understand how it arrived at the answer it is providing. We see this in our implementation of AI-based tools such as Data Fitness within our Quality Control team. Data Fitness measures and evaluates the quality of geospatial data, with a focus on hard-to-measure aspects such as errors of omission, errors of commission, and attribute correctness. QC analysts are still inclined to check the entire dataset even after the algorithm has evaluated and measured the data. How do you really know it’s correct when the algorithm doesn’t explain itself? At present we solve that through extensive testing and benchmarking, tied into defined workflow procedures that articulate how to handle scores for different use cases. This is an area we’re focused on improving.
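For readers less familiar with those terms, here is a minimal sketch of how errors of omission and commission can be scored by comparing extracted features against trusted reference data. This is an illustrative example only, not the Data Fitness implementation; the matching function and distance tolerance are hypothetical.

```python
def score_feature_extraction(extracted, reference, match_fn):
    """Score an extracted feature set against trusted reference data.

    `extracted` and `reference` are lists of features; `match_fn(a, b)` returns
    True when two features represent the same real-world object. Illustrative
    sketch only, not the Data Fitness implementation.
    """
    matched_ref = set()
    commission = 0  # extracted features with no counterpart in the reference

    for feat in extracted:
        hit = next(
            (i for i, ref in enumerate(reference)
             if i not in matched_ref and match_fn(feat, ref)),
            None,
        )
        if hit is None:
            commission += 1
        else:
            matched_ref.add(hit)

    omission = len(reference) - len(matched_ref)  # reference features never captured

    return {
        "omission_rate": omission / len(reference) if reference else 0.0,
        "commission_rate": commission / len(extracted) if extracted else 0.0,
    }


# Example: treat two point features as the same object if they fall within
# 5 meters of each other (a made-up tolerance for illustration).
near = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5 <= 5.0
print(score_feature_extraction(extracted=[(0, 0), (100, 100)],
                               reference=[(1, 1), (50, 50)],
                               match_fn=near))
# -> {'omission_rate': 0.5, 'commission_rate': 0.5}
```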

Interestingly, DARPA is undertaking a new program called ‘Explainable AI (XAI)’ (info here) that is focused on developing new AI tools where an algorithm can give an answer and explain how it was derived. The goal in our industry is to expand upon the answer to give the reasoning behind it. For example, the algorithm should not just tell us what something is within an image (e.g., a pothole, paint stripe, or bridge) but also tell us why it thinks it is what it’s suggesting (e.g., it’s a paint stripe because its width is X, its color is white or yellow, and its reflectance is Y). Eleven research grants have been awarded and development is underway. Look for interesting results to come from this research.
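To illustrate the kind of output we are after, the toy example below returns a label together with the evidence behind it. The feature names and thresholds are invented purely for illustration; real XAI research aims to attach this sort of reasoning to learned models rather than hand-written rules.

```python
def classify_with_explanation(width_m, color, reflectance):
    """Toy 'explainable' decision: return a label plus the evidence behind it.

    The thresholds below are hypothetical values chosen only to illustrate
    the idea of an answer that comes with its reasoning.
    """
    reasons = []
    if 0.08 <= width_m <= 0.30:
        reasons.append(f"width of {width_m:.2f} m is typical of lane markings")
    if color in ("white", "yellow"):
        reasons.append(f"color '{color}' matches standard paint colors")
    if reflectance >= 0.6:
        reasons.append(f"reflectance of {reflectance:.2f} suggests retroreflective paint")

    label = "paint stripe" if len(reasons) >= 2 else "unclassified"
    return label, reasons


label, reasons = classify_with_explanation(width_m=0.15, color="white", reflectance=0.8)
print(label)          # paint stripe
for r in reasons:     # the "why" behind the answer
    print(" -", r)
```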

Let’s hold ourselves accountable to explain our algorithms (to the extent we can without giving away proprietary information) and build them in ways that allow us to validate their results.

Get Started

We hope this article has provided some value to you! If you ever need additional help, don't hesitate to reach out to our team. Contact us today!


Topics: Mapping & Visualization, Machine Learning & AI, Public Works