Author: Joe Ambrogne

A drone executes a grid flight path over a target area

Autonomous Flight Apps: An Introduction for Aspiring UAV Mappers

Why should you use a drone flight planning app for photogrammetry?

As UAV mapping grows in popularity, aspiring photogrammetry pilots are bound to hear about drone flight planning software (sometimes referred to as flight apps). This is likely to be a new concept for first-time commercial pilots, and maybe even for businesses specializing in manual operations like photography or newsgathering. But a lot of the literature just assumes everyone knows what a flight app is and why it’s critical for photogrammetry. Read on if you missed the memo.

This isn’t going to be another of those “best apps for drone flying” lists, but a basic explanation of what flight apps are, how they work in the field, and why pilots absolutely need them for commercial UAV mapping.

What is a flight app?

A flight app is a computer program that lets you plan, execute, and monitor autonomous drone flights from your computer or mobile device. Many flight apps also allow you to configure photogrammetry mission settings such as camera angle and image overlap, further simplifying your job.

Since they exist on your PC, laptop, tablet, or smartphone, flight apps don’t typically come packaged with commercial-off-the-shelf drones. You’ll have to buy one separately and download it.

A drone, laptop, tablet, smartphone, and FPV headset
With a flight app, you run the mission from a laptop, tablet, or smartphone instead of your manual controller.

How do they work in the field?

No two flight apps are exactly alike. But here’s a hypothetical example of how you might use one to conduct an autonomous photogrammetry flight:

Preflight Planning

You arrive at the mission site with your drone and a tablet on which you’ve installed your flight app. When you boot up your tablet, the flight app displays a GPS map of the surrounding airspace. You draw directly on this map—either manually selecting waypoints for the drone to fly over or designating a coverage area and letting the app design a suitable flight pattern. Then you open other menus to set your drone camera angle, the number of photos it should take, and the amount of overlap between adjacent photos.


With everything set, you tap a button on the tablet and the drone takes off on its own towards the mission area. At this point, you probably put down your tablet and pick up your drone’s controller, if only so you can assume manual control in an emergency. Ideally, the drone follows its preprogrammed flight pattern, takes the necessary photos, and lands without your intervention.


With your drone safely on the ground, the flight app begins automatically downloading photos from your drone’s SD card to your tablet and may even kick off an image processing job.

Why are they critical for UAV mapping?

Flight apps make professional UAV mappers more effective at their jobs. Here are a few reasons why you need them:


Precision and Efficiency

While a skilled pilot can achieve a lot, it’s very hard to do professional photogrammetry under manual flight control. Clients expect lifelike 3D models, but most photogrammetry software is unforgiving of variances in photo resolution caused by even slight changes in a drone’s airspeed, altitude, and pitch. Furthermore, most photogrammetry missions call for taking photos with a precise front and side overlap (70%, for instance). This is virtually impossible for a human to achieve consistently, but computers do it effortlessly.
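To see why this is a job for software, consider the arithmetic behind overlap. A flight app can derive shutter-trigger spacing directly from the overlap percentage and the photo’s ground footprint. Here is a minimal Python sketch; the footprint figures are hypothetical, not from any particular drone:

```python
# Sketch: how a flight app might convert an overlap percentage into
# shutter-trigger spacing. The footprint values below are hypothetical.

def trigger_spacing(footprint_m: float, overlap_pct: float) -> float:
    """Distance the drone travels between photos for a given overlap."""
    return footprint_m * (1 - overlap_pct / 100)

# Suppose each photo covers 60 m (front-to-back) and 90 m (side-to-side)
# of ground at the planned altitude, and we want 70% overlap both ways.
front_spacing = trigger_spacing(60, 70)  # trigger a photo every 18 m
side_spacing = trigger_spacing(90, 70)   # space flight lines 27 m apart

print(front_spacing, side_spacing)
```

An autopilot holding these numbers to the meter is trivial; a human pilot juggling sticks, camera, and line-of-sight is not.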


A good flight app not only executes missions with inhuman precision, but also with inhuman efficiency. The first time you use a flight app, it may shock you to see how quickly it completes the mission. With modern drone batteries lasting no more than 30 minutes, being able to fly drones autonomously can save you valuable time and let you cover much larger areas.

Reduced Pilot/Observer Workloads

If you want to run a single-pilot UAV mapping business, flight apps are essential. Anyone who has tried conducting a manual photogrammetry flight solo knows they risk breaking Part 107 regulations every time they glance momentarily from their drone to their camera to line up a shot. More importantly, the danger of actually losing visual line-of-sight increases with the size of the mission area. (You’d think by now that all drone manufacturers would paint their airframes bright orange for visibility, but for some reason they love gray.)

By using a flight app, you can look directly at your drone throughout the flight while the app handles the camera work. This means you can conceivably photograph large-scale areas safely without having to hire a visual observer.

All-In-One UAV Mapping Tool

Think about the complex workflow you must follow to conduct a single photogrammetry flight. Before takeoff, you need to check an app like NOTAM Search or B4UFly for flight restrictions. After each flight, you need to manually transfer your photos from your drone to your smartphone or PC, and then from there to a desktop or cloud-based photogrammetry provider.

A good UAV mapping flight app is like a Swiss army knife for photogrammetry, handling autonomous flight planning, restricted airspace awareness, data capture, and maybe even image processing all in one application.

A drone transfers its image data to a laptop and then to the cloud for processing
After a flight, some apps may automatically transfer your images for photogrammetry processing.

Is all UAV mapping automated?

In general, manual piloting skills still matter in UAV mapping. Most flight apps build what pilots refer to as “lawnmower” flight paths, sweeping back and forth over the target area at a relatively consistent altitude and maintaining a fixed camera angle. While this is great for capturing flat, multi-acre sites, it doesn’t give the drone a clear view of vertical structures. Using a flight app over a forest might result in a 3D model of trees with clear canopies but blurry, distorted trunks. To get around this, pilots may still supplement their automated flights with manual flights—letting the app survey the entire area from above, and then conducting manual flights that orbit tall structures to capture their side views or even fly underneath their overhanging surfaces.
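The “lawnmower” pattern itself is simple geometry. Here is a Python sketch of how an app might generate such a path over a rectangular area; the coordinates, area size, and line spacing are made up for illustration:

```python
# Sketch: generating a "lawnmower" flight path over a rectangular area.
# Coordinates are local meters from the takeoff corner; the dimensions
# and line spacing are hypothetical examples.

def lawnmower_waypoints(width_m, height_m, line_spacing_m):
    """Return (x, y) waypoints sweeping back and forth across the area."""
    waypoints = []
    x, direction = 0.0, 1
    while x <= width_m:
        # Fly one pass along the y-axis, alternating direction each line.
        start, end = (0.0, height_m) if direction == 1 else (height_m, 0.0)
        waypoints.append((x, start))
        waypoints.append((x, end))
        x += line_spacing_m
        direction *= -1
    return waypoints

path = lawnmower_waypoints(100, 50, 25)
# Five flight lines at x = 0, 25, 50, 75, 100, alternating up and down.
print(path)
```

Notice that every waypoint sits at the same implied altitude, which is exactly why this pattern struggles with vertical structures.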

That said, some companies have begun offering flight apps designed specifically for vertical structures, and they are likely to become more commonplace as the tech evolves.

How to choose your UAV mapping flight app?

There are many flight apps on the market, but they aren’t all necessarily suited to every UAV mapping mission. If you decide to purchase a flight app for your business, review these common but important features:

Drone Compatibility

Before choosing a flight app, make sure your drone is compatible. Some drones, especially new models, may not support all available apps. For example, the DJI Mini 2 was released in November 2020, but its users had to wait for a January 2022 firmware update before they could use any autonomous flight app. As of this writing, many app developers are still playing catch-up.

Waypoint Flight Planning

This is a core feature of flight apps, but it’s worth mentioning. Your app should have an intuitive map that lets you draw the target area using lines, points, and polygons, and monitor the drone’s real-time position during the flight.

Camera/Overlap Control

Flight apps tailored for UAV mapping also let you configure camera settings like angle, shutter speed, and ISO, as well as the overlap between photos.

Flight Restriction Overlays

This important safety feature lets you see TFRs and other no-fly zones on the map as you plan your flight and may even prevent you from plotting waypoints in illegal airspace.

Integration with Photogrammetry Software

The best flight apps are part of a larger UAV mapping suite and can save you time by automatically transferring your images from the drone’s SD card to your mobile device and from there to your photogrammetry engine for processing.

Optional: Vertical Structures

As stated above, most UAV mappers can supplement their automated flights with manual flights to capture the occasional tower, tree, and building. But if a significant part of your business is modeling vertical structures, it might be worth finding an app that offers more than the standard “lawnmower” flight paths. Be aware that you may have to pay a steeper price.

Final thoughts

We hope this post has demystified the concept of flight apps and convinced you of their value in UAV mapping/photogrammetry. Let us know if we’ve missed any important app features. Most of all, fly safely.

How Hard is it for Manned Pilots to Pass the Part 107 Exam?

You’re a traditional manned (crewed) pilot who wants to fly drones for hire. How ready are you? What and how much do you need to study? In this post, we will help you harness your previous flight training to tackle the drone exam. 


In a previous post, we outlined the steps non-pilots should take to study for, schedule, and pass the Part 107 Remote Pilot License exam. We also noted that licensed private pilots can instead take the FAA’s short online course thanks to their preexisting knowledge. But there’s a caveat: only pilots who meet the recency requirements of 14 CFR Part 61 are eligible to bypass the knowledge exam. If you’ve been out of the game for years, either because you retired from active service or were simply too busy to fly, you’ll have to take the test like everyone else.

This puts you in an odd spot. On one hand, you’re right to think the process should be shorter and easier for you due to your previous flight training. On the other hand, flying drones is certainly not like flying a Cessna. Just how ready are you to waltz into that exam room?  

Below, we reexamine the Part 107 curriculum through the eyes of rusty pilots like you who aren’t sure where to jump back in. Let us help you crush the Part 107 exam in short order.  

Brief overview: Part 107 versus Part 61

First things first. The entire Part 107 certification process is the equivalent of what your Part 61 CFI referred to as “ground school.” There are no practical skills requirements. Whereas Part 61 training prescribes mandatory flight-hour minimums (including dual, solo, and night), an aeromedical evaluation, and a practical flight test, Part 107 students will never even speak with a DPE. Once you pass the written exam, you earn your commercial drone pilot license with all the rights and responsibilities it entails. It’s going to feel weird.

Second, you probably know most of what will be on your Part 107 exam because the material is very similar to what you learned before. You might even recognize some of the test questions from your old ASA Private Pilot Test Prep Book. But there is still a sizeable amount of new material to absorb, and we will go over that below. 

NOTE: Should you still enroll in a drone flight-training program? There are many companies out there offering practical, hands-on training, and they are probably worth your investment. But unlike traditional flight school, such training is not required to earn your license. We suggest passing the knowledge exam separately and learning practical skills—whether through a paid program or self-led practice—on your own time.

Step 1: Review the FAA’s Part 107 training materials

Previously, we told you where to find the FAA’s Part 107 training materials. Now, we’ll help you prioritize each one in your rusty pilot training plan. 

#1: Title 14 CFR Part 107 – Small Unmanned Aircraft Systems 

Read this first. It contains virtually all the new information you will have to learn—specifically, how to fly drones for hire legally in the United States. Part 107 is mercifully short compared to other sections of the FAR-AIM, but you still need to know it well to pass the test. This is especially true if you start your career as an independent or freelance pilot. Remember: when you go solo, you are the expert. Nobody else will warn you when you are about to break a law. 

#2: AC 107-2 – Small Unmanned Aircraft Systems (SUAS) 

Read this next. It’s a supplement to Part 107, providing additional guidance on certain regulations. Some highlights include: 

  • Chapter 2, which provides a useful list of FAA reference materials in addition to those we’ve listed here 
  • Chapter 5, which offers practical techniques for communication and transfer of controls between pilots, estimating groundspeed and altitude, and operating over people or from a moving vehicle 
  • Appendix C, which provides a neat preflight inspection condition chart for people who aren’t maintenance experts (useful for making go-no-go decisions) 

#3: FAA-G-8082-22 – Remote Pilot – Small Unmanned Aircraft Systems Study Guide 

Once you’re comfortable with Part 107, open this document to begin reviewing familiar ground school topics. While this study guide can help you reframe your old studies in the context of drone operations, do not assume it’s comprehensive. At best, it spotlights just some of the topics you will have to know for the exam. To get a sense of the full list, you need the ACS. 

#4: FAA-S-ACS-10B – Remote Pilot ‒ Small Unmanned Aircraft Systems Airman Certification Standards 

NOTE: If your original flight training concluded before 2015, you may not be familiar with the concept of Airman Certification Standards. In short, ACS is an attempt by the FAA to improve the practical value of ground school by tying theoretical knowledge topics directly to pilot certification requirements. In crewed flight training, that means exchanging rote memorization for scenario-based training that teaches pilots to think outside the box. 

The ACS is particularly useful if you fail your exam. Read its section “Using the ACS” for more information on how you can figure out which topics you need to study more.  

More importantly, the ACS document provides you, the rusty pilot, with a roadmap of topics you will need to revisit from your previous flight training. Skim its list of learning objectives and you’ll realize that the Part 107 exam tests you on most of the same ground school concepts as it does crewed pilots, except maybe for flight instrumentation, airport signage, and engine mechanics. This should help you plan your ground school refresher. 

Step 2: Plan your ground school refresher

Option #1: Pilot’s Handbook of Aeronautical Knowledge 

Using the ACS objectives as your guide, skim the chapters in this book related to flight school topics that you’ve forgotten the most. Pay special attention to information-heavy topics like weather, aeronautical charts, and performance diagrams. You don’t want to be blindsided on exam day by an obscure question about microbursts. And make sure you can problem solve. For example, in keeping with the ACS philosophy, the exam may ask you to look at a spot on a Sectional Aeronautical Chart to determine whether your client’s requested flight is safe and legal. 

Option #2: Pilot Training System’s Drone Pilot Training Videos 

Instead of reading your notes or the Pilot’s Handbook of Aeronautical Knowledge, you can watch this YouTube series to cover the same topics in roughly 1.5 hours. Each video covers a particular topic, so you can pick and choose.  

Final thoughts

As a crewed pilot, you already have at least 75% of the requisite knowledge, and should easily pass the Part 107 exam with an abbreviated self-study plan like the one above. But don’t let that lull you into complacency. The FAA will expect you to behave responsibly like any other commercial pilot. Make sure you remember everything you learned to earn your first wings. Aerodynamics. Performance. ADM. NOTAMs. Airspace. Aeronautical charts. Weather. And once your new cert comes in the mail, do the public a favor and maintain your old high standards.  

Best of luck! 

Mapware’s Photogrammetry Pipeline, Part 2 of 6: Homography

In the previous article on Mapware’s photogrammetry pipeline, we described how our software uses keypoint extraction to help a computer see the most distinctive features in each image without human eyes.

The next step, homography, involves pairing images together based on their keypoints. To understand how this works, you need to know a little more about how a computer “sees” images.

The limits of computer vision

It can be helpful to think of photogrammetry image sets like puzzle pieces. In the same way that humans snap puzzle pieces together into a complete picture, photogrammetry software connects drone images together to generate a 3D model of the whole site.

But there’s an important difference. Unlike humans, computers don’t actually understand the features depicted in each image. Whereas a human might intuitively know that a puzzle piece showing the back half of a truck connects to another piece showing its front half, a computer wouldn’t know that they go together because it doesn’t see a truck – it sees pixels.

Figure 1: A human sees the bright-red corner of a truck over a blue highway. Mapware’s algorithm detects a rapid change in grayscale intensity values between pixels.

What a computer can do, however, is identify the same truck in two images based on their mathematically similar keypoints. This is why drone photographers take overlapping photos.

The purpose of overlap

Overlap occurs when two adjacent photographs show part of the same scene on the ground. If you take two photos with 50% overlap, that means you take the second photo when the drone has moved only halfway past the area captured in the first photo. Any keypoints generated within the overlapping region are created twice—once per image. The similarities between these keypoints help the computer determine which photos go together during homography.

NOTE: Many drone flight control apps are designed to automate photogrammetry data capture, and these typically let pilots specify the amount of overlap they want between adjacent images. If you are using one of these, Mapware recommends configuring a front and side overlap of 70% to generate the highest quality models.

Figure 2: We recommend taking photos that overlap one another by 70%, both from the front and sides. This increases the odds that two or more photos will display the same feature and Mapware will match their keypoints.

The homography process (in two steps)

In the homography process, Mapware considers each pair of images independently (pair by pair) to determine whether the two images overlap and, if so, to find the best possible linear transformation relating the first image to the second. In other words, for a given point in the first image, Mapware determines how to transform it to get the corresponding point in the second image. We’ll break this two-step process down below.

Step 1: keypoint matching

In the first step, Mapware runs an algorithm to compare each image in the set to every other image in the set. If it finds two images with nearly identical keypoint fingerprints, it designates the two images as a pair. Mapware iterates through the entire image set until each image is (hopefully) paired with at least one other image.

Figure 3: Mapware employee Dan Chu took these two photos while flying a DJI Inspire 2 over Sandwich, Massachusetts. The photos share a large overlapping region (highlighted above in pink). Mapware looks for similar keypoints in that region to pair these images.
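Mapware’s internal matcher isn’t described here, but the core idea of pairing keypoints by fingerprint similarity can be sketched in a few lines of Python. The descriptor vectors below are invented stand-ins, not real keypoint fingerprints:

```python
# Toy sketch of keypoint matching: pair each descriptor ("fingerprint")
# in one image with its nearest neighbor in the other image. Real matchers
# are far more sophisticated; the 2-D vectors here are made-up stand-ins.

def distance(a, b):
    """Euclidean distance between two descriptor vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def match_keypoints(desc_a, desc_b, max_dist=1.0):
    """Match each descriptor in desc_a to its closest one in desc_b."""
    matches = []
    for i, d1 in enumerate(desc_a):
        j, d2 = min(enumerate(desc_b), key=lambda p: distance(d1, p[1]))
        if distance(d1, d2) <= max_dist:  # reject weak matches
            matches.append((i, j))
    return matches

image1 = [(0.1, 0.9), (5.0, 5.0)]               # fingerprints, image 1
image2 = [(5.1, 4.9), (0.2, 1.0), (9.0, 9.0)]   # fingerprints, image 2
print(match_keypoints(image1, image2))  # [(0, 1), (1, 0)]
```

If enough fingerprints match within tolerance, the two images are declared a pair.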

Step 2: linear transformation

Remember that keypoint fingerprints are invariant (unchanging) with regards to scale and orientation – meaning they generate nearly identical values even after being enlarged, shrunk, or spun around. This is important in the keypoint matching step because it helps Mapware pair two images even if one image is taken by a drone at a higher altitude or different angle.

But Mapware must eventually stitch all of the images together into a 3D model, and that involves undoing these differences so the images fit together properly. The second step of the homography process does this using linear algebra. It finds the most probable linear transformation between the two sets of keypoints—in other words, it mathematically calculates the best way to stretch, rotate, and translate the first image’s keypoints so they line up with the second image’s keypoints.
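To illustrate this step, the following Python sketch uses least squares to recover a transformation from matched point sets. For brevity it fits an affine transform (scale, rotation, translation) rather than a full projective homography, and the point correspondences are synthetic rather than real keypoints:

```python
# Sketch: recovering the transformation that maps one image's keypoints
# onto its pair's, via least squares. This fits an affine transform, a
# simplification of a true projective homography, on synthetic points.
import numpy as np

def fit_affine(src, dst):
    """Solve dst ≈ A @ src + t for a 2x2 matrix A and translation t."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    # Augment source points with 1s so translation folds into the solve.
    X = np.hstack([src, np.ones((len(src), 1))])
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)
    A, t = params[:2].T, params[2]
    return A, t

# Synthetic example: points scaled 2x and shifted by (10, 5).
src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(10, 5), (12, 5), (10, 7), (12, 7)]
A, t = fit_affine(src, dst)
print(np.round(A), np.round(t))  # ~[[2, 0], [0, 2]] and [10, 5]
```

The recovered matrix and offset tell the pipeline exactly how to “undo” the scale and position differences between the paired images.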

After homography

Once Mapware has identified image pairs and calculated their scale/orientation differences, it can align all the images together into a composite image of the whole landscape. This is called structure from motion (SfM) and will be described in the next article in this series.

Mapware’s Photogrammetry Pipeline, Part 1 of 6: Keypoint Extraction

Mapware can create a 3D digital twin of a landscape from a set of 2D aerial photos. In this article, we discuss the first step in Mapware’s processing pipeline: keypoint extraction.

Purpose of keypoint extraction

When users upload digital images to Mapware and initiate its photogrammetry process, the first step is keypoint extraction: identifying the distinctive features in each image and assigning them values that a computer can easily reference later.

Keypoint extraction begins the photogrammetry pipeline for two reasons.

First, it assists with computer vision, the science of helping a computer understand an image the way humans do by picking out the most interesting shapes from the background. This helps Mapware later in the pipeline when it stitches image sets together into a 3D digital twin.

More importantly, keypoint extraction aids in image compression. Typical photogrammetry projects can require hundreds or even thousands of photos, with each photo containing millions of pixels. Reading these large image sets can be memory-intensive, increasing the risk of system crashes. But keypoints serve as bookmarks in each image file, allowing computers to read the important features and ignore the rest. The result is faster and more reliable processing.

The keypoint extraction process

Mapware identifies keypoints like most photogrammetry software, using a combination of corner detection, descriptor assignment, and invariance calculation.

  • Corner Detection: To identify distinctive features in each photo, Mapware’s corner detection algorithms search for pixel groups whose grayscale intensity values differ substantially from their neighbors. These can either be edges—boundary lines between two areas of differing intensity—or corners—points where two lines converge. Mapware identifies both edges and corners, but only designates corners as keypoints. This is because corners are easier to localize using Cartesian (x, y) coordinates, whereas edges could be lines that run the entire length of an image. You might think of corner detection as loosely similar to the way human eyes notice highly contrasting features in an image. The comparison is loose, however: a computer may not notice the same image features a human would.
Figure 1: Mapware ignores edges and generates keypoints from corners, because corners are easier to pinpoint in an image.
  • Fingerprint Assignment: To aid in image compression, Mapware then runs another algorithm to mathematically reduce each keypoint into a compact hash called a fingerprint. Some other photogrammetry products refer to these as “descriptors” because they not only help a computer quickly find a keypoint later in the image, but also describe its properties. Thanks to fingerprint assignments, Mapware doesn’t have to process entire images again; it can just read their (smaller) fingerprints.
  • Invariance Calculation: To help stitch photos together later in the photogrammetry pipeline, Mapware ensures each fingerprint is invariant (unchanging) with respect to scale and orientation. In real-world terms, this means a pilot can photograph the same feature twice from different heights and angles—and Mapware will assign nearly identical fingerprints to both images despite their differences. This helps Mapware match photos of the same feature in spite of the changing flight paths of a camera drone.
Figure 2: Thanks to the invariance built into the fingerprint algorithm, Mapware recognizes the same feature in these images even though they were taken from different angles and heights.
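To make the corner-versus-edge distinction concrete, here is a toy Python sketch. It is a drastic simplification of real corner detectors and is not Mapware’s actual algorithm:

```python
# Toy illustration of corner detection on a tiny grayscale image
# (0 = black, 255 = white). A pixel counts as a corner when intensity
# changes sharply in BOTH directions; along only one direction it is
# an edge. Real detectors are far more sophisticated than this.

def classify(image, x, y, threshold=50):
    """Classify the pixel at (x, y) as 'corner', 'edge', or 'flat'."""
    dx = abs(image[y][x + 1] - image[y][x - 1])  # horizontal change
    dy = abs(image[y + 1][x] - image[y - 1][x])  # vertical change
    if dx > threshold and dy > threshold:
        return "corner"
    if dx > threshold or dy > threshold:
        return "edge"
    return "flat"

# A bright square occupying the lower-right of a dark 5x5 image.
image = [
    [0, 0,   0,   0,   0],
    [0, 0,   0,   0,   0],
    [0, 0, 255, 255, 255],
    [0, 0, 255, 255, 255],
    [0, 0, 255, 255, 255],
]
print(classify(image, 2, 2))  # "corner" (the square's top-left corner)
print(classify(image, 3, 2))  # "edge"   (along the square's top edge)
print(classify(image, 1, 1))  # "flat"   (uniform dark background)
```

Only the “corner” pixel would be promoted to a keypoint; the edge pixel is detected but discarded because it cannot be pinned to a single (x, y) location as reliably.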

How Mapware uses keypoints

After Mapware has identified the keypoints in each image and assigned their fingerprints, it can gradually assemble individual images together into the composite that will become a 3D digital twin.

It starts by identifying pairs of images that have nearly identical keypoints. These keypoint pairs exist because drone pilots take photos with overlap to ensure that the same features will appear on more than one image. For example, they may ensure the same feature appearing at the back of one photo appears at the front of the next photo.

If Mapware identifies the same keypoint in both images, it knows to pair them together on their overlapping region. The next article in this series describes the keypoint pairing step, which is called homography.

Mapware’s Photogrammetry Pipeline: Introduction

If you’re curious about how engineers transform drone photos into 3D computer models, this blog series is for you.

At Mapware, we offer an all-in-one photogrammetry product for both commercial and government customers. Our parent company, Robotic Services, designs complete geospatial intelligence (GEOINT) solutions customized for both civilian and government clients, like the U.S. Air Force.

We’re writing this series to give you a basic overview of Mapware’s photogrammetry pipeline—a systems engineering term for a sequence of processing steps. Each entry will focus on the concepts underlying one of Mapware’s six pipeline steps.

Reading this series won’t grant you a doctorate in computer science or turn you into a practicing photogrammetrist, but it will help you speak the basic language. We at Mapware aim to get steadily more technical over time and may expand upon concepts we introduce here. We want readers to build their domain knowledge alongside us.

Mapware in the Context of Photogrammetry 

Photogrammetry is the science and technology of using photographs, LiDAR, and other geospatial measurement techniques to represent the physical world in actionable ways—such as topographic maps, object reconstructions, or 3D digital twins.

Modern photogrammetry is used in the digitization of archaeological sites, buildings, and entire landscapes, as well as in facial recognition, filmmaking, forensic investigations, and sports. 

In 1858, Gaspard-Félix Tournachon, also known as Nadar, patented the idea of mapmaking and surveying using aerial photos; that same year, he took what is credited as the first aerial photograph from a hot-air balloon over France. Luckily, aerial photographers today don’t have to lug heavy cameras and portable darkrooms into the air.

Mapware is a photogrammetry SaaS designed to ingest aerial photographs of a large area (usually taken by drones) and stitch them together into an accurate 3D digital twin that users can explore virtually by panning, rotating, and zooming, as well as toggling between orthomosaic, 3D model, and other data layers. 

Above is a digital elevation model (DEM), one of several layers that can be applied to a digital twin in Mapware.

Mapware’s Pipeline 

To accomplish this, Mapware follows a six-step pipeline: 

  • Keypoint Extraction: identifying distinctive features in each image 
  • Homography: matching identical features between images to construct image pairs 
  • Structure from Motion: aligning all image pairs into a composite image of the whole landscape 
  • Depth Mapping: constructing a topographical map (with elevations) of each image 
  • Fusion: combining depth maps into a high-res composite 3D point cloud of the landscape 
  • Structured Output: transforming the point cloud into a usable 3D model for the client 
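Conceptually, the pipeline is just a sequence of stages, each consuming the previous stage’s output. A toy Python sketch of that structure (the stage functions are placeholders, not Mapware’s actual API):

```python
# Sketch: a pipeline as a sequence of stages, each consuming the previous
# stage's output. The stages here are placeholders that merely record
# their names, standing in for the real processing steps.

def run_pipeline(images, stages):
    """Thread data through each processing stage in order."""
    data = images
    for stage in stages:
        data = stage(data)
    return data

def make_stage(name):
    # Placeholder stage: appends its name to the running record.
    return lambda data: data + [name]

stages = [make_stage(s) for s in [
    "keypoint_extraction", "homography", "structure_from_motion",
    "depth_mapping", "fusion", "structured_output",
]]
print(run_pipeline([], stages))
```

The strict ordering matters: each stage depends on artifacts (keypoints, image pairs, depth maps) produced by the stage before it.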

In the next entry in this series, we will discuss the photogrammetry concepts underlying the first step in the pipeline – Keypoint Extraction. 
