25 years of survey work, hundreds of failed models, and a few lessons that'll save you days of wasted flight time.
Here's what nobody warns you about the first time you try photogrammetry with a drone: the flying is the easy part. The data collection, the overlap settings, the lighting decisions, the ground control — that's where everyone bleeds out. You come back from a flight feeling great, load your images into Agisoft or DroneDeploy, and three hours later you have a lumpy, twisted, unusable mess that looks like someone melted your site in a microwave. Sound familiar? Good. That means you're paying attention.
Let's fix it, top to bottom.
01 / THE FOUNDATION

Photogrammetry is the science of extracting 3D geometry from overlapping 2D photographs. Your drone is just a camera platform. The magic — and the failure points — live in how you fly, how many photos you take, and how much those photos overlap with each other.
Think of it like this: if you're trying to reconstruct a building from memory, you need to have walked around it from multiple angles. One photo of the front tells you almost nothing about depth. The software works the same way. It needs to see the same physical point in at least three separate photos from meaningfully different angles to triangulate its position in 3D space.
That process — called Structure from Motion, or SfM — is the engine behind every photogrammetry tool you'll use. Pix4D, Agisoft Metashape, RealityCapture, DroneDeploy, all of them. They all do SfM under the hood. The quality of your output is almost entirely determined by the quality of what you feed in.
The one thing I wish someone had told me on day one: garbage in, garbage out is not a cliché here. It's a physical law. No amount of processing power or software settings rescues a bad capture mission.
02 / OVERLAP AND ALTITUDE

Every tutorial you've read has probably said "80% front overlap, 70% side overlap." That's the minimum for flat, featureless terrain on a sunny day with no wind. For anything else, those numbers will fail you.
Here's what I actually use in the field:
| Terrain / Subject | Front Overlap | Side Overlap | Notes |
|---|---|---|---|
| Flat agricultural / open land | 80% | 70% | Standard settings, works fine |
| Construction site (some relief) | 85% | 75% | Add oblique pass if structures present |
| Dense vegetation / forest | 90% | 80% | Still won't penetrate canopy; RTK helps |
| Complex structures / facades | 90% | 85% | Needs oblique passes at 45°, multiple altitudes |
| Powerlines / thin features | 90%+ | 85%+ | Usually needs manual orbit captures too |
More overlap means more photos, more processing time, and more storage. But a model that doesn't reconstruct properly costs you a re-flight. A re-flight always costs more than the extra storage. Crank the overlap up. You can always downsample later. You can't invent missing coverage after the fact.
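If you want to sanity-check what your flight app is doing, the overlap percentages above translate directly into exposure spacing and line spacing. A minimal sketch, assuming a simple rectangular footprint with the long sensor axis across-track, and Mavic-3-like sensor numbers (17.3 × 13 mm sensor, 12.3 mm lens — illustrative figures, check your own camera's spec sheet):

```python
# Sketch: derive trigger distance and line spacing from overlap percentages.
# Assumes a flat rectangular footprint; sensor/lens numbers are illustrative.

def footprint_m(altitude_m, sensor_w_mm, sensor_h_mm, focal_mm):
    """Ground footprint (width, height) of one image at a given altitude."""
    scale = altitude_m / (focal_mm / 1000.0)  # ground metres per sensor metre
    return (sensor_w_mm / 1000.0 * scale, sensor_h_mm / 1000.0 * scale)

def spacing_m(footprint_along_m, overlap_pct):
    """Distance between exposures (or between lines) for a given overlap."""
    return footprint_along_m * (1.0 - overlap_pct / 100.0)

# Example: 4/3 sensor (17.3 x 13 mm), 12.3 mm lens, flown at 80 m.
w, h = footprint_m(80, 17.3, 13.0, 12.3)
print(round(w, 1), round(h, 1))    # footprint in metres (~112.5 x ~84.6)
print(round(spacing_m(h, 85), 1))  # trigger distance at 85% front overlap
print(round(spacing_m(w, 75), 1))  # line spacing at 75% side overlap
```

Handy for spotting a misconfigured mission: if the app's trigger distance is much larger than this number, your real overlap is lower than you think.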
Your flight altitude directly controls your Ground Sampling Distance — the GSD, which is how many centimeters of real ground one pixel in your image represents. Fly at 60m and you might get 2cm/pixel with a DJI Mavic 3. Fly at 120m and you're at 4cm/pixel. That might sound fine until you're trying to detect a 3cm crack in pavement or identify individual roof tiles.
The formula is roughly: GSD (cm) = (altitude in meters × sensor pixel size in µm) / (focal length in mm × 10)
Your drone's app will often calculate this for you. But know what number you're actually getting before you fly, not after. Match your altitude to the deliverable accuracy your project actually requires.
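The formula is easy to script so you can plan altitude before you're standing in the field. A sketch, assuming pixel pitch quoted in micrometers (the usual spec-sheet convention; the ~3.3 µm and 12.3 mm figures are assumed Mavic 3 values):

```python
# Sketch of the GSD formula above, plus its inverse for flight planning.
# Pixel pitch in micrometres; 3.3 um / 12.3 mm are assumed Mavic-3-like specs.

def gsd_cm(altitude_m, pixel_um, focal_mm):
    """Ground sampling distance in cm per pixel."""
    return (altitude_m * pixel_um) / (focal_mm * 10.0)

def altitude_for_gsd(target_gsd_cm, pixel_um, focal_mm):
    """Invert the formula: the highest altitude that still hits a target GSD."""
    return target_gsd_cm * focal_mm * 10.0 / pixel_um

print(round(gsd_cm(60, 3.3, 12.3), 2))             # cm/px at 60 m
print(round(altitude_for_gsd(1.0, 3.3, 12.3), 1))  # max altitude for 1 cm/px
```

Run the inverse before you fly: if the altitude that hits your required GSD is uncomfortably low for the site, that's a conversation to have with the client now, not after processing.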
03 / THE KILLER MISTAKE

This is the thing I've seen destroy more models than any other single issue. You fly a perfect nadir grid — all images pointing straight down — you get great overlap, you process it, and the output looks domed. The edges of your site curve upward or the center bulges. Measurements are off by meters. Nothing is flat that should be flat.
You've been hit by the doming artifact. It's caused by one thing: flying only nadir (straight-down) images without Ground Control Points or without oblique passes.
Here's the physics. When your camera always points straight down and moves in a grid pattern, the software has an incredibly hard time resolving the true curvature (or lack of curvature) of the ground surface: with pure nadir geometry, errors in the self-calibrated lens distortion and a bowed surface look nearly identical to the bundle adjustment. The solution is one of three things:

- Add oblique imagery, so the camera geometry itself constrains the surface shape
- Use well-distributed Ground Control Points to pin the model down
- Fly with accurate RTK/PPK camera positions and a well-calibrated camera
The cheapest fix? A grid flight plus a perimeter orbit at 45° oblique. Takes maybe 20% more flight time. Destroys the doming problem completely.
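If your flight app lacks a built-in orbit mode, the perimeter orbit is easy to generate yourself. A hypothetical sketch — the waypoint dictionary format is illustrative, not any vendor's API, and you'd pick the radius so the 45° camera line actually lands on your site:

```python
# Hypothetical sketch: waypoints for a 45-degree oblique perimeter orbit
# around a site centre, in local XY metres. Not any vendor's mission format.
import math

def orbit_waypoints(center_xy, radius_m, altitude_m, n_points=24):
    """Points on a circle, each facing the centre with a -45 deg gimbal."""
    cx, cy = center_xy
    wps = []
    for i in range(n_points):
        theta = 2 * math.pi * i / n_points
        wps.append({
            "x": cx + radius_m * math.cos(theta),
            "y": cy + radius_m * math.sin(theta),
            "alt": altitude_m,
            "heading": (math.degrees(theta) + 180) % 360,  # face the centre
            "gimbal_pitch": -45.0,
        })
    return wps

# 24 oblique shots on a 100 m radius at 80 m altitude:
wps = orbit_waypoints((0, 0), radius_m=100, altitude_m=80)
print(len(wps), wps[0]["gimbal_pitch"])
```

At 80 m altitude a 45° pitch looks roughly 80 m horizontally toward the centre, so a radius near the site half-width keeps the obliques on target.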
04 / GROUND CONTROL POINTS

Ground Control Points are physical markers — usually printed targets, painted crosses, or specialty survey targets — placed on the ground before your flight. You measure their precise XYZ coordinates with a total station or RTK GNSS receiver, then manually identify them in your photos during processing. The software uses them to anchor your model to real-world coordinates and correct systematic errors.
The question I get more than any other is: "How many do I need?"
The real answer is: placement quality beats raw quantity. Five well-placed GCPs will beat fifteen badly placed ones every single time. Here's what well-placed means:

- Spread across the whole site, including near the edges of your area of interest (but still inside full overlap coverage)
- Covering the elevation range of the site — low spots and high spots, not one convenient bench
- In open, unobstructed locations where the target is visible in many photos
- Irregular, asymmetric positions rather than a neat geometric pattern
The absolute worst place to put GCPs: all four corners plus center. That's what every tutorial shows. It's also the worst possible distribution because it creates a regular grid that hides certain systematic error patterns. Put them in irregular positions. Off-center. Asymmetric. That's how real survey control works.
Your GCP targets need to be large enough to be visible in your imagery at your flight altitude. A good rule: the target should be at least 5–10 pixels across in your images. At 80m altitude with a typical consumer drone camera, that means targets around 40–60cm across. Print them bigger than you think you need. I've had to re-fly sites because the targets were too small to identify confidently in the images. Costs one roll of paper to go bigger. Costs a day to re-fly.
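The 5–10 pixel rule becomes a one-liner once you know your GSD. A sketch, where the safety factor is my own label for the "print bigger than you think you need" margin (the 2.1 cm/px GSD is an illustrative figure):

```python
# Sketch: GCP target sizing from the 5-10 pixel visibility rule. The safety
# factor is my own margin for confident identification; GSD is illustrative.

def min_target_cm(gsd_cm_per_px, min_pixels=10, safety=2.0):
    """Target width: enough pixels to identify, with a print-bigger margin."""
    return gsd_cm_per_px * min_pixels * safety

# At ~2.1 cm/px (roughly 80 m with a typical consumer camera):
print(min_target_cm(2.1, safety=1.0))  # bare minimum at 10 px, in cm
print(min_target_cm(2.1))              # with margin -- near the 40-60 cm advice
```

The doubled figure lands in the 40–60 cm band quoted above; the bare minimum is what you regret in the office.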
05 / LIGHTING AND CONDITIONS

Photogrammetry software works by matching visual features between photos. Shadows are the enemy. Not because they look bad — because they move.
If you fly a large site over 45 minutes, the sun moves. Shadows shift. A feature in the shadow in photo 300 was in bright light in photo 100. The software sees what it thinks are two different things. Matches fail. Your model has holes, blurring, weird artifacts around any shadowed area.
The ideal shooting conditions:

- High, thin overcast — bright but diffuse light, with soft or nonexistent shadows
- If you must fly in direct sun, fly near solar noon, when shadows are shortest and move least
- Low wind, so vegetation holds still between frames
- A mission short enough that the lighting doesn't change partway through
Vegetation on a windy day is one of the hardest things to model. Leaves and grass move between frames. They become ghosted, blurry, impossible to match. Water surfaces are similarly brutal. You'll never get a clean reconstruction of a lake surface. Plan around this. It's not a software problem you can fix in post.
Auto exposure is your enemy on a mapping mission. As the drone moves around the site, auto exposure shifts your shutter speed, aperture, and ISO to match different parts of the scene. The result: photos that are too dissimilar for the software to match well.
Before launching, manually set:

- Shutter speed (fast enough to freeze motion at your flight speed)
- Aperture
- ISO (as low as the light allows)
- White balance (a fixed preset, never auto)
- Focus (locked, typically at infinity for mapping altitudes)
Yes, some shots will be slightly under or overexposed compared to if you'd used auto. That's fine. Consistency between frames is worth far more than perfect per-frame exposure.
06 / PROCESSING

By the time you're processing, you've already made 90% of the decisions that determine your output quality. But there are still ways to wreck it in software.
In Agisoft Metashape, the photo alignment step has quality settings from "Lowest" to "Highest." In Pix4D it's similar. Never use less than "High" for a project that matters. The alignment step is where your key point matches are found and camera positions are calculated. Skimping here creates a cascade of errors through every downstream step. Yes, it takes longer. Yes, it's worth it.
The dense cloud generation is where the real geometry is built, and it's also the most time-intensive step. For preliminary or internal review models, using "Medium" quality here is fine. For deliverables, go "High." For sub-centimeter accuracy requirements, "Ultra High" — but budget hours or overnight processing for large datasets.
Make sure your GCPs and your output coordinate system match. This sounds obvious. It is not obvious when you're tired at the end of a field day and your measurement was in WGS84 geographic coordinates but your project is in a local state plane projection. Or your elevation is in ellipsoidal height but your client needs orthometric (sea-level-referenced) height. Geoid models and datum transformations are a whole rabbit hole. Know which one your project needs before you go to the field, not after.
| Common Coordinate Mismatch | Symptom in Output | Fix |
|---|---|---|
| WGS84 vs local projection | Model placed in wrong location | Reproject GCPs or output to matching system |
| Ellipsoidal vs orthometric elevation | Heights off by ~30–50m in many regions | Apply geoid model (e.g. EGM2008, GEOID18) |
| Mixed datum years | Horizontal offsets, subtle but real | Use CORS network or known benchmark for control |
| Feet vs meters | Vertical scale factor of 3.28 | Check units in both field data and software project |
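The ellipsoidal-versus-orthometric row in the table is simple arithmetic once you have a geoid undulation N for your site: H = h − N. A sketch with an illustrative N value (get the real one from EGM2008, GEOID18, or whichever model your project specifies):

```python
# Sketch of the ellipsoidal -> orthometric conversion from the table:
#   H (orthometric) = h (ellipsoidal) - N (geoid undulation).
# The N value below is illustrative; look yours up from a geoid model.

def orthometric_height(h_ellipsoidal_m, geoid_undulation_m):
    """Height above the geoid (roughly 'above sea level')."""
    return h_ellipsoidal_m - geoid_undulation_m

# e.g. GNSS reports h = 250.0 m and the geoid model gives N = -28.5 m here:
print(orthometric_height(250.0, -28.5))  # orthometric height in metres
```

Note the sign: where N is negative (much of North America), orthometric heights come out *larger* than ellipsoidal ones, which is exactly the ~30 m surprise in the table.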
07 / TROUBLESHOOTING

Photogrammetry needs visual texture to work. A flat, unmarked asphalt parking lot will reconstruct with holes and deformation because there's nothing for the algorithm to grab onto. Same with sand dunes, snow fields, or calm water. The fix: if you can place additional targets or markers on the surface before flying, do it. If you can't — and sometimes you can't — accept that those areas won't reconstruct cleanly and plan your analysis accordingly.
Flying a corridor (long, narrow site) using the standard grid pattern creates poor geometry at the edges. The side overlap at the edges of the corridor is always lower because you've got no adjacent flight lines there. For corridor mapping, add dedicated edge passes — fly the outside edges of your corridor as separate lines. Your model quality at the edges will improve dramatically.
Metal roofs, glass, standing water on flat roofs — all of these create specular reflections that look completely different from different angles. The software can't match them. You get holes exactly where the reflective material is. Solutions are limited: fly during overcast conditions (diffuse light creates far less specular reflection), or mask those areas in processing and accept them as no-data zones.
A model that splits or tears in two happens when there's a visual gap in your coverage — a section of the site with insufficient overlap where the software can't bridge the two halves. It'll look like the model tore along a seam. Check your flight log for any pause, RTH (return to home) event, or waypoint anomaly. Find where the gap occurred. Then you need to either re-fly that section, or if photos exist but weren't used, force the software to include them manually.
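You can often catch these gaps before leaving the site by scanning the geotags for jumps larger than your planned trigger spacing. A hypothetical sketch working on local XY coordinates in metres:

```python
# Hypothetical sketch: scan geotag positions (in capture order) for jumps
# much larger than the expected trigger spacing -- a quick field check for
# pauses, RTH events, or missed exposures. Coordinates are local XY metres.
import math

def find_gaps(positions, expected_spacing_m, factor=2.5):
    """Indices where the jump to the next photo exceeds
    factor * expected_spacing_m."""
    gaps = []
    for i in range(len(positions) - 1):
        (x1, y1), (x2, y2) = positions[i], positions[i + 1]
        if math.hypot(x2 - x1, y2 - y1) > factor * expected_spacing_m:
            gaps.append(i)
    return gaps

# Photos every ~13 m, with one suspicious 70 m jump after the third photo:
track = [(0, 0), (13, 0), (26, 0), (96, 0), (109, 0)]
print(find_gaps(track, expected_spacing_m=13))
```

Five minutes with the image geotags on a laptop in the truck beats discovering the tear after three hours of processing at home.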
08 / THE DELIVERABLES

Most people come to photogrammetry wanting "a 3D model." But there are several distinct outputs your software produces, and they serve different purposes. Know what you're building before you start.
| Output Type | What It Is | Best Used For |
|---|---|---|
| Orthomosaic (GeoTIFF) | Flat, georeferenced aerial image | Area measurement, planimetric mapping, base maps |
| Digital Surface Model (DSM) | Elevation of everything including vegetation and structures | Volumetrics, flood modeling, general elevation analysis |
| Digital Terrain Model (DTM) | Bare-earth elevation (vegetation removed algorithmically) | Civil engineering, drainage, terrain analysis |
| Dense Point Cloud (.las, .laz) | 3D point cloud with color | BIM, as-built documentation, detailed inspection |
| Textured Mesh (.obj, .fbx) | 3D mesh with photographic texture | Visualization, VR/AR, client presentations |
For most civil and survey work, the orthomosaic and DSM are the primary deliverables. The textured mesh is gorgeous but enormous, and many clients don't have software to open it. Always clarify the delivery format before you fly.
09 / ACCURACY

Your software will report RMSE values for your GCP residuals after processing. Something like "Horizontal: 2.3cm, Vertical: 4.1cm." People take that number and report it as their survey accuracy. That's not how it works.
The GCP residuals tell you how well the model fit to the points you used as control. Your real accuracy is measured against your independent check points — the ones you didn't use as control. If your check point residuals are similar to your control residuals, your accuracy estimate is probably honest. If the check points are significantly worse, something is wrong: bad GCP measurement, poor distribution, systematic error in your data.
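RMSE itself is the same calculation either way — what matters is which residuals you feed it. A sketch with made-up check-point residuals, computed the same way your software reports GCP figures:

```python
# Sketch: RMSE of independent check points, the honest accuracy number.
# The residual values below are made up for illustration.
import math

def rmse(residuals_m):
    """Root mean square of a list of residuals (metres in, metres out)."""
    return math.sqrt(sum(r * r for r in residuals_m) / len(residuals_m))

# Vertical residuals at five check points NOT used as control (metres):
check_v = [0.031, -0.045, 0.052, -0.038, 0.041]
print(round(rmse(check_v), 3))  # report THIS, not the control-point RMSE
```

Compare this number against the control-point RMSE the software printed: if the check points are markedly worse, go hunting for the systematic error before you deliver anything.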
Industry standard for drone photogrammetry at typical altitudes (60–120m): 2–5cm horizontal, 3–8cm vertical, with good RTK or well-measured GCPs. Claims of sub-centimeter accuracy with a consumer drone at 60m without rigorous control are almost always wrong.
If a client needs sub-centimeter accuracy, be honest about whether photogrammetry can deliver it at their budget. Sometimes terrestrial laser scanning or a different method is the right tool. Knowing when not to use the method you know well is the mark of an expert.
Print this. Put it in your drone case.
The reason most people struggle with photogrammetry isn't that it's technically complex. It's that the failure modes are invisible until processing — hours or days after the flight. You don't know you got bad data until you're sitting at your desk and the model falls apart. That lag between cause and effect is brutal.
But every failure mode has a name now. You know what doming is and how to prevent it. You know what overlap numbers actually work. You know that GCPs need distribution, not just quantity. You know to lock your camera settings and fly during boring weather and add oblique passes.
Your next flight will be better. The one after that will be better still. That's the job.
Written from 25 years of field work. No affiliations with any software or hardware vendor. These are just the things that work.