LiDAR Peak Analysis: What It Takes


Re: LiDAR Peak Analysis: What It Takes

Post by Eli Boardman »

I know I keep beating the same dead horse about the remaining uncertainty in LiDAR analysis, but here's something I ran across tonight that illustrates where part of that inherent uncertainty comes from.

The peak in question is Relay Peak by Lake Tahoe in Nevada, currently ranked with 318 ft. of interpolated prominence. Searching for .las data in The National Map yields an interesting (and quite uncommon) result: two separate surveys are available covering this same spot! The first survey ("CA-NV_LakeTahoe_2010") was flown in August of 2010, and the second survey ("CA NoCAL Wildfires B1 2018") was flown in July-August of 2018.

Let's walk through what happens next...

The few times this has happened before, I have defaulted to using the newer survey, and that's what I do this time too, because newer=better, right? Despite the krummholz around the area, identifying the summit and saddle ground is easy enough since the vegetation is sparse, and results are quickly forthcoming. Using the 2018 data, the peak ends up with 298 ft. of prominence. I mark the peak as demoted and move along.

On a hunch, I think "what if there was snow or something at the saddle?" Maybe I should double-check with the other survey's data since the result is so close to the cutoff. So, I download the 2010 data and...whoa! There are a WHOLE LOT MORE POINTS! In fact, we can quantify this with lasinfo as "point density," i.e. the number of last returns per unit area. The 2010 LiDAR data has 13.5 points per square meter, while the 2018 LiDAR data has a measly 5.8 points per square meter!
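(Side note: if you'd rather script the density check than run lasinfo, here's a rough Python sketch using the laspy library. The tile name is a placeholder, and using the header's bounding box as the tile area is my own shortcut -- lasinfo does this more carefully.)

[code]
# Rough last-return point-density check for a LAS tile, similar in spirit to lasinfo.
# Assumes laspy >= 2.0 and a projected CRS in meters; the file name is a placeholder.
import numpy as np
import laspy

las = laspy.read("tahoe_2010_summit_tile.las")  # hypothetical tile covering the summit

# "Last returns" are points whose return number equals the number of returns in the pulse.
last_mask = np.asarray(las.return_number) == np.asarray(las.number_of_returns)
n_last = int(np.count_nonzero(last_mask))

# Approximate the tile footprint with the header's X/Y bounding box
# (a convex hull of the points would be tighter; this is only a sanity check).
dx = las.header.maxs[0] - las.header.mins[0]
dy = las.header.maxs[1] - las.header.mins[1]
density = n_last / (dx * dy)

print(f"last-return density: {density:.1f} points per square meter")
print(f"implied mean point spacing: {1.0 / np.sqrt(density):.2f} m")
[/code]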

As it turns out, with the 2010 data, the summit location is almost identical (less than 1 ft. from the 2018 summit point), but the elevation is about 2 ft. higher, and the peak ends up with exactly 300 ft. of prominence.
___________________________

Now, for the geeks, it's a little weirder than it sounds. You see, since the selected summit points in the 2010 and 2018 surveys were so close together, it's improbable that there's actually a super narrow 2-ft.-tall rock sitting on top. The answer lies in the arcane magic/math of point spread functions and LiDAR footprints. While we talk about LiDAR "points," those points aren't infinitely small. Each one actually represents a spatial average, because the laser beam diverges (loses collimation) in flight. As the pulse travels from the airplane to the ground and back to the optics, it spreads out and ultimately reflects off a non-infinitesimal patch of ground--in fact, the true footprints of nearby points can often overlap. On top of that, the pulse doesn't return to the LiDAR optics all at once; the reflected energy is smeared out in time depending on the range to different parts of the footprint. This time-resolved return is called the waveform, and waveform analysis is still an active research area. How exactly all of this comes together for a given LiDAR system is a serious academic research topic and is usually ignored for all practical purposes. However, when we start pushing the limits of the data by trying to measure rough topography to the nearest foot, this is a problem we will encounter.

The likely reason that the 2010 survey reports higher summit values is that the higher point density tells us the airplane <probably> flew at a lower altitude (or used a better scanner, or both). In either case, the laser footprint is almost certainly smaller in the 2010 survey, yielding higher-accuracy elevations and changing the ranked status of this peak. This is also why I am not comfortable declaring East Twin Peaks higher than West Twin Peaks on the Wyoming 13ers list: the western summit's highest LiDAR point is only 2 ft. lower, and that peak has an extremely sharp summit block.
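To put some illustrative numbers on the footprint effect: for small angles, the footprint diameter on the ground is roughly the flying height above ground times the beam divergence. The divergence and heights in the quick sketch below are assumptions for illustration only, not values from either survey's metadata.

[code]
# Back-of-the-envelope laser footprint size: diameter ~ flying height (AGL) x beam divergence.
# The divergence and flying heights are illustrative assumptions, not metadata from either survey.
beam_divergence_rad = 0.25e-3            # ~0.25 mrad, typical-ish for a linear-mode topo scanner

for label, agl_m in (("lower-altitude flight", 1000.0), ("higher-altitude flight", 2500.0)):
    footprint_m = agl_m * beam_divergence_rad
    print(f"{label}: {agl_m:.0f} m AGL -> footprint diameter ~ {footprint_m:.2f} m "
          f"(~{footprint_m / 0.3048:.1f} ft)")
[/code]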

Moral of the story: the determination of summit elevations by airborne LiDAR data has several feet of uncertainty in sharp/rocky areas due to the unknowable effects of the laser footprint on each point's reported Z coordinate. Highly technical aspects of LiDAR data (like the point density and the laser footprint) can make or break a peak's ranked status in some cases. I suggest a new category of LiDAR-soft-ranked peaks with prominence in a range somewhere around 295-299 ft.

FYI if anyone really cares, the answer is to buy one of THESE and use Terrestrial Laser Scanning (TLS). :lol: 8)

Pictures: 2010 LiDAR data (first) and 2018 LiDAR data (second) with exactly the same color ramps in the same location, both shown with approximately 3 m lines for scale.
2010.JPG
2018.JPG

Re: LiDAR Peak Analysis: What It Takes

Post by Teresa Gergen »

Great. I have not infrequently found two sets of data available and have chosen the later one to download and use each time, thinking the same thing. I didn't keep any records of which summits were involved or whether any of them were close calls for ranked status.

Re: LiDAR Peak Analysis: What It Takes

Post by glenmiz »

I'm a retired petroleum engineer and Eli's discussion reminds me of the ol' days when geophysicists talked about similar effects with sound/seismic waves reflecting/refracting off interfaces in the subsurface. Back then, those smart people deduced that the dispersion of sound waves could, in some limited situations, distinguish between oil/gas/saltwater in the pore space a couple of miles underground. It was pretty amazing to me back then as an exploration tool and this Lidar stuff is pretty amazing to me now.

Thanks Eli!
Aim high to end high

Re: LiDAR Peak Analysis: What It Takes

Post by Eli Boardman »

Quick note on a way to sanity-check your results:

In annoying topography, i.e. flat and convoluted areas where the col is substantially separated from the peak and the height of intervening ridges is close to the summit elevation, I've found that it's helpful to plot the col candidate point (converted to lat/lon) in Google Earth. I then create a loop path starting and ending at the chosen point, with the rest of the path drawn arbitrarily except for the requirement that it stay below the key col elevation. Showing an elevation profile of this loop around the peak then confirms that it is indeed possible to "walk" around the peak without exceeding the col elevation. Finally, I use Google Earth to confirm that there are no points within 20 ft. or so of the summit elevation (or any sharp-looking towers) inside the loop area. Of course, this doesn't confirm that the col point is the <correct> key col, but it does help rule out being confused about whether a slight saddle is actually a col in the first place, or just an arbitrary point on a slope.
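For anyone who prefers to script part of this rather than click around in Google Earth, here's a minimal sketch that samples a LiDAR-derived DEM along a hand-drawn loop and checks that every sample stays below the candidate col elevation. The DEM path, loop coordinates, and col elevation are all placeholders, and rasterio is just one convenient way to sample a GeoTIFF.

[code]
# Sanity check: can you "walk" a closed loop around the peak without ever rising above
# the candidate key-col elevation?  Minimal sketch; the DEM path, loop vertices, and col
# elevation are placeholders, and a real check should densify the segments between vertices.
import rasterio

dem_path = "peak_area_lidar_dem_1m.tif"    # hypothetical LiDAR-derived DEM
col_elev = 3056.1                          # hypothetical candidate col elevation (same units as DEM)
loop_xy = [                                # loop vertices in the DEM's CRS, first point repeated last
    (253410.0, 4362880.0),
    (253650.0, 4362990.0),
    (253700.0, 4363240.0),
    (253380.0, 4363200.0),
    (253410.0, 4362880.0),
]

with rasterio.open(dem_path) as dem:
    samples = [float(v[0]) for v in dem.sample(loop_xy)]

highest = max(samples)
print(f"highest DEM sample on the loop: {highest:.2f} (candidate col: {col_elev:.2f})")
if highest < col_elev:
    print("OK: the loop stays below the candidate col, so the saddle behaves like a real col.")
else:
    print("Check again: the loop climbs above the candidate col; redraw it or re-examine the col.")
[/code]

A denser loop (or densifying the segments between vertices) makes this check more trustworthy.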

Example: Boulder Benchmark is promoted from 299 ft. of prominence to 301 ft. based on LiDAR backed up by a Google Earth sanity-check:
ColCircle.JPG

Re: LiDAR Peak Analysis: What It Takes

Post by JoeGrim »

I really like Eli's proposal of a new LiDAR soft-ranked category. As for what threshold to use for the bottom of it (e.g., 295'), I'm not sure what that should be. What objective techniques could we use to determine this threshold? For example, what is the typical beam width of LiDAR on the ground? What is its probable range in width? How much can rocky summits range in elevation within that beam width? Another factor that comes into play is when the summit HP is under a bush or tree. Most of the ~70 peaks I have done so far have been sub-alpine, so I can only determine the highest elevation that is not under a tree or bush. (Thankfully, Google Earth lets me see which areas aren't underneath trees.) However, we can all recall bagging several peaks where we had to duck under a tree branch to touch the HP itself. Typically, these sub-tree HPs have been no more than a few feet high, so I would feel 99% confident that a peak isn't ranked if its LiDAR prominence is 295'. Fortunately, saddles are typically broad and gentle, and I usually have very high confidence in their LiDAR elevation (+/-1 ft), so this uncertainty mostly comes into play for the summit elevation.

Re: LiDAR Peak Analysis: What It Takes

Post by JoeGrim »

It would be nice to gather a database of photos of the hand-leveled summit HP, along with rock cairns, and someone standing beside them for scale. That way we would have photographic evidence to go along with what we see in the LiDAR data. I would suggest we add these to the photos of the peak that are already on LoJ, but I don't want to create a clutter of extra photos either. What do you all think?

Re: LiDAR Peak Analysis: What It Takes

Post by bdloftin77 »

JoeGrim wrote: Mon Feb 07, 2022 6:56 am I really like Eli's proposal of a new LiDAR soft-ranked category. [...]
I like it as well. Especially with peaks like West Eolus (which might actually be ranked with lidar probably missing the high point), I've thought that category would be really useful. And good point about vegetated summits - Teresa's pointed out to me that lidar is likely not catching the truly highest point on such summits - just hopefully getting in the vicinity.

As Eli mentioned, even though we want exact results, there's still error/uncertainty in lidar: the ~10 cm (though often less) vertical error, the spacing between pulses (typically around 1 foot, though I've often seen 3.5' gaps), the pulse size, and the post-processing by the vendors.
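As a rough cross-check on those spacing numbers, nominal point spacing and point density are related by spacing ≈ 1/sqrt(density), so the two Relay surveys imply noticeably different average gaps between points (a simple average that ignores scan-line geometry and swath overlap):

[code]
# Nominal point spacing implied by point density: spacing ~ 1 / sqrt(density).
# Densities are the ones quoted above for the two Relay Peak surveys; this simple average
# ignores scan-line geometry and swath overlap, so real gaps can be larger in places.
import math

M_TO_FT = 3.28084
for label, pts_per_m2 in (("2010 Lake Tahoe", 13.5), ("2018 NoCAL Wildfires", 5.8)):
    spacing_m = 1.0 / math.sqrt(pts_per_m2)
    print(f"{label}: ~{spacing_m:.2f} m (~{spacing_m * M_TO_FT:.1f} ft) between last returns")
[/code]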

I'd initially thought only peaks at 299' should be put into that category, but perhaps a wider range like 295-299' or 297-299' might be useful. Just like past soft-ranked peaks (e.g., from maps with 10' contour intervals), not all peaks within that range would be forced to be soft-ranked, as long as the analyzer is confident the summit is unranked. Though that could be up for debate: whether peaks are automatically soft-ranked if they fall into that range, or only when there is uncertainty from the analyzer's standpoint.

I mentioned this proposal to John, but I haven't talked with him in detail about a new "lidar soft-ranked" status, so I'm not sure what his thoughts are. If he chooses not to add a new soft-ranked category, just let the climber beware! Peaks that are barely unranked might actually be ranked, and it'd be good to climb them just in case. As long as we live in this world, there will be uncertainty and error, however small.
JoeGrim wrote: Mon Feb 07, 2022 6:59 am It would be nice to gather a database of photos of the hand-leveled summit HP, along with rock cairns, and someone standing beside them for scale. [...]
I think that would be super useful! I've often wished there were good close-up summit photos as I've been analyzing peaks. Especially all in one place. This makes decisions a lot easier regarding which point to choose. Not sure where the best repository location would be.

Re: LiDAR Peak Analysis: What It Takes

Post by Boggy B »

Eli Boardman wrote: Sat Feb 05, 2022 11:41 pm [...] As it turns out, with the 2010 data, the summit location is almost identical (less than 1 ft. from the 2018 summit point), but the elevation is about 2 ft. higher, and the peak ends up with exactly 300 ft. of prominence.
Interesting.

Even though the 2010 and 2018 highpoints are only a foot apart, it looks like the 2010 HP occupies one of the larger gaps in the 2018 coverage. Maybe just over a square foot. I understand you're attributing this to laser maths, but is it possible that is/was a cairn? Or are nearby overlapping summit points also higher in the 2010 data?

Also, were the 2010 survey criteria as stringent (i.e. no snow) as in 2018? Is it possible this survey was reordered, or is the 2018 coverage not comprehensive in this area?

Not challenging your conclusions, just curious about other potential factors.

[EDIT] Actually you said your line is 3m, so the gap in 2018 looks more like a square meter.

Re: LiDAR Peak Analysis: What It Takes

Post by Eli Boardman »

Boggy B wrote: Mon Feb 07, 2022 9:24 am [...] I understand you're attributing this to laser maths, but is it possible that is/was a cairn? Or are nearby overlapping summit points also higher in the 2010 data? Also, were the 2010 survey criteria as stringent (i.e. no snow) as in 2018?
I managed to find some pictures of the summit. This one is from AllTrails, credit Kenneth Lund 6/26/21. It seems like there is indeed a pretty large block at what appears to be the summit. The highest points in the 2018 survey (on the left side of the gap) are still quite a bit lower than the closest 2010 points.
relay.jpg
The 2010 survey was acquired in August, so there was likely no appreciable snow in the Tahoe region. Many of the surveys posted on TNM were originally flown for different purposes--the 2010 data appears to have been some sort of high-resolution land surface survey specifically focused on the Tahoe drainage basin, as compared to the 2018 survey that was flown in response to wildfires (presumably to measure the loss of forest and any geomorphological changes). The 2018 survey covers a much larger area than the 2010 survey, which is probably why the 2018 survey had the airplane flying at a higher altitude to more quickly cover the area of interest around the wildfire damage. Larger survey area --> (usually) higher flight altitude --> larger laser footprint --> more spatial averaging in each shot --> lower accuracy summit elevations.

Re: LiDAR Peak Analysis: What It Takes

Post by bdloftin77 »

Boggy B wrote: Mon Feb 07, 2022 9:24 am [...] Even though the 2010 and 2018 highpoints are only a foot apart, it looks like the 2010 HP occupies one of the larger gaps in the 2018 coverage.
I saw this too. I wonder if the high point was just in the scanner line gap, or if it's other factors, as Boggy B mentioned. But it could definitely be an elevation mismatch between the two flights.

Looks like there's a 2017 tile covering the Relay Peak summit as well, claiming to be Quality-1. If you haven't already, you might check out that one too.

For USGS lidar accuracy in general:
See these tables: https://www.usgs.gov/ngp-standards-and- ... ion-tables
And here's some more on what those tables mean, and how they were determined: https://www.usgs.gov/ngp-standards-and- ... validation
The USGS is aiming for Quality-2 data at minimum.

Though the interswath accuracy might only apply within a single project rather than to data acquired over multiple years, it looks like the USGS is trying to maintain good consistency between overlapping datasets (see table 2). But weird stuff can definitely happen. I came across a situation similar to both your CA peak (https://listsofjohn.com/peak/58656) and to Relay. For Mt Susan, the summit was fine, but the saddle area was covered by two completely different flights, and one flight's points were about 0.4-0.6 feet lower than the other's. Rounding the summit elevation and the saddle elevation separately and then subtracting would yield 300' of prominence from one flight and 299' from the other. Fortunately, if I subtracted 299.5' from the absolute (unrounded) summit elevation, all the saddle candidates were below that level, so subtracting first and then rounding yields 300' from either flight.
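To make the rounding-order point concrete, here's a tiny sketch with made-up elevations (not the actual Mt Susan numbers) showing how round-then-subtract and subtract-then-round can disagree right at the 300' cutoff:

[code]
# Rounding order matters near the 300 ft cutoff.  These elevations are made up to
# illustrate the point; they are NOT the actual Mt Susan values.
summit_ft = 12403.4
saddle_ft = 12103.8                       # one flight's saddle; the other flight sits ~0.5 ft lower

round_then_subtract = round(summit_ft) - round(saddle_ft)   # 12403 - 12104 = 299
subtract_then_round = round(summit_ft - saddle_ft)          # round(299.6)   = 300
print(f"round, then subtract: {round_then_subtract} ft")
print(f"subtract, then round: {subtract_then_round} ft")

# The check described above: if (summit - 299.5) is still above every saddle candidate,
# then summit - saddle > 299.5 for all of them, so the prominence rounds to >= 300 ft
# regardless of which flight's saddle points you use.
saddle_candidates_ft = [12103.8, 12103.6, 12103.3]
threshold = summit_ft - 299.5
print("prominence rounds to >= 300 ft:", all(s < threshold for s in saddle_candidates_ft))
[/code]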
Relay Peak Coverage
Relay Peak.png

Re: LiDAR Peak Analysis: What It Takes

Post by Eli Boardman »

bdloftin77 wrote: Mon Feb 07, 2022 10:44 am Looks like there's a 2017 tile covering the Relay Peak summit as well, claiming to be Quality-1. If you haven't already, you might check out that one too.
Thanks, I hadn't even noticed that the Reno-Carson data barely extends that far!

Here's an animated GIF showing the same color ramp (3147-3148.5 m) and exactly the same scale/location for all three pointclouds. It looks like the 2018 survey is the outlier, likely because it was flown as a quick response to the wildfires, meaning they wanted to cover a large area rapidly.


Re: LiDAR Peak Analysis: What It Takes

Post by bdloftin77 »

Eli Boardman wrote: Mon Feb 07, 2022 11:57 am Thanks, I hadn't even noticed that the Reno-Carson data barely extends that far! [...]
No problem. Glad the 2017 one agrees better! And that's a massive, unexpected pile of rocks at the summit, especially the large one on the right.

Yep, that highest point in the 2018 survey does seem incorrect based on the other two flights. It seems like it should definitely be in the orange elevation range.