Performance analysis – III – Douglas-Peucker

The next measurement was done after implementing the Douglas-Peucker algorithm for filtering the line string. The principle behind this algorithm is pretty simple: after choosing an epsilon value (the smaller the epsilon, the higher the precision and the lower the compression), you apply the following steps:

1) find the point farthest from the segment determined by the first and the last point; let it have the index X

2) if this greatest distance is smaller than epsilon, return the line string formed from just the first and the last node

3) otherwise we know that X lies at a distance greater than epsilon, so we solve recursively for [start…X] and for [X…end], and then join the solutions obtained from these two (see the sketch after this list)
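To make the recursion concrete, here is a minimal planar sketch of these three steps in C++. It works on a plain std::vector<QPointF> instead of Marble's GeoDataLineString, and perpendicularDistance(), douglasPeucker() and simplify() are illustrative names for this post, not the actual Marble code:

#include <QPointF>
#include <vector>
#include <cmath>

// Distance from p to the infinite line through a and b.
static qreal perpendicularDistance( const QPointF &p, const QPointF &a, const QPointF &b )
{
    const qreal dx = b.x() - a.x();
    const qreal dy = b.y() - a.y();
    const qreal length = std::hypot( dx, dy );
    if ( length == 0 )
        return std::hypot( p.x() - a.x(), p.y() - a.y() );
    return std::fabs( dy * p.x() - dx * p.y() + b.x() * a.y() - b.y() * a.x() ) / length;
}

static void douglasPeucker( const std::vector<QPointF> &points, int first, int last,
                            qreal epsilon, std::vector<QPointF> &result )
{
    // Step 1: find the point farthest from the segment [first, last].
    qreal maxDistance = 0;
    int x = first;
    for ( int i = first + 1; i < last; ++i ) {
        const qreal d = perpendicularDistance( points[i], points[first], points[last] );
        if ( d > maxDistance ) {
            maxDistance = d;
            x = i;
        }
    }

    // Step 2: everything lies within epsilon, so keep only the first endpoint
    // (the wrapper below appends the very last point once, at the end).
    if ( maxDistance < epsilon ) {
        result.push_back( points[first] );
        return;
    }

    // Step 3: solve recursively for [first..X] and [X..last] and join.
    douglasPeucker( points, first, x, epsilon, result );
    douglasPeucker( points, x, last, epsilon, result );
}

static std::vector<QPointF> simplify( const std::vector<QPointF> &points, qreal epsilon )
{
    if ( points.size() < 2 )
        return points;
    std::vector<QPointF> result;
    douglasPeucker( points, 0, static_cast<int>( points.size() ) - 1, epsilon, result );
    result.push_back( points.back() );
    return result;
}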

The average-case complexity of the algorithm is O(N log N), but its worst case is O(N^2): if the farthest point always lies right next to an endpoint, each recursion step removes only one point from consideration, so the recursion depth becomes linear.

Determining the epsilon according to the current detail level was pretty fun, since I had to test all kinds of formulas. Right now it looks like this (still provisional 🙂):

qreal GeoDataLineString::epsilonFromDetailLevel( int detailLevel ) const
{
    // Very short line strings are not worth filtering at all.
    if ( p()->m_vector.size() < 30 )
        return 0;

    if ( detailLevel <= 1 )
        return 50000;
    if ( detailLevel == 2 )
        return 20000;
    // Linear decay from 15000 down to 2500 for levels 3..8 ...
    if ( detailLevel > 2 && detailLevel <= 8 )
        return 17500 - 2500 * ( detailLevel - 2 );
    // ... then from 1800 down to 400 for levels 9..11 ...
    if ( detailLevel > 8 && detailLevel <= 11 )
        return 2500 - 700 * ( detailLevel - 8 );

    // ... and a halving per level from 400 onwards, for levels >= 12.
    return 400 / ( 1 << ( detailLevel - 12 ) );
}
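For a quick sanity check of the curve, the formula can be dropped into a standalone snippet (the size() < 30 guard is omitted since there is no line string around here, and plain double replaces qreal so it compiles without Qt):

#include <iostream>

double epsilonFromDetailLevel( int detailLevel )
{
    if ( detailLevel <= 1 )
        return 50000;
    if ( detailLevel == 2 )
        return 20000;
    if ( detailLevel > 2 && detailLevel <= 8 )
        return 17500 - 2500 * ( detailLevel - 2 );
    if ( detailLevel > 8 && detailLevel <= 11 )
        return 2500 - 700 * ( detailLevel - 8 );
    return 400.0 / ( 1 << ( detailLevel - 12 ) );
}

int main()
{
    // Prints the epsilon for every OSM detail level (0..18):
    // 50000 at level 0 down to 6.25 at level 18.
    for ( int level = 0; level <= 18; ++level )
        std::cout << level << " -> " << epsilonFromDetailLevel( level ) << '\n';
    return 0;
}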

As a start, I implemented the “zoom-caching” version of Douglas-Peucker (sketched after the list below), and it has one major disadvantage and one major advantage:

  • the disadvantage is that creating the cache takes between 1 and 3 seconds (less for lower detail levels), and it becomes annoying because it happens at each zoom change
  • the advantage is that the time for creating polygons is 1-2 ms for each detail level smaller than 10 (the Atlas map has 8 detail levels; OSM is the only one with more, 18 to be precise)
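To give an idea of what the “zoom-caching” variant looks like, here is a rough sketch. The class and member names are made up for this post, simplify() is the sketch from earlier, and the actual Marble code differs:

#include <QPointF>
#include <vector>

// Keeps the filtered polyline for the current detail level only, so the
// expensive rebuild (the 1-3 s mentioned above) happens once per zoom
// change, and every paint at the same level reuses the cached result.
class ZoomCache
{
public:
    const std::vector<QPointF> &polylineFor( const std::vector<QPointF> &source,
                                             int detailLevel, qreal epsilon )
    {
        if ( detailLevel != m_cachedLevel ) {
            m_cachedLevel = detailLevel;
            m_cached = simplify( source, epsilon ); // simplify() from the sketch above
        }
        return m_cached;
    }

private:
    int m_cachedLevel = -1;
    std::vector<QPointF> m_cached;
};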

Right now, I am working on a method to use Douglas-Peucker with no cache rebuilding at zooming, by combining this method with the last one presented in the previous post.

Here are some pictures of the line string on different zoom levels, with / without filtering:

Level 2, Filtered – built with 44 nodes (out of the total of 624072)

Level 2, Unfiltered

Level 3, Filtered – built with 71 nodes (out of the total of 624072)

Level 3, Unfiltered

Level 5, Filtered – built with 134 nodes (out of the total of 624072)

Level 5, Unfiltered

Level 7, Filtered – built with 296 nodes (out of the total of 624072)

Level 7, Unfiltered

Level 9, Filtered – built with 848 nodes (out of the total of 624072)

Level 9, Unfiltered

Seeing these pictures, I realize that I could use some more nodes for levels >= 7 – what do you think?

 


2 Responses to Performance analysis – III – Douglas-Peucker

  1. Magnus says:

    Good work, but I think levels >= 5 could do with more nodes.

  2. Florian Reinhard says:

What’s the performance gain, unfiltered vs. filtered, if one ignores the 1-3 s performance loss for the filtering itself?
