The times obtained for the entire 600,000-node line string are decent, but fortunately they can still be improved. The ultimate goal of this optimization is to apply the Douglas-Peucker filtering algorithm to the line, according to the zoom level. This algorithm makes the line look almost the same with fewer points – how many points remain depends on a chosen epsilon value, which in our case will be influenced by the zoom level.
All these filtering methods have to be implemented in the GeoDataLineString class, and the key to good performance is caching, since re-creating the filtered line string every time an element is needed would be wasteful. There are two ways of caching/updating the information: update when the zoom changes, or update when the line string changes. The one that updates when the zoom changes is of course slower, but it has the advantage of being easier to implement.
Before getting to the algorithm itself, we applied two naive filtering methods, in order to compare results and to make the transition easier.
The first one is fairly easy to implement and understand: it simply keeps every 10th point of the line string and re-creates the filtered line string each time the zoom changes (not necessary for correctness, but useful for obtaining a meaningful time measurement). Here are its results:
Create polygons time: 46 ms
Time elapsed: 131 ms Nodes: 62409
Create polygons time: 18 ms
Time elapsed: 77 ms Nodes: 62409
Create polygons time: 9 ms
Time elapsed: 124 ms Nodes: 62409
Create polygons time: 11 ms
Time elapsed: 66 ms Nodes: 62409
As you can see, the time for creating polygons increased for the first call (when the cache is created), but became 2-3 times smaller for the subsequent calls. When the zoom changes, the cache is recreated and we again obtain a polygon creation time of ~45 ms.
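The first naive filter can be sketched as follows. This is only an illustration, not the actual Marble code: the `Node` type and the function name are hypothetical stand-ins for `GeoDataCoordinates` and the real `GeoDataLineString` method, and in the real class the result would be stored in a cache that is invalidated on zoom change.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical stand-in for Marble's GeoDataCoordinates;
// the names here are illustrative, not the real Marble API.
struct Node { double lon = 0.0, lat = 0.0; };

// Naive filter: keep every 10th node of the line string.
// In the real class the result would be cached and rebuilt
// whenever the zoom level changes.
std::vector<Node> keepEveryTenth(const std::vector<Node>& line)
{
    std::vector<Node> filtered;
    filtered.reserve(line.size() / 10 + 1);
    for (std::size_t i = 0; i < line.size(); i += 10)
        filtered.push_back(line[i]);
    return filtered;
}
```

For a 600,000-node line string this keeps 60,000 nodes, which matches the node counts in the measurements above (the extra ~2,400 nodes there come from the real data, not from this sketch).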
The next method again uses every 10th element, but in a different way: an array of 0's and 1's is kept, where a 0 in the i-th position means that the i-th node of the line string will not get displayed, and a 1 means that it will. This array stays the same no matter what the zoom level is, so the caching only needs to be redone when the line string changes. Here are some results:
Create polygons time: 22 ms
Time elapsed: 79 ms
Create polygons time: 21 ms
Time elapsed: 187 ms
Create polygons time: 13 ms
Time elapsed: 88 ms
Create polygons time: 16 ms
Time elapsed: 105 ms
We can see that the polygon creation times are comparable to the previous ones, but in the long run this method will work faster, since the caching is done less often.
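The mask-based variant might look like this. Again a sketch with hypothetical names, not the Marble implementation: the point is that `buildMask` depends only on the node count, so it is recomputed only when the line string itself changes, while `applyMask` runs on every draw.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical stand-in for Marble's GeoDataCoordinates.
struct Node { double lon = 0.0, lat = 0.0; };

// Zoom-independent mask: mask[i] == 1 means node i gets displayed.
// Needs recomputing only when the line string changes.
std::vector<char> buildMask(std::size_t nodeCount)
{
    std::vector<char> mask(nodeCount, 0);
    for (std::size_t i = 0; i < nodeCount; i += 10)
        mask[i] = 1;
    return mask;
}

// Apply the cached mask to produce the filtered line string.
std::vector<Node> applyMask(const std::vector<Node>& line,
                            const std::vector<char>& mask)
{
    std::vector<Node> filtered;
    for (std::size_t i = 0; i < line.size(); ++i)
        if (mask[i])
            filtered.push_back(line[i]);
    return filtered;
}
```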
Next, in order to get closer to the Douglas-Peucker approach, we extended the previous one beyond 0-1 values, so we could take detail levels (zoom levels) into consideration too. Obviously, for a lower detail level (zoomed out), fewer points need to be shown than for a higher detail level. The line string's points get numbers from 1 to 8 assigned, according to the following repeating pattern: 1 4 6 3 7 2 5 8 1 4 6 3 7 2 5 8 […] 1 4 6 3 7 2 5 8 1. When the filtered line string is requested with a detail level of x, only the points with an associated number <= x are displayed (for example, if the detail level is set to 3, then only the first, the fourth, the sixth, the ninth point and so on get displayed). Obtaining the current detail level was a bit of an issue, but after playing a little with ViewportParams::angularResolution() I managed to figure it out. Here are the results (when the filtering method was called with a detail level of 1):
Create polygons time: 41 ms
Time elapsed: 135 ms
Create polygons time: 39 ms
Time elapsed: 104 ms
Create polygons time: 34 ms
Time elapsed: 153 ms
Create polygons time: 37 ms
Time elapsed: 86 ms
As you can see, the times are higher, possibly because this filtering removes only 7 out of every 8 elements instead of 9 out of 10 like the previous ones, or because more memory is needed to do it.
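The detail-level scheme above can be sketched like this (again with hypothetical names standing in for the Marble types). Note that the repeating 1 4 6 3 7 2 5 8 pattern with detail level 3 keeps exactly the first, fourth, sixth, ninth point and so on, as described.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical stand-in for Marble's GeoDataCoordinates.
struct Node { double lon = 0.0, lat = 0.0; };

// The repeating pattern from the text: node i gets the value
// pattern[i % 8], and is displayed when that value is <= the
// current detail level. Like the 0/1 mask, this assignment is
// zoom-independent, so it is cached per line string.
static const int pattern[8] = { 1, 4, 6, 3, 7, 2, 5, 8 };

std::vector<Node> filterByDetail(const std::vector<Node>& line,
                                 int detailLevel)
{
    std::vector<Node> filtered;
    for (std::size_t i = 0; i < line.size(); ++i)
        if (pattern[i % 8] <= detailLevel)
            filtered.push_back(line[i]);
    return filtered;
}
```

With detail level 1 only every 8th point survives (1 in 8 kept, versus 1 in 10 for the earlier filters), which is consistent with the slightly higher times measured above.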
Douglas-Peucker is next 🙂