Bye bye Plain Map

Hello again, Marble fans! As August approaches, the replacement of the Atlas map is progressing and producing pretty good results. Last time I posted, I had managed to open SHP files and make them work with filtering. Right now, Marble has a fully functional Topographic Map (the same as the old Plain Map, except that the vector layers are replaced with geodata layers).

The first step needed in order to get here was to create a new DGML file for the new Topographic map, one that loads the SHP and KML files which make up the map. After this, I had to manually create an Ocean Layer to make the Earth blue, since the old map had one of its own. After following these steps, the Earth looked… a little bit pink 🙂 (the colors are hardcoded for now, since SHP files do not define colors and styles, as KML files do)


Back to serious business 🙂 Brush applied, same color as Plain Map, but still hardcoded. 

Now, to recreate the functionality of the DGML vector style, I had to make the pen and the brush customizable from the DGML file (meaning that the colors of the map can be set from the DGML, using tags, with nothing to change in the code). The basic principle was to take the penStyle and brushStyle obtained by the DGML parsers and assign them to the GeoDataDocuments created from parsing the SHP maps; this meant dealing a lot with four files: MarbleMap.cpp, MarbleModel.cpp, FileManager.cpp and FileLoader.cpp. Briefly, I obtained the style defined in the DGML file inside MarbleModel.cpp and transmitted it up to FileLoader.cpp, where I applied it to the documents. Here’s how the map looks right now (with colors and pen styles customizable, yay 🙂 ):
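As an aside, the plumbing described above can be sketched roughly like this. The types below are simplified, hypothetical stand-ins for the real Marble classes (the actual code threads a style object from MarbleModel through FileManager to FileLoader), so treat this as an illustration of the idea, not of the real API:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical stand-ins for QPen/QBrush and the GeoData classes.
struct PenStyle   { std::string color; };
struct BrushStyle { std::string color; };

struct GeoDataDocument {
    PenStyle   pen;    // hypothetical members; the real class stores richer style data
    BrushStyle brush;
};

// The idea: the DGML parser yields the pen/brush defined via tags,
// and the file loader applies them to every document parsed from a SHP file.
void applyDgmlStyle( const PenStyle &pen, const BrushStyle &brush,
                     std::vector<GeoDataDocument> &shpDocuments )
{
    for ( GeoDataDocument &doc : shpDocuments ) {
        doc.pen   = pen;    // border color now comes from the DGML file
        doc.brush = brush;  // fill color too: nothing hardcoded anymore
    }
}
```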

Still, we’ll have to deal with a bug, coming from the SHP parser I believe. If you look carefully in the map below, parts of some border lines do not get displayed… Any ideas why? 🙂

Next, I have to add the Texture Layers to this map and say bye-bye to the old Atlas Map too 🙂

Posted in Uncategorized | 4 Comments

Yuppy, SHP format is rendered properly!

The last week was pretty full of Marble for me, and my two main targets were to implement the last (and best 🙂 ) version of Douglas-Peucker filtering and to solve the Antarctica bug.

Douglas-Peucker works as expected: the cache is created only once, right after opening the .KML file, and takes around 3 seconds for that Appalachian Trail line string. Zooming in, moving around the map and other frequent operations take less than 4 ms for detail levels <= 12. After 12, the times increase a little, but they are still better than in the non-filtered version.

Now, about the Antarctica bug: it was by far the hardest task I have had to do for Marble until now because, as the name says, it was a bug fix, and you never know what to expect from bug fixes. The problem was this: in the Spherical projection, a line string that contained Antarctica (its most important properties being that it is closed, that it contains the pole and that it intersects the dateline) was rendered properly, but in the Cylindrical projections it looked like this (obviously wrong):

The thing I was suggested to try was to find the southernmost dateline crossing of the line string and insert two additional points between the crossing points, (-180, -90) and (180, -90), which should create the link and close the polygon properly. After adding the necessary features to GeoDataLineString and inserting the points in AbstractProjection::crossDateLine, I spent more than 4 days trying different approaches, because Antarctica didn’t seem to want to work. After noticing that the wrongly drawn points were actually not random, but points translated 360 degrees to the left / right of their correct position, I played a bit with the normalizations and managed to make the map look right:

As I am getting to the heart of my project, I received a bonus today, after trying to load the first SHP files:

They work and it looks like Antarctica is rendered properly!!! Yuppy! 🙂

[Later Edit]: Last night, SHP rendering didn’t work with linestring filtering, but now it does 🙂 A toast for a faster Marble! 😀

Posted in Uncategorized | 8 Comments

Performance analysis – III – Douglas-Peucker

The next measurement was done after implementing the Douglas-Peucker algorithm for filtering the line string. The principle behind the algorithm is pretty simple: after choosing an epsilon value (the smaller the epsilon, the higher the precision and the lower the compression), you apply the following steps:

1) find the point farthest from the segment determined by the first and the last point; let it have the index X

2) if this greatest distance is smaller than epsilon, return the line string formed only from the first and the last node.

3) otherwise, we know that X has a distance greater than epsilon, so we solve recursively for [start…X] and for [X…end], and then join the two solutions.

The average case complexity of the algorithm is O(N log N), but its worst case is O(N^2).
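The three steps translate almost directly into code. Here is a minimal sketch using plain 2D perpendicular distance; the real implementation works on geo-coordinates, where the distance measure is different, and the function names here are mine, not Marble's:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Point { double x, y; };

// Perpendicular distance from p to the segment (a, b).
static double distanceToSegment( const Point &p, const Point &a, const Point &b )
{
    const double dx = b.x - a.x, dy = b.y - a.y;
    const double len = std::hypot( dx, dy );
    if ( len == 0.0 )
        return std::hypot( p.x - a.x, p.y - a.y );
    return std::fabs( dy * p.x - dx * p.y + b.x * a.y - b.y * a.x ) / len;
}

// Steps 1-3 from the post: find the farthest point X, stop if it is
// within epsilon, otherwise recurse on [start..X] and [X..end].
static void douglasPeucker( const std::vector<Point> &line, size_t start, size_t end,
                            double epsilon, std::vector<Point> &out )
{
    double maxDist = 0.0;
    size_t farthest = start;
    for ( size_t i = start + 1; i < end; ++i ) {
        const double d = distanceToSegment( line[i], line[start], line[end] );
        if ( d > maxDist ) { maxDist = d; farthest = i; }
    }

    if ( maxDist >= epsilon && farthest != start ) {
        douglasPeucker( line, start, farthest, epsilon, out ); // step 3
        douglasPeucker( line, farthest, end, epsilon, out );
    } else {
        out.push_back( line[start] ); // step 2: keep only the endpoints
    }
}

// Assumes the line has at least two nodes.
std::vector<Point> simplify( const std::vector<Point> &line, double epsilon )
{
    std::vector<Point> out;
    douglasPeucker( line, 0, line.size() - 1, epsilon, out );
    out.push_back( line.back() ); // the last node is always kept
    return out;
}
```

Each node ends up in the output exactly once, which is why the two recursive calls can simply be "joined" by appending to the same vector.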

Determining the epsilon according to the current detail level was pretty fun, since I had to test all kinds of formulas. Right now it looks like this (still provisional 🙂 ):

qreal GeoDataLineString::epsilonFromDetailLevel( int detailLevel ) const
{
    // Not worth filtering really short line strings.
    if ( p()->m_vector.size() < 30 )
        return 0;

    if ( detailLevel <= 1 )
        return 50000;
    if ( detailLevel == 2 )
        return 20000;
    if ( detailLevel > 2 && detailLevel <= 8 )
        return 17500 - 2500 * ( detailLevel - 2 );
    if ( detailLevel > 8 && detailLevel <= 11 )
        return 2500 - 700 * ( detailLevel - 8 );

    return 400 / ( 1 << ( detailLevel - 12 ) );
}

As a start, I implemented the “zoom-caching” version of Douglas-Peucker, and it has one major advantage and one major disadvantage:

  • the disadvantage is that creating the cache takes between 1 and 3 seconds (less for lower detail levels), and this becomes annoying because it happens at each zoom change
  • the advantage is that creating the polygons takes 1-2 ms at each detail level smaller than 10 (the Atlas map has 8 detail levels; OSM is the only one with more, 18 to be precise)

Right now, I am working on a method to use Douglas-Peucker with no cache rebuilding at zooming, by mixing this method and the last one presented in the previous post. 

Here are some pictures of the line string on different zoom levels, with / without filtering:

Level 2, Filtered – built with 44 nodes (out of the total of 624072)

Level 2, Unfiltered

Level 3, Filtered – built with 71 nodes (out of the total of 624072)

Level 3, Unfiltered

Level 5, Filtered – built with 134 nodes (out of the total of 624072)

Level 5, Unfiltered

Level 7, Filtered – built with 296 nodes (out of the total of 624072)

Level 7, Unfiltered

Level 9, Filtered – built with 848 nodes (out of the total of 624072)

Level 9, Unfiltered

Seeing these pictures, I realize that I could use some more nodes for levels >= 7. What do you think?


Posted in Uncategorized | 2 Comments

Performance analysis – II

The times obtained for the entire 600,000-node line string are decent, but luckily they can be improved. The ultimate purpose of this optimization is to apply the Douglas-Peucker filtering algorithm to the line, according to the zoom level. This algorithm is a method of making the line look almost the same with fewer points, the number of points depending on a chosen epsilon value, which in our case is going to be influenced by the zoom level.

All these filtering methods have to be implemented in the GeoDataLineString class, and the key to good performance is caching, since re-creating the filtered line string each time an element is needed would be wasteful. There are two ways of caching / updating the information: update when the zoom changes, or update when the line string changes. Of course, the one which updates when the zoom changes is slower, but it has the advantage of being easier to implement.

Before getting to the algorithm itself, we applied two naive filtering methods, in order to compare results and to make the transition easier.

The first one is fairly easy to implement and understand: it just keeps every 10th point of the line string, and re-creates the filtered line string each time the zoom changes (not necessary for correctness, but useful for obtaining a realistic time measurement). Here are its results:

Create polygons time: 46 ms
Time elapsed: 131 ms Nodes: 62409

Create polygons time: 18 ms
Time elapsed: 77 ms Nodes: 62409

Create polygons time: 9 ms
Time elapsed: 124 ms Nodes: 62409

Create polygons time: 11 ms
Time elapsed: 66 ms Nodes: 62409

As you can see, the time for creating polygons increased for the first call (when the cache is created), but became 2-3 times smaller for the subsequent calls. When the zoom changes, the cache is recreated and we again obtain a time of ~45 ms for the polygon creation.

The next method again uses every 10th element, but in a different way: an array of 0’s and 1’s is kept, where a 0 on the i-th position means that the i-th node in the line string will not get displayed, and a 1 means that it will. Obviously, this array stays the same no matter what the zoom level is, so the caching needs to be redone only when the line string changes. Here are some results:

Create polygons time: 22 ms
Time elapsed: 79 ms

Create polygons time: 21 ms
Time elapsed: 187 ms

Create polygons time: 13 ms
Time elapsed: 88 ms

Create polygons time: 16 ms
Time elapsed: 105 ms

We can see that the polygon creation times are comparable with the previous ones, but in the long run this method will work faster, since the caching is done less often.

Next, in order to get closer to the Douglas-Peucker approach, we extended the previous method beyond 0-1, so we could take detail levels (zoom levels) into consideration too. Obviously, at a lower detail level (zoomed out), fewer points need to be seen than at a higher detail level. The line string’s points get numbers from 1 to 8 assigned, according to the following pattern: 1 4 6 3 7 2 5 8 1 4 6 3 7 2 5 8 […] 1 4 6 3 7 2 5 8 1. When the filtered line string is requested with a detail level of x, only the points with an associated number <= x are displayed (for example, if the detail level is set to 3, then only the first, the fourth, the sixth, the ninth point and so on get displayed). Obtaining the current detail level was a bit of an issue, but after playing a little with ViewportParams::angularResolution() I managed to figure it out. Here are the results (when the filtering method was called with a detail level set to 1):

Create polygons time: 41 ms
Time elapsed: 135 ms

Create polygons time: 39 ms
Time elapsed: 104 ms

Create polygons time: 34 ms
Time elapsed: 153 ms

Create polygons time: 37 ms
Time elapsed: 86 ms

As you can see, the times are higher, possibly because this filtering discards only 7/8 of the elements, instead of 9/10 like the previous ones, or because more memory is needed in order to do it.
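The level-based visibility test described above boils down to one lookup. The pattern array is the one from the post; the helper name is mine, for illustration:

```cpp
#include <cassert>
#include <cstddef>

// Each node gets a number from the repeating pattern below; a node is
// displayed at detail level x only if its number is <= x. Level 1 keeps
// every 8th node, level 8 keeps everything.
const int pattern[8] = { 1, 4, 6, 3, 7, 2, 5, 8 };

bool isVisible( std::size_t nodeIndex, int detailLevel )
{
    return pattern[nodeIndex % 8] <= detailLevel;
}
```

At detail level 3 this keeps the nodes at indices 0, 3, 5, 8, … which is exactly the "first, fourth, sixth, ninth and so on" from the example above.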

Douglas-Peucker is next 🙂

Posted in Uncategorized | Leave a comment

Performance analysis

As a preparatory step for changing the Atlas map, Marble’s vector graphics painter, the GeoPainter, needs to be tested and optimized as much as possible, by loading big KML files, using different filtering methods and comparing the results. Keeping in mind that even the longest and hardest journeys start with a single step, guess what I had to do first: of course, a tutorial 🙂

This one teaches you how to load a KML file into a MarbleWidget. The process is quite easy once you know a few tricks regarding the MarbleWidget (a good way to learn them is to go through all the tutorials); you can see here how to load a file, add it to Marble’s tree model and display it.

Once I learned how to load files into the widget, tackat prepared a KML file containing a huge line string (around 600,000 nodes) in order to test GeoPainter’s speed. The line string represents the Appalachian Trail and looks like this (the blue line):

For measuring GeoPainter’s performance, I had to “install” two timers (thank you, QTime) inside the GeoPainter class: one for the entire drawPolyLine() method, and the other for the call to createPolygonsFromLineString() inside drawPolyLine(), since that is the major time consumer.
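The instrumentation is just two timestamps around the expensive call. The actual code uses QTime, but the same idea can be sketched with std::chrono (so the snippet stays self-contained):

```cpp
#include <cassert>
#include <chrono>

// Take a timestamp before and after the wrapped call and report
// milliseconds, which is what the QTime-based timers in GeoPainter do.
template <typename Fn>
long long elapsedMs( Fn &&fn )
{
    const auto start = std::chrono::steady_clock::now();
    fn();
    const auto end = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::milliseconds>( end - start ).count();
}
```

Usage would look like `long long ms = elapsedMs( [&]{ painter.drawPolyLine( lineString ); } );` followed by printing the result.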

The first test I did was on the unfiltered line string, while the following ones employed different, more or less sophisticated, algorithms. So I set my widget to the Spherical projection and the Plain map, at a zoom level of 1500, and started testing. The drawing times I obtained (with no music in the background; YouTube seems to keep my processors too busy for accurate tests 🙂 ) can be seen below. Time elapsed represents the time for the entire drawPolyLine() method, Create polygons time shows how long the call to createPolygonsFromLineString() lasted, and Nodes is the number of points in the line string.

Create polygons time: 25 ms
Time elapsed: 111 ms Nodes: 624072

Create polygons time: 41 ms
Time elapsed: 93 ms Nodes: 624072

Create polygons time: 34 ms
Time elapsed: 169 ms Nodes: 624072

Create polygons time: 27 ms
Time elapsed: 112 ms Nodes: 624072

Create polygons time: 26 ms
Time elapsed: 138 ms Nodes: 624072

In the next posts I am going to present the (better yaay 🙂 ) results obtained by using some filtering methods on the line string.


Posted in Uncategorized | 1 Comment


Map projections are methods of representing the surface of a sphere or another three-dimensional object (e.g. the Earth) on a plane. Marble has three different projections:

1. Spherical Projection

2. Equirectangular Projection

3. Mercator Projection

The last two, as you can see, are pretty similar, and these similarities also reflect in the code. Each of the three projections is represented by a class with its own particularities. For example, the Spherical map deals with the horizon (because only one side of the Earth can be seen at any moment), while the Cylindrical maps (Mercator + Equirectangular) deal with the International Date Line, and so on.

There is an abstract class, AbstractProjection, that contains all the methods which are supposed to be common for all the projections, and therefore is inherited by each of SphericalProjection, MercatorProjection and EquirectProjection. Since Equirectangular and Mercator projections are so similar, and so different from Spherical projection, many corner cases have arisen in different methods of AbstractProjection. My job was to create a new class, CylindricalProjection, which would inherit AbstractProjection and would be inherited by EquirectProjection and by MercatorProjection, containing all the things Equirect and Mercator have in common.

A major difficulty I had to overcome (by the way, thanks tackat and idis, my mentors, for all the help with this issue) was implementing the shared d-pointer. Briefly, one of KDE’s coding policies is that you are not allowed to have private members in public classes, so if they are needed, you have to implement a class called BaseClassNamePrivate, which incorporates the necessary private members. If the class hierarchy is big enough, then you are encouraged to use the shared d-pointer, which makes access to all the members much easier but, as I have experienced, is pretty tricky to declare and get working.
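In a nutshell, the shared d-pointer looks like this. This is a simplified sketch of the KDE convention using the projection names as examples, not the actual Marble class layout:

```cpp
#include <cassert>

// The private part of the base class; subclasses extend it.
class AbstractProjectionPrivate {
public:
    virtual ~AbstractProjectionPrivate() {}
    int maxLat = 90; // some state every projection needs
};

class AbstractProjection {
public:
    AbstractProjection() : d_ptr( new AbstractProjectionPrivate ) {}
    virtual ~AbstractProjection() { delete d_ptr; }
    int maxLat() const { return d_ptr->maxLat; }
protected:
    // Subclasses hand in their own, richer Private object: that single
    // pointer is "shared" down the whole hierarchy.
    explicit AbstractProjection( AbstractProjectionPrivate *dd ) : d_ptr( dd ) {}
    AbstractProjectionPrivate *const d_ptr;
};

class CylindricalProjectionPrivate : public AbstractProjectionPrivate {
public:
    bool repeatsX = true; // something only cylindrical projections need
};

class CylindricalProjection : public AbstractProjection {
public:
    CylindricalProjection() : AbstractProjection( new CylindricalProjectionPrivate ) {}
    bool repeatsX() const {
        return static_cast<CylindricalProjectionPrivate *>( d_ptr )->repeatsX;
    }
};
```

The tricky part is exactly that protected constructor: every class in the chain has to forward its own Private object upward, and every accessor has to cast d_ptr back down to the right Private type.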

That was my second task, and the first major one that actually involved dealing with the code (a lot!). The next one is a tutorial again, see you soon!

Posted in Uncategorized | Leave a comment

Exemplification of Marble GeoData: GeoDataLineString and GeoPainter

In order to display features such as placemarks, images, polygons, textual descriptions and so on, Marble implements a set of classes, the GeoData ones, with a structure similar to Google’s KML format. If you look inside the Marble Namespace Reference, most of the classes whose names start with GeoData implement some of the above-mentioned features: placemarks (specific points on the map, which can contain text tags, icons, pictures, etc.), line strings (open polygon lines), linear rings (closed polygon lines), documents (the container for everything in a KML file) and so on.

The GeoPainter allows drawing geometric primitives on the map (lines, polygons, circles, rectangles, text, images). My first task was an accommodation one: creating a tutorial for drawing GeoDataLineStrings using the GeoPainter.

Here is the tutorial, good luck in understanding it 😉

Posted in Uncategorized | Leave a comment

Google Summer of Code – my project at Marble

Hello everybody! I am Cezar, 19, from Romania, a future freshman at Yale University, where I will (probably) major in Computer Science.

Even though I started it a little bit too late, this blog is going to present the progress I have made and will be making on my project during Google Summer of Code 2012.

The project I am working on belongs to Marble, part of the KDE suite of programs, which is an impressive open-source Virtual Globe and World Atlas with lots of features. In order to introduce you to the heart of my project, you will need to know that Marble uses several Map Themes:

  • the Atlas Map, which is a geopolitical map that also contains basic relief
  • the Satellite View map, which is similar to the Atlas one, except that it shows a satellite view of the Earth
  • the Open Street Map, which is an impressively detailed map (touristic, and not only), based on this project

The purpose of my GSoC 2012 project is to replace the Atlas map, since it has several problematic aspects:

  1. it is based on a really old and outdated dataset (MWDBII), whose last update was made 20 years ago
  2. it uses both vectors (for borders, coastlines, etc.) and JPG images (for terrain)
  3. the zooming is strongly limited
  4. the selection of a geographical entity (either by the user or by a program) is impossible 

In order to create the next-generation Atlas map, we have decided to employ and integrate the Natural Earth Data open source project, since we believe in its capabilities of supplying our needs and solving the issues present in MWDBII. This replacement would bring multiple advantages: many more features than the current format, an entirely vector-based map, regular updates, bigger scales (so the zoom limitations will improve significantly), and data attributes on the map, such as country codes, population, etc.

Now that you’re at least a little familiar with my project, in the next post I am going to present the work I have done so far, starting with the introductory and accommodation tasks.

All the best 🙂

Posted in Uncategorized | Leave a comment