Analysing the influence of linear simplification on the performance of interactive map overlays

17. May 2013

In the last two posts, I described the basic idea of using ArcGeometries for topological linear simplification and how I implemented it with D3 and TopoJSON.

This post focuses on the next test implementation I wrote on that theme…have a look at it here.

This application lets me measure the time it takes to visualise the map overlay. Each time the map is reset for zooming (basic zoom and zoom-to-object; panning is not affected), the overlay has to be recalculated…and that takes time, depending on the granularity (resolution) of the visualised data.

I’ve got three different ways of simplifying and visualising the data…and…I added two different datasets to the application. All together, that makes six different scenarios that I tested.

Three options for simplifying the features:

  • None –> nothing is generalised
  • Full (initial) –> generalised ArcGeometries – the points of each ArcGeometry are filtered directly when it is created
  • Dynamic –> generalised features – the points of each feature are filtered dynamically when visualised
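
The difference between "Full" and "Dynamic" is essentially *where* the point filter runs. A minimal sketch (hypothetical names, not the application's actual code; it assumes each point already carries a precomputed effective area `a`, as a Visvalingam pass over the arcs would produce):

```javascript
// Hypothetical arc: points annotated with a precomputed effective area `a`.
// Anchor (start/end) points get infinite area so they always survive.
const arc = [
  { x: 0, y: 0, a: Infinity },
  { x: 1, y: 2, a: 0.4 },
  { x: 2, y: 1, a: 1.5 },
  { x: 3, y: 0, a: Infinity }
];

// Full (initial): filter once, right when the ArcGeometry is created.
function simplifyArc(points, minArea) {
  return points.filter(p => p.a >= minArea);
}

// Dynamic: keep all points and filter on every redraw instead.
function renderArc(points, minArea) {
  return points.filter(p => p.a >= minArea).map(p => [p.x, p.y]);
}

const simplified = simplifyArc(arc, 1.0);
console.log(simplified.length); // 3 points survive the 1.0 threshold
```

Both strategies produce the same geometry at a given threshold; they differ only in whether the filter cost is paid once or on every reset.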

Two different datasets, based on German administrative units (GeoBasis-BKG):

  • German municipalities – complete dataset (11626 features)
  • German municipalities – clipped to an extent (2545 features) –> [6.273 48.771 9.478 50.44]

I performed 20 completely identical resets, timed each one, and took the arithmetic mean of the measured times.

The resets were:
(maybe uninteresting for you…but useful for me, as I can now throw away my scratch notes :-))

  • click on ‘Bruchsal’
  • zoom out 2x
  • click on ‘Eberbach’
  • click on ‘Weinheim’
  • zoom out 4x
  • click on ‘Bruchsal’
  • zoom out 3x
  • click on ‘Bruchsal’
  • zoom out 3x
  • click on ‘Eberbach’
  • click on ‘Weinheim’

–> including the initial reset that happens when the application loads…that makes 20 resets in total
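
The timing itself is straightforward: wrap the overlay's reset/redraw in a timer, collect one sample per reset, and average at the end. A sketch (hypothetical names, not the application's actual code):

```javascript
// Collect one timing sample per reset of the map overlay.
const samples = [];

function timedReset(redraw) {
  const t0 = Date.now();
  redraw();                        // recalculate + re-render the overlay
  samples.push(Date.now() - t0);   // elapsed milliseconds
}

// Arithmetic mean of the collected samples.
function mean(values) {
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}

// After 20 resets, `samples` holds 20 timings:
// console.log(`average: ${(mean(samples) / 1000).toFixed(2)} s`);
```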

Now…let’s have a look at the average values:

|                                   | No Simplification | Dynamic Simplification | Full (initial) Simplification |
|-----------------------------------|-------------------|------------------------|-------------------------------|
| Complete dataset (11626 features) | 7.79 s            | 3.36 s (~43%)          | 1.34 s (~17%)                 |
| Clipped dataset (2545 features)   | 1.31 s            | 0.55 s (~42%)          | 0.24 s (~18%)                 |

The processing time of a vector data overlay can be reduced to about 43% by dynamically simplifying the dataset when visualising the features as SVG.

Simplifying the vector data initially accelerates the processing drastically…you can reduce the processing time to roughly 18%!
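
The percentages are simply each simplified time divided by the unsimplified baseline:

```javascript
// Ratio of simplified to unsimplified processing time, rounded to
// whole percent, computed from the measured averages above.
function percentOfBaseline(simplified, baseline) {
  return Math.round((simplified / baseline) * 100);
}

console.log(percentOfBaseline(3.36, 7.79)); // → 43 (dynamic, complete dataset)
console.log(percentOfBaseline(1.34, 7.79)); // → 17 (full, complete dataset)
console.log(percentOfBaseline(0.55, 1.31)); // → 42 (dynamic, clipped dataset)
console.log(percentOfBaseline(0.24, 1.31)); // → 18 (full, clipped dataset)
```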

Always remember…this analysis concentrates on a full simplification of the vector data. That means only the anchor (start and end) points of each vector are kept; the rest is filtered out! That is why the term ‘simplification’ is not 100% exact at this point…as…the Visvalingam simplification is always (in each of the three implementations) performed initially, by analysing the vector data and enriching it with the effective area of each point of each arc. Afterwards, the enriched info is used to filter points out!
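
For reference, the effective area of a point in Visvalingam's method is just the area of the triangle it forms with its two neighbours; points whose triangles are small contribute least to the shape. A minimal sketch:

```javascript
// Effective area (Visvalingam): area of the triangle a point `b`
// forms with its neighbours `a` and `c`, via the cross product.
function effectiveArea([ax, ay], [bx, by], [cx, cy]) {
  return Math.abs((bx - ax) * (cy - ay) - (cx - ax) * (by - ay)) / 2;
}

// A nearly collinear point has a tiny effective area,
// so it is among the first to be filtered out:
effectiveArea([0, 0], [1, 0.01], [2, 0]); // ≈ 0.01
```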

But why is it far more effective to simplify the vector data initially than dynamically? The application (and the images above) also include information on:

  • the number of arcs & the number of points (that are contained in these arcs)
  • the number of features, the total number of points & the number of points after filtering

The answer is simple when you compare these values between the initial and the dynamic simplification. The initial simplification reduces the number of points for the whole dataset that will be visualised. The dynamic simplification always processes the complete number of points and only simplifies the visualisation: it loops over every point each time the visualisation changes (i.e. when the map is reset).
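
A toy comparison makes the cost difference concrete (hypothetical data; it just counts how many points each strategy's filter touches over 20 resets):

```javascript
// 1000 points with a precomputed effective area `a`; count filter calls.
const points = Array.from({ length: 1000 }, (_, i) => ({ a: i / 1000 }));
const threshold = 0.5;
let touched = 0;
const keep = p => { touched++; return p.a >= threshold; };

// Initial simplification: one filter pass; every reset reuses the result.
touched = 0;
const pre = points.filter(keep);
for (let reset = 0; reset < 20; reset++) pre.map(p => p); // draw only survivors
console.log(touched); // → 1000: the filter ran exactly once

// Dynamic simplification: the filter runs inside every reset.
touched = 0;
for (let reset = 0; reset < 20; reset++) points.filter(keep);
console.log(touched); // → 20000: the filter ran on all points, every reset
```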

Notice! The total number of points in the arcs and the total number of points in the features are not identical, because some arcs are redundant. That means one arc can be part of one geometry (outer boundary) or two geometries (shared, inner boundary). The numbers would still not be identical if you recalculated the points accounting for this redundancy, because one polygon is usually built from more than one arc. When a feature is rebuilt from the ArcGeometries, the geometries of its arcs are combined; at each join one point is redundant (start & end, or end & start) and is transferred only once. An example:

  • One polygon –> consists of 4 arcs
  • arc[0] bound to arc[1] & arc[1] bound to arc[2] & arc[2] bound to arc[3] & arc[3] bound to arc[0]
  • 4 links –> that makes 4 redundant points
  • but a polygon needs identical start and end points –> do not discard the linking point of arc[3] bound to arc[0]
  • Result: [number of arcs] – 1 = 3 points are discarded
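
The worked example above can be reproduced by stitching the arcs into one ring and dropping the duplicated join points. A sketch (hypothetical helper, not the post's actual code):

```javascript
// Stitch arcs into one polygon ring: each join shares a point, so drop
// the first point of every arc after the first. The ring stays closed
// because the last arc ends where the first arc began.
function stitchRing(arcs) {
  const ring = arcs[0].slice();
  for (let i = 1; i < arcs.length; i++) {
    ring.push(...arcs[i].slice(1)); // skip the shared start point
  }
  return ring;
}

// Four 3-point arcs forming a closed ring: 12 points in the arcs.
const arcs = [
  [[0, 0], [1, 0], [2, 0]],
  [[2, 0], [2, 1], [2, 2]],
  [[2, 2], [1, 2], [0, 2]],
  [[0, 2], [0, 1], [0, 0]]
];
const ring = stitchRing(arcs);
console.log(ring.length); // 9 points: 12 minus the 3 discarded joins
```

As in the example, [number of arcs] – 1 = 3 points are discarded, while the closing point is kept so the ring's first and last points match.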

So in conclusion…simplify your vector data overlays before visualising them…the users of your map application will like it 😉
