
Heat Maps


Let me start out with this bold statement about heat mapping: you can’t slice up people.  More on that later.

There are two categories of heat mapping that we are working on here at IDV, and while their results might look pretty similar, there are some big differences between the two.  They are Value Interpolation and Frequency, each with its own considerations and appropriate data types…


This is usually what we picture when we think of heat maps.  Value Interpolation is useful for filling in the gaps between known data measurements to generate a fluctuating value along a continuous surface.  Known data point measurements serve as a sampling of the whole, and a weighted distance algorithm estimates the value of each inbetweenzie pixel given its proximity to known values, assigning each pixel in the resulting image a corresponding color/alpha value.  Or something like that.

Increasing the frequency of input points results in a more precise and smooth surface mapping.  Relative “brightness” indicates higher data values (not more input points).

Data Type
Value Interpolation requires data that describes continuous phenomena: values that flow smoothly from here to there, like temperature or elevation or pollen.  Continuous phenomena are relatively rare in the business world and are more commonly seen in geologic or climatological visuals.  So never interpolate discrete phenomena, like population.

Since interpolation relies on a fluctuating data value, the most important variable is the data field that is chosen for interpolation.  A more global consideration is the interpolation algorithm used.  Kriging seems to yield some of the nicest and smoothest results (and is the method we prefer at IDV) but is more complicated.  There are plenty of other interpolation algorithms each with their own pros and cons.  Beyond that, visual variables include the theming of the surface, like colors and/or transparency.
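Kriging itself is too involved to sketch here, but a minimal example of the simpler inverse-distance-weighting family (the function and station values are illustrative, not IDV's actual implementation) shows the weighted-distance idea:

```python
import math

def idw(x, y, samples, power=2):
    """Estimate a value at (x, y) from known (sx, sy, value) samples:
    nearer samples get more say (weight = 1 / distance**power)."""
    num = den = 0.0
    for sx, sy, value in samples:
        d = math.hypot(x - sx, y - sy)
        if d == 0:
            return value  # sitting exactly on a known sample
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den

# Two stations: 50 degrees at x=0 and 100 degrees at x=10.
stations = [(0, 0, 50.0), (10, 0, 100.0)]
estimate = idw(5, 0, stations)  # roughly 75, matching the midpoint intuition
```

Every pixel between the stations gets its own estimate this way, which is what fills in the continuous surface.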

Temperature, elevation, pollen count, ozone, gas field productivity, wind speed…

The most common example is, not surprisingly, actual heat maps…


Here is a Value Interpolated surface from WeatherChannel.com of North American temperature.

Temperature values are interpolated to fill in areas between the weather stations.  The physical locations of the contributing stations don’t really matter other than affecting the precision of the surface.

Heat can be interpolated because it is a continuous data type.  If it is 100 degrees at one station and 50 degrees at a neighboring station, it must be 75 degrees somewhere between the two.
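That between-two-stations reasoning is just linear interpolation; a tiny sketch, assuming a straight-line blend between the two readings:

```python
def lerp(v0, v1, t):
    """Blend two known station readings; t is the fractional
    position between them (0 = first station, 1 = second)."""
    return v0 + (v1 - v0) * t

print(lerp(100.0, 50.0, 0.5))  # halfway between the stations: 75.0
```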

(This image may actually be generated using a thermal infrared sensor on a satellite, not interpolation.  But the visualization would be the same, so for our purposes it’s a reasonable stand-in.)

Areas with higher numbers of weather stations won’t make the map look hotter.  The relative locations of the weather stations matter little to the visualization.



Frequency is useful for illustrating the relative quantity and distribution of discrete things or events.  The result is a surface where "brighter" blobs indicate lots going on around there.  Heat map is a bit of a misnomer in this case, as heat isn’t an appropriate data type for Frequency surfaces.

Event points are assigned a defined radial buffer whose pixels are assigned a diminishing value (radial gradation) based upon distance from the event point.  The radial gradations of all events are summed to assign a cumulative per-pixel value in a single merged raster surface illustrating the frequency of discrete events.
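A minimal sketch of that summed radial gradation, assuming a linear falloff within each event’s buffer (the falloff shape and names here are illustrative, not IDV's actual implementation):

```python
import math

def frequency_surface(width, height, events, radius):
    """Sum a linear radial falloff around each (x, y) event into one
    merged grid; higher cell values mean more events nearby."""
    grid = [[0.0] * width for _ in range(height)]
    for ex, ey in events:
        for gy in range(height):
            for gx in range(width):
                d = math.hypot(gx - ex, gy - ey)
                if d < radius:
                    grid[gy][gx] += 1.0 - d / radius  # fades with distance
    return grid

# Three events clustered near (2, 2) plus one isolated event at (7, 7):
# the cluster's cells accumulate more value than the outlier's.
surface = frequency_surface(10, 10, [(2, 2), (2, 3), (3, 2), (7, 7)], radius=3)
```

Themed with a color ramp, those cumulative values become the "brighter blobs" described above.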

Increased frequency of event points results in a relatively higher overall value.  “Brighter” areas indicate higher frequency.  Frequency surfaces are an illustrative visual aid to indicate a general shift from less to more, rather than an estimate of what lies between known points.

Data Type
Frequency surfaces require data that describes discrete, event-driven phenomena: items or events recorded at a point location that can justifiably be visualized as having an impact on the area directly around them (decreasing with distance).  The idea of an exact point is a bit of an illusion and largely a question of scale.  Many kinds of point data can be thought of as having a fuzzy effect around them.  For example, cell towers have an immediate impact at their position, and their signal strength generally decreases with distance.  If a murder happens in a neighborhood, it also impacts the area nearby.  In many cases, a cumulative radial blur is a better visualization for data like this.

Data that fits into the Frequency surface paradigm are quite common in the business world.

Some things or events are more important than others and should have their visual representation weighted as such.  Event points can be assigned a magnitudinal data multiplier to affect each element’s radial distance and/or intensity.
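As a sketch of one way that multiplier could work (the text allows scaling radial distance and/or intensity; this hypothetical `weighted_splat` scales both):

```python
def weighted_splat(distance, radius, magnitude=1.0):
    """Contribution of a single event to one pixel: the magnitude
    multiplier stretches the falloff radius and boosts intensity."""
    reach = radius * magnitude
    if distance >= reach:
        return 0.0
    return magnitude * (1.0 - distance / reach)

# A magnitude-3 event reaches farther, and contributes more per pixel,
# than a magnitude-1 event at the same distance.
big = weighted_splat(2.0, 3.0, magnitude=3.0)
small = weighted_splat(2.0, 3.0, magnitude=1.0)
```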

Broadcast towers, Lightning strikes, Criminal events, IEDs, Collisions, Retail locations, Infection incidents, Home foreclosures …


Here is an example of a Frequency surface of crime events, and a second image showing the input point locations of the events as a reference.

The brightness of the surface illustrates the frequency of crime reports.  The locations of the criminal events matter greatly to the visualization (as opposed to the Value Interpolation described above).

A criminal event has an impact on the nature of the place directly around it and the relative size and brightness of areas illustrate an approximate crime-iness.



John Nelson / IDV Solutions / john.nelson@idvsolutions.com
