Monday, March 12, 2012

XYZ geographic data interpolation, part 3



This will probably be a final posting on the interpolation of xyz data, as I believe I have come to some conclusions regarding my original issues. I show three methods of xyz interpolation:
1. The quick and dirty method of interpolating projected xyz points (bi-linear)
2. Interpolation using Cartesian coordinates (bi-linear)
3. Interpolation using spherical coordinates and geographic distances (thin plate spline)
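As a minimal sketch of the first step behind method 2 (my own illustration, not the original code): project lon/lat coordinates (in degrees) onto a unit sphere, so that points on either side of the 180° meridian end up close together in Cartesian space before interpolating.

# Convert lon/lat (degrees) to 3D Cartesian coordinates on a unit sphere
lonlat2xyz <- function(lon, lat) {
  lon <- lon * pi / 180
  lat <- lat * pi / 180
  cbind(x = cos(lat) * cos(lon),
        y = cos(lat) * sin(lon),
        z = sin(lat))
}

# e.g. 179 degrees E and -179 degrees E map to nearly identical positions:
lonlat2xyz(c(179, -179), c(0, 0))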

Wednesday, February 29, 2012

XYZ geographic data interpolation, part 2



Having recently received a comment on a post regarding geographic xyz data interpolation, I decided to return to my original "xyz.map" function and open it up for easier interpretation. This should make the method easier to adapt and follow.

The above graph shows the distance to Mecca as interpolated from 1000 randomly generated lat/lon points using the "interp" function of the akima package. Several functions, found within this blog, are needed to reproduce the plot (pos2coord, earth.dist, new.lon.lat, color.palette, val2col, image.scale). One thing you will notice is the strip of uninterpolated area within the stereographic projection. This is a problem that I have yet to resolve; it stems from the fact that the interpolation does not consider the connection across the 180° longitude line. Resolving it will probably require some other type of interpolation based on geographic distances rather than Cartesian coordinates.
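A minimal sketch of the interpolation step described above (not the original xyz.map code; the z-values here are placeholders for the distance-to-Mecca values computed with earth.dist):

# Interpolate scattered lon/lat/z points onto a regular grid with akima
library(akima)
set.seed(1)
lon <- runif(1000, -180, 180)
lat <- runif(1000, -90, 90)
z   <- sin(lat * pi / 180)  # placeholder z-values
grd <- interp(lon, lat, z,
              xo = seq(-180, 180, by = 2),
              yo = seq(-90, 90, by = 2),
              linear = TRUE)
image(grd$x, grd$y, grd$z, xlab = "lon", ylab = "lat")
# interp() treats lon/lat as planar coordinates and does not extrapolate
# beyond the convex hull of the points, so grid cells near the 180 degree
# meridian can be left NA - the uninterpolated strip mentioned above.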


R code to produce the above graph...

Monday, December 19, 2011

Maximal Information Coefficient (MIC)


Pearson r correlation coefficients for various distributions of paired data (Credit: Denis Boigelot, Wikimedia Commons)

A paper published this week in Science outlines a new statistic called the maximal information coefficient (MIC), which describes the correlation between paired variables equally well regardless of whether their relationship is linear or nonlinear. In other words, just as Pearson's r gives a measure of the noise surrounding a linear regression, MIC should give similar scores to equally noisy relationships regardless of their type.
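A quick base-R illustration of the limitation that MIC addresses: Pearson's r only measures linear association, so even a noiseless nonlinear relationship can score near zero.

set.seed(1)
x <- runif(1000, -1, 1)
y <- x^2                 # perfect, but nonlinear, dependence
cor(x, y)                # close to 0
# An MIC implementation (e.g. mine() in the 'minerva' package) should
# instead score this relationship close to 1:
# library(minerva); mine(x, y)$MIC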

Tuesday, December 13, 2011

Maximum Covariance Analysis (MCA)

Maximum Covariance Analysis (MCA) (Mode 1; scaled) of Sea Level Pressure (SLP) and Sea Surface Temperature (SST) monthly anomalies for the region between 180°W and 70°W and between 30°N and 30°S. MCA coefficients (scaled) are below. The mode accounts for 94% of the squared covariance fraction (SCF).

Maximum Covariance Analysis (MCA) is similar to Empirical Orthogonal Function (EOF) analysis in that both deal with the decomposition of a covariance matrix. In EOF, this is a covariance matrix based on a single spatio-temporal field, while MCA decomposes a "cross-covariance" matrix derived from two fields.
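A minimal sketch of MCA via singular value decomposition of the cross-covariance matrix (my own illustration, not the code behind the figure; F1 and F2 are hypothetical fields with rows as time steps and columns as grid cells):

mca <- function(F1, F2) {
  F1 <- scale(F1, center = TRUE, scale = FALSE)  # use anomalies
  F2 <- scale(F2, center = TRUE, scale = FALSE)
  C <- t(F1) %*% F2 / (nrow(F1) - 1)             # cross-covariance matrix
  s <- svd(C)
  list(
    u = s$u, v = s$v,                # spatial patterns for each field
    scf = s$d^2 / sum(s$d^2),        # squared covariance fraction per mode
    ec1 = F1 %*% s$u,                # expansion coefficients, field 1
    ec2 = F2 %*% s$v                 # expansion coefficients, field 2
  )
}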

Monday, November 28, 2011

Another aspect of speeding up loops in R

Any frequent reader of R-bloggers will have come across several posts concerning the optimization of code - in particular, the avoidance of loops.

Here's another aspect of the same issue. If you have experience programming in languages besides R, this is probably a no-brainer, but for a layman like myself, the following example was a total surprise. Basically, every time you change the size of an object in R, you force R to reallocate the memory for it - and this takes some time. It's not necessarily a lot of time, but if it happens during every iteration of a loop, it can really slow things down.

The following example shows three versions of a loop that creates random numbers and stores them in a results object. The first version (Ex. 1) demonstrates the wrong approach: concatenating the results onto the results object ("x"), thereby changing the size of x after every iteration. The second approach (Ex. 2) is about 150x faster - x is pre-defined as a matrix of NAs, which is filled row by row during the loop. The third version (Ex. 3) shows another possibility for when one does not know what size of results each iteration will produce: an empty list is created with length equal to the number of iterations, and its elements are filled with the loop results. Again, this is at least 150x faster than Ex. 1 (and I'm actually surprised to see that it may even be faster than Ex. 2).
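A hedged reconstruction of the three versions (the dimensions and timings here are my own; the original post's numbers may differ):

n <- 5000

# Ex. 1: growing the results object by concatenation - its size (and
# memory allocation) changes on every iteration
x <- c()
system.time(for (i in 1:n) x <- rbind(x, runif(10)))

# Ex. 2: pre-allocated matrix of NAs, filled by row
x <- matrix(NA, nrow = n, ncol = 10)
system.time(for (i in 1:n) x[i, ] <- runif(10))

# Ex. 3: pre-allocated empty list, for results of unknown size
x <- vector(mode = "list", length = n)
system.time(for (i in 1:n) x[[i]] <- runif(10))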

Thursday, November 24, 2011

Define intermediate color steps for colorRampPalette

The following function, color.palette(), is a wrapper for colorRampPalette() that allows increased flexibility in defining the spacing between main color levels. One defines both the main color levels (as with colorRampPalette) and an optional vector giving the number of color levels to insert between each pair at equal distances.
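A minimal sketch of what such a wrapper can look like (the signature is my assumption; the original color.palette() may differ):

color.palette <- function(colors, n.between = rep(0, length(colors) - 1)) {
  stopifnot(length(n.between) == length(colors) - 1)
  anchors <- colors[1]
  for (i in seq_along(n.between)) {
    # interpolate n.between[i] extra colors between each consecutive pair
    ramp <- colorRampPalette(colors[c(i, i + 1)])(n.between[i] + 2)
    anchors <- c(anchors, ramp[-1])  # drop first color to avoid duplicates
  }
  colorRampPalette(anchors)          # returns a palette function, as colorRampPalette does
}

# e.g. widen the blue-cyan and yellow-red transitions:
pal <- color.palette(c("blue", "cyan", "white", "yellow", "red"),
                     n.between = c(3, 1, 1, 3))
image(volcano, col = pal(50))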

     The above figure shows the effect on a color scale (see image.scale) containing 5 main colors (blue, cyan, white, yellow, and red). The result of colorRampPalette (upper) produces an equal number of levels between the main colors. By increasing the number of intermediate colors between blue-cyan and yellow-red (lower), the number of color levels in the near-white range is reduced. The resulting palette, for example, was better at highlighting the positive and negative values of an Empirical Orthogonal Function (EOF) mode.


Empirical Orthogonal Function (EOF) Analysis for gappy data

[Updates]: The following approach has serious shortcomings, which I have recently become aware of. In a comparison of gappy EOF approaches, Taylor et al. (2013) [pdf] show that this traditional approach is not as accurate as others. Specifically, the approach of DINEOF (Data Interpolating Empirical Orthogonal Functions) proved to be the most accurate. I have outlined the DINEOF algorithm in another post [link] and show a comparison of gappy EOF methods here: http://menugget.blogspot.de/2014/09/pca-eof-for-data-with-missing-values.html. The R package "sinkr" now contains a version of the function ("eof") for easy installation: https://github.com/menugget/sinkr

-----------------
The following is a function for the calculation of Empirical Orthogonal Functions (EOF). For those coming from a more biologically-oriented background who are familiar with Principal Component Analysis (PCA), the methods are similar. In the climate sciences the method is usually used to decompose a data field into its dominant spatio-temporal modes.
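For orientation, here is a minimal EOF sketch for complete (gap-free) data via SVD - my own illustration, not the gappy-data function discussed in this post:

eof_complete <- function(X) {          # X: rows = time, cols = grid cells
  X <- scale(X, center = TRUE, scale = FALSE)
  s <- svd(X)
  list(
    patterns   = s$v,                  # spatial modes (EOFs)
    amplitudes = s$u %*% diag(s$d),    # temporal expansion coefficients
    var.frac   = s$d^2 / sum(s$d^2)    # fraction of variance per mode
  )
}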

Friday, November 11, 2011

Propagation of error

     At the onset, this was strictly an exercise of my own curiosity, and I didn't imagine writing it down in any form at all. As someone who has done some modelling work in the past, I'm embarrassed to say that I had never fully grasped how one can gauge the error of a model output without doing some sort of Monte Carlo simulation whereby the model parameters are repeatedly randomized within a given confidence interval. It's relatively easy to imagine that a model containing many parameters, each with an associated error, will tend to propagate these errors throughout. Without getting too far over my head here, I will just say that there are defined methods for calculating the error of a variable if one knows the underlying error of the functions that define it (and I have tried out only a very simple one here!).
     In the example below, I have three main variables (x, y, and z) and two functions that define the relationships y~x and z~y. The question is, given these functions, what would be the error of a predicted z value given an initial x value? The most general rule seems to be:
     error(z~x)^2 = error(y~x)^2 + error(z~y)^2
However, correlated errors require additional terms (see Wikipedia: Propagation of uncertainty). The following example does just that by simulating correlated error terms using the MASS package's function mvrnorm().

example:
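A minimal sketch along these lines (my own slopes, error magnitudes, and correlation - not necessarily those of the original example):

library(MASS)
set.seed(1)
n <- 100000
sd.yx <- 1; sd.zy <- 2; rho <- 0.5   # assumed error SDs and their correlation
Sigma <- matrix(c(sd.yx^2,             rho * sd.yx * sd.zy,
                  rho * sd.yx * sd.zy, sd.zy^2), nrow = 2)
err <- mvrnorm(n, mu = c(0, 0), Sigma = Sigma)  # correlated error terms

x <- runif(n, 0, 10)
y <- 2 + x + err[, 1]                # y ~ x, with error sd.yx
z <- 5 + y + err[, 2]                # z ~ y, with error sd.zy

# empirical error of z predicted directly from x:
sd(z - (7 + x))
# propagation formula, with the extra covariance term for correlated errors:
sqrt(sd.yx^2 + sd.zy^2 + 2 * rho * sd.yx * sd.zy)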

Monday, September 12, 2011

Converting values to color levels



     Adding color to a plot is helpful in many situations for visualizing an additional dimension of the data. Again, I wrote the function "val2col" below in R after having coded this manually over and over in the past. It uses arguments similar to those of the image function, in that one defines the colors to be used as well as optional break points or z-limits by which the data is binned into those colors. The typical situation where I use the function is in the mapping of climate data, yet adding color to an XY plot can often be easier on the eyes than adding an additional z-axis spatial dimension. In combination with the image.scale function that I previously posted, the data can be quickly interpreted.
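A minimal sketch of a val2col-style helper (the signature is my assumption; the original function may differ). Like image(), it bins values into colors using breaks or z-limits:

val2col <- function(z, zlim = range(z, na.rm = TRUE), col = heat.colors(12),
                    breaks = seq(zlim[1], zlim[2], length.out = length(col) + 1)) {
  stopifnot(length(breaks) == length(col) + 1)
  idx <- cut(z, breaks = breaks, include.lowest = TRUE, labels = FALSE)
  col[idx]
}

# e.g. color points in an XY plot by their value:
z <- sin(seq(0, 4 * pi, length.out = 100))
plot(z, pch = 21, bg = val2col(z, col = topo.colors(10)))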
     As an example, gridded sea level pressure is plotted above as projected polygons using the packages maps and mapproj. Values were converted to colors with the val2col function and the image.scale function plotted a corresponding scale. For those interested in using netcdf files, the example also uses the ncdf package for reading the data files into R.

Wednesday, August 31, 2011

Adding a scale to an image plot


[NOTE: new version of the image.scale function can be found here: http://menugget.blogspot.de/2013/12/new-version-of-imagescale-function.html.]

Here's a function that allows you to add a color scale legend to an image plot (or probably any plot needing a z-level scale). I found myself having to program this over and over again, and just decided to make a plotting function for future use. While I really like the look of levelplot(), the modular aspect of image() makes it much more handy to combine with other plotting commands or overlays.
For example, as far as I can tell, the simple addition of the triangle symbol to mark the highest point in the above map of Maunga Whau volcano is not possible with levelplot().
After adding this symbol, the function below - image.scale() - was used to add the accompanying color scale to another area of the device.



The function...
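The original function is not reproduced here; as a placeholder, here is a minimal sketch of what an image.scale()-style legend can look like. It draws the color strip as rectangles in its own plot region, using the same col/breaks as the accompanying image() call:

image.scale <- function(z, col,
                        breaks = seq(min(z, na.rm = TRUE), max(z, na.rm = TRUE),
                                     length.out = length(col) + 1),
                        horiz = TRUE, ...) {
  stopifnot(length(breaks) == length(col) + 1)
  if (horiz) {
    plot(range(breaks), c(0, 1), type = "n", xlab = "", ylab = "",
         yaxt = "n", xaxs = "i", yaxs = "i", ...)
    rect(breaks[-length(breaks)], 0, breaks[-1], 1, col = col, border = NA)
  } else {
    plot(c(0, 1), range(breaks), type = "n", xlab = "", ylab = "",
         xaxt = "n", xaxs = "i", yaxs = "i", ...)
    rect(0, breaks[-length(breaks)], 1, breaks[-1], col = col, border = NA)
  }
  box()
}

# e.g. combine with image() via layout():
layout(matrix(1:2, nrow = 2), heights = c(4, 1))
pal <- terrain.colors(20)
image(volcano, col = pal)
points(0.55, 0.6, pch = 17)   # mark a point of interest (position assumed)
image.scale(volcano, col = pal)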