The Data Library maintained by the International Research Institute (IRI) for Climate and Society, which is based at Columbia University's Lamont-Doherty Earth Observatory, hosts an extraordinarily wide array of Earth science data. It's also an extremely powerful tool for data visualization and analysis.
In this exercise, you'll use the Data Library to extract individual records from a global network of climate stations. You'll also learn how to average climate data over space and time and create your own regional climate indices.
The main page for the Data Library is located at http://iridl.ldeo.columbia.edu. The Library also provides a very useful tutorial that explains how to find datasets, select subsets of data in space and time and perform basic manipulations and visualizations. It's a great resource that I consult whenever I need to solve data problems or when I've forgotten how to find the data I need. The tutorial is available at http://iridl.ldeo.columbia.edu/dochelp/Tutorial/index.html.
1. Plot a temperature record from a single station
Starting from the Data Library's main page, select the option to find datasets 'By Category'. We want to obtain surface temperature data from one of the climate stations in Minneapolis, so select the 'Atmosphere' category. This new page lists many sources of atmospheric weather and climate data, including several versions of NOAA's Global Historical Climatology Network. The GHCN is made up of daily and monthly climate observations from land surface stations across the Earth that have been subjected to a rigorous set of quality assurance reviews. Choose the set labelled 'NOAA NCDC GHCN v2', which will bring up a map showing the locations of all climate stations in the network. Immediately to the right of the map, you'll see a link to 'Searches'. Click that and conduct a search for stations named 'Minneapolis'. The first option produced by your search will lead to Minneapolis, Kansas - don't pick that one. Click the station ID for the second link (ID 72658000), which gives you the climate station at the Minneapolis-St. Paul Airport. The next page lists all the datasets and variables associated with that station. Choose 'adjusted' and then 'mean' to pull up the corrected mean monthly temperature record for Minneapolis.
You'll notice that the upper part of this page shows thumbnails illustrating three different visualizations: a colored map, a black-and-white contour map and a time series. Click on the time series to bring up the temperature record for Minneapolis, which spans January 1835 to present. This perspective is not very useful because the time series is dominated by the very large temperature fluctuations between winter and summer. Let's remove the seasonal cycle by clicking the bottom button to get data 'in view'. On this new page, select the 'Filters' option towards the top and select the 'anomalies' filter. This choice will subtract the monthly mean from each data point in the temperature record. Now when you click on the time-series visualization, the seasonal cycle has been eliminated and it's a lot easier to see the long-term trend in local temperatures.
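The Data Library performs this filtering server-side, but the underlying computation is simple. Here is a minimal numpy sketch of what an 'anomalies' filter does; the temperature values are invented for illustration and are not Data Library output:

```python
import numpy as np

# Hypothetical monthly mean temperatures (deg C) for three years:
# a strong seasonal cycle plus a slight warming trend.
temps = np.array([
    -10.0, -7.0, 0.0, 8.0, 15.0, 20.0, 23.0, 21.0, 16.0, 9.0, 1.0, -7.0,
     -9.5, -6.5, 0.5, 8.5, 15.5, 20.5, 23.5, 21.5, 16.5, 9.5, 1.5, -6.5,
     -9.0, -6.0, 1.0, 9.0, 16.0, 21.0, 24.0, 22.0, 17.0, 10.0, 2.0, -6.0,
])

# Long-term mean for each calendar month (the monthly climatology)
climatology = temps.reshape(-1, 12).mean(axis=0)

# Anomaly = observation minus that month's climatology; the seasonal
# cycle drops out and the small warming trend remains visible.
anomalies = temps - np.tile(climatology, temps.size // 12)
```

With the seasonal cycle removed, each January is compared only against other Januaries, which is why the long-term trend becomes easy to see.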
If you'd like to change the appearance of the figure, hit the 'Edit plot' button and then select 'More options'. Increase the number in the 'Plot size' box to make the data easier to see. Finally, go back and select the 'Expert Mode' option. This choice brings up a text box that you can use to enter commands directly. On the line following the existing text, enter 'T 12 boxAverage', click 'OK' and then select the time series visualization. You've just created a record of annual (12-month) temperature anomalies for Minneapolis.
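As a rough sketch of what 'T 12 boxAverage' does, the command collapses each non-overlapping block of 12 consecutive months into its mean. The series below is invented for illustration:

```python
import numpy as np

# Hypothetical monthly anomaly series (36 values = 3 years)
# with a gentle upward drift of 0.01 per month.
anoms = np.arange(36, dtype=float) * 0.01

# A 12-point box average: each year of monthly values becomes
# a single annual mean, so 36 months -> 3 annual values.
annual = anoms.reshape(-1, 12).mean(axis=1)
```

Note that this is a block (non-overlapping) average, not a 12-month running mean, so the output has one value per year rather than one per month.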
Finally, choose another station (perhaps one located in the target region for your Holocene climate project), and produce the same plot showing changes in temperature over its period of record.
2. Create your own Nino 3.4 index
Next we'll scale up from individual stations and create a spatial average over a geographic region. We'll make our own index of sea-surface temperature anomalies in the Niño 3.4 region, since we know what that should look like.
Search the IRI's datasets by category and select the 'Air-Sea Interface' option. We're looking for NOAA's extended reconstructed global sea surface temperature (ERSST). This dataset doesn't have the fine spatial detail of more recent SST records (anomalies are averaged over a 2° x 2° grid) but because it goes back to 1854 it gives a very long-term perspective on ENSO dynamics. We'll use version 3b, which was released in 2008. Instead of using the anomaly filter, just select the SST variable that has already been converted to anomalies. If you click on the colored map, you'll see that this dataset covers the entire global ocean but we only need data from the Niño 3.4 region. Go back and click on the 'Data Selection' link, which will allow you to select a subset of the complete dataset. The Niño 3.4 region extends between 5°N and 5°S and 170°W to 120°W. Enter those values for 'X' and 'Y' and then hit 'Restrict Ranges'. After that, click the 'Stop Selecting' button. If you click on the map icon now, you should see a long rectangular box that reflects your selection.
Now that we've chosen the geographic domain of our analysis, we need to create a single index of SSTs over the Niño 3.4 region. Click 'Expert Mode' to bring up the coding panel. Under the existing text, enter '[X Y] average' and click OK (include spaces between the X and the Y!). At this stage, you may also see a line of code that appears as 'T ####'. If that's the case, your selection is restricted to only a single month. Delete that line to expand the time range of your selection to include the entire period of record.
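A reasonable sketch of what '[X Y] average' produces is an area-weighted mean over the box; on a latitude-longitude grid, cell area shrinks with the cosine of latitude, so rows near the equator count slightly more. (The weighting below is the standard approach, used here for illustration; the anomaly values are invented.)

```python
import numpy as np

# Hypothetical SST anomalies over the Nino 3.4 box (lat x lon):
# a uniform 0.8 C warm anomaly, as in a moderate El Nino.
lats = np.array([-5., -3., -1., 1., 3., 5.])   # deg N
anoms = np.full((lats.size, 25), 0.8)

# Weight each latitude row by the relative area of its grid cells,
# then average over longitude to get a single index value.
weights = np.cos(np.deg2rad(lats))
index = np.average(anoms, axis=0, weights=weights).mean()
```

For a uniform field the weighting makes no difference, but for real SST anomalies it keeps high-latitude rows from being over-counted.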
Choose the time series option to plot your Niño 3.4 index. Go back and choose the 'Tables' option - this link will allow you to view the numerical values for your index as a columnar table. If you'd like to download your index, go back again and select the 'Data Files' option, which will allow you to obtain the data in whatever format you desire.
3. Estimate the spatial correlation between precipitation in Minneapolis and the rest of North America
In the previous example, we computed a spatial average of a region that we know is an important target for climate and paleoclimate research. In other cases, it may not be clear what area we should use to compute our spatial average.
One of the ways we can assess the spatial structure (or 'representativeness') of different aspects of the climate system is by mapping the correlation between a single local record and the same parameter across a broader region. In this section, we'll examine precipitation records from the University of East Anglia's Climatic Research Unit. From the CRU's page at the Data Library, choose the dataset named 'TS2p1' and select the monthly precipitation variable. Use the data selection tool to restrict the range of the dataset to the region 45° to 46°N and 92° to 93°W. At the same time, restrict the time range of the set to Jan 1901 - Dec 2002. Then, in Expert Mode, use the '[X Y] average' command to create a spatial average and convert the record to anomalies. Next, use Expert Mode to add the following code:
SOURCES .UEA .CRU .TS2p1 .monthly .prcp
X (130W) (70W) RANGE
Y (20N) (70N) RANGE
We're almost ready to compare the single precipitation anomaly record for the area around the Twin Cities against the same data collected across central North America. We've added in a restriction for the X and Y range of the field data because the Data Library struggles to perform calculations over the entire domain (and we don't need to see a map of the entire Earth anyway).
Finally, add one last command by entering this line of code:
Select the colored map thumbnail to plot the correlation between Minneapolis precipitation and precipitation across the broader region. You might want to select the 'draw coasts' option and redraw to make the map easier to read.
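The map shows, at every grid cell, the Pearson correlation between that cell's precipitation series and the Twin Cities record. Here is a minimal sketch of that per-cell calculation; the 'point' and 'field' series are random invented data, with one cell deliberately constructed to covary with the point record and one left independent:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical point record (e.g., Twin Cities precip anomalies)
point = rng.normal(size=100)

# Hypothetical field: a nearby cell shares the point's variability,
# a remote cell is independent noise.
near = point + 0.3 * rng.normal(size=100)
far = rng.normal(size=100)
field = np.stack([near, far])                  # shape (ncell, ntime)

# Pearson correlation between the point record and every cell:
# standardize both, then average the product over time.
p = (point - point.mean()) / point.std()
f = (field - field.mean(axis=1, keepdims=True)) / field.std(axis=1, keepdims=True)
corr = (f * p).mean(axis=1)
```

Cells that track the point record produce correlations near 1, while unrelated cells hover near 0, which is exactly the spatial structure the colored map displays.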
Imagine that you had a perfect proxy for annual precipitation in the Twin Cities. Based on this map, over what region would it be reasonable for you to make inferences about past changes in precipitation?
4. Provide a spatial context to the mid-1100s drought in the upper Colorado River basin
Tree-ring estimates of past drought in the upper Colorado River basin (Meko et al., 2007) suggest that the most extreme and long-lasting drought of the last several centuries occurred in the mid-1100s. This event included a 13-year stretch of below-normal river discharge between AD 1143 and 1155. How widespread was the 12th century 'megadrought' across the western United States?
We'll investigate that question using the North American Drought Atlas. The Drought Atlas uses a network of moisture-sensitive tree-ring records from Canada, the United States and Mexico to estimate changes in drought conditions across the continent during the past two millennia. In its primary application, the Atlas has been used to place recent dry and wet intervals within a context of long-term variability and to identify droughts that were more persistent or more severe than historical droughts. The Atlas has helped clarify the impact of drought on wildfire and ecological dynamics, provided a framework to test the stability of relationships between remote climate forcings and North American drought and served as a real-world target for climate model simulations.
Search data 'by source' and bring up the page associated with the Lamont-Doherty Earth Observatory of Columbia University. Select the 'Tree Ring Laboratory' option, followed by the 'North American Drought Atlas 2004'.
The Drought Atlas contains only one variable: tree-ring estimates of the Palmer Drought Severity Index (PDSI). Choose 'Data Selection' and set the time range (T) to match the core period of the low-flow period in the Colorado River reconstruction. Click 'Restrict Ranges' and then 'Stop Selecting'. Next, use Expert Mode to add the following code: '[T] average'; then click 'OK'. Click on the colored map icon to show the geographic extent of drought conditions during the mid-1100s. Selecting the option to 'draw coasts' and hitting the 'redraw' button will make the map easier to interpret.
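As a sketch of what '[T] average' does after the time range is restricted, each map grid point's PDSI series is collapsed to its mean over the selected years. The reconstructed values below are invented for illustration (negative PDSI indicates drought):

```python
import numpy as np

# Hypothetical reconstructed PDSI at one grid point, AD 1140-1160:
# persistently dry during the 1143-1155 low-flow interval.
years = np.arange(1140, 1161)
pdsi = np.where((years >= 1143) & (years <= 1155), -2.0, 0.5)

# Restricting T to 1143-1155 and applying '[T] average' leaves a
# single mean drought value for this grid point on the final map.
window = (years >= 1143) & (years <= 1155)
mean_pdsi = pdsi[window].mean()
```

Repeating this at every grid point yields the map of mid-1100s drought extent that the colored map icon displays.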
References
Cook ER and Krusic PJ (2004) The North American Drought Atlas. Lamont-Doherty Earth Observatory and the National Science Foundation.
Cook ER, Woodhouse CA, Eakin CM, Meko DM and Stahle DW (2004) Long-term aridity changes in the western United States. Science 306: 1015-1018.
Meko DM, Woodhouse CA, Baisan CA, Knight T, Lukas JJ, Hughes MK and Salzer MW (2007) Medieval drought in the upper Colorado River Basin. Geophysical Research Letters 34: L10705.
Mitchell TD and Jones PD (2005) An improved method of constructing a database of monthly climate observations and associated high-resolution grids. International Journal of Climatology 25: 693-712.
National Oceanic and Atmospheric Administration, Global Historical Climatology Network-Monthly. http://www.ncdc.noaa.gov/ghcnm/
Smith TM, Reynolds RW, Peterson TC and Lawrimore J (2008) Improvements to NOAA's historical merged land-ocean surface temperature analysis (1880-2006). Journal of Climate 21: 2283-2296.
University of East Anglia Climatic Research Unit (CRU) Global 0.5° Monthly Time Series, Version 2.1 (CRU TS 2.1). http://www.cru.uea.ac.uk/~timm/grid/CRU_TS_2_1.html
Please turn in a hard copy containing the main products from each of the four sections at the beginning of class on April 2.
This exercise is only intended to introduce you to the capabilities of the IRI Data Library, so it is not necessary to include written comments describing your graphics as part of your submission.