GEO3231 Python Reference Sheet
==============================

- ### [Importing modules, and what those modules do](#GEO3231PythonReferenceSheet-Importingmo)
- ### [Help (manual pages)](#GEO3231PythonReferenceSheet-Help(manual)
- ### [Making an array with some data in](#GEO3231PythonReferenceSheet-Makinganarr)
- ### [Making an array that is automatically full of a sequence of numbers](#GEO3231PythonReferenceSheet-Makinganarr)
- ### [Print](#GEO3231PythonReferenceSheet-Print)
- ### [Basic Maths](#GEO3231PythonReferenceSheet-BasicMaths)
- ### [Basic stats](#GEO3231PythonReferenceSheet-Basicstats)
- ### [For Loop](#GEO3231PythonReferenceSheet-ForLoop)
- ### [Plotting](#GEO3231PythonReferenceSheet-Plotting)
- ### [line plot](#GEO3231PythonReferenceSheet-lineplot)
- ### [Scatter plot](#GEO3231PythonReferenceSheet-Scatterplot)
- ### [Adding labels and titles to plots](#GEO3231PythonReferenceSheet-Addinglabel)
- ### [Adding a legend to a plot](#GEO3231PythonReferenceSheet-Addingalege)
- ### [Contour plot of raw data (i.e. not of a cube of data that contains all of the metadata as well as the numbers)](#GEO3231PythonReferenceSheet-Contourplot)
- ### [Contour plot from an 'iris cube' i.e. a variable read in using one of the iris modules - a variable holding the (e.g.) climate data as well as all the metadata about that climate model etc.](#GEO3231PythonReferenceSheet-Contourplot)
- ### [x-y plot from data that was read in as an iris cube then averaged so that it only has a single dimension (i.e. it has been spatially averaged so the only dimension is time)](#GEO3231PythonReferenceSheet-x-yplotfrom)
- ### [And as above, but specifying how many ticks (numbers) you want on the x-axis](#GEO3231PythonReferenceSheet-Andasabove,)
- ### [Adding coastlines to a map plot](#GEO3231PythonReferenceSheet-Addingcoast)
- ### [Coloured contour plot from an 'iris cube' with a line contour map (like altitude contours from an OS map) on top](#GEO3231PythonReferenceSheet-Colouredcon)
- ### [Plotting a map of the Pacific (and not having a big white gap)](#GEO3231PythonReferenceSheet-Plottingama)
- ### [Plotting a line plot with coloured bands showing uncertainties/standard deviations](#GEO3231PythonReferenceSheet-Plottingali)
- ### [Reading data in from a text file](#GEO3231PythonReferenceSheet-Readingdata)
- ### [Loading climate model or big observational data from netcdf files (files with the extension '.nc') using iris](#GEO3231PythonReferenceSheet-Loadingclim)
- ### [Converting monthly data to yearly data in an iris cube](#GEO3231PythonReferenceSheet-Convertingm)
- ### [Calculating an annual mean but just over specific months - from an iris cube](#GEO3231PythonReferenceSheet-Calculating)
- ### [Averaging an iris cube of data along some dimension (typically across the time-intervals, or latitude etc.)](#GEO3231PythonReferenceSheet-Averagingan)
- ### [Writing a script](#GEO3231PythonReferenceSheet-Writingascr)
- ### [Calculating the mean of an array](#GEO3231PythonReferenceSheet-Calculating)
- ### [Calculating the mean of an array which has 'NaN' - Not A Number - 'gaps' in it](#GEO3231PythonReferenceSheet-Calculating)
- ### [Calculating the standard deviation of an array](#GEO3231PythonReferenceSheet-Calculating)
- ### [Calculating the standard deviation of an array which has 'NaN' - Not A Number - 'gaps' in it](#GEO3231PythonReferenceSheet-Calculating)
- ### [Replacing a missing data indicator with a NaN](#GEO3231PythonReferenceSheet-Replacingam)
- ### [Masking where you have NaNs in a dataset](#GEO3231PythonReferenceSheet-Maskingwher)
- ### [Calculating the correlation between two datasets](#GEO3231PythonReferenceSheet-Calculating)
- ### [Linear Regression](#GEO3231PythonReferenceSheet-LinearRegre)
- ### [Multiple linear regression example](#GEO3231PythonReferenceSheet-Multiplelin)
- ### [Plotting a linear best fit line](#GEO3231PythonReferenceSheet-Plottingali)
- ### [Calculating seawater density](#GEO3231PythonReferenceSheet-Calculating)
- ### [Working with Iris Cubes (i.e. data read in from netcdf files using the iris modules)](#GEO3231PythonReferenceSheet-Workingwith)
- ### [Contour plot from an 'iris cube' i.e. a variable read in using one of the iris modules - a variable holding the (e.g.) climate data as well as all the metadata about that climate model etc.](#GEO3231PythonReferenceSheet-Contourplot)
- ### [x-y plot from data that was read in as an iris cube then averaged so that it only has a single dimension (i.e. it has been spatially averaged so the only dimension is time)](#GEO3231PythonReferenceSheet-x-yplotfrom)
- ### [As above, but specifying that you want a specific dimension (here depth) on the y-axis](#GEO3231PythonReferenceSheet-Asabove,but)
- ### [Adding coastlines to a map plot](#GEO3231PythonReferenceSheet-Addingcoast)
- ### [Loading climate model or big observational data from netcdf files (files with the extension '.nc') using iris](#GEO3231PythonReferenceSheet-Loadingclim)
- ### [Converting monthly data to yearly data in an iris cube](#GEO3231PythonReferenceSheet-Convertingm)
- ### [Averaging an iris cube of data along some dimension (typically across the time-intervals, or latitude etc.)](#GEO3231PythonReferenceSheet-Averagingan)
- ### [Extracting horizontal slices from a cube (e.g. at a given depth)](#GEO3231PythonReferenceSheet-Extractingh)
- ### [Extracting vertical slices from a cube](#GEO3231PythonReferenceSheet-Extractingv)
- ### [Extracting a spatial region from a global dataset](#GEO3231PythonReferenceSheet-Extractinga)
- ### [Extracting the data part of the iris cube (rather than the metadata)](#GEO3231PythonReferenceSheet-Extractingt)
- ### [Getting the year (or month or day) values for a cube of data](#GEO3231PythonReferenceSheet-Gettingthey)
- ### [Correlation maps](#GEO3231PythonReferenceSheet-Correlation)
- ### [Calculating the difference point-by-point between two cubes of data](#GEO3231PythonReferenceSheet-Calculating)
- ### [Calculating the mean of a bunch of cubes](#GEO3231PythonReferenceSheet-Calculating)
- ### [Calculating the variance of a bunch of cubes](#GEO3231PythonReferenceSheet-Calculating)
- ### [Doing the carbonate-chemistry calculations](#GEO3231PythonReferenceSheet-Doingthecar)
- ### [Plotting two time-series (or similar) datasets on the same x-axis but with different y axes](#GEO3231PythonReferenceSheet-Plottingtwo)
- ### [Saving a figure you have plotted](#GEO3231PythonReferenceSheet-Savingafigu)
- ### [Regridding and doing initial processing of non-CMIP5 data. Example here is SODA data (not python but related)](#GEO3231PythonReferenceSheet-Regriddinga)
- ### [Reading numerical data in from a text file](#GEO3231PythonReferenceSheet-Readingnume)
- ### [Saving data to a text file](#GEO3231PythonReferenceSheet-Savingdatat)
- ### [A function to create running means (smoothing data)](#GEO3231PythonReferenceSheet-Afunctionto)
- ### [Adding latitude and longitude labels to your map](#GEO3231PythonReferenceSheet-Addinglatit)
- ### [Calculating the overturning stream function example (using the CMIP5 msftmyz variable)](#GEO3231PythonReferenceSheet-Calculating)
- ### [Extracting a range of years from a cube](#GEO3231PythonReferenceSheet-Extractinga)
- ### [Color maps](#GEO3231PythonReferenceSheet-Colormaps:)
- ### [Producing pretty plots](#GEO3231PythonReferenceSheet-Producingpr)
- ### [High, low and band-pass filtering cubes](#GEO3231PythonReferenceSheet-High,lowand)
- ### [Calculating the depths at which a variable in a cube moved from above to below a critical value](#GEO3231PythonReferenceSheet-Calculating)
- ### [Calculating the NINO3.4 index from a cube](#GEO3231PythonReferenceSheet-Calculating)
- ### [Extracting contour lines from a cube of data and saving them out as a shape file for ARC GIS](#GEO3231PythonReferenceSheet-Extractingc)
- ### [Cube Statistics/Maths](#GEO3231PythonReferenceSheet-CubeStatist)
- ### [Finding the maximum value in latitude-longitude space](#GEO3231PythonReferenceSheet-Findingthem)
- ### [Finding the minimum value in latitude-longitude space](#GEO3231PythonReferenceSheet-Findingthem)
- ### [Adding a value (5 in this example) to the data in a cube](#GEO3231PythonReferenceSheet-Addingavalu)
- ### [Dividing the data in a cube by a certain value (5 in this example)](#GEO3231PythonReferenceSheet-Dividingthe)
- ### [Reading and concatenating EN4 data](#GEO3231PythonReferenceSheet-Readingandc)
- ### [Identifying the grid coordinates in a cube of certain latitude and longitude points](#GEO3231PythonReferenceSheet-Identifying)
- ### [Lead/Lagged correlation e.g.](#GEO3231PythonReferenceSheet-Lead/Lagged)
- ### [Hovmöller plots](#GEO3231PythonReferenceSheet-Hovmullerpl)
- ### [Drawing rectangles on plots](#GEO3231PythonReferenceSheet-Drawingrect)
- ### [Changing font size](#GEO3231PythonReferenceSheet-Changingfon)
- ### [Contour plot with colour bar with title](#GEO3231PythonReferenceSheet-Contourplot)
- ### [Plotting GLODAP profiles](#GEO3231PythonReferenceSheet-PlottingGLO)
NOTE the bits highlighted in blue are the bits you can change (e.g. swap for some other data) - the rest is the fundamental part of the command. However, note that in some instances where the example has a few lines of code, if you change one of the variable names, you may have to change all instances of that variable name.
### Importing modules, and what those modules do

Note, if you don't know which you need, just import them all!

from numpy import *

> Imports the module numpy, which turns Python into a numerical analysis
> tool (akin to Matlab for those of you who've used Matlab)

from matplotlib.pyplot import *

> Imports the module containing the plotting tools

from iris import *

> This imports all (* means all in computing terminology) the bits and
> pieces of a module (called 'iris') that allows us to interact with
> large observational and climate model datasets stored in a type of
> file called 'netcdf' (files with an extension (i.e. a file ending
> like .docx or .pdf) '.nc')

from iris.analysis import *

> This imports the bits of the 'iris' module required to start playing
> around with this 'netcdf' data

import iris.quickplot as quickplot

> This imports the tools used to plot simple maps etc. with
> data that has been read in using any of the 'iris' modules (see
> those above)

from iris.coord_categorisation import *

> This reads in a module that allows us to do clever things like
> converting monthly data to yearly data

from iris.analysis.cartography import *

> This brings in a clever mapping module which can work out, from the
> latitudes and longitudes at the corners of each box, what the area of
> the box is and lots more...

from scipy.stats import *

> Module for doing advanced statistical analysis

from scipy.stats.mstats import *

> Module for doing further advanced statistical analysis (for example
> producing linear regression models)
### Help (manual pages)

help(command_name)

> e.g. help(range) - displays the help text for the command you put in
> the brackets
>
> QUIT help by typing 'q'
### Making an array with some data in

my_array = array([2,5,7,4,3,2,4,5,6,8])
### Making an array that is automatically full of a sequence of numbers

result_variable = linspace(10,30,50)

> This will give you an array with 50 values equally spaced between 10
> and 30
### Print

print 'hello'

> does what it says...
### Basic Maths

from numpy import *

my_array1 = array([2.0,5.0,7.0])
my_array2 = array([2.0,2.0,2.0])

c = my_array1 + my_array2
print c

d = my_array1 / my_array2
print d

e = my_array1 * my_array2
print e

f = my_array1 - my_array2
print f

> Note that these operations act element-by-element, so the two arrays
> need to be the same size
### Basic stats

from numpy import *

my_array1 = array([2.0,5.0,7.0])

c = max(my_array1)
print c

d = min(my_array1)
print d

e = mean(my_array1)
print e
### For Loop

for counter in [1,2,3,4,5,6]:
    print counter

> cycles through the list/array that you specify (here [1,2,3,4,5,6])
> and performs whatever operation you specify in the indented section of
> the loop, on each element of the list/array sequentially
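For example, a loop is often used with range() to step through an array by index and build up a result - a minimal sketch (my_numbers and total are just made-up names for this illustration):

my_numbers = array([2.0,4.0,6.0,8.0])
total = 0.0
for counter in range(4):
    total = total + my_numbers[counter]

print total

> This should print 20.0 - the sum of the four elements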
### Plotting

show()

> displays whatever you have plotted
##### line plot

> plot(variable1, variable2)
>
> where for example variable1 = array([1,2,3,4]) and variable2 = array([8,7,6,5])
##### Scatter plot

> scatter(variable1, variable2)
##### Adding labels and titles to plots

> title('my plot')
> xlabel('variable 1')
> ylabel('variable 2')
>
> Where xlabel and ylabel are the x and y axis titles respectively
##### Adding a legend to a plot

> plot(variable_1,variable_2, label = 'max')
> plot(variable_1,variable_3, label = 'min')
> legend()
>
> If you are plotting two lines, you might want a legend to explain what
> they are. Here we have those two lines and a legend where they
> are labelled 'max' and 'min'
##### Contour plot of raw data (i.e. not of a cube of data that contains all of the metadata as well as the numbers)

> contourf(data)
>
> Note that contour (without the 'f') also produces a contour plot, but only
> plots the contour lines rather than filling between the lines with the
> relevant colour
>
> So, for example, if you wanted a coloured contour map as the base map,
> then a line contour map (like altitude on an OS map) on top, do:
>
> contourf(data1)
>
> contour(data2)
##### Contour plot from an 'iris cube' i.e. a variable read in using one of the iris modules - a variable holding the (e.g.) climate data as well as all the metadata about that climate model etc.

> quickplot.contourf(my_cube,31)
>
> Note that the '31' here is how many colour levels you want to use -
> you can of course leave it blank
##### x-y plot from data that was read in as an iris cube then averaged so that it only has a single dimension (i.e. it has been spatially averaged so the only dimension is time)

> quickplot.plot(my_cube)
##### And as above, but specifying how many ticks (numbers) you want on the x-axis:

> import matplotlib.pyplot as plt
> import iris.quickplot as quickplot
>
> number_of_xaxis_ticks = 5
>
> fig = plt.figure()
> ax = fig.add_subplot(1, 1, 1)
>
> quickplot.plot(my_cube)
>
> xloc = plt.MaxNLocator(number_of_xaxis_ticks)
> ax.xaxis.set_major_locator(xloc)
>
> plt.show()
##### Adding coastlines to a map plot

> gca().coastlines()
##### Coloured contour plot from an 'iris cube' with a line contour map (like altitude contours from an OS map) on top:

> quickplot.contourf(my_cube_1,31)
>
> quickplot.contour(my_cube_2,31) # i.e. no 'f' at the end of contour
>
> Where my_cube_1 and my_cube_2 are the two datasets you want to plot.
> e.g. [here](file:////display/HalloranResearchMain/contour_on_contour)
##### Plotting a map of the Pacific (and not having a big white gap)

> import numpy as np
> import matplotlib.pyplot as plt
> import iris.plot as iplt
> import cartopy.crs as ccrs
>
> west = -250
> east = -100
> south = -90
> north = 90
>
> temporary_cube = cube.intersection(longitude = (west, east))
> my_regional_cube = temporary_cube.intersection(latitude = (south, north))
>
> ax1 = plt.subplot(111, projection=ccrs.PlateCarree(central_longitude=np.round(west + (east - west)/2.0)))
> my_plot = iplt.contourf(my_regional_cube)
> plt.show()
##### Plotting a line plot with coloured bands showing uncertainties/standard deviations

> [example script](file:////display/HalloranResearchMain/example+script+band+plots)
### Reading data in from a text file

my_data = genfromtxt('filename',skip_header = 3)

> Reads columns of data from a text file into an array, here called
> my_data. In this case it skips the first three lines, assuming that these
> contain column titles etc.

or if a csv file:

my_data = genfromtxt('filename.csv',skip_header = 3,delimiter = ',')

delimiter specifies what character is separating the columns - typically a comma, but it could be a ';' or a tab (which is '\t') or a space ' ' etc.
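As a quick illustrative sketch (the file name and column layout here are made up - swap in your own), once the data is read in you can pull out individual columns and plot them:

my_data = genfromtxt('my_file.csv',skip_header = 1,delimiter = ',')
first_column = my_data[:,0]
second_column = my_data[:,1]
plot(first_column, second_column)
show()

This assumes a csv file with one header line and at least two columns of numbers, and that numpy and matplotlib.pyplot have been imported as at the top of this sheet.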
### Loading climate model or big observational data from netcdf files (files with the extension '.nc') using iris

my_cube = load_cube('filename')

> Note that if this does not work and tells you there are too many
> cubes, you can try:

my_cube = load('filename')

> This is more flexible about how it reads in data, but you might find
> that you have read in various different things at once, so you will
> want to check this afterwards (print my_cube), then potentially extract
> what you want from my_cube
### Converting monthly data to yearly data in an iris cube

add_year(my_monthly_cube, 'time', name='year')
my_new_annual_cube = my_monthly_cube.aggregated_by('year', MEAN)
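Putting those two lines together with the imports they rely on gives something like the following sketch (the file name here is made up - swap in your own):

from iris import *
from iris.analysis import *
from iris.coord_categorisation import *

my_monthly_cube = load_cube('my_monthly_file.nc')
add_year(my_monthly_cube, 'time', name='year')
my_new_annual_cube = my_monthly_cube.aggregated_by('year', MEAN)
print my_new_annual_cube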
### Calculating an annual mean but just over specific months - from an iris cube

import iris
import numpy as np
from iris.coord_categorisation import *

#extracting just two months from a cube with all of the months - here 12 and 1: December and January
cube = iris.load_cube('your_monthly_cube_location_and_name.nc')
add_month_number(cube, 'time', name='month_number')
cube2 = cube[np.where((cube.coord('month_number').points == 12) | (cube.coord('month_number').points == 1))]

#then to average this by each year, so that you have the December-January mean for each year, add the 'season year', i.e. a number for each 'season'
add_season_year(cube2, 'time', name='season_year')

#then average by the season year:
cube2 = cube2.aggregated_by(['season_year'], iris.analysis.MEAN)

#cube2 then contains the average for each December-January period in each year
### Averaging an iris cube of data along some dimension (typically across the time-intervals, or latitude etc.)

average_across_time = my_cube.collapsed(['time'],MEAN)

or when you want to average latitudinally and longitudinally and need to account for the different sizes of a (e.g.) 1x1 degree grid box on the equator versus a 1x1 degree grid box at the pole:

my_cube.coord('latitude').guess_bounds()
my_cube.coord('longitude').guess_bounds() #These two lines are just something you sometimes have to do when the original data did not include all of the latitude/longitude data we require
grid_areas = area_weights(my_cube) #This does the clever maths to work out the grid box areas
global_average_variable = my_cube.collapsed(['latitude', 'longitude'],MEAN, weights=grid_areas)
### Writing a script

see week 1 practical: [Week 1 An introduction to scientific computing](file:////display/HalloranResearchMain/Week+1+An+introduction+to+scientific+computing)
### Calculating the mean of an array

my_mean = mean(my_array)
### Calculating the mean of an array which has 'NaN' - Not A Number - 'gaps' in it

from scipy.stats import *
my_mean = nanmean(my_array)
### Calculating the standard deviation of an array

my_std = std(my_array)
### Calculating the standard deviation of an array which has 'NaN' - Not A Number - 'gaps' in it

from scipy.stats import *
my_std = nanstd(my_array)
### Replacing a missing data indicator with a NaN

from numpy import *

data = array([1.0,2.0,3.0,4.0,5.0,-99999.0,7.0,8.0,9.0,-99999.0])

#if we make a simple array containing some numbers and some 'missing data values', here -99999.0 - e.g. numbers to highlight where there were no observations - we can then change them to Not A Number (nan) values like so:

data[where(data == -99999)] = nan

print data

#If we then plotted this data it would omit these nan values
### Masking where you have NaNs in a dataset

import numpy.ma as ma

masked_data = ma.masked_invalid(data)

> for more information about masks see the [relevant numpy help
> pages](http://docs.scipy.org/doc/numpy/reference/maskedarray.html)
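As a minimal sketch of what the mask buys you (the numbers here are made up), statistics on a masked array simply skip the masked values:

import numpy.ma as ma
from numpy import *

data = array([1.0,2.0,nan,4.0,5.0])
masked_data = ma.masked_invalid(data)
print masked_data.mean()

> This prints 3.0 - the mean of the four valid values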
### Calculating the correlation between two datasets

my_correlation_variable = spearmanr(variable_1,variable_2)
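spearmanr returns two values - the correlation coefficient and the p-value - so you can also unpack them into separate variables (variable_1 and variable_2 as above):

my_correlation, my_p_value = spearmanr(variable_1,variable_2)
print my_correlation
print my_p_value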
### Linear Regression

from scipy.stats.mstats import *
slope, intercept, r_value, p_value, std_err = linregress(variable_1,variable_2)

> Calculates the slope (m) and intercept (c) from a linear relationship
> (the equation of which is y = m*x + c), as well as the stats on that
> relationship (r-value, p-value, standard error etc.)
### Multiple linear regression example

import numpy as np
import matplotlib.pyplot as plt

#Your y-value
y = [-6,-5,-10,-5,-8,-3,-6,-8,-8]

#Your 3 x-values (note that you could have more/fewer but would have to edit the subsequent code). These are made numpy arrays so we can do maths with them below
x1 = np.array([-4.95,-4.55,-10.96,-1.08,-6.52,-0.81,-7.01,-4.46,-11.54])
x2 = np.array([-5.87,-4.52,-11.64,-3.36,-7.45,-2.36,-7.33,-7.65,-10.03])
x3 = np.array([-0.76,-0.71,-0.98,0.75,-0.86,-0.50,-0.33,-0.94,-1.03])

#combines the x's into one variable
x = [x1,x2,x3]

#does the least squares fitting. The columns of X are x1, x2, x3 and a column of ones (for the intercept), so the fitted coefficients come back in that order
X = np.column_stack(x+[[1]*len(x[0])])
m1,m2,m3,c = np.linalg.lstsq(X,y)[0]

#plots the result. The original y is in red, and the multiple linear regression estimated values are in green
plt.plot(y,'r')
plt.plot(m1*x1 + m2*x2 + m3*x3 + c,'g--')
plt.show()
### Plotting a linear best fit line

slope, intercept, r_value, p_value, std_err = linregress(variable_1,variable_2)

scatter(variable_1,variable_2)
x = variable_1
y = slope*variable_1 + intercept
plot(x,y)
show()
### Calculating seawater density

import seawater
results_array = seawater.dens(salinity_data,temperature_data,1)

> Note the value '1' at the end is saying there is a pressure of 1.
> If you were calculating density for deep in the ocean, technically you
> should account for the increased pressure of seawater. Type
> help(seawater.dens) to find out more about this
### Working with Iris Cubes (i.e. data read in from netcdf files using the iris modules)
##### Contour plot from an 'iris cube' i.e. a variable read in using one of the iris modules - a variable holding the (e.g.) climate data as well as all the metadata about that climate model etc.

> quickplot.contourf(my_cube,31)
>
> Note that the '31' here is how many colour levels you want to use -
> you can of course leave it blank
##### x-y plot from data that was read in as an iris cube then averaged so that it only has a single dimension (i.e. it has been spatially averaged so the only dimension is time)

> quickplot.plot(my_cube)
##### As above, but specifying that you want a specific dimension (here depth) on the y-axis:

> quickplot.plot(my_cube,my_cube.coord('depth'))
> show()
##### Adding coastlines to a map plot

> gca().coastlines()
##### Loading climate model or big observational data from netcdf files (files with the extension '.nc') using iris

> my_cube = load_cube('filename')
>
> Note that if this does not work and tells you there are too many
> cubes, you can try:
>
> my_cube = load('filename')
>
> This is more flexible about how it reads in data, but you might find
> that you have read in various different things at once, so you will
> want to check this afterwards (print my_cube), then potentially extract
> what you want from my_cube
##### Converting monthly data to yearly data in an iris cube

> add_year(my_monthly_cube, 'time', name='year')
>
> my_new_annual_cube = my_monthly_cube.aggregated_by('year', MEAN)
##### Averaging an iris cube of data along some dimension (typically across the time-intervals, or latitude etc.)

> average_across_time = my_cube.collapsed(['time'],MEAN)
>
> or when you want to average latitudinally and longitudinally and need to
> account for the different sizes of a (e.g.) 1x1 degree grid box on the
> equator versus a 1x1 degree grid box at the pole:
>
> my_cube.coord('latitude').guess_bounds()
> my_cube.coord('longitude').guess_bounds() #These two lines are just something you sometimes have to do when the original data did not include all of the latitude/longitude data we require
> grid_areas = area_weights(my_cube) #This does the clever maths to work out the grid box areas
> global_average_variable = my_cube.collapsed(['latitude', 'longitude'],MEAN, weights=grid_areas)
##### Extracting horizontal slices from a cube (e.g. at a given depth)

> my_depth_slice_variable = my_cube.extract(Constraint(depth = 0))
>
> This example extracts the surface level from a 3D cube
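If you don't know the exact depth values on the model grid, you can instead pass a small function to the constraint rather than a single number - a sketch, assuming here that you want the level nearest to roughly 100 m:

my_depth_slice_variable = my_cube.extract(Constraint(depth = lambda cell: abs(cell - 100) < 10))

This keeps any level whose depth is within 10 m of 100 m; widen or narrow the 10 to suit your grid.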
##### Extracting vertical slices from a cube

> my_meridional_slice_variable = my_cube.extract(Constraint(longitude = 182.5))
>
> Of course here 'longitude' could be (in most situations) replaced by
> latitude, depth or time - although you would have to specify
> a different value because (for example) 182.5 does not mean anything
> sensible as a latitude
>
> Or, averaging together all of the longitudes - i.e. like the slice
> above, but averaged over the region of your dataset rather than
> extracted at a single longitude:

variable_collapsed_along_longitude = my_cube.collapsed('longitude',MEAN)
##### Extracting a spatial region from a global dataset

> west = -70
> east = 50
> south = -20
> north = 60
>
> temporary_cube = my_cube.intersection(longitude = (west, east))
> my_regional_cube = temporary_cube.intersection(latitude = (south, north))
>
> note that there is no reason why these variables need to be called
> 'west', 'east' etc., but I thought it easier to see what this was
> doing this way
##### Extracting the data part of the iris cube (rather than the metadata)

> my_data = my_cube.data
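Once extracted, my_data is just a numpy array (often a masked array), so all of the normal numpy tools work on it, e.g.:

my_data = my_cube.data
print shape(my_data)
print my_data.max()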
##### Getting the year (or month or day) values for a cube of data

> note if you just look at the time coordinate it is usually something
> crazy like 'number of days since 1st Jan 1850'
>
> coord = my_cube.coord('time')
> date_variable = array([coord.units.num2date(value).year for value in coord.points])
##### Correlation maps

> import iris.analysis.stats as istats
> my_correlation_variable = istats.pearsonr(my_cube_1,my_cube_2,corr_coords=['time'])
##### Calculating the difference point-by-point between two cubes of data

> my_difference_cube = my_cube_1 - my_cube_2
##### Calculating the mean of a bunch of cubes

> my_mean_variable = mean([my_cube_1,my_cube_2,my_cube_3])
##### Calculating the variance of a bunch of cubes

> my_variance_variable = var([my_cube_1,my_cube_2,my_cube_3])
##### Doing the carbonate-chemistry calculations

> import carbchem_modified
>
> co2_cube = carbchem_modified.carbchem(1,temperature_cube.data.fill_value,temperature_cube,salinity_cube,dissolved_carbon_cube,alkalinity_cube)
>
> There is actually a bit more flexibility here than the colours make
> out... change the first argument (the '1') and you can calculate other
> quantities - pH for example - check out the help (assuming I wrote
> them) for more details.
> Also see the week 8 practical for more details on this:
> [GEO3231Week8Practical](file:////display/HalloranResearchMain/GEO3231Week8Practical)
### Plotting two time-series (or similar) datasets on the same x-axis but with different y axes:

import matplotlib.pyplot as plt

timeseries_1 = [0.7135553808947991, 0.4861775160058641, 0.8254392727351161, 0.5098580281973034, 0.13077277305355084, 0.7123233371092856, 0.13198748832181673, 0.8517834038288471, 0.014955556391444969, 0.39952309433852173]

timeseries_2 = [90.01896882, 74.23638798, 76.70026728, 29.56024534, 6.43137557, 19.57561375, 91.57594645, 21.33809188, 66.18203436, 68.80092629]

We can plot them both on one graph, but because the values in timeseries_2 are much bigger we will not be able to visually compare any variation between the two time series because the variability in the small numbers will be invisible. e.g.:

plt.plot(timeseries_1)
plt.plot(timeseries_2)
plt.show()

produces [this](file:////display/HalloranResearchMain/plot1)

fig, ax1 = plt.subplots()
# this tells python that you will want a figure and it will have one set of axes called 'ax1'

ax1.plot(timeseries_1,'red')
#This plots data on that axis

ax2 = ax1.twinx()
#This tells python that you want a second set of axes for a different dataset. We want to share (twin) the x-axis between the two different sets of axes - this is what twinx() does. The new set of axes is called 'ax2'

ax2.plot(timeseries_2,'blue')
#Now we plot some data on this second set of axes.

plt.show()

produces [this](file:////display/HalloranResearchMain/plot2)
### Saving a figure you have plotted:

from matplotlib.pyplot import *

timeseries_1 = [0.7135553808947991, 0.4861775160058641, 0.8254392727351161, 0.5098580281973034, 0.13077277305355084, 0.7123233371092856, 0.13198748832181673, 0.8517834038288471, 0.014955556391444969, 0.39952309433852173]

timeseries_2 = [90.01896882, 74.23638798, 76.70026728, 29.56024534, 6.43137557, 19.57561375, 91.57594645, 21.33809188, 66.18203436, 68.80092629]

plot(timeseries_1,timeseries_2)

savefig('/my_directory/my_filename.png')

where /my_directory/my_filename.png is specifying the location and name of the file you want to save
### Regridding and doing initial processing of non-CMIP5 data. Example here is SODA data (not python but related)

Your files may have a number of different variables in them (e.g. temperature and salinity). We therefore first want to see what is in the files. Logging on to atoll (and possibly not logging on from that to a node) type the following. NOTE YOU ARE TYPING THIS INTO THE COMMAND LINE WITHOUT 1st OPENING PYTHON! This is not to be done in python.

ncdump -h file_name.nc (where file_name.nc is the name of your file). This will print out something like [this](file:////display/HalloranResearchMain/SodaHeader). Here you can see what the variables are. It also tells you what the code is for each variable: for example, find where it says 'TEMPERATURE' - on the line above that you see that this is a variable called 'temp' with depth, latitude and longitude information. Above 'SALINITY' you see you have a variable called 'salt' etc... So say you want to look at temperature, the variable name you are after is 'temp'

We can then make a new file that holds just that temperature (temp) variable by typing:

cdo selname,temp file_name.nc new_file.nc

Now just the temperature data is in the file called 'new_file.nc'

If you have a number of files (e.g. each monthly file of the SODA dataset) you will first want to combine these into one file:

cdo mergetime file_name_1.nc file_name_2.nc my_output_file.nc

You can cheat if you want to merge together all of the '.nc' files in a directory and type cdo mergetime *.nc my_output_file.nc

You can then regrid the dataset onto a 360 by 180 point (one by one degree) grid (or another grid if you choose) with:

cdo remapbil,r360x180 my_input_file.nc my_output_file.nc

This takes whatever is in 'my_input_file.nc' and puts it on a nice simple one by one degree grid in 'my_output_file.nc'. Note that if you were to change the 360 or 180 numbers you would end up with a different number of longitude and latitude points.

Your data is now nicely processed and ready to analyse exactly as the various tutorials explain.
### Reading numerical data in from a text file:

import numpy as np

data = np.genfromtxt('input_data_file.txt')

column1_data = data[:,0]

column2_data = data[:,1]
### Saving data to a text file:

np.savetxt('output_data_file.txt',np.c_[column1_data,column2_data])
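A minimal sketch of the whole round trip (the file name and numbers here are made up): np.c_ stacks the 1-D arrays side-by-side so they come out as two columns in the text file.

import numpy as np

column1_data = np.array([1.0,2.0,3.0])
column2_data = np.array([10.0,20.0,30.0])
np.savetxt('output_data_file.txt',np.c_[column1_data,column2_data])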
### A function to create running means (smoothing data):

To use this, just copy the code below into your Python window, then use as any other function.

import numpy as np

def running_mean(x, N):
    half = int(np.round(N/2.0))
    y = np.zeros((len(x),))
    for ctr in range(len(x)):
        y[ctr] = np.sum(x[(ctr-half):ctr+half])
    out = y/N
    #the windows at the very start and end of the timeseries are incomplete, so set them to NaN
    out[0:half] = np.NAN
    out[-half:] = np.NAN
    return out

Here x is the dataset and N the smoothing window (the number of datapoints you want to average together to create your smoothing)
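A quick usage sketch (the random data here is just a stand-in for your own timeseries):

import numpy as np
from matplotlib.pyplot import *

data = np.random.rand(100)
smoothed = running_mean(data, 10)
plot(data)
plot(smoothed)
show()

The smoothed line should look like a damped version of the raw data, with NaN gaps at each end where the window is incomplete.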
Adding latitude and longitude labels to your map
------------------------------------------------

<http://scitools.org.uk/cartopy/docs/latest/examples/tick_labels.py>
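The link above shows the full cartopy example; a minimal sketch along the same lines (for a global PlateCarree map) is:

import matplotlib.pyplot as plt
import cartopy.crs as ccrs
from cartopy.mpl.ticker import LongitudeFormatter, LatitudeFormatter

ax = plt.axes(projection=ccrs.PlateCarree())
ax.coastlines()
#choose where the ticks go, then format them as degrees east/west and north/south
ax.set_xticks([-180, -120, -60, 0, 60, 120, 180], crs=ccrs.PlateCarree())
ax.set_yticks([-90, -60, -30, 0, 30, 60, 90], crs=ccrs.PlateCarree())
ax.xaxis.set_major_formatter(LongitudeFormatter())
ax.yaxis.set_major_formatter(LatitudeFormatter())
plt.show()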
### Calculating the overturning stream function example (using the CMIP5 msftmyz variable):

import numpy as np
import matplotlib.pyplot as plt
import iris

#Read in cube
cube = iris.load_cube('MPI-ESM-P_msftmyz_piControl_regridded.nc')

#work out what our latitude coordinate is called
print cube

#We see that the latitude coordinate is called something like grid_latitude. Make a note of this.

#Using the name identified above, we can print out all of the latitude points
print cube.coord('grid_latitude').points

#we can then find the position of the latitude of interest - probably that closest to 26N. E.g. item 116
cube.coord('grid_latitude').points[116]

#Make a new cube (called cube2) with just depth and time for that latitude
cube2 = cube[:,0,:,116]

#Make an array to hold the values of all of the stream function values - i.e. an array the same size as the number of time elements in our cube
my_MOC_array = np.zeros(np.shape(cube)[0])

#Then produce a loop to pull out the maximum values from this latitude for all times.
for i,slice in enumerate(cube2.slices(['depth'])):
    my_MOC_array[i] = np.max(slice.data)

#Plot to make sure the data makes sense:
plt.plot(my_MOC_array)
plt.show()
Extracting a range of years from a cube
---------------------------------------

> import numpy as np
>
> coord = my_cube.coord('time')
>
> date_variable = np.array([coord.units.num2date(value).year for value in coord.points])
>
> years_of_interest = np.where((date_variable <= 2100) & (date_variable >= 2080))
>
> my_cube = my_cube[years_of_interest]
Color maps:
-----------

Accent, Accent_r, Blues, Blues_r, BrBG, BrBG_r, BuGn, BuGn_r, BuPu, BuPu_r, CMRmap, CMRmap_r, Dark2, Dark2_r, GnBu, GnBu_r, Greens, Greens_r, Greys, Greys_r, OrRd, OrRd_r, Oranges, Oranges_r, PRGn, PRGn_r, Paired, Paired_r, Pastel1, Pastel1_r, Pastel2, Pastel2_r, PiYG, PiYG_r, PuBu, PuBuGn, PuBuGn_r, PuBu_r, PuOr, PuOr_r, PuRd, PuRd_r, Purples, Purples_r, RdBu, RdBu_r, RdGy, RdGy_r, RdPu, RdPu_r, RdYlBu, RdYlBu_r, RdYlGn, RdYlGn_r, Reds, Reds_r, Set1, Set1_r, Set2, Set2_r, Set3, Set3_r, Spectral, Spectral_r, Wistia, Wistia_r, YlGn, YlGnBu, YlGnBu_r, YlGn_r, YlOrBr, YlOrBr_r, YlOrRd, YlOrRd_r, afmhot, afmhot_r, autumn, autumn_r, binary, binary_r, bone, bone_r, brewer_Accent_08, brewer_Blues_09, brewer_BrBG_11, brewer_BuGn_09, brewer_BuPu_09, brewer_Dark2_08, brewer_GnBu_09, brewer_Greens_09, brewer_Greys_09, brewer_OrRd_09, brewer_Oranges_09, brewer_PRGn_11, brewer_Paired_12, brewer_Pastel1_09, brewer_Pastel2_08, brewer_PiYG_11, brewer_PuBuGn_09, brewer_PuBu_09, brewer_PuOr_11, brewer_PuRd_09, brewer_Purples_09, brewer_RdBu_11, brewer_RdGy_11, brewer_RdPu_09, brewer_RdYlBu_11, brewer_RdYlGn_11, brewer_Reds_09, brewer_Set1_09, brewer_Set2_08, brewer_Set3_12, brewer_Spectral_11, brewer_YlGnBu_09, brewer_YlGn_09, brewer_YlOrBr_09, brewer_YlOrRd_09, brg, brg_r, bwr, bwr_r, cool, cool_r, coolwarm, coolwarm_r, copper, copper_r, cubehelix, cubehelix_r, flag, flag_r, gist_earth, gist_earth_r, gist_gray, gist_gray_r, gist_heat, gist_heat_r, gist_ncar, gist_ncar_r, gist_rainbow, gist_rainbow_r, gist_stern, gist_stern_r, gist_yarg, gist_yarg_r, gnuplot, gnuplot2, gnuplot2_r, gnuplot_r, gray, gray_r, hot, hot_r, hsv, hsv_r, inferno, inferno_r, jet, jet_r, magma, magma_r, nipy_spectral, nipy_spectral_r, ocean, ocean_r, pink, pink_r, plasma, plasma_r, prism, prism_r, rainbow, rainbow_r, seismic, seismic_r, spectral, spectral_r, spring, spring_r, summer, summer_r, terrain, terrain_r, viridis, viridis_r, winter, winter_r
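To use one of these, pass the name via the cmap argument - a minimal sketch with made-up data:

import numpy as np
import matplotlib.pyplot as plt

data = np.random.rand(10,10)
plt.contourf(data, 31, cmap=plt.cm.get_cmap('RdBu_r'))
plt.colorbar()
plt.show()

With iris quickplot you should be able to pass cmap in the same way, e.g. quickplot.contourf(my_cube, 31, cmap=plt.cm.get_cmap('RdBu_r')), since quickplot hands extra keyword arguments on to matplotlib.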
Producing pretty plots
----------------------

[example file](file:////download/attachments/29984259/pretty_plotting.py%3fversion=2&modificationDate=1453994726000&api=v2)

or copy of script [here](file:////display/HalloranResearchMain/PrettyPlotting)
High, low and band-pass filtering cubes
---------------------------------------

import iris
import matplotlib.pyplot as plt
import scipy
import scipy.signal
import iris.quickplot as qplt
import numpy as np

'''
We will use the following functions, so make sure they are available
'''

def butter_bandpass(lowcut, cutoff):
    order = 2
    low = 1.0/lowcut
    b, a = scipy.signal.butter(order, low, btype=cutoff, analog=False)
    return b, a

def low_pass_filter(cube, limit_years):
    b1, a1 = butter_bandpass(limit_years, 'low')
    output = scipy.signal.filtfilt(b1, a1, cube, axis=0)
    return output

def high_pass_filter(cube, limit_years):
    b1, a1 = butter_bandpass(limit_years, 'high')
    output = scipy.signal.filtfilt(b1, a1, cube, axis=0)
    return output

'''
Initially just reading in a dataset to work with, and averaging lats and
longs to give us a timeseries to plot - you can obviously swap in your
timeseries
'''

file = '/media/usb_external1/cmip5/tas_regridded/MPI-ESM-P_tas_piControl_regridded.nc'

cube = iris.load_cube(file)

timeseries1 = cube.collapsed(['latitude','longitude'],iris.analysis.MEAN)

'''
Filtering out everything happening on timescales shorter than X
years (where X is called lower_limit_years)
'''

lower_limit_years = 10.0

output_cube = cube.copy()

output_cube.data = low_pass_filter(cube.data,lower_limit_years)

timeseries2 = output_cube.collapsed(['latitude','longitude'],iris.analysis.MEAN)

plt.close('all')

qplt.plot(timeseries1 - np.mean(timeseries1.data),'r',alpha = 0.5,linewidth = 2)

qplt.plot(timeseries2 - np.mean(timeseries2.data),'g',alpha = 0.5,linewidth = 2)

plt.show(block = True)

'''
Filtering out everything happening on timescales longer than X
years (where X is called upper_limit_years)
'''

upper_limit_years = 5.0

output_cube = cube.copy()

output_cube.data = high_pass_filter(cube.data,upper_limit_years)

timeseries3 = output_cube.collapsed(['latitude','longitude'],iris.analysis.MEAN)

plt.close('all')

qplt.plot(timeseries1 - np.mean(timeseries1.data),'r',alpha = 0.5,linewidth = 2)

qplt.plot(timeseries3 - np.mean(timeseries3.data),'b',alpha = 0.5,linewidth = 2)

plt.show(block = True)

'''
Filtering out everything happening on timescales longer than X
years (where X is called upper_limit_years) but shorter than Y years
(where Y is called lower_limit_years)
'''

upper_limit_years = 50.0

output_cube = cube.copy()

output_cube.data = high_pass_filter(cube.data,upper_limit_years)

lower_limit_years = 5.0

output_cube.data = low_pass_filter(output_cube.data,lower_limit_years)

timeseries4 = output_cube.collapsed(['latitude','longitude'],iris.analysis.MEAN)

plt.close('all')

qplt.plot(timeseries1 - np.mean(timeseries1.data),'r',alpha = 0.5,linewidth = 2)

qplt.plot(timeseries4 - np.mean(timeseries4.data),'y',alpha = 0.5,linewidth = 2)

plt.show(block = True)

'''
Hopefully this tells you everything you need. Just be aware that strange
things can happen at the ends of the timeseries (just check it is doing
something sensible)
'''
Calculating the depths at which a variable in a cube moved from above to below a critical value
-----------------------------------------------------------------------------------------------

for example, for calculating the saturation horizon, or the depth of an isopycnal

import iris
import numpy as np
import matplotlib.pyplot as plt
import unicodedata

directory = '/data/NAS-ph290/ph290/cmip5/last1000/' #the location of the directory holding the input file

file = 'HadCM3_thetao_past1000_r1i1p1_regridded_not_vertically_Omon.nc' #the name of the input file

critical_value = 1.0 # i.e. identify the depth for which values move from above to below this value

#load the cube in from which you want to calculate the depth data

cube = iris.load_cube(directory + file)

#This script is a little more complicated than you might expect, so that it can work with cubes of 3 or 4 dimensions, and with cubes that have used various names to describe the depth axis

shape = np.shape(cube)

test = np.size(shape)

if test == 4:
    print 'cube has 4 dimensions, assuming time, depth, latitude, longitude'
    depth_coord = 1
elif test == 3:
    print 'cube has 3 dimensions, assuming depth, latitude, longitude'
    depth_coord = 0
else:
    print 'are you sure cube has a depth dimension?'

#Identify the values for each depth level

depths = cube.coord(dimensions = depth_coord).points

#Make a cube that just holds the depth data

depths_cube = cube.copy()

if test == 4:
    depths_cube.data = np.ma.masked_array(np.swapaxes(np.tile(depths, (shape[0],shape[3],shape[2],1)),3,1))
elif test == 3:
    depths_cube.data = np.ma.masked_array(np.swapaxes(np.tile(depths, (shape[2],shape[1],1)),2,0))

depths_cube.data.mask = cube.data.mask

#mask that cube where the values are at or below your chosen value

mask = np.where(cube.data <= critical_value)

depths_cube.data.mask[mask] = True

#make a cube without a depth dimension to hold the output

depth_name = cube.coord(dimensions = depth_coord).standard_name.encode('ascii','ignore')

output_cube = cube.collapsed(depth_name,iris.analysis.MEAN)

#Find the greatest depth at which data is found above the value of interest

if test == 4:
    output_cube.data = np.max(depths_cube.data,axis = 1)
elif test == 3:
    output_cube.data = np.max(depths_cube.data,axis = 0)

#Give the cube the right metadata

output_cube.standard_name = 'depth'

output_cube.units = 'm'

#output_cube contains the depths
### Calculating the NINO3.4 index from a cube
|
|
|
|
|
|
|
|
\#\#\#\#\#
|
|
|
|
|
|
|
|
\# NOTE, this is a bit of a fudge at present - revisit and improve
|
|
|
|
|
|
|
|
\#\#\#\#\#
|
|
|
|
|
|
|
|
import iris
|
|
|
|
|
|
|
|
import iris.coord\_categorisation
|
|
|
|
|
|
|
|
import numpy as np
|
|
|
|
|
|
|
|
print \'please note, this script has not yet been tested with
|
|
|
|
observations, so perform your own tests to make sure that it can
|
|
|
|
replicate the official index\'
|
|
|
|
|
|
|
|
\#Reading in the MONTHLY SST data from the netcdf file
|
|
|
|
|
|
|
|
cube =
|
|
|
|
iris.load\_cube(\'/data/NAS-ph290/ph290/cmip5/historical/tos\_Omon\_HadCM3\_historical\_r1i1p1\_193412-195911.nc\')
|
|
|
|
|
|
|
|
iris.coord\_categorisation.add\_season\_year(cube, \'time\',
|
|
|
|
name=\'season\_year\')
|
|
|
|
|
|
|
|
iris.coord_categorisation.add_month_number(cube, 'time', name='month_number')

cube = cube.aggregated_by(['season_year', 'month_number'], iris.analysis.MEAN)

#convert the time coordinate to dates, and pull out the year of each timestep
coord = cube.coord('time')
dt = coord.units.num2date(coord.points)
years = np.array([coord.units.num2date(value).year for value in coord.points])

#Nino 3.4 region
lon_west = -170
lon_east = -120
lat_south = -0.5
lat_north = 0.5

cube_region_tmp = cube.intersection(longitude=(lon_west, lon_east))
cube_region = cube_region_tmp.intersection(latitude=(lat_south, lat_north))
timeseries = cube_region.collapsed(['latitude', 'longitude'], iris.analysis.MEAN).data

#split the area-averaged timeseries into one series per calendar month
month_number = cube.coord('month_number').points
years_by_month = {}
timeseries_by_month = {}
for j in np.arange(12) + 1:
    years_by_month[str(j)] = years[np.where(month_number == j)]
    timeseries_by_month[str(j)] = timeseries[np.where(month_number == j)]

#for every 5th year, store a running mean for each calendar month (the slice
#spans 30 values either side of that year), along with the years those means
#are centred on
thirty_yr_means = {}
meaning_years = {}
for j in np.arange(12) + 1:
    thirty_yr_means[str(j)] = []
    meaning_years[str(j)] = []
    years_tmp = years_by_month[str(j)]
    timeseries_tmp = timeseries_by_month[str(j)]
    for i in years_tmp[::5]:
        meaning_years[str(j)].append(i)
        loc = np.where(years_tmp == i)[0][0]
        try:
            thirty_yr_means[str(j)].append(np.mean(timeseries_tmp[loc-30:loc+30]))
        except Exception:
            thirty_yr_means[str(j)].append(np.nan)

#subtract the appropriate running climatology from each monthly value to give
#the Nino 3.4 index
nino34 = []
for i, timeseries_value in enumerate(timeseries):
    meaning_years_tmp = meaning_years[str(month_number[i])]
    thirty_yr_means_tmp = thirty_yr_means[str(month_number[i])]
    loc = np.searchsorted(meaning_years_tmp, years[i], side="left")
    if loc >= np.size(meaning_years_tmp):
        loc = np.size(meaning_years_tmp) - 1
    #note: comparing a value with == np.nan is always False, so use np.isnan
    if np.isnan(thirty_yr_means_tmp[loc]):
        nino34.append(np.nan)
    else:
        nino34.append(timeseries_value - thirty_yr_means_tmp[loc])

nino34 = np.array(nino34)
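A quick way to eyeball the resulting index (a minimal sketch, assuming `nino34` and `years` from above):

import matplotlib.pyplot as plt

plt.plot(years, nino34)
plt.xlabel('year')
plt.ylabel('Nino 3.4 index')
plt.show()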
##### Extracting contour lines from a cube of data and saving them out as a shapefile for ArcGIS

import iris
import matplotlib.pyplot as plt
import numpy as np
import shapefile

#function to extract the coordinates from the contour lines
def get_contour_verts(cn):
    contours = []
    #for each contour line
    for cc in cn.collections:
        paths = []
        #for each separate section of the contour line
        for pp in cc.get_paths():
            xy = []
            #for each segment of that section
            for vv in pp.iter_segments():
                xy.append(vv[0])
            paths.append(np.vstack(xy))
        contours.append(paths)
    return contours

#Input file (if you need to read data in). Replace with your chosen file and directory
my_file = '/data/NAS-ph290/ph290/cmip5/historical/regridded/CCSM4_tos_historical_r1i1p1_regridded_not_vertically.nc'

#Read the data in (note, here I'm adding [0] to only read in the first year of data)
cube = iris.load_cube(my_file)[0]

#specify the number of contour levels you want
no_contours = 20
cn = plt.contour(cube.data, no_contours)
contours = get_contour_verts(cn)

#write the coordinates to a shapefile (shapeType=3 is a polyline)
w = shapefile.Writer()
for cont in contours:
    w.poly(shapeType=3, parts=cont)

#enter the filename here:
filename = '/home/ph290/Downloads/my_shapefile.shp'
w.save(filename)
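The Writer calls above follow the pyshp 1.x interface. In pyshp 2.x the API changed; the sketch below is an assumption based on the pyshp 2 documentation (the field name 'level' and the record value are hypothetical placeholders):

import shapefile

w = shapefile.Writer(filename, shapeType=3)  #3 = polyline
w.field('level', 'C')  #shapefiles need at least one attribute field
for cont in contours:
    w.line(cont)  #each element of cont is a list/array of [x, y] points
    w.record('contour')
w.close()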
### Cube Statistics/Maths
##### Finding the maximum value in latitude-longitude space:

import iris
import iris.analysis

my_result = my_cube.collapsed(['longitude', 'latitude'], iris.analysis.MAX)

##### Finding the minimum value in latitude-longitude space:

import iris
import iris.analysis

my_result = my_cube.collapsed(['longitude', 'latitude'], iris.analysis.MIN)

Other operations similar to MAX and MIN can be found [here](http://scitools.org.uk/iris/docs/latest/iris/iris/analysis.html#iris.analysis.MAX), and can be substituted in for MAX and MIN in the above two examples.
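For example, a minimal sketch of the same spatial collapse using the median instead (assuming `my_cube` as above):

import iris
import iris.analysis

my_result = my_cube.collapsed(['longitude', 'latitude'], iris.analysis.MEDIAN)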
##### Adding a value (5 in this example) to the data in a cube

my_cube = my_cube + 5.0

##### Dividing the data in a cube by a certain value (5 in this example)

my_cube = my_cube / 5.0
##### Reading and concatenating EN4 data

EN4 data is a bit messy. This reads it in for you.

import glob
import iris

def my_callback(cube, field, filename):
    #there are some bad attributes in the NetCDF files which make the data
    #incompatible for merging, so remove them
    cube.attributes.pop('history', None)
    cube.attributes.pop('creation_date', None)
    cube.attributes.pop('time', None)
    # if np.size(cube) > 1:
    #     cube = iris.experimental.concatenate.concatenate(cube)
    return cube

files = glob.glob('/home/cns205/data/EN.4.2.0.f.analysis.g10.2013*.nc')
cubes = iris.load(files, 'sea_water_potential_temperature', callback=my_callback)
iris.util.unify_time_units(cubes)
for i in range(0, len(files)):
    del cubes[i].dim_coords[0].attributes['time_origin']

cube = cubes.concatenate_cube()
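To check the result, print the merged cube; it should now have a single time dimension spanning all of the input files:

print(cube)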
### Identifying the grid coordinates in a cube of certain latitude and longitude points:

import numpy as np

def find_lat_lon(cube, longitude, latitude):
    lon = cube.coord('longitude').points.copy()
    lat = cube.coord('latitude').points.copy()
    if np.max(lon) > 350:
        lon -= 180.0  #shift a 0 to 360 longitude range towards -180 to 180
    if np.max(lat) > 170:
        lat -= 90.0  #shift a 0 to 180 latitude range to -90 to 90
    print('required longitude grid box is: ', np.where(lon > longitude)[0][0])
    print('required latitude grid box is: ', np.where(lat > latitude)[0][0])

#Edit the three lines below to fit what you want, then copy and paste the whole
#thing into Python after running your analysis so that the cube (here
#MIROC5_SOIL_DJF_1990_2012) exists
cube = MIROC5_SOIL_DJF_1990_2012
longitude = -22
latitude = -16
find_lat_lon(cube, longitude, latitude)
or, if the cube's longitudes already run from -180 to 180 (so only the latitude points need shifting):

def find_lat_lon(cube, longitude, latitude):
    lon = cube.coord('longitude').points.copy()
    lat = cube.coord('latitude').points.copy()
    if np.max(lat) > 170:
        lat -= 90.0
    print('required longitude grid box is: ', np.where(lon > longitude)[0][0])
    print('required latitude grid box is: ', np.where(lat > latitude)[0][0])

#Edit the three lines below to fit what you want, then copy and paste the whole
#thing into Python after running your analysis so that the cube (here
#MIROC5_SOIL_DJF_1990_2012) exists
cube = MIROC5_SOIL_DJF_1990_2012
longitude = -22
latitude = -16
find_lat_lon(cube, longitude, latitude)
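iris can also do this lookup for you: coordinate objects have a `nearest_neighbour_index` method (a minimal sketch, assuming the coordinate points are already in the range you expect):

lon_index = cube.coord('longitude').nearest_neighbour_index(-22)
lat_index = cube.coord('latitude').nearest_neighbour_index(-16)
print(lon_index, lat_index)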
### Lead/lagged correlation example

import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import spearmanr

x = np.random.rand(30)
y = np.roll(x, 3)

max_lag = -5
max_lead = 5
for lag_tmp in range(max_lag, max_lead + 1):
    tmp_x = np.roll(x, lag_tmp)
    my_correlation_variable = spearmanr(tmp_x, y)
    plt.scatter(lag_tmp, my_correlation_variable[0])

plt.show()
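Note that `np.roll` wraps values around the end of the array, so a few points at each lag are spuriously paired. A minimal sketch that instead trims to the overlapping part of the two series (assuming positive lags mean `x` is shifted forward; negative lags work the same with the slices swapped):

for lag_tmp in range(1, max_lead + 1):
    rho, p_value = spearmanr(x[:-lag_tmp], y[lag_tmp:])
    plt.scatter(lag_tmp, rho)

plt.show()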
### Hovmöller plots

import iris
import numpy as np
from iris.analysis import *
from iris.coord_categorisation import *
import iris.quickplot as quickplot
from matplotlib.pyplot import *

cube = iris.load_cube('/data/NAS-ph290/ph290/cmip5/last1000/HadCM3_sos_past1000_r1i1p1_regridded_not_vertically.nc')
add_year(cube, 'time', name='year')

west = -30
east = 0
south = 0
north = 90

temporary_cube = cube.intersection(longitude=(west, east))
my_regional_cube = temporary_cube.intersection(latitude=(south, north))

#average across longitude, leaving a time-by-latitude array
zonal_mean = my_regional_cube.collapsed(['longitude'], MEAN)

years = my_regional_cube.coord('year').points
lats = my_regional_cube.coord('latitude').points

close('all')
CS = contourf(lats, years, zonal_mean.data, 30)
cbar = colorbar(CS)
show()
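You will probably want axis labels so the orientation is clear; add these before `show()`, using the pyplot names already imported above:

xlabel('latitude')
ylabel('year')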
And to plot the same thing as anomalies from the time mean:

import iris
import numpy as np
from iris.analysis import *
from iris.coord_categorisation import *
import iris.quickplot as quickplot
from matplotlib.pyplot import *

cube = iris.load_cube('/data/NAS-ph290/ph290/cmip5/last1000/HadCM3_sos_past1000_r1i1p1_regridded_not_vertically.nc')
add_year(cube, 'time', name='year')

west = -30
east = 0
south = 0
north = 90

temporary_cube = cube.intersection(longitude=(west, east))
my_regional_cube = temporary_cube.intersection(latitude=(south, north))

#average across longitude, leaving a time-by-latitude array
zonal_mean = my_regional_cube.collapsed(['longitude'], MEAN)
#then average that across time, leaving the time-mean at each latitude
time_mean = zonal_mean.collapsed(['time'], MEAN)

#subtract the time-mean from every timestep to leave anomalies
data = zonal_mean.data
shape = np.shape(data)
for i in range(shape[0]):
    data[i, :] -= time_mean.data

years = my_regional_cube.coord('year').points
lats = my_regional_cube.coord('latitude').points

close('all')
CS = contourf(lats, years, data, 30)
cbar = colorbar(CS)
show()
### Drawing rectangles on plots:
<http://matthiaseisen.com/pp/patterns/p0203/>
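The link above walks through matplotlib patches in detail; a minimal self-contained sketch (the axis limits and rectangle position are arbitrary examples):

import matplotlib.pyplot as plt
import matplotlib.patches as patches

fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
#rectangle with lower-left corner at (0.2, 0.3), width 0.4 and height 0.2
ax.add_patch(patches.Rectangle((0.2, 0.3), 0.4, 0.2, fill=False))
plt.show()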
### Changing font size
from matplotlib.pyplot import *
import matplotlib

#'family' must be a real font family; 'normal' is not one and triggers a font warning
font = {'family': 'sans-serif',
        'weight': 'bold',
        'size': 22}
matplotlib.rc('font', **font)

#alternatively, just update the size via rcParams
matplotlib.rcParams.update({'font.size': 22})

x = [1, 2, 3, 4, 5]
y = [6, 7, 8, 9, 10]
plot(x, y)
ylabel('y-value')
tight_layout()
show()
### Contour plot with colour bar with title

import iris
import iris.plot as iplt
import matplotlib.pyplot as plt

cube = iris.load_cube('my_directory/my_cube.nc')
CS = iplt.contourf(cube[0])
cbar = plt.colorbar(CS, orientation='horizontal')
cbar.ax.set_title('my colour bar')
plt.show()
### Plotting GLODAP profiles

import iris
import matplotlib.pyplot as plt
import iris.quickplot as qplt
import iris.analysis
import iris.analysis.cartography

cube_tmp = iris.load_cube('GLODAPv2.2016b.TAlk.nc', 'seawater alkalinity expressed as mole equivalent per unit mass')
cube = iris.load_cube('/data/NAS-ph290/ph290/misc_data/temperature_annual_1deg.nc', 'sea_water_temperature')[0]
cube.data = cube_tmp.data

#These two lines are just something you sometimes have to do when the original
#data did not include the latitude/longitude bounds we require
cube.coord('latitude').guess_bounds()
cube.coord('longitude').guess_bounds()

#this does the clever maths to work out the grid box areas, so the mean can be
#area-weighted
grid_areas = iris.analysis.cartography.area_weights(cube)
global_average_variable = cube.collapsed(['latitude', 'longitude'], iris.analysis.MEAN, weights=grid_areas)

#plotting globally averaged GLODAP values against depth. Units: mol/kg
plt.plot(global_average_variable.coord('depth').points, global_average_variable.data)
plt.show()
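Oceanographic profiles are usually drawn with depth increasing downwards; a minimal variant of the plot above that puts depth on the y-axis and inverts it:

plt.plot(global_average_variable.data, global_average_variable.coord('depth').points)
plt.gca().invert_yaxis()
plt.xlabel('alkalinity (mol/kg)')
plt.ylabel('depth')
plt.show()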