Pandas

Pandas is an open source library providing high-performance, easy-to-use data structures and data analysis tools. Pandas is particularly suited to the analysis of tabular data, i.e. data that can go into a table. In other words, if you can imagine the data in an Excel spreadsheet, then Pandas is the tool for the job.

A recent analysis of questions from Stack Overflow showed that Python is the fastest-growing and most widely used programming language in the world (in developed countries).

[Figure: growth of Python question views on Stack Overflow]

A follow-up analysis showed that this growth is driven by data science packages such as NumPy, Matplotlib, and especially pandas.

[Figure: growth of pandas question views on Stack Overflow]

The exponential growth of pandas is due to the fact that it just works. It saves you time and helps you do science more efficiently and effectively.

Pandas capabilities (from the Pandas website):

  • A fast and efficient DataFrame object for data manipulation with integrated indexing;
  • Tools for reading and writing data between in-memory data structures and different formats: CSV and text files, Microsoft Excel, SQL databases, and the fast HDF5 format;
  • Intelligent data alignment and integrated handling of missing data: gain automatic label-based alignment in computations and easily manipulate messy data into an orderly form;
  • Flexible reshaping and pivoting of data sets;
  • Intelligent label-based slicing, fancy indexing, and subsetting of large data sets;
  • Columns can be inserted and deleted from data structures for size mutability;
  • Aggregating or transforming data with a powerful group by engine allowing split-apply-combine operations on data sets;
  • High performance merging and joining of data sets;
  • Hierarchical axis indexing provides an intuitive way of working with high-dimensional data in a lower-dimensional data structure;
  • Time series-functionality: date range generation and frequency conversion, moving window statistics, moving window linear regressions, date shifting and lagging. Even create domain-specific time offsets and join time series without losing data;
  • Highly optimized for performance, with critical code paths written in Cython or C.
  • Python with pandas is in use in a wide variety of academic and commercial domains, including Finance, Neuroscience, Economics, Statistics, Advertising, Web Analytics, and more.

In this lecture, we will go over the basic capabilities of Pandas. It is a very deep library, and you will need to dig into the documentation for more advanced usage.

Pandas was created by Wes McKinney. Many of the examples here are drawn from Wes McKinney's book Python for Data Analysis, which includes a github repo of code samples.

In [1]:
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline

Pandas Data Structures: Series

A Series represents a one-dimensional array of data. The main difference between a Series and a NumPy array is that a Series has an index. The index contains the labels that we use to access the data.

There are many ways to create a Series. We will just show a few.

In [2]:
names = ['Ryan', 'Chiara', 'Johnny']
values = [36, 37, 2.7]
ages = pd.Series(values, index=names)
ages
Out[2]:
Ryan      36.0
Chiara    37.0
Johnny     2.7
dtype: float64
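A Series can also be constructed directly from a dictionary, in which case the keys become the index labels. A minimal sketch using the same data:

```python
import pandas as pd

# dictionary keys become the index labels, values become the data
ages = pd.Series({'Ryan': 36, 'Chiara': 37, 'Johnny': 2.7})

ages.loc['Chiara']  # -> 37.0
```

Because one of the values is a float, the whole Series is upcast to float64, just as above.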

Series have built-in plotting methods.

In [3]:
ages.plot(kind='bar')
Out[3]:

Arithmetic operations and most NumPy functions can be applied to a Series. An important point is that a Series keeps its index during such operations.

In [4]:
np.log(ages) / ages**2
Out[4]:
Ryan      0.002765
Chiara    0.002638
Johnny    0.136249
dtype: float64

We can access the underlying index object if we need to:

In [5]:
ages.index
Out[5]:
Index(['Ryan', 'Chiara', 'Johnny'], dtype='object')

Indexing

We can get values back out using the index via the .loc attribute

In [6]:
ages.loc['Johnny']
Out[6]:
2.7000000000000002

Or by raw position using .iloc

In [7]:
ages.iloc[2]
Out[7]:
2.7000000000000002

We can pass a list or array to loc to get multiple rows back:

In [8]:
ages.loc[['Ryan', 'Johnny']]
Out[8]:
Ryan      36.0
Johnny     2.7
dtype: float64

And we can even use slice notation:

In [9]:
ages.loc['Ryan':'Johnny']
Out[9]:
Ryan      36.0
Chiara    37.0
Johnny     2.7
dtype: float64
In [10]:
ages.iloc[:2]
Out[10]:
Ryan      36.0
Chiara    37.0
dtype: float64
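Note a subtle but important difference between the two slices above: label-based slicing with .loc includes both endpoints, while positional slicing with .iloc follows the usual Python convention and excludes the stop position. A quick sketch:

```python
import pandas as pd

ages = pd.Series([36, 37, 2.7], index=['Ryan', 'Chiara', 'Johnny'])

# .loc slices by label and includes the endpoint: 3 rows
loc_slice = ages.loc['Ryan':'Johnny']
# .iloc slices by position and excludes the stop: 2 rows
iloc_slice = ages.iloc[0:2]
```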

If we need to, we can always get the raw data back out as well:

In [11]:
ages.values # a numpy array
Out[11]:
array([ 36. ,  37. ,   2.7])
In [12]:
ages.index # a pandas Index object
Out[12]:
Index(['Ryan', 'Chiara', 'Johnny'], dtype='object')

Pandas Data Structures: DataFrame

There is a lot more to Series, but they are limited to a single "column". A more useful Pandas data structure is the DataFrame. A DataFrame is basically a bunch of Series that share the same index. It's a lot like a table in a spreadsheet.

Below we create a DataFrame.

In [13]:
# first we create a dictionary
data = {'age': [36, 37, 1.7],
        'height': [180, 155, 90],
        'weight': [78, np.nan, 11.3]}
df = pd.DataFrame(data, index=['Ryan', 'Chiara', 'Johnny'])
df
Out[13]:
         age  height  weight
Ryan    36.0     180    78.0
Chiara  37.0     155     NaN
Johnny   1.7      90    11.3

Pandas handles missing data very elegantly, keeping track of it through all calculations.

In [14]:
df.info()

Index: 3 entries, Ryan to Johnny
Data columns (total 3 columns):
age       3 non-null float64
height    3 non-null int64
weight    2 non-null float64
dtypes: float64(2), int64(1)
memory usage: 96.0+ bytes
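Beyond tracking missing values through calculations, pandas can also drop or fill them explicitly via dropna and fillna. A small sketch using the same table:

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({'age': [36, 37, 1.7],
                   'height': [180, 155, 90],
                   'weight': [78, np.nan, 11.3]},
                  index=['Ryan', 'Chiara', 'Johnny'])

# drop any row that contains a missing value
complete = df.dropna()
# or replace missing values with a constant
filled = df.fillna(0.0)
```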

A wide range of statistical functions are available on both Series and DataFrames.

In [15]:
df.min()
Out[15]:
age        1.7
height    90.0
weight    11.3
dtype: float64
In [16]:
df.mean()
Out[16]:
age        24.900000
height    141.666667
weight     44.650000
dtype: float64
In [17]:
df.std()
Out[17]:
age       20.098010
height    46.457866
weight    47.164022
dtype: float64
In [18]:
df.describe()
Out[18]:
            age      height     weight
count   3.00000    3.000000   2.000000
mean   24.90000  141.666667  44.650000
std    20.09801   46.457866  47.164022
min     1.70000   90.000000  11.300000
25%    18.85000  122.500000  27.975000
50%    36.00000  155.000000  44.650000
75%    36.50000  167.500000  61.325000
max    37.00000  180.000000  78.000000

We can get a single column as a Series using Python's getitem syntax on the DataFrame object.

In [19]:
df['height']
Out[19]:
Ryan      180
Chiara    155
Johnny     90
Name: height, dtype: int64

...or using attribute syntax.

In [20]:
df.height
Out[20]:
Ryan      180
Chiara    155
Johnny     90
Name: height, dtype: int64

Indexing works very similarly to Series:

In [21]:
df.loc['Johnny']
Out[21]:
age        1.7
height    90.0
weight    11.3
Name: Johnny, dtype: float64
In [22]:
df.iloc[2]
Out[22]:
age        1.7
height    90.0
weight    11.3
Name: Johnny, dtype: float64

But we can also specify the column we want to access:

In [23]:
df.loc['Johnny', 'age']
Out[23]:
1.7
In [24]:
df.iloc[:2, 0]
Out[24]:
Ryan      36.0
Chiara    37.0
Name: age, dtype: float64

If we make a calculation using columns from the DataFrame, it will keep the same index:

In [25]:
df.weight / df.height
Out[25]:
Ryan      0.433333
Chiara         NaN
Johnny    0.125556
dtype: float64

Which we can easily add as another column to the DataFrame:

In [26]:
df['density'] = df.weight / df.height
df
Out[26]:
         age  height  weight   density
Ryan    36.0     180    78.0  0.433333
Chiara  37.0     155     NaN       NaN
Johnny   1.7      90    11.3  0.125556

Merging Data

Pandas supports a wide range of methods for merging different datasets. These are described extensively in the documentation. Here we just give a few examples.

In [27]:
education = pd.Series(['PhD', 'PhD', None, 'masters'],
                     index=['Ryan', 'Chiara', 'Johnny', 'Xiaomeng'],
                     name='education')
education
Out[27]:
Ryan            PhD
Chiara          PhD
Johnny         None
Xiaomeng    masters
Name: education, dtype: object
In [28]:
# returns a new DataFrame
df.join(education)
Out[28]:
         age  height  weight   density education
Ryan    36.0     180    78.0  0.433333       PhD
Chiara  37.0     155     NaN       NaN       PhD
Johnny   1.7      90    11.3  0.125556      None
In [29]:
# returns a new DataFrame
df.join(education, how='right')
Out[29]:
           age  height  weight   density education
Ryan      36.0   180.0    78.0  0.433333       PhD
Chiara    37.0   155.0     NaN       NaN       PhD
Johnny     1.7    90.0    11.3  0.125556      None
Xiaomeng   NaN     NaN     NaN       NaN   masters
In [30]:
# returns a new DataFrame
everyone = df.reindex(['Ryan', 'Chiara', 'Johnny', 'Xiaomeng', 'Lorenzo'])
everyone
Out[30]:
           age  height  weight   density
Ryan      36.0   180.0    78.0  0.433333
Chiara    37.0   155.0     NaN       NaN
Johnny     1.7    90.0    11.3  0.125556
Xiaomeng   NaN     NaN     NaN       NaN
Lorenzo    NaN     NaN     NaN       NaN
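join matches rows on the index; for matching on a shared column instead, pandas also provides the more general pd.merge function. A minimal sketch (the 'name' and 'degree' columns here are invented for illustration):

```python
import pandas as pd

left = pd.DataFrame({'name': ['Ryan', 'Chiara'], 'age': [36, 37]})
right = pd.DataFrame({'name': ['Chiara', 'Johnny'], 'degree': ['PhD', 'none']})

# an outer merge keeps keys from both sides, filling gaps with NaN
merged = pd.merge(left, right, on='name', how='outer')
```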

We can also index using a boolean Series. This is very useful.

In [31]:
adults = df[df.age > 18]
adults
Out[31]:
         age  height  weight   density
Ryan    36.0     180    78.0  0.433333
Chiara  37.0     155     NaN       NaN
In [32]:
df['is_adult'] = df.age > 18
df
Out[32]:
         age  height  weight   density  is_adult
Ryan    36.0     180    78.0  0.433333      True
Chiara  37.0     155     NaN       NaN      True
Johnny   1.7      90    11.3  0.125556     False

Modifying Values

We often want to modify values in a DataFrame based on some rule. To modify values, we need to use .loc or .iloc.

In [33]:
df.loc['Johnny', 'height'] = 95
df.loc['Ryan', 'weight'] += 1
df
Out[33]:
         age  height  weight   density  is_adult
Ryan    36.0     180    79.0  0.433333      True
Chiara  37.0     155     NaN       NaN      True
Johnny   1.7      95    11.3  0.125556     False
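One caveat worth knowing: chained indexing such as df[df.age > 18]['height'] = ... assigns into a temporary copy and triggers a SettingWithCopyWarning without changing df. A single .loc call with a boolean row mask and a column label does the right thing. A sketch with similar made-up data:

```python
import pandas as pd

df = pd.DataFrame({'age': [36, 37, 1.7],
                   'height': [180, 155, 95]},
                  index=['Ryan', 'Chiara', 'Johnny'])

# one .loc call: boolean mask for the rows, label for the column
df.loc[df.age > 18, 'height'] += 5
```

Only the adult rows are modified; Johnny's height is untouched.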

Plotting

DataFrames have all kinds of useful plotting built in.

In [34]:
df.plot(kind='scatter', x='age', y='height', grid=True)
Out[34]:
In [35]:
df.plot(kind='bar')
Out[35]:

Time Indexes

Indexes are very powerful. They are a big part of why Pandas is so useful. There are different indices for different types of data. Time Indexes are especially great!

In [36]:
two_years = pd.date_range(start='2014-01-01', end='2016-01-01', freq='D')
timeseries = pd.Series(np.sin(2 * np.pi * two_years.dayofyear / 365),
                       index=two_years)
timeseries.plot()
Out[36]:

We can use python's slicing notation inside .loc to select a date range.

In [37]:
timeseries.loc['2015-01-01':'2015-07-01'].plot()
Out[37]:

The DatetimeIndex object has lots of useful attributes:

In [38]:
timeseries.index.month
Out[38]:
Int64Index([ 1,  1,  1,  1,  1,  1,  1,  1,  1,  1,
            ...
            12, 12, 12, 12, 12, 12, 12, 12, 12,  1],
           dtype='int64', length=731)
In [39]:
timeseries.index.day
Out[39]:
Int64Index([ 1,  2,  3,  4,  5,  6,  7,  8,  9, 10,
            ...
            23, 24, 25, 26, 27, 28, 29, 30, 31,  1],
           dtype='int64', length=731)
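These attributes combine naturally with boolean indexing, e.g. to extract every January value from the series. A sketch:

```python
import pandas as pd
import numpy as np

two_years = pd.date_range(start='2014-01-01', end='2016-01-01', freq='D')
timeseries = pd.Series(np.sin(2 * np.pi * two_years.dayofyear / 365),
                       index=two_years)

# boolean mask built from the index's month attribute
january = timeseries[timeseries.index.month == 1]
```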

Reading Data Files: Weather Station Data

In this example, we will use NOAA weather station data from https://www.ncdc.noaa.gov/data-access/land-based-station-data.

The details of the files we are going to read are described in this README file.

In [40]:
# some shell code to download some data
! wget ftp://ftp.ncdc.noaa.gov/pub/data/uscrn/products/daily01/HEADERS.txt
! wget ftp://ftp.ncdc.noaa.gov/pub/data/uscrn/products/daily01/2017/CRND0103-2017-NY_Millbrook_3_W.txt
! head -2 HEADERS.txt | tail -1 > data.txt
! cat CRND0103-2017-NY_Millbrook_3_W.txt >> data.txt
--2018-10-04 09:20:55--  ftp://ftp.ncdc.noaa.gov/pub/data/uscrn/products/daily01/HEADERS.txt
           => ‘HEADERS.txt.1’
Resolving ftp.ncdc.noaa.gov... 205.167.25.101, 2610:20:8040:2::101
Connecting to ftp.ncdc.noaa.gov|205.167.25.101|:21... connected.
Logging in as anonymous ... Logged in!
==> SYST ... done.    ==> PWD ... done.
==> TYPE I ... done.  ==> CWD (1) /pub/data/uscrn/products/daily01 ... done.
==> SIZE HEADERS.txt ... 730
==> PASV ... done.    ==> RETR HEADERS.txt ... done.
Length: 730 (unauthoritative)

HEADERS.txt.1       100%[===================>]     730  --.-KB/s    in 0.01s   

2018-10-04 09:20:55 (68.4 KB/s) - ‘HEADERS.txt.1’ saved [730]

--2018-10-04 09:20:55--  ftp://ftp.ncdc.noaa.gov/pub/data/uscrn/products/daily01/2017/CRND0103-2017-NY_Millbrook_3_W.txt
           => ‘CRND0103-2017-NY_Millbrook_3_W.txt.1’
Resolving ftp.ncdc.noaa.gov... 205.167.25.101, 2610:20:8040:2::101
Connecting to ftp.ncdc.noaa.gov|205.167.25.101|:21... connected.
Logging in as anonymous ... Logged in!
==> SYST ... done.    ==> PWD ... done.
==> TYPE I ... done.  ==> CWD (1) /pub/data/uscrn/products/daily01/2017 ... done.
==> SIZE CRND0103-2017-NY_Millbrook_3_W.txt ... 79205
==> PASV ... done.    ==> RETR CRND0103-2017-NY_Millbrook_3_W.txt ... done.
Length: 79205 (77K) (unauthoritative)

CRND0103-2017-NY_Mi 100%[===================>]  77.35K  --.-KB/s    in 0.08s   

2018-10-04 09:20:55 (992 KB/s) - ‘CRND0103-2017-NY_Millbrook_3_W.txt.1’ saved [79205]

We now have a text file on our hard drive called data.txt. Examine it.

To read it into pandas, we will use the read_csv function. This function is incredibly complex and powerful. You can use it to extract data from almost any text file. However, you need to understand how to use its various options.

With no options, this is what we get.

In [41]:
df = pd.read_csv('data.txt')
df.head()
Out[41]:
WBANNO LST_DATE CRX_VN LONGITUDE LATITUDE T_DAILY_MAX T_DAILY_MIN T_DAILY_MEAN T_DAILY_AVG P_DAILY_CALC SOLARAD_DAILY SUR_TEMP_DAILY_TYPE SUR_TEMP_DAILY_MAX SUR_TEMP_DAILY_MIN SUR_TEMP_DAILY_AVG RH_DAILY_MAX RH_DAILY_MIN RH_DAILY_AVG SOIL_MOISTURE_5_DAILY SOIL_MOISTURE_10_DAILY SOIL_MOISTURE_20_DAILY SOIL_MOISTURE_50_DAILY SOIL_MOISTURE_100_DAILY SOIL_TEMP_5_DAILY SOIL_TEMP_10_DAILY SOIL_TEMP_20_DAILY SOIL_TEMP_50_DAILY SOIL_TEMP_100_DAILY
0 64756 20170101 2.422 -73.74 41.79 6.6 ...
1 64756 20170102 2.422 -73.74 41.79 4.0 ...
2 64756 20170103 2.422 -73.74 41.79 4.9 ...
3 64756 20170104 2.422 -73.74 41.79 8.7 ...
4 64756 20170105 2.422 -73.74 41.79 -0.5 ...

Pandas failed to identify the different columns. This is because it was expecting a standard CSV (comma-separated values) file. In our file, instead, the values are separated by whitespace. And not by a single whitespace: the amount of whitespace between values varies. We can tell pandas this using the sep keyword.

In [42]:
df = pd.read_csv('data.txt', sep=r'\s+')
df.head()
Out[42]:
WBANNO LST_DATE CRX_VN LONGITUDE LATITUDE T_DAILY_MAX T_DAILY_MIN T_DAILY_MEAN T_DAILY_AVG P_DAILY_CALC ... SOIL_MOISTURE_5_DAILY SOIL_MOISTURE_10_DAILY SOIL_MOISTURE_20_DAILY SOIL_MOISTURE_50_DAILY SOIL_MOISTURE_100_DAILY SOIL_TEMP_5_DAILY SOIL_TEMP_10_DAILY SOIL_TEMP_20_DAILY SOIL_TEMP_50_DAILY SOIL_TEMP_100_DAILY
0 64756 20170101 2.422 -73.74 41.79 6.6 -5.4 0.6 2.2 0.0 ... -99.0 -99.0 0.207 0.152 0.175 -0.1 0.0 0.6 1.5 3.4
1 64756 20170102 2.422 -73.74 41.79 4.0 -6.8 -1.4 -1.2 0.0 ... -99.0 -99.0 0.205 0.151 0.173 -0.2 0.0 0.6 1.5 3.3
2 64756 20170103 2.422 -73.74 41.79 4.9 0.7 2.8 2.7 13.1 ... -99.0 -99.0 0.205 0.150 0.173 -0.1 0.0 0.5 1.5 3.3
3 64756 20170104 2.422 -73.74 41.79 8.7 -1.6 3.6 3.5 1.3 ... -99.0 -99.0 0.215 0.153 0.174 -0.1 0.0 0.5 1.5 3.2
4 64756 20170105 2.422 -73.74 41.79 -0.5 -4.6 -2.5 -2.8 0.0 ... -99.0 -99.0 0.215 0.154 0.177 -0.1 0.0 0.5 1.4 3.1

5 rows × 28 columns

Great! It worked.

If we look closely, we will see there are lots of -99 and -9999 values in the file. The README file tells us that these are values used to represent missing data. Let's tell this to pandas.

In [43]:
df = pd.read_csv('data.txt', sep=r'\s+', na_values=[-9999.0, -99.0])
df.head()
Out[43]:
WBANNO LST_DATE CRX_VN LONGITUDE LATITUDE T_DAILY_MAX T_DAILY_MIN T_DAILY_MEAN T_DAILY_AVG P_DAILY_CALC ... SOIL_MOISTURE_5_DAILY SOIL_MOISTURE_10_DAILY SOIL_MOISTURE_20_DAILY SOIL_MOISTURE_50_DAILY SOIL_MOISTURE_100_DAILY SOIL_TEMP_5_DAILY SOIL_TEMP_10_DAILY SOIL_TEMP_20_DAILY SOIL_TEMP_50_DAILY SOIL_TEMP_100_DAILY
0 64756 20170101 2.422 -73.74 41.79 6.6 -5.4 0.6 2.2 0.0 ... NaN NaN 0.207 0.152 0.175 -0.1 0.0 0.6 1.5 3.4
1 64756 20170102 2.422 -73.74 41.79 4.0 -6.8 -1.4 -1.2 0.0 ... NaN NaN 0.205 0.151 0.173 -0.2 0.0 0.6 1.5 3.3
2 64756 20170103 2.422 -73.74 41.79 4.9 0.7 2.8 2.7 13.1 ... NaN NaN 0.205 0.150 0.173 -0.1 0.0 0.5 1.5 3.3
3 64756 20170104 2.422 -73.74 41.79 8.7 -1.6 3.6 3.5 1.3 ... NaN NaN 0.215 0.153 0.174 -0.1 0.0 0.5 1.5 3.2
4 64756 20170105 2.422 -73.74 41.79 -0.5 -4.6 -2.5 -2.8 0.0 ... NaN NaN 0.215 0.154 0.177 -0.1 0.0 0.5 1.4 3.1

5 rows × 28 columns

Great. The missing data is now represented by NaN.

What data types did pandas infer?

In [44]:
df.info()

RangeIndex: 365 entries, 0 to 364
Data columns (total 28 columns):
WBANNO                     365 non-null int64
LST_DATE                   365 non-null int64
CRX_VN                     365 non-null float64
LONGITUDE                  365 non-null float64
LATITUDE                   365 non-null float64
T_DAILY_MAX                364 non-null float64
T_DAILY_MIN                364 non-null float64
T_DAILY_MEAN               364 non-null float64
T_DAILY_AVG                364 non-null float64
P_DAILY_CALC               364 non-null float64
SOLARAD_DAILY              364 non-null float64
SUR_TEMP_DAILY_TYPE        365 non-null object
SUR_TEMP_DAILY_MAX         364 non-null float64
SUR_TEMP_DAILY_MIN         364 non-null float64
SUR_TEMP_DAILY_AVG         364 non-null float64
RH_DAILY_MAX               364 non-null float64
RH_DAILY_MIN               364 non-null float64
RH_DAILY_AVG               364 non-null float64
SOIL_MOISTURE_5_DAILY      317 non-null float64
SOIL_MOISTURE_10_DAILY     317 non-null float64
SOIL_MOISTURE_20_DAILY     336 non-null float64
SOIL_MOISTURE_50_DAILY     364 non-null float64
SOIL_MOISTURE_100_DAILY    359 non-null float64
SOIL_TEMP_5_DAILY          364 non-null float64
SOIL_TEMP_10_DAILY         364 non-null float64
SOIL_TEMP_20_DAILY         364 non-null float64
SOIL_TEMP_50_DAILY         364 non-null float64
SOIL_TEMP_100_DAILY        364 non-null float64
dtypes: float64(25), int64(2), object(1)
memory usage: 79.9+ KB

One problem here is that pandas did not recognize the LST_DATE column as a date. Let's help it.

In [45]:
df = pd.read_csv('data.txt', sep=r'\s+',
                 na_values=[-9999.0, -99.0],
                 parse_dates=[1])
df.info()

RangeIndex: 365 entries, 0 to 364
Data columns (total 28 columns):
WBANNO                     365 non-null int64
LST_DATE                   365 non-null datetime64[ns]
CRX_VN                     365 non-null float64
LONGITUDE                  365 non-null float64
LATITUDE                   365 non-null float64
T_DAILY_MAX                364 non-null float64
T_DAILY_MIN                364 non-null float64
T_DAILY_MEAN               364 non-null float64
T_DAILY_AVG                364 non-null float64
P_DAILY_CALC               364 non-null float64
SOLARAD_DAILY              364 non-null float64
SUR_TEMP_DAILY_TYPE        365 non-null object
SUR_TEMP_DAILY_MAX         364 non-null float64
SUR_TEMP_DAILY_MIN         364 non-null float64
SUR_TEMP_DAILY_AVG         364 non-null float64
RH_DAILY_MAX               364 non-null float64
RH_DAILY_MIN               364 non-null float64
RH_DAILY_AVG               364 non-null float64
SOIL_MOISTURE_5_DAILY      317 non-null float64
SOIL_MOISTURE_10_DAILY     317 non-null float64
SOIL_MOISTURE_20_DAILY     336 non-null float64
SOIL_MOISTURE_50_DAILY     364 non-null float64
SOIL_MOISTURE_100_DAILY    359 non-null float64
SOIL_TEMP_5_DAILY          364 non-null float64
SOIL_TEMP_10_DAILY         364 non-null float64
SOIL_TEMP_20_DAILY         364 non-null float64
SOIL_TEMP_50_DAILY         364 non-null float64
SOIL_TEMP_100_DAILY        364 non-null float64
dtypes: datetime64[ns](1), float64(25), int64(1), object(1)
memory usage: 79.9+ KB

It worked! Finally, let's tell pandas to use the date column as the index.

In [46]:
df = df.set_index('LST_DATE')
df.head()
Out[46]:
WBANNO CRX_VN LONGITUDE LATITUDE T_DAILY_MAX T_DAILY_MIN T_DAILY_MEAN T_DAILY_AVG P_DAILY_CALC SOLARAD_DAILY ... SOIL_MOISTURE_5_DAILY SOIL_MOISTURE_10_DAILY SOIL_MOISTURE_20_DAILY SOIL_MOISTURE_50_DAILY SOIL_MOISTURE_100_DAILY SOIL_TEMP_5_DAILY SOIL_TEMP_10_DAILY SOIL_TEMP_20_DAILY SOIL_TEMP_50_DAILY SOIL_TEMP_100_DAILY
LST_DATE
2017-01-01 64756 2.422 -73.74 41.79 6.6 -5.4 0.6 2.2 0.0 8.68 ... NaN NaN 0.207 0.152 0.175 -0.1 0.0 0.6 1.5 3.4
2017-01-02 64756 2.422 -73.74 41.79 4.0 -6.8 -1.4 -1.2 0.0 2.08 ... NaN NaN 0.205 0.151 0.173 -0.2 0.0 0.6 1.5 3.3
2017-01-03 64756 2.422 -73.74 41.79 4.9 0.7 2.8 2.7 13.1 0.68 ... NaN NaN 0.205 0.150 0.173 -0.1 0.0 0.5 1.5 3.3
2017-01-04 64756 2.422 -73.74 41.79 8.7 -1.6 3.6 3.5 1.3 2.85 ... NaN NaN 0.215 0.153 0.174 -0.1 0.0 0.5 1.5 3.2
2017-01-05 64756 2.422 -73.74 41.79 -0.5 -4.6 -2.5 -2.8 0.0 4.90 ... NaN NaN 0.215 0.154 0.177 -0.1 0.0 0.5 1.4 3.1

5 rows × 27 columns

We can now access values by time:

In [47]:
df.loc['2017-08-07']
Out[47]:
WBANNO                     64756
CRX_VN                     2.422
LONGITUDE                 -73.74
LATITUDE                   41.79
T_DAILY_MAX                 19.3
T_DAILY_MIN                 12.3
T_DAILY_MEAN                15.8
T_DAILY_AVG                 16.3
P_DAILY_CALC                 4.9
SOLARAD_DAILY               3.93
SUR_TEMP_DAILY_TYPE            C
SUR_TEMP_DAILY_MAX          22.3
SUR_TEMP_DAILY_MIN          11.9
SUR_TEMP_DAILY_AVG          17.7
RH_DAILY_MAX                94.7
RH_DAILY_MIN                76.4
RH_DAILY_AVG                89.5
SOIL_MOISTURE_5_DAILY      0.148
SOIL_MOISTURE_10_DAILY     0.113
SOIL_MOISTURE_20_DAILY     0.094
SOIL_MOISTURE_50_DAILY     0.114
SOIL_MOISTURE_100_DAILY    0.151
SOIL_TEMP_5_DAILY           21.4
SOIL_TEMP_10_DAILY          21.7
SOIL_TEMP_20_DAILY          22.1
SOIL_TEMP_50_DAILY          22.2
SOIL_TEMP_100_DAILY         21.5
Name: 2017-08-07 00:00:00, dtype: object

Or use slicing to get a range:

In [48]:
df.loc['2017-07-01':'2017-07-31']
Out[48]:
WBANNO CRX_VN LONGITUDE LATITUDE T_DAILY_MAX T_DAILY_MIN T_DAILY_MEAN T_DAILY_AVG P_DAILY_CALC SOLARAD_DAILY ... SOIL_MOISTURE_5_DAILY SOIL_MOISTURE_10_DAILY SOIL_MOISTURE_20_DAILY SOIL_MOISTURE_50_DAILY SOIL_MOISTURE_100_DAILY SOIL_TEMP_5_DAILY SOIL_TEMP_10_DAILY SOIL_TEMP_20_DAILY SOIL_TEMP_50_DAILY SOIL_TEMP_100_DAILY
LST_DATE
2017-07-01 64756 2.422 -73.74 41.79 28.0 19.7 23.9 23.8 0.2 19.28 ... 0.157 0.136 0.144 0.129 0.163 25.7 25.4 23.7 21.9 19.9
2017-07-02 64756 2.422 -73.74 41.79 29.8 18.4 24.1 23.7 4.0 27.67 ... 0.146 0.135 0.143 0.129 0.162 26.8 26.4 24.5 22.3 20.1
2017-07-03 64756 2.422 -73.74 41.79 28.3 15.0 21.7 21.4 0.0 27.08 ... 0.141 0.132 0.139 0.128 0.162 26.4 26.3 24.8 22.8 20.3
2017-07-04 64756 2.422 -73.74 41.79 26.8 12.6 19.7 20.0 0.0 29.45 ... 0.131 0.126 0.136 0.126 0.161 25.9 25.8 24.6 22.9 20.6
2017-07-05 64756 2.422 -73.74 41.79 28.0 11.9 20.0 20.7 0.0 26.90 ... 0.116 0.114 0.131 0.125 0.161 25.3 25.3 24.2 22.8 20.7
2017-07-06 64756 2.422 -73.74 41.79 25.7 14.3 20.0 20.3 0.0 19.03 ... 0.105 0.104 0.126 0.124 0.160 24.7 24.7 23.9 22.7 20.9
2017-07-07 64756 2.422 -73.74 41.79 25.8 16.8 21.3 20.0 11.5 13.88 ... 0.114 0.100 0.123 0.123 0.160 24.2 24.2 23.4 22.4 20.8
2017-07-08 64756 2.422 -73.74 41.79 29.0 15.3 22.1 21.5 0.0 21.92 ... 0.130 0.106 0.122 0.123 0.159 25.5 25.3 23.9 22.4 20.8
2017-07-09 64756 2.422 -73.74 41.79 26.3 10.9 18.6 19.4 0.0 29.72 ... 0.119 0.103 0.119 0.121 0.158 24.8 24.8 23.8 22.5 20.8
2017-07-10 64756 2.422 -73.74 41.79 27.6 11.8 19.7 21.3 0.0 23.67 ... 0.106 0.096 0.113 0.120 0.158 24.7 24.7 23.6 22.5 20.9
2017-07-11 64756 2.422 -73.74 41.79 27.4 19.2 23.3 22.6 8.5 17.79 ... 0.106 0.093 0.110 0.120 0.158 25.6 25.4 24.1 22.6 20.9
2017-07-12 64756 2.422 -73.74 41.79 29.4 18.5 23.9 23.1 1.9 16.27 ... 0.108 0.094 0.108 0.118 0.157 25.8 25.6 24.2 22.8 21.0
2017-07-13 64756 2.422 -73.74 41.79 29.5 18.3 23.9 23.4 23.3 13.61 ... 0.134 0.110 0.108 0.118 0.156 25.7 25.7 24.4 23.0 21.0
2017-07-14 64756 2.422 -73.74 41.79 18.5 15.9 17.2 17.5 4.1 5.36 ... 0.194 0.152 0.114 0.120 0.155 23.0 23.3 23.4 22.9 21.2
2017-07-15 64756 2.422 -73.74 41.79 26.6 16.5 21.5 21.0 0.8 21.13 ... 0.190 0.163 0.119 0.122 0.155 24.6 24.4 23.2 22.2 21.2
2017-07-16 64756 2.422 -73.74 41.79 27.9 13.3 20.6 21.0 0.0 27.03 ... 0.171 0.154 0.123 0.123 0.155 25.4 25.3 23.9 22.6 21.1
2017-07-17 64756 2.422 -73.74 41.79 29.2 16.1 22.6 22.9 0.0 20.47 ... 0.155 0.143 0.124 0.122 0.156 25.7 25.6 24.4 22.9 21.2
2017-07-18 64756 2.422 -73.74 41.79 30.3 19.3 24.8 24.7 0.0 24.99 ... 0.142 0.132 0.122 0.122 0.156 27.0 26.7 24.9 23.2 21.3
2017-07-19 64756 2.422 -73.74 41.79 31.2 19.1 25.1 25.0 0.0 27.69 ... 0.126 0.118 0.118 0.122 0.156 27.6 27.4 25.6 23.7 21.5
2017-07-20 64756 2.422 -73.74 41.79 31.8 16.6 24.2 23.4 0.7 21.53 ... 0.111 0.103 0.114 0.121 0.156 27.0 27.0 25.6 24.0 21.7
2017-07-21 64756 2.422 -73.74 41.79 30.6 16.6 23.6 23.6 0.0 25.55 ... 0.100 0.094 0.109 0.120 0.155 27.1 27.0 25.5 24.0 21.9
2017-07-22 64756 2.422 -73.74 41.79 27.7 15.6 21.7 21.2 0.5 16.04 ... 0.092 0.086 0.104 0.120 0.156 25.9 26.1 25.3 24.1 22.0
2017-07-23 64756 2.422 -73.74 41.79 26.4 18.5 22.5 22.2 0.0 19.03 ... 0.087 0.082 0.100 0.118 0.155 26.0 26.0 24.9 23.8 22.1
2017-07-24 64756 2.422 -73.74 41.79 19.4 14.8 17.1 16.7 29.2 9.10 ... 0.145 0.118 0.102 0.117 0.154 23.1 23.6 23.9 23.5 22.1
2017-07-25 64756 2.422 -73.74 41.79 18.6 13.7 16.2 16.2 0.0 7.35 ... 0.167 0.133 0.107 0.116 0.153 21.9 22.2 22.4 22.5 21.9
2017-07-26 64756 2.422 -73.74 41.79 24.7 11.2 18.0 18.3 0.0 22.22 ... 0.155 0.128 0.108 0.118 0.152 22.9 23.0 22.3 22.0 21.7
2017-07-27 64756 2.422 -73.74 41.79 24.2 15.2 19.7 19.5 0.0 8.28 ... 0.144 0.122 0.109 0.118 0.154 22.5 22.7 22.4 22.0 21.4
2017-07-28 64756 2.422 -73.74 41.79 26.5 16.9 21.7 20.9 0.0 21.06 ... 0.137 0.117 0.110 0.119 0.155 24.1 24.1 22.8 22.0 21.3
2017-07-29 64756 2.422 -73.74 41.79 24.2 10.4 17.3 18.1 0.0 21.28 ... 0.126 0.108 0.108 0.118 0.154 23.3 23.6 23.0 22.2 21.3
2017-07-30 64756 2.422 -73.74 41.79 25.5 8.2 16.8 17.3 0.0 27.68 ... 0.113 0.099 0.104 0.117 0.154 22.8 23.0 22.4 22.0 21.3
2017-07-31 64756 2.422 -73.74 41.79 29.4 10.1 19.7 20.1 0.0 25.49 ... 0.101 0.090 0.099 0.116 0.153 23.8 23.8 22.7 21.9 21.2

31 rows × 27 columns

Quick Statistics

In [49]:
df.describe()
Out[49]:
WBANNO CRX_VN LONGITUDE LATITUDE T_DAILY_MAX T_DAILY_MIN T_DAILY_MEAN T_DAILY_AVG P_DAILY_CALC SOLARAD_DAILY ... SOIL_MOISTURE_5_DAILY SOIL_MOISTURE_10_DAILY SOIL_MOISTURE_20_DAILY SOIL_MOISTURE_50_DAILY SOIL_MOISTURE_100_DAILY SOIL_TEMP_5_DAILY SOIL_TEMP_10_DAILY SOIL_TEMP_20_DAILY SOIL_TEMP_50_DAILY SOIL_TEMP_100_DAILY
count 365.0 365.000000 3.650000e+02 3.650000e+02 364.000000 364.000000 364.000000 364.000000 364.000000 364.000000 ... 317.000000 317.000000 336.000000 364.000000 359.000000 364.000000 364.000000 364.000000 364.000000 364.000000
mean 64756.0 2.470767 -7.374000e+01 4.179000e+01 15.720055 4.037912 9.876374 9.990110 2.796703 13.067665 ... 0.183826 0.181009 0.156542 0.138294 0.162850 12.343956 12.308242 12.060440 11.960714 11.971703
std 0.0 0.085997 5.265234e-13 3.842198e-13 10.502087 9.460676 9.727451 9.619168 7.238308 7.953528 ... 0.047479 0.052698 0.042763 0.019228 0.013827 9.367795 9.350478 8.761415 8.082871 7.170032
min 64756.0 2.422000 -7.374000e+01 4.179000e+01 -12.300000 -21.800000 -17.000000 -16.700000 0.000000 0.100000 ... 0.076000 0.073000 0.069000 0.100000 0.128000 -0.700000 -0.400000 0.200000 0.900000 1.900000
25% 64756.0 2.422000 -7.374000e+01 4.179000e+01 6.900000 -2.775000 2.100000 2.275000 0.000000 6.225000 ... 0.149000 0.136000 0.118000 0.118000 0.155000 2.275000 2.075000 2.575000 3.300000 4.100000
50% 64756.0 2.422000 -7.374000e+01 4.179000e+01 17.450000 4.350000 10.850000 11.050000 0.000000 12.860000 ... 0.192000 0.198000 0.169000 0.147000 0.166000 13.300000 13.350000 13.100000 12.850000 11.650000
75% 64756.0 2.422000 -7.374000e+01 4.179000e+01 24.850000 11.900000 18.150000 18.450000 1.400000 19.740000 ... 0.221000 0.219000 0.188000 0.152250 0.173000 21.025000 21.125000 20.400000 19.800000 19.325000
max 64756.0 2.622000 -7.374000e+01 4.179000e+01 33.400000 20.700000 25.700000 26.700000 65.700000 29.910000 ... 0.294000 0.321000 0.231000 0.170000 0.192000 27.600000 27.400000 25.600000 24.100000 22.100000

8 rows × 26 columns

Plotting Values

We can now quickly make plots of the data.

In [50]:
fig, ax = plt.subplots(ncols=2, nrows=2, figsize=(14,14))

df.iloc[:, 4:8].boxplot(ax=ax[0,0])
df.iloc[:, 10:14].boxplot(ax=ax[0,1])
df.iloc[:, 14:17].boxplot(ax=ax[1,0])
df.iloc[:, 18:22].boxplot(ax=ax[1,1])


ax[1, 1].set_xticklabels(ax[1, 1].get_xticklabels(), rotation=90);

Pandas is very "time aware":

In [51]:
df.T_DAILY_MEAN.plot()
Out[51]:

Note: we could also manually create an axis and plot into it.

In [52]:
fig, ax = plt.subplots()
df.T_DAILY_MEAN.plot(ax=ax)
ax.set_title('Pandas Made This!')
Out[52]:
Text(0.5,1,'Pandas Made This!')
In [53]:
df[['T_DAILY_MIN', 'T_DAILY_MEAN', 'T_DAILY_MAX']].plot()
Out[53]:

Resampling

Since pandas understands time, we can use it to do resampling.

In [54]:
# monthly resampler object
rs_obj = df.resample('MS')
rs_obj
Out[54]:
DatetimeIndexResampler [freq=<MonthBegin>, axis=0, closed=left, label=left, convention=start, base=0]
In [55]:
rs_obj.mean()
Out[55]:
WBANNO CRX_VN LONGITUDE LATITUDE T_DAILY_MAX T_DAILY_MIN T_DAILY_MEAN T_DAILY_AVG P_DAILY_CALC SOLARAD_DAILY ... SOIL_MOISTURE_5_DAILY SOIL_MOISTURE_10_DAILY SOIL_MOISTURE_20_DAILY SOIL_MOISTURE_50_DAILY SOIL_MOISTURE_100_DAILY SOIL_TEMP_5_DAILY SOIL_TEMP_10_DAILY SOIL_TEMP_20_DAILY SOIL_TEMP_50_DAILY SOIL_TEMP_100_DAILY
LST_DATE
2017-01-01 64756 2.422000 -73.74 41.79 3.945161 -3.993548 -0.025806 0.038710 3.090323 4.690000 ... 0.237100 0.248400 0.204550 0.152871 0.175194 0.209677 0.267742 0.696774 1.438710 2.877419
2017-02-01 64756 2.422000 -73.74 41.79 7.246429 -4.360714 1.442857 1.839286 2.414286 10.364286 ... 0.226333 0.243000 0.207545 0.152857 0.175786 1.125000 1.100000 1.192857 1.492857 2.367857
2017-03-01 64756 2.422000 -73.74 41.79 5.164516 -5.335484 -0.090323 0.167742 3.970968 13.113548 ... 0.218100 0.229300 0.196258 0.153581 0.174548 2.122581 2.161290 2.345161 2.700000 3.387097
2017-04-01 64756 2.422000 -73.74 41.79 17.813333 5.170000 11.493333 11.540000 2.300000 14.645000 ... 0.199767 0.210300 0.190667 0.151000 0.172500 11.066667 10.666667 9.636667 8.426667 6.903333
2017-05-01 64756 2.422000 -73.74 41.79 19.151613 7.338710 13.229032 13.638710 4.141935 16.519677 ... 0.206613 0.211000 0.185645 0.147774 0.170032 16.454839 16.290323 15.361290 14.270968 12.696774
2017-06-01 64756 2.422000 -73.74 41.79 25.423333 12.176667 18.796667 18.986667 3.743333 21.655000 ... 0.185167 0.184300 0.173167 0.142567 0.167000 22.350000 22.166667 20.880000 19.370000 17.333333
2017-07-01 64756 2.422000 -73.74 41.79 26.912903 15.183871 21.048387 20.993548 2.732258 20.566129 ... 0.131258 0.115839 0.116645 0.121065 0.156742 24.993548 24.980645 23.925806 22.745161 21.164516
2017-08-01 64756 2.422000 -73.74 41.79 25.741935 12.954839 19.351613 19.477419 2.758065 18.360000 ... 0.143968 0.122290 0.105452 0.115355 0.151034 23.374194 23.519355 22.848387 22.193548 21.377419
2017-09-01 64756 2.422000 -73.74 41.79 24.186667 11.300000 17.746667 17.463333 1.893333 15.154667 ... 0.145267 0.139667 0.117367 0.112167 0.142000 20.256667 20.386667 19.966667 19.766667 19.530000
2017-10-01 64756 2.602645 -73.74 41.79 21.043333 7.150000 14.100000 13.976667 3.500000 10.393667 ... 0.140533 0.131400 0.084967 0.105533 0.133267 16.133333 16.186667 16.320000 16.836667 17.466667
2017-11-01 64756 2.622000 -73.74 41.79 10.346667 -2.093333 4.120000 4.336667 0.820000 6.721333 ... 0.215367 0.211167 0.167000 0.149700 0.164300 7.226667 7.190000 8.056667 9.543333 11.746667
2017-12-01 64756 2.622000 -73.74 41.79 1.496774 -7.412903 -2.967742 -2.838710 2.103226 4.471290 ... 0.231464 0.226107 0.177581 0.155484 0.169129 2.222581 2.183871 2.912903 4.187097 6.303226

12 rows × 26 columns
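mean() is only one option: a resampler accepts any aggregation, and .agg can compute several at once. A sketch on a synthetic daily series rather than the station data:

```python
import pandas as pd
import numpy as np

days = pd.date_range('2017-01-01', '2017-12-31', freq='D')
temps = pd.Series(np.sin(2 * np.pi * days.dayofyear / 365), index=days)

# monthly min, mean, and max in a single pass
monthly = temps.resample('MS').agg(['min', 'mean', 'max'])
```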

We can chain all of that together:

In [56]:
df_mm = df.resample('MS').mean()
df_mm[['T_DAILY_MIN', 'T_DAILY_MEAN', 'T_DAILY_MAX']].plot()
Out[56]:

Next time we will dig deeper into resampling, rolling means, and grouping operations (groupby).
