Getting the NASA bio-Optical Marine Algorithm Dataset (NOMAD) into a Pandas DataFrame
In this post, I'm going to briefly describe how I download the NASA bio-Optical Marine Algorithm Dataset (NOMAD), created for algorithm development, extract the data I need, and store it all neatly in a Pandas DataFrame. Here I use the latest dataset, NOMAD v.2, created in 2008. First things first; imports!
import requests
import pandas as pd
import re
import numpy as np
import pickle
from IPython.core.display import display, HTML
display(HTML("<style>.container {width:90% !important;}</style>"))
The data can be accessed through a URL that I'll store in a string below.
NOMADV2url='https://seabass.gsfc.nasa.gov/wiki/NOMAD/nomad_seabass_v2.a_2008200.txt'
Next, I'll write a couple of functions: the first to get the data from the URL, the second to parse the text returned by the first and put it in a Pandas DataFrame. This second function makes more sense after inspecting the content of the page at the URL above.
def GetNomad(url=NOMADV2url):
    """Download the dataset and return it as a list of text lines"""
    resp = requests.get(url)  # use the url argument rather than the global
    resp.raise_for_status()  # fail loudly on a bad response
    content = resp.text.splitlines()
    resp.close()
    return content
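Before diving into the parser, it helps to peek at the top of the file: a SeaBASS-style header whose lines start with '/', including the /fields= and /end_header markers the parser keys on below. A quick look:

# Print the first few header lines to see the markers the parser relies on
for line in GetNomad()[:10]:
    print(line)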
def ParseTextFile(textFile, topickle=False, convert2DateTime=False, **kwargs):
    """
    * topickle: pickle resulting DataFrame if True
    * convert2DateTime: join date/time columns and convert entries to datetime objects
    * kwargs:
        pkl_fname: pickle file name to save DataFrame by, if topickle=True
    """
    # Pre-compute some regex
    columns = re.compile('^/fields=(.+)')  # to get field/column names
    units = re.compile('^/units=(.+)')  # to get units -- optional
    endHeader = re.compile('^/end_header')  # to know when to start storing data
    # Set some milestones
    noFields = True
    getData = False
    # loop through the text data
    for line in textFile:
        if noFields:
            fieldStr = columns.findall(line)
            if len(fieldStr) > 0:
                noFields = False
                fieldList = fieldStr[0].split(',')
                dataDict = dict.fromkeys(fieldList)
                continue  # nothing left to do with this line, keep looping
        if not getData:
            if endHeader.match(line):
                # end of header reached, start acquiring data
                getData = True
        else:
            dataList = line.split(',')
            for field, datum in zip(fieldList, dataList):
                if dataDict[field] is None:
                    dataDict[field] = []
                dataDict[field].append(datum)
    df = pd.DataFrame(dataDict, columns=fieldList)
    if convert2DateTime:
        datetimelabels = ['year', 'month', 'day', 'hour', 'minute', 'second']
        # to_datetime assembles the dates from these columns directly;
        # a format string does not apply here
        df['Datetime'] = pd.to_datetime(df[datetimelabels])
        df.drop(datetimelabels, axis=1, inplace=True)
    if topickle:
        fname = kwargs.pop('pkl_fname', 'dfNomad2.pkl')
        df.to_pickle(fname)
    return df
df = ParseTextFile(GetNomad(), topickle=True, convert2DateTime=True,
                   pkl_fname='./bayesianChl_DATA/dfNomadRaw.pkl')
df.head()
This DataFrame is quite large and unwieldy, with 212 columns. But Pandas makes it easy to extract the necessary data for a particular project. For my current project, which I'll go over in a subsequent post, I need field data relevant to the SeaWiFS sensor, in particular optical data at wavelengths 412, 443, 490, 510, 555, and 670 nm. First, let's look at the available bands as they appear in the spectral surface irradiance column labels, which start with 'es'.
bandregex = re.compile('es([0-9]+)')
bands = bandregex.findall(''.join(df.columns))
print(bands)
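As an aside, the nearest available band for each target SeaWiFS wavelength can be picked programmatically rather than by eye; a small sketch that should reproduce the hard-coded list I use below:

# Pick the available es band closest to each desired SeaWiFS wavelength
seawifsWavelengths = [412, 443, 490, 510, 555, 670]
nearestBands = [min(bands, key=lambda b: abs(int(b) - wl)) for wl in seawifsWavelengths]
print(nearestBands)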
Now I can extract the data at the bands closest to what I need. In the process, I'm going to use water-leaving radiance, $L_w$, and spectral surface irradiance, $E_s$, to compute remote sensing reflectance, $rrs = L_w/E_s$. I will store this new data in a new DataFrame, dfSwf.
swfBands = ['411', '443', '489', '510', '555', '670']
dfSwf = pd.DataFrame(columns=['rrs%s' % b for b in swfBands])
for b in swfBands:
    dfSwf.loc[:, 'rrs%s' % b] = df.loc[:, 'lw%s' % b].astype('f8') / df.loc[:, 'es%s' % b].astype('f8')
dfSwf.head()
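Incidentally, the same DataFrame can be built in a single expression with a dict comprehension; a sketch equivalent to the loop above:

# Equivalent construction of dfSwf in one shot
dfSwf = pd.DataFrame({'rrs%s' % b: df['lw%s' % b].astype('f8') / df['es%s' % b].astype('f8')
                      for b in swfBands})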
For the projects I'm currently working on, I'll need to select a few more features from the initial dataset.
dfSwf['id'] = df.id.astype('i4')  # in case I need to relate this data to the original
dfSwf['datetime'] = df.Datetime
dfSwf['hplc_chl'] = df.chl_a.astype('f8')  # chlorophyll from HPLC
dfSwf['fluo_chl'] = df.chl.astype('f8')  # chlorophyll from fluorometry
dfSwf['lat'] = df.lat.astype('f8')
dfSwf['lon'] = df.lon.astype('f8')
dfSwf['depth'] = df.etopo2.astype('f8')  # water depth from ETOPO2 bathymetry
dfSwf['sst'] = df.oisst.astype('f8')  # optimally interpolated sea surface temperature
addprods = ['a', 'ad', 'ag', 'ap', 'bb']  # absorption and backscattering products
for band in swfBands:
    for prod in addprods:
        dfSwf['%s%s' % (prod, band)] = df['%s%s' % (prod, band)].astype('f8')
dfSwf.replace(-999, np.nan, inplace=True)  # -999 is the dataset's missing-data flag
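A quick way to see how many entries the sentinel replacement turned into NaNs:

# Count missing entries per column after replacing the -999 sentinel
print(dfSwf.isnull().sum())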
Tallying the features I've gathered...
print(dfSwf.columns)
That seems like a good dataset to start with. I'll pickle this DataFrame just in case.
dfSwf.to_pickle('./bayesianChl_DATA/dfNomadSWF.pkl')
The first project I'll tackle is a recasting of the OCx empirical band-ratio algorithms within a Bayesian framework. For that, I can further cull the dataset following the "Data Source" section of a paper I'm using for comparison, Hu et al., 2012. That study draws from this same dataset, applying the following criteria:
- HPLC chlorophyll only
- chl>0 where rrs>0
- depth>30 m
- lat $\in\left[-60,60\right]$
Applying these criteria should result in a dataset reduced to 136 observations.
rrsCols = [col for col in dfSwf.columns if 'rrs' in col]
iwantcols=rrsCols + ['id', 'depth','hplc_chl','sst','lat','lon']
dfSwfHu = dfSwf[iwantcols].copy()
del dfSwf, df
dfSwfHu.info()
Apparently the only null entries are in the hplc_chl column. Dropping the nulls in that column takes care of the first of the criteria listed above.
dfSwfHu.dropna(inplace=True)
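Since hplc_chl is the only column with nulls, the blanket dropna() above is safe here. If other columns were also gappy, a targeted drop would be safer; a sketch:

# Drop rows only where the HPLC chlorophyll is missing
dfSwfHu.dropna(subset=['hplc_chl'], inplace=True)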
dfSwfHu.describe()
According to the summary table above, I don't need to worry about zero chl values as per the criteria. However, several reflectances appear to have spurious 1.0000 values. Since these are never mentioned in the paper, I'll first cull the dataset according to the depth and lat criteria and see whether that cleans up those values as well. This should leave me with 136 observations.
dfSwfHu = dfSwfHu.loc[((dfSwfHu.depth > 30) &
                       (dfSwfHu.lat >= -60) & (dfSwfHu.lat <= 60)), :]
dfSwfHu.describe()
Nope. We're down to 964 observations. So much for reproducibility via publication. Getting rid of the spurious rrs values...
dfSwfHu = dfSwfHu.loc[((dfSwfHu.rrs411 < 1.0) & (dfSwfHu.rrs510 < 1.0) &
                       (dfSwfHu.rrs555 < 1.0) & (dfSwfHu.rrs670 < 1.0)), :]
dfSwfHu.describe()
136 observations. Success! Once again, I'll pickle this DataFrame.
dfSwfHu.to_pickle('/accounts/ekarakoy/DATA/NOMAD/dfSwfHuOcxCI_2012.pkl')
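To pick this DataFrame back up in a later session (same path assumed):

# Reload the culled DataFrame later
dfSwfHu = pd.read_pickle('/accounts/ekarakoy/DATA/NOMAD/dfSwfHuOcxCI_2012.pkl')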
That's it. Until next time, Happy Hacking!