DR1#
Sanity Check, Merging
Note
Written out completely because this step has already been performed; it serves as a template for the following steps.
This reduction step cleans and merges together all the individual scans obtained experimentally, and implements the temperature ramp.
Workflow#
Note
An SVG workflow figure may be added here.
DR1#
Input
- *_smooth.csv
Output Data
- DR1_Date_{}.csv
- {}_data Annex.csv
Output Plots
- All_scan
This reduction step is performed per sample, and the same notebook is used to process all the samples of the dataset.
DR1.1#
Input
- *_smooth.csv
Output Data
- DR1_Date_{}.csv
- {}_data Annex.csv
Output Plots
- All_scan
Merges all the samples together into one big dataframe.
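The per-sample dataframes share a wavenumber axis, so they can be combined by merging on that column. A minimal sketch, assuming each sample dataframe has a "Wavenumber" column plus one absorbance column per scan (the example dataframes and values here are made up for illustration; the notebook's actual merge call is not shown in this section):

```python
from functools import reduce

import pandas as pd

# Hypothetical per-sample dataframes: a shared "Wavenumber" column
# plus one absorbance column named after the sample/scan.
df_a = pd.DataFrame({"Wavenumber": [4000.0, 3999.5],
                     "ASW_2020_09_15_0001": [0.01, 0.02]})
df_b = pd.DataFrame({"Wavenumber": [4000.0, 3999.5],
                     "C2H6_2020_09_16_0001": [0.03, 0.04]})

# Outer-merge on Wavenumber so samples measured on slightly different
# grids are still kept (missing points become NaN).
merged = reduce(lambda left, right: pd.merge(left, right,
                                             on="Wavenumber", how="outer"),
                [df_a, df_b])
print(merged.shape)  # (2, 3)
```

An outer merge is used here so that no wavenumber point is silently dropped when the grids differ between samples.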
Notebook analysis#
DR1#
Code
First, I create a file_path variable that holds the folder location of the raw (smoothed) scans.
for file in glob.glob(file_path):
Then, I create a for loop that performs the following actions for each scan:
Read the data file (CSV) and create a dataframe:
df = pd.read_csv(file, names=["Wavenumber", str(spl)+"_"+str(date)+"_"+str(file_number)])
Append df to a previously created empty list, All_data_frame:
All_data_frame.append(df)
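The steps above can be put together as a self-contained sketch. The temporary folder, the example CSV contents, and the file_number counter (here taken from enumerate) are assumptions for illustration; in the notebook, file_path points at the real dated data folder:

```python
import glob
import os
import tempfile

import pandas as pd

# Create two small "*_smooth.csv" files in a temp folder so the sketch
# is runnable on its own; in the notebook these are the exported scans.
folder = tempfile.mkdtemp()
for n in ("0001", "0002"):
    with open(os.path.join(folder, f"ASW_2020_09_15_{n}_smooth.csv"), "w") as f:
        f.write("4000.0,0.01\n3999.5,0.02\n")

spl, date = "ASW", "2020_09_15"            # sample type and sample date
file_path = os.path.join(folder, "*_smooth.csv")

All_data_frame = []                        # one dataframe per scan
for file_number, file in enumerate(sorted(glob.glob(file_path)), start=1):
    # Each CSV has two unnamed columns: wavenumber and absorbance,
    # so column names are supplied explicitly, as in the notebook.
    df = pd.read_csv(file, names=["Wavenumber", f"{spl}_{date}_{file_number}"])
    All_data_frame.append(df)

print(len(All_data_frame))  # 2
```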
Naming convention#
Samples#
From Omnic I obtain individual .spa scans named:
\(\color{red}{\text{ASW_}}\)\(\color{blue}{\text{2020_09_15_}}\)\(\color{green}{\text{0001}}\).spa
\(\color{red}{\text{Sample type}}\) : can take the values ASW, C2H6, or C2H6_ASW.
\(\color{blue}{\text{Sample date}}\) : format yyyy_mm_dd (the id of each sample).
\(\color{green}{\text{Scan number}}\) : is allocated incrementally and identifies each scan.
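The convention above can also be parsed programmatically. This regex is an assumption built from the listed examples (the notebook itself may extract the fields differently); note that C2H6_ASW must be tried before C2H6 so the longer alternative wins:

```python
import re

# Sketch of the Omnic filename convention described above.
PATTERN = re.compile(
    r"(?P<sample_type>ASW|C2H6_ASW|C2H6)_"   # sample type
    r"(?P<sample_date>\d{4}_\d{2}_\d{2})_"   # sample date, yyyy_mm_dd
    r"(?P<scan_number>\d{4})"                # incremental scan number
)

m = PATTERN.match("ASW_2020_09_15_0001.spa")
print(m.group("sample_type"), m.group("sample_date"), m.group("scan_number"))
```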
The data is quickly processed using Omnic (smoothing with a window of 15), and finally a collection of .CSV files of the shape (ASW_2020_09_15_0001_smooth.CSV, ASW_2020_09_15_0002_smooth.CSV, …) is exported within a dated folder of the shape
2020_09_15
2020_09_16
…
Storage#
The exported data is considered Raw, and the dated folder containing the data is located on a hard drive under the location