Functions

Reference for all functions used in DNPLab.

Analysis

Hydration

dnplab.analysis.hydration.calculate_ksigma(ksigma_sp=False, powers=False, smax=1)

Get ksigma and E_power at half max of ksig

Parameters:
  • ksig (numpy.array) -- Array of ksigmas

  • powers (numpy.array) -- Array of E_powers

Returns:

calculated ksigma
ksigma_stdd (float): standard deviation in ksigma
p_12 (float): power at half max for ksigma fit

Return type:

ksigma (float)

J.M. Franck et al. / Progress in Nuclear Magnetic Resonance Spectroscopy 74 (2013) 33–56

dnplab.analysis.hydration.calculate_ksigma_array(powers=False, ksigma_smax=95.4, p_12=False)

Function to calculate the ksigma array for any given ksigma and p_12

Parameters:
  • powers (numpy.array) -- Array of powers

  • ksigma_smax (float) -- product of ksigma and smax (s^-1 * M^-1)

  • p_12 (float) -- power at half max for ksigma fit

Returns:

calculated ksigma array

Return type:

ksig_fit (numpy.array)

J.M. Franck et al. / Progress in Nuclear Magnetic Resonance Spectroscopy 74 (2013) 33–56
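A hedged usage sketch, assuming the module path shown above is importable; the power values, ksigma_smax, and p_12 below are arbitrary example numbers.

>>> import numpy as np
>>> from dnplab.analysis.hydration import calculate_ksigma_array
>>> powers = np.linspace(0, 4, 21)                                         # example microwave powers (W)
>>> ksig_fit = calculate_ksigma_array(powers, ksigma_smax=95.4, p_12=1.5)  # ksigma buildup versus power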

dnplab.analysis.hydration.calculate_smax(spin_C=False)

Returns maximal saturation factor.

Parameters:

spin_C (float) -- unpaired spin concentration (M)

Returns:

maximal saturation factor (unitless)

Return type:

smax (float)

\[\mathrm{s_{max}} = 1 - \frac{2}{3 + 3 \cdot 198.7 \cdot \mathrm{spin\_C}}\]

M.T. Türke, M. Bennati, Phys. Chem. Chem. Phys. 13 (2011) 3630. & J. Hyde, J. Chien, J. Freed, J. Chem. Phys. 48 (1968) 4211.
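As a minimal arithmetic sketch of the expression above (the 100 μM concentration is only an assumed example value):

>>> spin_C = 100e-6                                 # assumed unpaired spin concentration (M), i.e. 100 μM
>>> smax = 1 - (2 / (3 + (3 * (spin_C * 198.7))))   # same expression as given above
>>> round(smax, 4)
0.3463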

dnplab.analysis.hydration.calculate_tcorr(coupling_factor=0.27, omega_e=0.0614, omega_H=9.3231e-05)

Returns translational correlation time (tcorr) in picoseconds

Parameters:
  • coupling_factor (float) -- coupling factor

  • omega_e (float) -- electron gyromagnetic ratio

  • omega_H (float) -- proton gyromagnetic ratio

Returns:

translational diffusion correlation time (s)

Return type:

tcorr (float)

J.M. Franck et al. / Progress in Nuclear Magnetic Resonance Spectroscopy 74 (2013) 33–56

dnplab.analysis.hydration.calculate_uncorrected_Ep(uncorrected_xi=0.33, p_12_unc=0, E_powers=False, T10=2.0, T100=2.5, omega_ratio=658.5792, smax=1)

Function to calculate E(p) for any given xi and p_12

Parameters:
  • uncorrected_xi (float) -- uncorrected coupling factor

  • p_12_unc (float) -- power at half max for uncorrected_xi fit

  • E_array (numpy.array) -- Array of enhancements

  • E_powers (numpy.array) -- Array of E_powers

  • T10 (float) -- T1(0), proton T1 with microwave power=0 (s)

  • T100 (float) -- T10(0), proton T1 with spin_C=0 and microwave power=0 (s)

  • omega_ratio (float) -- ratio of electron & proton gyromagnetic ratios

  • smax (float) -- maximal saturation factor

Returns:

uncorrected enhancement curve

Return type:

Ep_fit (numpy.array)

J.M. Franck et al. / Progress in Nuclear Magnetic Resonance Spectroscopy 74 (2013) 33–56

dnplab.analysis.hydration.calculate_uncorrected_xi(E_array=False, E_powers=False, T10=2.0, T100=2.5, omega_ratio=658.5792, smax=1)

Get coupling_factor and E_power at half saturation

Parameters:
  • E_array (numpy.array) -- Array of enhancements

  • E_powers (numpy.array) -- Array of powers

  • T10 (float) -- T1(0), proton T1 with microwave power=0 (s)

  • T100 (float) -- T10(0), proton T1 with spin_C=0 and microwave power=0 (s)

  • omega_ratio (float) -- ratio of electron & proton gyromagnetic ratios

  • smax (float) -- maximal saturation factor

Returns:

uncorrected coupling factor
p_12_unc (float): power at half max for uncorrected_xi fit

Return type:

uncorrected_xi (float)

J.M. Franck et al.; Progress in Nuclear Magnetic Resonance Spectroscopy 74 (2013) 33–56

dnplab.analysis.hydration.calculate_xi(tcorr=5.4e-11, omega_e=0.0614, omega_H=9.3231e-05)

Returns coupling_factor for any given tcorr

Parameters:
  • tcorr (float) -- translational diffusion correlation time (s)

  • omega_e (float) -- electron gyromagnetic ratio

  • omega_H (float) -- proton gyromagnetic ratio

Returns:

coupling factor

Return type:

xi (float)

J.M. Franck et al. / Progress in Nuclear Magnetic Resonance Spectroscopy 74 (2013) 33–56

dnplab.analysis.hydration.hydration(data={}, constants={})

Function for performing ODNP calculations

Parameters:
  • data (dict) -- keys and values are described in the example

  • constants (dict) -- (optional) keys and values are described in the example

Returns:

keys and values are described in the example

Return type:

(dict)

J.M. Franck et al.; Progress in Nuclear Magnetic Resonance Spectroscopy 74 (2013) 33–56 https://www.sciencedirect.com/science/article/abs/pii/S0079656513000629

J.M. Franck, S. Han; Methods in Enzymology, Chapter 5, Volume 615, (2019) 131-175 https://www.sciencedirect.com/science/article/abs/pii/S0076687918303872

dnplab.analysis.hydration.interpolate_T1(E_powers=False, T1_powers=False, T1_array=False, interpolate_method='linear', delta_T1_water=False, T1_water=False, macro_C=False, spin_C=1, T10=2.0, T100=2.5)

Returns interpolated T1 data.

Parameters:
  • E_powers (numpy.array) -- The microwave powers at which to evaluate

  • T1_powers (numpy.array) -- The microwave powers of the T1s to interpolate

  • T1_array (numpy.array) -- The original T1s (s)

  • interpolate_method (str) -- "second_order" or "linear"

  • spin_C (float) -- unpaired electron spin concentration (M)

  • T10 (float) -- T1 measured with unpaired electrons (s)

  • T100 (float) -- T1 measured without unpaired electrons (s)

  • delta_T1_water (optional) (float) -- change in T1 of water at max microwave power (s)

  • T1_water (optional) (float) -- T1 of pure water (s)

  • macro_C (optional) (float) -- concentration of macromolecule (M)

Returns:

Array of T1 values same shape as E_powers and E_array

Return type:

interpolated_T1 (numpy.array)

T1 data is interpolated using Eq. 39 of http://dx.doi.org/10.1016/j.pnmrs.2013.06.001 for "linear" or Eq. 22 of https://doi.org/10.1016/bs.mie.2018.09.024 for "second_order"
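A hedged usage sketch with placeholder arrays; the power values, T1 values, and spin concentration are arbitrary example numbers, and the import path follows the signature above.

>>> import numpy as np
>>> from dnplab.analysis.hydration import interpolate_T1
>>> E_powers = np.linspace(0.001, 3, 21)    # powers of the enhancement series (W)
>>> T1_powers = np.linspace(0.001, 3, 5)    # powers at which T1 was measured (W)
>>> T1_array = np.linspace(2.0, 2.5, 5)     # measured T1 values (s)
>>> T1_interp = interpolate_T1(E_powers=E_powers, T1_powers=T1_powers, T1_array=T1_array, interpolate_method='linear', spin_C=100e-6, T10=2.0, T100=2.5)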

Constants

Constants

mrProperties

dnplab.constants.mrProperties.mr_properties(nucleus, *args)

Return magnetic resonance property of specified isotope.

This function is modeled after the Matlab function gmr written by Mirko Hrovat: https://www.mathworks.com/matlabcentral/fileexchange/12078-gmr-m-nmr-mri-properties

Also see: R.K.Harris et. al., Pure and Applied Chemistry, 2001, 73:1795-1818. Electron value comes from 1998 CODATA values, http://physics.nist.gov/cuu/Constants, or https://physics.nist.gov/cuu/Constants/index.html. Xenon gyromagnetic ratio was calculated from 27.661 MHz value from Bruker's web site.

Parameters:
  • nucleus (str) -- '0e', '1H', '2H', '6Li', '13C', '14N', etc.

  • B0 (float) -- (optional) B0 field in (mT)

  • flags (Additional) -- One of the following:
    gamma: Return gyromagnetic ratio (Hz/T)
    spin: Return spin number of selected nucleus
    qmom: Return quadrupole moment [fm^2] (100 barns)
    natAbundance: Return natural abundance (%)
    relSensitivity: Return relative sensitivity with respect to 1H at constant B0
    moment: Return magnetic dipole moment, abs(u)/uN = abs(gamma)*hbar[I(I + 1)]^1/2/uN
    qlw: Return quadrupolar line-width factor, Qlw = Q^2(2I + 3)/[I^2(2I + 1)]

Examples

dnp.dnpTools.mr_properties('1H') = 26.7522128 # 1H gyromagnetic ratio (10^7 rad/(s T))

dnp.dnpTools.mr_properties('1H', 0.35) = 14902114.17018196 # 1H Larmor frequency at 0.35 T (Hz)

dnp.dnpTools.mr_properties('2H', 'qmom') = 0.286 # Nuclear quadrupole moment (fm^2)

dnp.dnpTools.mr_properties('6Li', 'natAbundance') = 7.59 # % natural abundance

dnp.dnpTools.mr_properties('6Li', 'relSensitivity') = 0.000645 # Relative sensitivity

radicalProperties

dnplab.constants.radicalProperties.radical_properties(name)

Return properties of different radicals. At the minimum the g value is returned. If available, large hyperfine couplings to a nucleus are returned. Add new properties or new radicals to radicalProperties.py

arg              returns

"gfree"          2.00231930436153

"tempo1"         [[2.00980, 2.00622, 2.00220], "14N", [16.8, 20.5, 95.9]]

"tempo2"         [[2.00909, 2.00621, 2.00222], "14N", [20.2, 20.2, 102.1]]

"bdpa"           [[2.00263, 2.00260, 2.00257], "1H", [50.2, 34.5, 13.0]]

"ddph_neat"      2.0036

Parameters:

name (str) -- Name of the radical

Returns:

Principal g values and hyperfine coupling tensor

Return type:

radicalProperties (dict)

Examples

Return g value of a free electron

>>> radical_properties("gfree")
dnplab.constants.radicalProperties.show_dnp_properties(radical, mwFrequency, dnpNucleus)

Calculate DNP Properties for a given radical

Parameters:
  • radical (str) -- Radical name, see mrProperties.py for radicals that are currently implemented

  • mwFrequency (float) -- Microwave frequency in (Hz)

  • dnpNucleus (str) -- Nucleus for DNP-NMR experiments

Returns:

Function returns a table of DNP parameters to the screen

Examples

>>> dnp.show_dnp_properties('gfree', 9.45e9, '1H')

Core

Base

class dnplab.core.base.ABCData(values=array([], dtype=float64), dims=[], coords=[], attrs={}, dnp_attrs={}, error=None, **kwargs)

Bases: object

N-Dimensional Data Object

values

Data values

Type:

numpy.ndarray

dims

List of strings giving dimension labels

Type:

list

coords

Collection of numpy.ndarrays defining the axes

Type:

Coords

attrs

dictionary of parameters

Type:

dict

error

If not None, error for values which are propagated during mathematical operations

Type:

numpy.ndarray

proc_attrs

List of processing steps

Type:

list

property abs

DNPData with absolute part of values

Type:

DNPData

align(b)

Align two data objects for numerical operations

Parameters:

b -- Object to align with self

Returns:

self and b aligned data objects

Return type:

tuple

argmax(dim)

Return value of coord at values maximum for given dim

Parameters:

dim (str) -- Dimension to perform operation along

argmax_index(dim)

Return index of coord at values maximum for given dim

Parameters:

dim (str) -- Dimension to perform operation along

argmin(dim)

Return value of coord at values minimum for given dim

Parameters:

dim (str) -- Dimension to perform operation along

argmin_index(dim)

Return index of coord at values minimum for given dim

Parameters:

dim (str) -- Dimension to perform operation along

chunk(dim, new_dims, new_sizes)

Note

This is a placeholder for a function that's not yet implemented

Parameters:
  • dim (str) -- Assume that the dimension dim is a direct product of the dimensions given in new_dims, and chunk it out into those new dimensions.

  • new_dims (list of str) --

    The new dimensions to generate. Note that one of the elements of the list can be dim if you like.

    It's assumed that the ordering of dim is a direct product given in C-ordering (i.e. the inner dimensions are listed last and the outer dimensions are listed first -- here "inner" means that changes to the index of the inner-most dimension correspond to adjacent positions in memory and/or adjacent indices in the original dimension that you are chunking)

  • new_sizes (list of int) -- sizes of the new dimensions

Returns:

self -- The new nddata object. Note that uniformly ascending or descending coordinates are manipulated in a rational way, e.g. [1,2,3,4,5,6] when chunked to a size of [2,3] will yield coordinates for the two new dimensions: [1,4] and [0,1,2]. Coordinates that are not uniformly ascending or descending will yield an error and must be manually modified by the user.

Return type:

nddata_core

concatenate(b, dim)

Concatenate DNPData objects

Parameters:
  • b (DNPData) -- Data object to append to current data object

  • dim (str) -- dimension to concatenate along

copy()

Return deepcopy of dnpdata object

Returns:

deep copy of data object

cumulative_sum(dim)

Calculate Cumulative sum of dnpdata object

Returns:

cumulative sum of data object

property dtype

Values type

Type:

type

fold()

Fold 2d data to original ND shape

get_coord(dim)

Return coord corresponding to given dimension name

Parameters:

dim (str) -- Name of dim to retrieve coordinates from

Returns:

array of coordinates

Return type:

numpy.ndarray

property imag

DNPData with imaginary part of values

Type:

DNPData

index(dim)

Find index of given dimension name

Parameters:

dim (str) -- Name of dimension to index

Returns:

Index value of dim

Return type:

int

is_sorted(dim)

Determine if coords corresponding to the given dim are sorted in ascending order

Parameters:

dim (str) -- Dimension to check if sorted

Returns:

True if sorted, False otherwise.

Return type:

bool

maximum(dim)

Return max for given dim

Parameters:

dim (str) -- Dimension to take maximum along

merge_attrs(b)

Merge the given dictionaries

Parameters:

b (nddata_core) -- attributes to merge into object

minimum(dim)

Return min for given dim

Parameters:

dim (str) -- Dimension to perform operation along

property ndim

Number of dimensions

Type:

int

new_dim(dim, coord)

Add new dimension with length 1

Parameters:
  • dim (str) -- Name of new dimension

  • coord (int, float) -- New coord

property real

DNPData with real part of values

Type:

DNPData

rename(dim, new_name)

Rename dim

Parameters:
  • dim (str) -- Name of dimension to rename

  • new_name (str) -- New name for dim

reorder(dims)

Reorder dimensions

Parameters:

dims (list) -- List of strings in new order

property shape

Shape of values

Type:

tuple

property size

Returns values.size. Total number of elements in numpy array.

smoosh(old_dims, new_name)

Note

Not yet implemented.

smoosh does the opposite of chunk -- see nddata_core.chunk()

sort(dim)

Sort the coords corresponding to the given dim in ascending order

Parameters:

dim (str) -- dimension to sort

sort_dims()

Sort the dimensions

split(dim, new_dim, coord)

Split the dimension dim into

squeeze(dim)

Remove length 1 axes

sum(dim)

Perform sum down given dimension

Parameters:

dim (str) -- Dimension to perform sum down

unfold(dim)

Unfold ND data to 2d data

Parameters:

dim (str) -- Dimension to make first (length N), all other dimensions unfolded so that values has shape (N x M)

Coord

Data

DNPData object for storing N-dimensional data with coordinates

class dnplab.core.data.DNPData(values=array([], dtype=float64), dims=[], coords=[], attrs={}, dnplab_attrs={'attenuation': 23, 'center_field': 3495.55, 'conversion_time': 20.0, 'data_format': 'Prospa', 'data_type': 'NMR', 'frequency': 14855000.0, 'modulation_amplitude': 1.0, 'modulation_frequency': 100.0, 'power': 1.002, 'receiver_gain': 13, 'repetition_time': 1.0, 'scans': 4, 'time_constant': 10.24}, proc_attrs=None, **kwargs)

Bases: ABCData

DNPData Class for handling dnp data

The DNPData class is inspired by pyspecdata nddata object which handles n-dimensional data, axes, and other relevant information together.

This class is designed to handle data and axes together so that NMR processing can be performed easily.
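For orientation, a minimal construction sketch; the axis values and data are arbitrary example numbers, and the same calling pattern appears in the cumulative_integrate example further below.

>>> import numpy as np
>>> import dnplab as dnp
>>> values = np.random.randn(1024) + 1j * np.random.randn(1024)   # example complex FID
>>> t2 = np.linspace(0, 10e-3, 1024)                              # acquisition time axis (s)
>>> data = dnp.DNPData(values, ['t2'], [t2])                      # values, dims, coords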

values

Numpy Array containing data

Type:

numpy.ndarray

coords

List of numpy arrays containing axes of data

Type:

list

dims

List of axes labels for data

Type:

list

attrs

Dictionary of parameters for data

Type:

dict

add_proc_attrs(proc_attr_name, proc_dict)

Stamp processing step to DNPData object

Parameters:
  • proc_attr_name (str) -- Name of processing step (e.g. "fourier_transform")

  • proc_dict (dict) -- Dictionary of processing parameters for this processing step.

dnplab_info()

Print parameters currently used in DNPLab

exp_info()

Print experiment attributes currently in attrs dictionary

phase()

Return phase of DNPData object

Returns:

phase of data calculated from the sum of the imaginary components divided by the sum of the real components

Return type:

phase (float,int)

proc_info(step_name=None)

Print processing steps and parameters currently in proc_attrs list

select(selection)

Select subset of 2D data object

Parameters:

selection (int, range, list, tuple) -- list or tuple of slices to keep

Returns:

subset of DNPData object

Return type:

DNPData object

Example

data.select((1, range(5,10), 15)) # keeps slices: 1, 5, 6, 7, 8, 9, and 15

show_attrs(show_exp_info=False, show_dnplab_info=True, show_proc_info=True)

Print experiment attributes, dnplab attributes and processing steps

squeeze()

Remove all length 1 dimensions from data

Warning

Axes information is lost

Example

data.squeeze()

UFunc

Util

dnplab.core.util.concat(data_list, dim, coord=None, casting='same_kind')

Concatenates list of data objects down another dimension

Parameters:
  • data_list (list) -- List of DNPData objects to concatenate

  • dim (str) -- new dimension name

  • coord -- coords for new dimension

Returns:

concatenated data object

Return type:

data (DNPData)
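A hedged, self-contained sketch; the three synthetic spectra and the coordinate values are arbitrary examples, and concat is imported from the module path documented above.

>>> import numpy as np
>>> import dnplab as dnp
>>> from dnplab.core.util import concat
>>> x = np.linspace(0, 1, 16)
>>> spectra = [dnp.DNPData(np.sin((k + 1) * x), ['t2'], [x]) for k in range(3)]   # example objects with matching dims
>>> power_series = concat(spectra, 'power', coord=np.r_[0.1, 1.0, 10.0])          # new 'power' dimension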

dnplab.core.util.get_slice(data, dim, slice_index)

Get data slice of DNPData object

Parameters:
  • data (DNPData) -- Input data object

  • dim (str) -- Selected dimension

  • slice_index (int) -- Index of slice to be returned

Returns:

DNPData object with selected slice

Return type:

data (DNPData)

dnplab.core.util.implements_np(np_function)

Register a numpy function for special handling in SPECIAL_NO_HANDLED

dnplab.core.util.update_axis(data, start_stop, dim=0, new_dims=0, spacing='lin', verbose=False)

Update axis

Update dimensions (dims) and axis (coords) of a DNPData object. The name of the dims will be replaced with the name given in new_dims. The variable start_stop defines the values of the new coords. This can be either a tuple (start value, stop value) or a vector of values. If start and stop values are provided, either a linearly spaced axis (spacing = "lin", default) or a logarithmically spaced axis (spacing = "log") will be created. The new axis will replace the coords in the dnpdata object.

The function is currently implemented for 1D objects only.

Parameters:
  • data (DNPData) -- dnpData object

  • start_stop (tuple or vector) -- Coords for new dimension

  • dim (int) -- Dimension to act on

  • new_dims (str) -- Name of the new dimension. If None the name will not be changed.

  • spacing (str) -- "lin" for linear spaced axis or "log" for logarithmically spaced axis

Returns:

concatenated data object

Return type:

data (DNPData)
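A hedged 1D sketch; the object, the start/stop values, and the new dimension name are arbitrary examples, and update_axis is imported from the module path shown above.

>>> import numpy as np
>>> import dnplab as dnp
>>> from dnplab.core.util import update_axis
>>> data = dnp.DNPData(np.random.randn(256), ['x0'], [np.arange(256)])            # example 1D object
>>> data = update_axis(data, (0.0, 10e-3), dim=0, new_dims='t2', spacing='lin')   # relabel and rescale the axis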

Fitting

General

dnplab.fitting.general.fit(f, data, dim, p0, fit_points=None, sigma=None, absolute_sigma=False, check_finite=True, bounds=(-inf, inf), method=None, jac=None, **kwargs)

Fitting function for DNPData

Parameters:
  • f (func) -- Function used in scipy.curve_fit

  • data (DNPData) -- Data for fit

  • dim (str) -- Dimension to perform fit along

  • p0 (tuple) -- Initial guess for fit

  • fit_points (int) -- Number of points to use in the fit. If None (default), the number of points is the same as the data.

  • kwargs -- Additional parameters for scipy.curve_fit

Returns:

Dictionary of fit, fitting parameters, and error

Return type:

out (dict)
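A hedged, self-contained sketch using the t1 model documented under dnplab.math.relaxation further below; the synthetic recovery curve, the dimension name 't1', and the initial guess are all example choices, and the exact keys of the returned dictionary may differ between versions.

>>> import numpy as np
>>> import dnplab as dnp
>>> from dnplab.fitting.general import fit
>>> from dnplab.math.relaxation import t1
>>> tvals = np.linspace(0.01, 10, 20)                                # recovery times (s)
>>> data = dnp.DNPData(t1(tvals, 2.0, -2.0, 2.0), ['t1'], [tvals])   # synthetic recovery curve
>>> out = fit(t1, data, 't1', (1.0, -1.0, 1.0))                      # dictionary of fit, fitting parameters, and error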

IO

Bes3t

Functions to import Bruker EPR data

dnplab.io.bes3t.import_bes3t(path)

Import Bruker BES3T data and return dnpdata object

Parameters:

path (str) -- Path to either .DSC or .DTA file

Returns:

DNPData object containing Bruker BES3T data

Return type:

bes3t_data (object)

dnplab.io.bes3t.load_dsc(path)

Import contents of .DSC file

Parameters:

path (str) -- Path to .DSC file

Returns:

dictionary of parameters

Return type:

attrs (dict)

dnplab.io.bes3t.load_dta(path_dta, path_xgf=None, path_ygf=None, path_zgf=None, attrs={})

Import data from .DTA file. Uses .DSC and .XGF, .YGF, or .ZGF files if they exist

Parameters:
  • path_dta (str) -- Path to .DTA file

  • path_xgf (str) -- path to .XGF file for 1D data with nonlinear axis, "none" otherwise

  • path_ygf (str) -- path to .YGF file for 2D data, "none" if 1D or linear y axis

  • path_zgf (str) -- path to .ZGF file for 3D data, "none" if 1D/2D or linear z axis

  • attrs (dict) -- dictionary of parameters

Returns:

Spectrum for 1D or spectra for 2D
dims (list): dimensions
coords (ndarray): coordinates for spectrum or spectra
attrs (dict): updated dictionary of parameters

Return type:

values (ndarray)

dnplab.io.bes3t.load_gf_files(path, axis_type='', axis_format='', axis_points=1, axis_min=1, axis_width=1, endian='')

Import data from .XGF, .YGF, or .ZGF files

Parameters:
  • path (str) -- Path to ._GF file

  • axis_type (str) -- linear or nonlinear

  • axis_format (str) -- format of file data

  • axis_points (int) -- number of points in axis

  • axis_min (float) -- minimum value of axis

  • axis_width (float) -- total width of axis

  • endian (float) -- endian of data

Returns:

axis coordinates

Return type:

coords (ndarray)

CNSI

dnplab.io.cnsi.get_powers(path, power_file, experiment_list)

Split power readings files into array of power measurements equal in length to number of spectra in dataset

Parameters:
  • path (str) -- Path to base folder containing power file

  • power_file (str) -- filename, "power" or "t1_powers"

  • experiment_list (list) -- list of folder numbers of experiments corresponding to power_file

Returns:

list of power readings equal in length to experiment_list

Return type:

power_list (list)

Delta

dnplab.io.delta.import_delta(path, verbose=False)

Import Delta data and return DNPData object

Currently only 1D and 2D data sets are supported.

Parameters:

path (str) -- Path to .jdf file

Returns:

DNPData object containing Delta data

Return type:

dnpdata (DNPData)

dnplab.io.delta.import_delta_data(path, params={}, verbose=False)

Import spectrum or spectra of Delta data

Currently only 1D and 2D data sets are supported.

Parameters:
  • path (str) -- Path to .jdf file

  • params (dict) -- dictionary of parameters

Returns:

spectrum or spectra if >1D
abscissa (list): coordinates of axes
dims (list): axes names
params (dict): updated dictionary of parameters

Return type:

y_data (ndarray)

dnplab.io.delta.import_delta_pars(path, context_start)

Import parameter fields of Delta data

Parameters:
  • path (str) -- Path to .jdf file

  • context_start (int) -- the index where the context starts

Returns:

dictionary of parameter fields and values

Return type:

params (dict)

H5

dnplab.io.h5.load_h5(path, *args, **kwargs)

Returns dictionary of DNPData objects

Parameters:

path (str) -- Path to h5 file

Returns:

workspace object with data

Return type:

dnpdata_collection

dnplab.io.h5.save_h5(dataDict, path, overwrite=False)

Save workspace in .h5 format

Parameters:
  • dataDict (dict) -- dnpdata_collection object to save.

  • path (str) -- Path to save data

  • overwrite (bool) -- If True, h5 file can be overwritten. Otherwise, h5 file cannot be overwritten

dnplab.io.h5.write_dict(dnpDataGroup, dnpDataObject)

Writes dictionary to h5 file

Parameters:
  • dnpDataGroup (h5py.Group) -- h5 group to write attrs dictionary

  • dnpDataObject (DNPData) -- DNPData object to write

dnplab.io.h5.write_dnpdata(dnpDataGroup, dnpDataObject)

Takes file/group and writes dnpData object to it

Parameters:
  • dnpDataGroup -- h5 group to save data to

  • dnpDataObject -- dnpdata object to save in h5 format

Load

dnplab.io.load.autodetect(test_path, verbose=False)

Automatically detect data format

Parameters:
  • test_path (str) -- Test directory

  • verbose (bool) -- If true, print output for debugging

Returns:

Data format as string

Return type:

str

dnplab.io.load.load(path, data_format=None, dim=None, coord=[], verbose=False, *args, **kwargs)

Import data from different spectrometer formats

Parameters:
  • path (str, list) -- Path to data directory or list of directories

  • data_format (str) -- format of spectrometer data to import (optional). Allowed values: "prospa", "topspin", "delta", "vnmrj", "tnmr", "specman", "xenon", "xepr", "winepr", "esp", "h5", "power", "vna", "cnsi_powers", "rs2d"

  • dim (str) -- If giving directories as list, name of dimension to concatenate data along

  • coord (numpy.ndarray) -- If giving directories as list, coordinates of new dimension

  • verbose (bool) -- If true, print debugging output

  • args -- Args passed to spectrometer specific import function

  • kwargs -- Key word args passed to spectrometer specific import function

Returns:

Data object

Return type:

data (dnpData)

Examples

Load a data file

>>> data = dnp.load('Path/To/File')

Load a list of files and concatenate down a new dimension called 't1' with coordinates

>>> data = dnp.load(['1/data.1d','2/data.1d','3/data.1d'], dim = 't1', coord = np.r_[0.1,0.2,0.3])
dnplab.io.load.load_file(path, data_format=None, verbose=False, *args, **kwargs)

Import data from different spectrometer formats

Parameters:
  • path (str) -- Path to data directory or file

  • data_format (str) -- Format of spectrometer data to import (optional). Allowed values: "prospa", "topspin", "delta", "vnmrj", "tnmr", "specman", "xenon", "xepr", "winepr", "esp", "h5", "power", "vna", "cnsi_powers"

  • verbose (bool) -- If true, print additional debug outputs

  • args -- Arguments passed to spectrometer specific import function

  • kwargs -- Key word arguments passed to spectrometer specific import function

Returns:

Data object

Return type:

data (dnpData)

Power

dnplab.io.power.assign_power(dataDict, expNumList, powersList)

Given a dictionary of dnpData objects with key being folder string, return the data with power values assigned to a new axis dimension

Parameters:
  • dataDict (dict) -- dictionary of data objects

  • expNumList (list) -- List of experiment numbers

  • powersList (list) -- List of powers

Returns:

Data object with powers

Return type:

DNPData

dnplab.io.power.chop_power(t, p, threshold=0.1)

Use Derivative to chop Powers

Parameters:
  • t (numpy.ndarray) -- Array of time points

  • p (numpy.ndarray) -- Array of powers

  • threshold (float) -- Threshold to chop powers

Returns:

Array of average time values
averagePowerArray: Array of average power values

Return type:

averageTimeArray

dnplab.io.power.import_power(path, filename='')

import powers file

Parameters:
  • path (str) -- Directory of powers

  • filename (str) -- filename of powers if given

Returns:

Array of time points
p (numpy.ndarray): Array of powers

Return type:

t (numpy.ndarray)

Prospa

dnplab.io.prospa.import_csv(path, return_raw=False, is_complex=True)

Import Kea csv file

Parameters:

path (str) -- Path to csv file

Returns:

x (numpy.array): axes if return_raw = False
data (numpy.array): Data in csv file

Return type:

tuple

dnplab.io.prospa.import_nd(path)

Import Kea binary 1d, 2d, 3d, 4d files

Parameters:

path (str) -- Path to file

Returns:

x (None, numpy.array): Axes if included in binary file, None otherwise
data (numpy.array): Numpy array of data

Return type:

tuple

dnplab.io.prospa.import_par(path)

Import Kea parameters .par file

Parameters:

path (str) -- Path to parameters file

Returns:

Dictionary of Kea Parameters

Return type:

dict

dnplab.io.prospa.import_prospa(path, parameters_filename=None, experiment=None, verbose=False)

Import Kea data

Parameters:
  • path (str) -- Path to data

  • parameters_filename (str) --

  • experiment (str) -- Prospa experiment, used when calculating coords from parameters

  • verbose (bool) -- If true, prints additional information for troubleshooting

Returns:

dnpdata object with Kea data

dnplab.io.prospa.prospa_coords(attrs, data_shape, experiment)

Generate coords from prospa acquisition parameters

Parameters:
  • attrs (dict) -- Dictionary of prospa acquisition parameters

  • data_shape (tuple) -- Shape of data

Returns:

dims and coords

Return type:

tuple

Save

dnplab.io.save.save(data_object, filename, save_type=None, *args, **kwargs)

Save data to h5 format

Parameters:
  • data_object (DNPData) -- dnpdata object to save

  • filename (str) -- name of file, must include extension .h5

  • save_type (str) -- Type of file to save (optional). Allowed values: "h5"

Returns:

none
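A hedged usage sketch; the object and filename are arbitrary examples, and save is imported from the module path shown above.

>>> import numpy as np
>>> import dnplab as dnp
>>> from dnplab.io.save import save
>>> data = dnp.DNPData(np.ones(8), ['t2'], [np.arange(8)])   # minimal example object
>>> save(data, 'my_processed_data.h5')                       # filename must include the .h5 extension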

SpecMan

dnplab.io.specman.analyze_attrs(attrs)

Analyze the attrs and add some important attrs to existing dictionary

Parameters:

attrs (dict) -- Dictionary of specman acquisition parameters

Returns:

The dictionary of specman acquisition parameters and added parameters

Return type:

attrs (dict)

dnplab.io.specman.calculate_specman_coords(attrs, old_coords, dims=None)

Generate coords from specman acquisition parameters

Parameters:
  • attrs (dict) -- Dictionary of specman acquisition parameters

  • dims (list) -- (Optional) a list of dims

Returns:

a calculated coords

Return type:

coords (list)

dnplab.io.specman.generate_dims(attrs)

Generate dims from specman acquisition parameters

Parameters:

attrs (dict) -- Dictionary of specman acquisition parameters

Returns:

a new dims

Return type:

dims (list)

dnplab.io.specman.import_specman(path, autodetect_coords: bool = False, autodetect_dims: bool = False)

Import SpecMan data and return DNPData object

DNPLab function to import SpecMan4EPR data (https://specman4epr.com/). The function returns a DNPData object with the spectral data.

The structure of the DNPData object can be complex and the variables saved by SpecMan depend on the individual spectrometer configuration. Therefore, the import function returns a numpy array with the dimensions "x0", "x1", "x2", "x3", "x4". In any case, the dimension "x0" corresponds to the variables stored in the data file. The spectroscopic data is stored in "x1" to "x4", depending on how many dimensions were recorded. The import function will require a parser script to properly assign the spectroscopic data and proper coordinates.

Parameters:
  • path (str) -- Path to the .exp file

  • autodetect_coords (bool) -- Autodetect coords based on attrs

  • autodetect_dims (bool) -- Autodetect dims based on attrs

Returns:

DNPData object containing SpecMan EPR data

Return type:

data (DNPData)

dnplab.io.specman.load_specman_d01(path, attrs, verbose=False)

Import SpecMan d01 data file

DNPLab function to import the SpecMan d01 data file. The format of the SpecMan data file is described here:

Parameters:

path (str) -- Path to either .d01 or .exp file

Returns:

SpecMan data as numpy array
params (dict): Updated dictionary of import parameters

Return type:

data (ndarray)

dnplab.io.specman.load_specman_exp(path)

Import SpecMan parameters

DNPLab function to read and import the SpecMan exp file. The .exp file is a text file that stores the experimental data, the pulse program, and other spectrometer configuration files.

Parameters:

path (str) -- Path to either .d01 or .exp file

Returns:

Dictionary of parameter fields and values (DNPLab attributes)

Return type:

attrs (dict)

TNMR

dnplab.io.tnmr.import_tnmr(path, squeeze=True)

Import tnmr data and return DNPData object

Parameters:
  • path (str) -- Path to .tnt file

  • squeeze (bool) -- Automatically remove length 1 dimensions

Returns:

DNPData object containing tnmr data

Return type:

dnpdata (object)

dnplab.io.tnmr.import_tnmr_data(path)

Import spectrum or spectra of tnmr data

Parameters:

path (str) -- Path to .tnt file

Returns:

Spectrum or spectra if >1D
abscissa (list): Coordinates of axes
dims (list): Axes names

Return type:

data (ndarray)

TopSpin

dnplab.io.topspin.find_group_delay(attrs_dict)

Determine group delay from tables

Parameters:

attrs_dict (dict) -- dictionary of topspin acquisition parameters

Returns:

Group delay. Number of points the FID is shifted by the DSP. The ceiling of this number (group delay rounded up) is the number of points that should be removed from the start of the FID.

Return type:

float

dnplab.io.topspin.import_topspin(path, assign_vdlist=False, remove_digital_filter=False, read_offset=False, verbose=False, **kwargs)

Import topspin data and return dnpdata object

Parameters:
  • path (str) -- Directory of data

  • assign_vdlist -- False, or the name of dimension to assign topspin vdlist

  • remove_digital_filter (bool) -- Option to remove group delay (see note below)

  • verbose (bool) -- Print additional output for troubleshooting

Note

The group delay is a consequence of the oversampling and digital filtering in Bruker spectrometers. For more details see this blog post: https://nmr-analysis.blogspot.com/2010/05/bruker-smiles.html

Returns:

topspin data

Return type:

dnpdata

dnplab.io.topspin.load_acqu(path, required_params=None, verbose=False)

Import topspin acqu or proc files

Parameters:
  • path (str) -- directory of acqu or proc file

  • required_params (list) -- Only return parameters given

  • verbose (bool) -- If true, print output for troubleshooting

Returns:

Dictionary of acquisition parameters

Return type:

dict

dnplab.io.topspin.load_bin(path, dtype='>i4')

Import Topspin Ser file

Parameters:
  • path (str) -- Directory of data

  • dtype (str) -- data format for import

Returns:

Data from ser file

Return type:

raw (np.ndarray)

dnplab.io.topspin.load_pdata(path, verbose=False)

Import topspin processed data

Parameters:
  • path (str) -- Directory of pdata

  • verbose (bool) -- If true, print output for troubleshooting

Returns:

Topspin processed data

Return type:

DNPData

dnplab.io.topspin.load_ser(path, dtype='>i4')

Deprecated. Use load_bin. Import Topspin Ser file

Parameters:
  • path (str) -- Directory of data

  • dtype (str) -- data format for import

Returns:

Data from ser file

Return type:

raw (np.ndarray)

dnplab.io.topspin.load_title(path='1', title_path='pdata/1', title_filename='title')

Import Topspin Experiment Title File

Parameters:
  • path (str) -- Directory of title

  • title_path (str) -- Path within experiment of title

  • title_filename (str) -- filename of title

Returns:

Contents of experiment title file

Return type:

str

dnplab.io.topspin.load_topspin_jcamp_dx(path, verbose=False)

Return the contents of topspin JCAMP-DX file as dictionary

Parameters:
  • path (str) -- Path to file

  • verbose (bool) -- If true, print output for troubleshooting

Returns:

Dictionary of JCAMP-DX file parameters

Return type:

dict

dnplab.io.topspin.topspin_vdlist(path)

Return topspin vdlist

Parameters:

path (str) -- Directory of data

Returns:

vdlist as numpy array

Return type:

numpy.ndarray

VNA

VnmrJ

dnplab.io.vnmrj.array_coords(attrs)

Return array dimension coords from parameters dictionary

Parameters:

attrs (dict) -- Dictionary of procpar parameters

Returns:

dim and coord for array

Return type:

tuple

dnplab.io.vnmrj.import_fid(path, filename='fid')

Import VnmrJ fid file

Parameters:
  • path (str) -- Directory of fid file

  • filename (str) -- Name of fid file. "fid" by default

Returns:

Array of data

Return type:

numpy.ndarray

dnplab.io.vnmrj.import_procpar(path, filename='procpar')

Import VnmrJ procpar parameters file

Parameters:

path (str) -- Directory of file

Returns:

Dictionary of procpar parameters

Return type:

dict

dnplab.io.vnmrj.import_vnmrj(path, fidFilename='fid', paramFilename='procpar')

Import VnmrJ Data

Parameters:
  • path (str) -- path to experiment folder

  • fidFilename (str) -- FID file name

  • paramFilename (str) -- process parameter filename

Returns:

data in dnpdata object

Return type:

dnpdata

WinEPR

dnplab.io.winepr.import_winepr(path)

Import Bruker par/spc data and return DNPData object

Parameters:

path (str) -- Path to either .par or .spc file

Returns:

DNPData object containing Bruker par/spc data

Return type:

parspc_data (object)

dnplab.io.winepr.load_par(path)

Import contents of .par file

Parameters:

path (str) -- Path to .par file

Returns:

dictionary of parameters

Return type:

attrs (dict)

dnplab.io.winepr.load_spc(path, attrs)

Import data and axes of .spc file

Parameters:

path (str) -- Path to .spc file

Returns:

coordinates for spectrum or spectra
values (ndarray): data values
attrs (dict): updated dictionary of parameters
dims (list): dimension labels

Return type:

coords (ndarray)

Math

Lineshape

dnplab.math.lineshape.gaussian(x, x0, sigma, integral=1.0)

Gaussian distribution.

Parameters:
  • x (array_like) -- input x

  • x0 (float) -- Center of distribution

  • sigma (float) -- Standard deviation of Gaussian distribution

  • integral (float) -- Integral of distribution

Returns:

Gaussian distribution

Return type:

ndarray

The Gaussian distribution is defined as:

\[f(x; x_0, \sigma) = \frac{1}{\sigma \sqrt{2 \pi}} \exp{\left(-\frac{(x-x_0)^2}{2 \sigma^2}\right)}\]
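A brief numerical sketch; the axis range and width are arbitrary example values.

>>> import numpy as np
>>> from dnplab.math.lineshape import gaussian
>>> x = np.linspace(-10, 10, 1001)
>>> g = gaussian(x, 0.0, 0.5)           # centered at 0 with sigma = 0.5
>>> area = g.sum() * (x[1] - x[0])      # numerically close to 1, matching the integral keyword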
dnplab.math.lineshape.lorentzian(x, x0, gamma, integral=1.0, deriv=False)

Lorentzian Distribution.

Parameters:
  • x (array_like) -- input x

  • x0 (float) -- Center of distribution

  • gamma (float) -- Lorentzian width. 2*gamma is full width at half maximum (FWHM)

  • integral (float) -- Integral of distribution

  • deriv (boolean) -- Derivative of a Lorentzian Distribution (Imaginary part of a phased spectrum)

Returns:

Lorentzian distribution

Return type:

ndarray

The Lorentzian distribution is defined as:

\[f(x) = \frac{1}{\pi \gamma} \left[\frac{\gamma^2}{(x-x_0)^2 + \gamma^2}\right]\]

Derivative:

\[f(x) = \frac{1}{\pi \gamma} \left[\frac{- 2\gamma^2 (x-x_0)}{\left( (x-x_0)^2 + \gamma^2 \right)^2}\right]\]
dnplab.math.lineshape.voigtian(x, x0, sigma, gamma, integral=1.0, deriv=False)

Voigtian distribution. Lineshape given by a convolution of Gaussian and Lorentzian distributions.

Parameters:
  • x (array_like) -- input x

  • x0 (float) -- center of distribution

  • sigma (float) -- Gaussian Linewidth. Standard deviation of Gaussian distribution.

  • gamma (float) -- Lorentzian linewidth. 2*gamma is the full width at half maximum (FWHM)

  • integral (float) -- Integral of distribution

  • deriv (boolean) -- Derivative of a Voigtian distribution (Gaussian broadened imaginary part of a phased spectrum).

Returns:

Voigtian distribution

Return type:

ndarray

The Voigtian distribution is defined as:

\[f(x; x_0, \sigma, \gamma) = \frac{\operatorname{Re}[w(z)]}{\sigma \sqrt{2 \pi}}\]

with

\[z = \frac{(x - x_0) + i\gamma}{\sigma \sqrt{2}}\]

Derivative:

\[f(x) = \frac{1}{\sigma^3 \sqrt{2 \pi}} \left[ \gamma \operatorname{Im}[w(z)] - \left(x - x_0\right) \operatorname{Re}[w(z)] \right]\]

with

\[z = \frac{(x - x_0) + i\gamma}{\sigma \sqrt{2}}\]

Relaxation

dnplab.math.relaxation.buildup_function(p, E_max, p_half)

Calculate asymptotic buildup curve

Parameters:
  • p (array) -- power series

  • E_max (float) -- maximum enhancement

  • p_half (float) -- power at half saturation

Returns:

buildup curve

Return type:

ndarray

\[f(p) = 1 + E_{max} * p / (p_{1/2} + p)\]
dnplab.math.relaxation.general_biexp(t, C1, C2, tau1, C3, tau2)

Calculate bi-exponential curve

Parameters:
  • t (array_like) -- time series

  • C1 (float) -- see equation

  • C2 (float) -- see equation

  • C3 (float) -- see equation

  • tau1 (float) -- see equation

  • tau2 (float) -- see equation

Returns:

bi-exponential curve

Return type:

ndarray

\[f(t) = C_1 + C_2 e^{-t/\tau_1} + C_3 e^{-t/\tau_2}\]
dnplab.math.relaxation.general_exp(t, C1, C2, tau)

Calculate mono-exponential curve

Parameters:
  • t (array_like) -- time series

  • C1 (float) -- see equation

  • C2 (float) -- see equation

  • tau (float) -- see equation

Returns:

mono-exponential curve

Return type:

ndarray

\[f(t) = C_1 + C_2 e^{-t/\tau}\]
dnplab.math.relaxation.ksigma_smax(p, E_max, p_half)

Calculate asymptotic buildup curve

Parameters:
  • p (array) -- power series

  • E_max (float) -- maximum enhancement

  • p_half (float) -- power at half saturation

Returns:

buildup curve

Return type:

ndarray

\[f(p) = E_{max} * p / (p_{1/2} + p)\]
dnplab.math.relaxation.logistic(x, c, x0, L, k)

Not Implemented. Placeholder for calculating asymptotic buildup curve

Parameters:
  • x (array) -- x values

  • c (float) -- offset

  • x0 (float) -- x-value of sigmoid's midpoint

  • L (float) -- maximum value

  • k (float) -- logistic growth steepness

Returns:

buildup curve

Return type:

ndarray

dnplab.math.relaxation.t1(t, T1, M_0, M_inf)

Exponential recovery for inversion recovery and saturation recovery T1 Measurements

Parameters:
  • t (array_like) -- time series

  • T_1 (float) -- T1 value

  • M_0 (float) -- see equation

  • M_inf (float) -- see equation

Returns:

T1 curve

Return type:

ndarray

\[f(t) = M_{\infty} - (M_{\infty} - M_0) e^{-t/T_1}\]
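A brief numerical sketch of the recovery model; the parameter values are arbitrary examples.

>>> import numpy as np
>>> from dnplab.math.relaxation import t1
>>> t = np.linspace(0, 10, 50)          # recovery times (s)
>>> M = t1(t, 2.0, -1.0, 1.0)           # T1 = 2 s, M_0 = -1, M_inf = 1 (inversion recovery)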
dnplab.math.relaxation.t2(t, M_0, T2, p=1.0)

Calculate stretched or un-stretched (p=1) exponential T2 curve

Parameters:
  • t (array_like) -- time series

  • M_0 (float) -- see equation

  • T_2 (float) -- T2 value

  • p (float) -- see equation

Returns:

T2 curve

Return type:

ndarray

\[f(t) = M_{0} e^{-(t/T_{2})^{p}}\]

Window

dnplab.math.window.exponential(x, lw)

Calculate exponential window function

Parameters:
  • x (array_like) -- Vector of points

  • lw (int or float) -- linewidth

Returns:

exponential window function

Return type:

array

\[\mathrm{exponential} = e^{-\pi * x * lw}\]
dnplab.math.window.gaussian(x, lw)

Calculate gaussian window function

Parameters:
  • x (array_like) -- vector of points

  • lw (float) -- Standard deviation of gaussian window

Returns:

gaussian window function

Return type:

array

\[\mathrm{gaussian} = e^{(\sigma * x^{2})}\]
dnplab.math.window.hamming(x)

Calculate hamming window function

Parameters:
  • x (array_like) -- vector of points

  • N (int) -- number of points to return in window function

Returns:

hamming window function

Return type:

ndarray

\[\mathrm{hamming} = 0.53836 + 0.46164\cos(\pi * n / (N-1))\]
dnplab.math.window.hann(x)

Calculate hann window function

Parameters:
  • x (array_like) -- vector of points

  • N (int) -- number of points to return in window function

Returns:

hann window function

Return type:

ndarray

\[\mathrm{hann} = 0.5 + 0.5\cos(\pi * n / (N-1))\]
dnplab.math.window.lorentz_gauss(x, lw, gauss_lw, gaussian_max=0)

Calculate lorentz-gauss window function

Parameters:
  • x (array_like) -- vector of points

  • N (int) -- number of points to return in window function

  • lw (int or float) -- exponential linewidth

  • gauss_lw (int or float) -- gaussian linewidth

  • gaussian_max (int) -- location of maximum in gaussian window

Returns:

gauss_lorentz window function

Return type:

array

\[ \begin{align}\begin{aligned}\mathrm{lorentz\_gauss} &= \exp(L - G^{2}) &\\ L(t) &= \pi * \mathrm{linewidth[0]} * t &\\ G(t) &= 0.6\pi * \mathrm{linewidth[1]} * (\mathrm{gaussian\_max} * (N - 1) - t) &\end{aligned}\end{align} \]
dnplab.math.window.sin2(x)

Calculate sin-squared window function

Parameters:
  • x (array_like) -- vector of points

  • N (int) -- number of points to return in window function

Returns:

sin-squared window function

Return type:

array

\[\sin^{2} = \cos((-0.5\pi * n / (N - 1)) + \pi)^{2}\]
dnplab.math.window.traf(x, lw)

Calculate traf window function

Parameters:
  • x (array_like) -- vector of points

  • lw (int or float) -- linewidth of the Traficante (TRAF) window

Returns:

traf window function

Return type:

ndarray

\[ \begin{align}\begin{aligned}\mathrm{traf} &= (f1 * (f1 + f2)) / (f1^{2} + f2^{2}) &\\ f1(t) &= \exp(-t * \pi * \mathrm{linewidth[0]}) &\\ f2(t) &= \exp((t - T) * \pi * \mathrm{linewidth[1]}) &\end{aligned}\end{align} \]

Plotting

General

dnplab.plotting.general.fancy_plot(data, xlim=[], title='', showPar=False, *args, **kwargs)

Streamline Plot function for dnpdata objects

This function creates streamlined plots for NMR and EPR spectra. The type of the spectrum is detected from the attribute "experiment_type" of the dnpdata object. Currently the following types are implemented: nmr_spectrum, epr_spectrum, enhancements_P, and inversion_recovery. The function will automatically format the plot according to the "experiment_type" attribute.

Parameters:
  • data (DNPData) -- DNPData object with values to plot

  • xlim (tuple) -- List of limit values for plotting function

  • title (str) -- Plot title

  • showPar (boolean) -- Toggle whether to show experiment parameters

Returns:

Returns formatted matplotlib plot.

Example

Simply just plotting the dnpdata object:

>>> dnp.fancy_plot(data)

Plot EPR spectrum from 344 mT to 354 mT, show experimental parameters:

>>> dnp.fancy_plot(data, xlim=[344, 354], title="EPR Spectrum", showPar=True)
dnplab.plotting.general.plot(data, *args, **kwargs)

Plot function for dnpdata object

Parameters:
  • data (DNPData) -- DNPData object for matplotlib plot function

  • args -- args for matplotlib plot function

  • kwargs -- kwargs for matplotlib plot function

  • semilogy, semilogx, polar, loglog, scatter, errorbar, step -- If any of these is given in kwargs, its value is evaluated with bool(); if it evaluates to True, the corresponding matplotlib function is used instead of the standard plot

Returns:

Returns formatted matplotlib plot.

Example

Plotting a DNPData object:

>>> dnp.plt.figure()
>>> dnp.plot(data)
>>> dnp.plt.show()

Plotting two DNPData objects (data1 and data2) on the same figure:

>>> dnp.plt.figure()
>>> dnp.plot(data1)
>>> dnp.plot(data2)
>>> dnp.plt.show()

Plotting a DNPData object with some custom parameters:

>>> dnp.plt.figure()
>>> dnp.plot(data, 'k-', linewidth = 3.0, alpha = 0.5)
>>> dnp.plt.show()

Plotting a DNPData object with a semilogy plot (possible arguments: semilogy=1, semilogy=True, semilogy="True"). Forwarded arguments: semilogy, semilogx, polar, loglog, scatter, errorbar, and step. The absolute value is taken to ensure that the y axis is always positive:

>>> dnp.plt.figure()
>>> dnp.plot(np.abs(data), 'k-', linewidth = 3.0, alpha = 0.5, semilogy=1)
>>> dnp.plt.show()

Image

dnplab.plotting.image.imshow(data, *args, **kwargs)

Image Plot for dnpdata object

Parameters:
  • data (DNPData) -- DNPData object for image plot

  • args -- args for matplotlib imshow function

  • kwargs -- kwargs for matplotlib imshow function

Returns:

Returns formatted matplotlib plot.

Example

Plotting a dnpdata object

>>> dnp.plt.figure()
>>> dnp.imshow(data)
>>> dnp.plt.show()

Plotting a workspace (dnpdata_collection)

>>> dnp.plt.figure()
>>> dnp.imshow(data)
>>> dnp.plt.show()

Stack Plot

dnplab.plotting.stack_plot.stack(data, *args, offset=None, **kwargs)

Stack Plot for 2D data

Parameters:
  • data (dnpdata) -- dnpdata object for matplotlib plot function

  • args -- args for matplotlib plot function

  • offset -- Value to offset each spectra, by default maximum of absolute value

  • kwargs -- kwargs for matplotlib plot function

Example:

dnp.dnpResults.plt.figure()
dnp.dnpResults.stack(data)
dnp.dnpResults.plt.show()
dnplab.plotting.stack_plot.waterfall(data, dx, dy, *args, **kwargs)

Waterfall plot for 2d data

Parameters:
  • data (dnpData) -- 2d Data object for waterfall plot

  • dx (float, int) -- x-increment for each line

  • dy (float, int) -- y-increment for each line

Example:

dnp.dnpResults.plt.figure()
dnp.dnpResults.waterfall(data)
dnp.dnpResults.plt.show()

Processing

Align

dnplab.processing.align.ndalign(data, dim='f2', reference=None, center=None, width=None)

Alignment of NMR spectra using FFT Cross Correlation

Parameters:
  • data (DNPData) -- DNPData object to align

  • dim (str) -- Dimension to align along

  • reference (numpy) -- Reference spectra for alignment

  • center (float) -- Center of alignment range, by default entire range

  • width (float) -- Width of alignment range, by default entire range

Returns:

Aligned data

Return type:

DNPData

Examples

>>> data_aligned = dnp.ndalign(data)
>>> data_aligned = dnp.ndalign(data, center = 10, width = 20)

Apodization

dnplab.processing.apodization.apodize(data, dim='t2', kind='exponential', **kwargs)

Apply Apodization to data along a given dimension. Currently the following window functions are implemented: exponential, gaussian, hanning, hamming, and sin-squared. In addition the following window transformation functions are implemented: traf, and lorentz_gauss

Parameters:
  • data (DNPData) -- Data object

  • dim (str) -- Dimension to apply apodization along, "t2" by default

  • kind (str) -- Type of apodization, "exponential" by default

  • kwargs -- Arguments to be passed to apodization function, e.g. line width parameter

Returns:

Data object with window function applied, including attr "window"

Return type:

DNPData

Examples

Examples of using apodize

Exponential line broadening using a line width of 2 Hz along the f2 dimension

>>> data = dnp.apodize(data, lw = 2)

Lorentz-Gauss transformation:

>>> data = dnp.apodize(data, dim = 't2', kind = 'lorentz_gauss', lw = 4, gauss_lw = 8)

Functions:

\[ \begin{align}\begin{aligned}\mathrm{exponential} &= \exp(-2t * \mathrm{linewidth}) &\\\mathrm{gaussian} &= \exp((\mathrm{linewidth[0]} * t) - (\mathrm{linewidth[1]} * t^{2})) &\\\mathrm{hamming} &= 0.53836 + 0.46164\cos(\pi * n/(N-1)) &\\\mathrm{han} &= 0.5 + 0.5\cos(\pi * n/(N-1)) &\\\mathrm{sin2} &= \cos((-0.5\pi * n/(N - 1)) + \pi)^{2} &\\\mathrm{lorentz\_gauss} &= \exp(L - G^{2}) &\\ L(t) &= \pi * \mathrm{linewidth[0]} * t &\\ G(t) &= 0.6\pi * \mathrm{linewidth[1]} * (\mathrm{gaussian\_max} * (N - 1) - t) &\\\mathrm{traf} &= (f1 * (f1 + f2)) / (f1^{2} + f2^{2}) &\\ f1(t) &= \exp(-t * \pi * \mathrm{linewidth[0]}) &\\ f2(t) &= \exp((t - T) * \pi * \mathrm{linewidth[1]}) &\end{aligned}\end{align} \]

FFT

dnplab.processing.fft.fourier_transform(data, dim='t2', zero_fill_factor=1, shift=True, convert_to_ppm=True)

Perform Fourier Transform along the dimension (dim) given in proc_parameters

Parameters:
  • data (DNPData) -- Data object

  • dim (str) -- Dimension to Fourier Transform. The default is "t2"

  • zero_fill_factor (int) -- Increases the number of points in Fourier transformed dimension by this factor with zero filling. The default is 1

  • shift (bool) -- Apply fftshift to the Fourier transformed data, placing zero frequency at center of dimension

  • convert_to_ppm (bool) -- If true, convert Fourier transformed axis to ppm units by using the "frequency" stored in attrs

Returns:

Data object after Fourier Transformation

Return type:

data (DNPData)

Examples

Fourier transformation of a (NMR) FID stored in a DNPData object

>>> data = dnp.fourier_transform(data)

Fourier transform along t1 dimension and zero fill to twice the original length

>>> data = dnp.fourier_transform(data, dim = "t1", zero_fill_factor = 2)

Note

The fourier_transform function assumes dt = t[1] - t[0]

dnplab.processing.fft.inverse_fourier_transform(data, dim='f2', zero_fill_factor=1, shift=True, convert_from_ppm=True)

Perform an inverse Fourier Transform along the dimension (dim) given in proc_parameters

Parameters:
  • data (DNPData) -- Data object

  • dim (str) -- Dimension to inverse Fourier transform. The default is "f2"

  • zero_fill_factor (int) -- Increases the number of points in inverse Fourier transformed dimension by this factor with zero filling. The default is 1

  • shift (bool) -- Apply fftshift to the inverse Fourier transformed data, placing zero frequency at center of dimension

  • convert_from_ppm (bool) -- If true, convert Fourier transformed axis from ppm units to Hz by using the "frequency" stored in attrs

Returns:

Data object after inverse Fourier Transformation

Return type:

data (DNPData)

Note

Assumes df = f[1] - f[0]

Helpers

dnplab.processing.helpers.calculate_enhancement(data, off_spectrum_index=0, return_complex_values=False)

Calculate enhancement of a power series. Needs integrals as input

Parameters:
  • integrals (DNPData) --

  • off_spectrum_index (int) --

  • return_complex_values (bool) --

Returns:

Enhancement values

Return type:

enhancements (DNPData)
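A hedged sketch of the intended workflow (integrate a power series first, then compute enhancements); the path and integration region are placeholders, and both functions are assumed to be available at the package level as in the other examples (otherwise use dnplab.processing.integration.integrate and dnplab.processing.helpers.calculate_enhancement).

>>> import dnplab as dnp
>>> data = dnp.load('path/to/enhancement_series')                   # placeholder path to a power series
>>> integrals = dnp.integrate(data, dim='f2', regions=[(-10, 10)])  # assumed integration region
>>> enhancements = dnp.calculate_enhancement(integrals)             # off-signal spectrum at index 0 by default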

dnplab.processing.helpers.create_complex(data, real, imag=None, real_index=0, imag_index=1)

Create complex array from input

This function can be used to concatenate two dimensions of a DNPData object into a complex array. The unused dims and coords will be removed from the input DNPData object. When a string is provided as the second argument, the index in that dimension given by real_index is assumed to be the real part of the dataset and the one given by imag_index the imaginary part. The dataset is then combined to form one complex dataset, and imag is ignored. Note that dimensions with size 1 are retained but will be placed at the end of the returned DNPData object.

Parameters:
  • data (DNPData) -- DNPData input object

  • real (array, String) -- Real data if array or when a String is provided the dimension that contains real and imaginary part (the dimension must have length 2)

  • imag (array, None) -- Imaginary data or None, if None is provided a complex dataset is created with the imaginary part set to 0

  • real_index (Integer) -- Index of real part in chosen dimension, default=0, must be 0 or 1 and be different from imag_index

  • imag_index (Integer) -- Index of imaginary part in chosen dimension, default=1, must be 0 or 1 and be different from real_index

Returns:

New DNPData object

Return type:

data (DNPData)

Examples: In this example, a data set is first loaded. The data set is of size 4000 x 2 (ndarray, float32) and the dims are called 't2' and 'x', with the first dimension ([..., 0]) being the real data and the second ([..., 1]) the imaginary data. Using the function create_complex, the DNPData object is converted into a complex data set.

data = dnp.load("MyFile.exp")       # Load example data

data_complex = dnp.create_complex(data, data.values[..., 0], data.values[..., 1])

Or with the second variant:

data = dnp.load("MyFile.exp")       # Load example data

data_complex = dnp.create_complex(data,'x')
dnplab.processing.helpers.left_shift(data, dim='t2', shift_points=0)

Remove points from the left

Parameters:
  • data (DNPData) -- Data object

  • dim (str) -- Name of dimension to left shift, default is "t2"

  • shift_points (int) -- Number of points to left shift, default is 0.

Returns:

Shifted data object

Return type:

data (DNPData)

dnplab.processing.helpers.normalize(data, amplitude=True, dim=None, regions=None)

Normalize spectrum

The function is used to normalize the amplitude (or area) of a spectrum to a value of 1. The sign of the original data will be conserved.

Parameters:
  • data (DNPData) -- Data object

  • amplitude (boolean) -- True: normalize amplitude, false: normalize area. The default is True

  • dim (str or None) -- The dimension to normalize, if None the data is normalized to the maximum of the whole dataset, if a dimension is given the normalization is done along this dimension for each other dimension

  • regions (None, list) -- Tuple to specify the range of the normalization reference, e.g. (-99., 99.); if None the whole range is used for normalization

Returns:

Normalized data object

Return type:

data (DNPData)

dnplab.processing.helpers.pseudo_modulation(data, modulation_amplitude, dim='B0', order=1, zero_padding=2)

Calculate the first derivative of an EPR spectrum due to field modulation

Calculation is based on: Hyde et al., “Pseudo Field Modulation in EPR Spectroscopy.”, Applied Magnetic Resonance 1 (1990): 483–96.

Parameters:
  • data (DNPData) -- DNPData object (typically an absorption line EPR spectrum)

  • modulation_amplitude -- Peak to peak modulation amplitude. The unit is equal to the unit of the axis. E.g. if the spectrum axis is given in (T), the unit of the modulation amplitude is in (T) as well.

  • dim -- Dimension to pseudo modulate (default is B0)

  • order -- Harmonic of field modulation (default is 1, 1st derivative)

  • zero_padding -- Number of points for zero-padding (multiples of spectrum vector length). Default is 2. Increase this number for short signal vectors.

Returns:

Pseudo modulated spectrum

Return type:

data (DNPData)

Examples

# Calculate pseudo_modulated spectrum (1st derivative). Field axis given in (T)
spec_mod = dnp.pseudo_modulation(spec, modulation_amplitude = 0.001)

# Calculate pseudo_modulated spectrum (2nd derivative). Field axis given in (T)
spec_mod = dnp.pseudo_modulation(spec, modulation_amplitude = 0.001, order = 2)
dnplab.processing.helpers.reference(data, dim='f2', old_ref=0, new_ref=0)

Function for referencing NMR spectra

Parameters:
  • data (DNPData) -- Data for referencing

  • dim (str) -- dimension along which to perform referencing. By default this dimension is "f2".

  • old_ref (float) -- Value of old reference

  • new_ref (float) -- New reference value

Returns:

referenced data

Return type:

DNPData
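A hedged usage sketch; the path is a placeholder, the chemical shift values are arbitrary examples (e.g. moving a peak found at 4.7 ppm to 0 ppm), and reference is assumed to be exposed at the package level as in the other examples (otherwise import from dnplab.processing.helpers).

>>> import dnplab as dnp
>>> data = dnp.load('path/to/spectrum')                             # placeholder path
>>> data = dnp.reference(data, dim='f2', old_ref=4.7, new_ref=0.0)  # shift axis so the old reference sits at 0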

dnplab.processing.helpers.signal_to_noise(data: DNPData, signal_region: list = slice(0, None, None), noise_region: list = (None, None), dim: str = 'f2', remove_background: list | None = None, complex_noise=False, **kwargs)

Find signal-to-noise ratio

Simplest implementation: select the largest value in a signal_region and divide this value by the estimated standard deviation of another noise_region. If the noise_region list contains (None, None) (the default), then all points except those within ±10% of the maximum position are used for the noise_region.

Parameters:
  • data -- Spectrum data

  • signal_region (list) -- list with a single tuple (start, stop) defining the region in which the signal is searched; the default, slice(0, None), is the whole spectrum

  • noise_region (list) -- list with tuples (start,stop) of regions that should be taken as noise, default is (None,None)

  • dim (str) -- dimension of data that is used for snr calculation, default is 'f2'

  • remove_background (list) -- if this is not None (a list of tuples, or a single tuple) this will be forwarded to dnp.remove_background, together with any kwargs

  • complex_noise (bool) -- Flag that indicates whether the noise is calculated on the real part only or on the complex data (default = False)

  • kwargs -- parameters for dnp.remove_background

Returns:

DNPData object that contains SNR values, the axis dim is replaced by an axis named "signal_region"

Return type:

SNR (DNPData)

Examples

A note on usage: regions are provided as (min, max) in axis units, while slices use indices. To use the default values, just call

>>> snr = dnp.signal_to_noise(data)

If you want to select a region for the noise and the signal:

>>> snr = dnp.signal_to_noise(data,[(-1.23,300.4)],noise_region=[(-400,-240.5),(123.4,213.5)])

With background subtracted:

>>> snr = dnp.signal_to_noise(data,[(-1.23,300.4)],noise_region=[(-400,-240.5),(123.4,213.5)],remove_background=[(123.4,213.5)])

This function also accepts a single tuple instead of a list with a single tuple for signal_region, noise_region, and remove_background. This is for convenience; slices are currently only supported for signal_region and noise_region.

>>> snr = dnp.signal_to_noise(data,(-1.23,300.4),noise_region=[(-400,-240.5),(123.4,213.5)],remove_background=(123.4,213.5))

dnplab.processing.helpers.smooth(data, dim='t2', window_length=11, polyorder=3)

Apply Savitzky-Golay Smoothing

This function is a wrapper for the savgol_filter function from the SciPy Python package (https://scipy.org/). For a more detailed description see the SciPy documentation for this function.

Parameters:
  • data (DNPData) -- Data object

  • dim (str) -- Dimension to perform smoothing

  • window_length (int) -- Length of window (number of coefficients)

  • polyorder (int) -- Polynomial order to fit samples

Returns:

Data with Savitzky-Golay smoothing applied

Return type:

data (DNPData)
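
Examples

A minimal usage sketch (window length and polynomial order are illustrative; it is assumed that smooth is exposed at the package level like the other processing helpers shown in this section):

>>> # apply Savitzky-Golay smoothing along the "t2" dimension
>>> data = dnp.smooth(data, dim="t2", window_length=11, polyorder=3)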

Integration

dnplab.processing.integration.cumulative_integrate(data, dim='f2', regions=None)

Cumulative integration

Parameters:
  • data (DNPData) -- Data object

  • dim (str) -- Dimension to perform cumulative integration

  • regions (None, list) -- List of tuples to specify range of integration [(min, max), ...]

Returns:

cumulative sum of data

Return type:

data

Examples

Example showing cumulative integration of a Lorentzian function

>>> import numpy as np
>>> from matplotlib.pylab import *
>>> import dnplab as dnp
>>> x = np.r_[-10:10:1000j]
>>> y = dnp.math.lineshape.lorentzian(x,0,1)
>>> data = dnp.DNPData(y, ['f2'], [x])
>>> data_int = dnp.cumulative_integrate(data)
>>> figure()
>>> dnp.plot(data)
>>> dnp.plot(data_int)
>>> show()

dnplab.processing.integration.integrate(data, dim='f2', regions=None)

Integrate data along given dimension. If no region is given, the integral is calculated over the entire range.

Parameters:
  • data (DNPData) -- Data object

  • dim (str) -- Dimension to perform integration. Default is "f2"

  • regions (None, list) -- List of tuples defining the region to integrate

Returns:

Integrals of data. If multiple regions are given the first value corresponds to the first region, the second value corresponds to the second region, etc.

Return type:

data (DNPData)

Examples

Integrate entire data region:

>>> data = dnp.integrate(data)

Integrate single peak/region:

>>> data = dnp.integrate(data, regions=[(4, 5)])

Integrate two regions:

>>> data = dnp.integrate(data, regions=[(1.1, 2.1), (4.5, 4.9)])

Offset

dnplab.processing.offset.background(data, dim='t2', deg=0, regions=None, func: callable | None = None, **kwargs)

Remove background from data

Parameters:
  • data (DNPData) -- Data object

  • dim (str) -- Dimension to perform background fit

  • deg (int) -- Polynomial degree

  • regions (None, list) -- Background regions, by default entire region is background corrected. Regions can be specified as a list of tuples [(min, max), ...]

  • func (optional callable) -- The function used to fit the background

  • **kwargs -- arguments for fitting function

Returns:

Background fit

Return type:

DNPData

Examples

0th-order background fit (DC offset)

>>> bg = dnp.background(data)

Background with a given fit function

>>> bg = dnp.background(data, dim = 'tau', func= dnp.relaxation.general_exp, p0=(1,-1,900))

dnplab.processing.offset.remove_background(data, dim='t2', deg=0, regions=None, func: callable | None = None, **kwargs)

Remove polynomial background from data

Parameters:
  • data (DNPData) -- Data object

  • dim (str) -- Dimension to perform background fit

  • deg (int) -- Polynomial degree

  • regions (None, list) -- Background regions, by default the entire region is used to calculate the background correction. Regions can be specified as a list of tuples [(min, max), ...]

  • func (optional callable) -- The function used to fit the background

  • **kwargs -- arguments for fitting function

Returns:

Background corrected data

Return type:

data (DNPData)

Examples

0th-order background removal (DC offset)

>>> data = dnp.remove_background(data)

Background removal with a given fit function

>>> data = dnp.remove_background(data, dim = 'tau', func= dnp.relaxation.general_exp, p0=(1,-1,900))

Phase

dnplab.processing.phase.autophase(inputData, dim='f2', reference_slice=False, deriv=1, gamma=0.005, full_proc_attr=True)

Autophase function to phase spectral data

The autophase function is based on: Chen et al., "An efficient algorithm for automatic phase correction of NMR spectra based on entropy minimization", JMR 158 (2002) 164-168

By default, the autophase function will phase all spectra independently along dim (default dimension is 'f2'). This can be changed by providing a specific dataset as a reference slice.

Parameters:
  • data (DNPData) -- DNPData object containing NMR spectra

  • dim (str) -- Dimension to autophase, default = 'f2'

  • reference_slice (bool, tuple) -- Tuple of (dimension, index) to select the reference slice. The default value is False

  • deriv (int) -- Integer for derivative value (1-4, default=1)

  • gamma (float) -- Scaling factor for phase optimization (default=5e-3)

  • full_proc_attr (bool) -- When True, the phase tuple for each individually autophased spectrum is added. Default is True.

Returns:

Phased data. The function adds the attribute "autophase" = {pivot, deriv, dim, (phasetuples)}. The phase tuples are only included if reference_slice is True

Return type:

data (DNPData)
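
Examples

A minimal usage sketch (the reference slice and its dimension name are illustrative; it is assumed that autophase is exposed at the package level like the other processing functions shown in this section):

>>> # phase all spectra independently along "f2"
>>> data = dnp.autophase(data)

>>> # use the spectrum at index 0 of a hypothetical "Power" dimension as the reference slice
>>> data = dnp.autophase(data, reference_slice=("Power", 0))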

dnplab.processing.phase.autophase_dep(data, dim='f2', method='search', reference_range=None, pts_lim=None, order='zero', pivot=0, delta=0, phase=None, reference_slice=None, force_positive=False)

Automatically phase correct data, or apply manual phase correction

This function is deprecated and will be removed from DNPLab on 10/01/2023

Parameters:
  • data (DNPData) -- Data object to autophase

  • dim (str) -- Dimension to autophase

  • method (str) -- Autophase method, "search" by default

  • reference_range --

  • pts_lim --

  • order --

  • pivot --

  • delta --

  • phase --

  • reference_slice --

  • force_positive --

Returns:

Autophased data, including attrs "phase0" for order="zero", and "phase1" if order="first"

Return type:

DNPData

\[
\begin{aligned}
\mathrm{data} &= \exp(-1j * \mathrm{phase}) \\
\mathrm{phase(arctan)} &= \arctan(\mathrm{sum}(\mathrm{data.imag}) / \mathrm{sum}(\mathrm{data.real})) \\
\mathrm{phase(search)} &= \mathrm{argmax}(\mathrm{sum}(\mathrm{phased\_real}^{2}) / \mathrm{sum}(\mathrm{phased\_imag}^{2})) \\
\mathrm{phased\_real} &= \mathrm{data.real} * \exp(-1j * \mathrm{phase}) \\
\mathrm{phased\_imag} &= \mathrm{data.imag} * \exp(-1j * \mathrm{phase})
\end{aligned}
\]

dnplab.processing.phase.phase(data, dim='f2', p0=0.0, p1=0.0, pivot=None)

Apply phase correction to DNPData object

Parameters:
  • data (DNPData) -- Data object to phase

  • dim (str) -- Dimension to phase, default is "f2"

  • p0 (float, array) -- Zero order phase correction (degree, 0 - 360)

  • p1 (float, array) -- First order phase correction (degree, 0 - 360)

  • pivot (float) -- Pivot point for first order phase correction

Returns:

Phased data, including new attributes "p0", "p1", and "pivot"

Return type:

data (DNPData)

Examples

0th-order phase correction of a 1D or 2D DNPData object. If the DNPData object contains multiple 1D spectra, the same phase p0 is applied to all spectra.

>>> data = dnp.phase(data,p0)

0th-order phase correction of all spectra of a 2D DNPData object using a (numpy) array p0 of phases:

>>> p0 = np.array([15, 15, 5, -5, 0])
>>> data = dnp.phase(data, p0)

Note

A 2D DNPData object can be phased either using a single p0 (p1) value or using an array of phases. When using an array, the size of the phase array has to be equal to the number of spectra to be phased.

dnplab.processing.phase.phase_cycle(data, dim, receiver_phase)

Apply phase cycle to data

Parameters:
  • data (DNPData) -- Data to process

  • dim (str) -- dimension to perform phase cycle

  • receiver_phase (numpy.array, list) -- Receiver Phase 0 (x), 1 (y), 2 (-x), 3 (-y)

Returns:

Data object with phase cycle applied

Return type:

data (DNPData)
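
Examples

A minimal usage sketch (the dimension name and receiver phase list are illustrative; it is assumed that phase_cycle is exposed at the package level like dnp.phase):

>>> # combine a 4-step phase cycle stored along a hypothetical "cycle" dimension
>>> data = dnp.phase_cycle(data, dim="cycle", receiver_phase=[0, 1, 2, 3])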

Reporting