xarray.open_dataarray

xarray.open_dataarray(filename_or_obj, group=None, decode_cf=True, mask_and_scale=True, decode_times=True, autoclose=False, concat_characters=True, decode_coords=True, engine=None, chunks=None, lock=None, cache=None, drop_variables=None)

Open a DataArray from a netCDF file containing a single data variable.

This is designed to read netCDF files with only one data variable. If multiple data variables are present, a ValueError is raised.
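
For illustration, a minimal sketch of the intended usage, writing a single-variable file and reading it back (the file name example.nc is hypothetical):

    import numpy as np
    import xarray as xr

    # Write a named, single-variable DataArray to disk.
    da = xr.DataArray(np.arange(6).reshape(2, 3), dims=("x", "y"), name="temperature")
    da.to_netcdf("example.nc")

    # Reopen the lone data variable as a DataArray. A file holding two or
    # more data variables would raise a ValueError here instead.
    reloaded = xr.open_dataarray("example.nc")
    print(reloaded.name)  # temperature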

Parameters:

filename_or_obj : str, Path, file or xarray.backends.*DataStore

Strings and Paths are interpreted as a path to a netCDF file or an OpenDAP URL and opened with python-netCDF4, unless the filename ends with .gz, in which case the file is gunzipped and opened with scipy.io.netcdf (only netCDF3 supported). File-like objects are opened with scipy.io.netcdf (only netCDF3 supported).
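
As a sketch of the file-like path (assuming scipy is installed; to_netcdf called without a target returns netCDF3 bytes, which is the format the scipy reader accepts):

    import io

    import numpy as np
    import xarray as xr

    da = xr.DataArray(np.arange(4.0), dims="x", name="pressure")

    # Bytes from to_netcdf() are written by the scipy backend (netCDF3),
    # so a BytesIO wrapper around them can be opened directly.
    buffer = io.BytesIO(da.to_netcdf())
    reloaded = xr.open_dataarray(buffer)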

group : str, optional

Path to the netCDF4 group in the given file to open (only works for netCDF4 files).
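
A sketch of reading from a group, assuming the netCDF4 library is available (the file and group names are hypothetical):

    import numpy as np
    import xarray as xr

    da = xr.DataArray(np.zeros(3), dims="x", name="humidity")

    # Write into a named group of a netCDF4 file, then open that group.
    da.to_netcdf("grouped.nc", group="forecast", engine="netcdf4")
    reloaded = xr.open_dataarray("grouped.nc", group="forecast")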

decode_cf : bool, optional

Whether to decode variables in the file, assuming they were saved according to CF conventions.

mask_and_scale : bool, optional

If True, replace array values equal to _FillValue with NA and scale values according to the formula original_values * scale_factor + add_offset, where _FillValue, scale_factor and add_offset are taken from variable attributes (if they exist). If the _FillValue or missing_value attribute contains multiple values a warning will be issued and all array values matching one of the multiple values will be replaced by NA.
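
Written out by hand, the decoding applied when mask_and_scale=True amounts to the following sketch (the raw values and attributes here are made up for illustration):

    import numpy as np

    # Packed on-disk integers plus CF packing attributes.
    raw = np.array([1, 2, -999, 4], dtype="int16")
    attrs = {"_FillValue": -999, "scale_factor": 0.5, "add_offset": 10.0}

    # Mask the fill value, then unpack with the CF formula.
    decoded = np.where(raw == attrs["_FillValue"], np.nan, raw)
    decoded = decoded * attrs["scale_factor"] + attrs["add_offset"]
    print(decoded)  # [10.5 11.  nan 12. ]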

decode_times : bool, optional

If True, decode times encoded in the standard NetCDF datetime format into datetime objects. Otherwise, leave them encoded as numbers.
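
A sketch contrasting the two settings (the file name is hypothetical, and the exact units string depends on how the times were encoded):

    import numpy as np
    import pandas as pd
    import xarray as xr

    da = xr.DataArray(
        np.arange(3.0),
        dims="time",
        coords={"time": pd.date_range("2000-01-01", periods=3)},
        name="wind",
    )
    da.to_netcdf("times.nc")

    decoded = xr.open_dataarray("times.nc")  # time coordinate as datetime64
    raw = xr.open_dataarray("times.nc", decode_times=False)  # time as numbers
    print(raw["time"].attrs["units"])  # e.g. "days since 2000-01-01"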

autoclose : bool, optional

If True, automatically close files to avoid an OSError from too many open files. However, this option doesn’t work with streams, e.g., BytesIO.

concat_characters : bool, optional

If True, concatenate along the last dimension of character arrays to form string arrays. Dimensions will only be concatenated over (and removed) if they have no corresponding variable and if they are only used as the last dimension of character arrays.

decode_coords : bool, optional

If True, decode the ‘coordinates’ attribute to identify coordinates in the resulting dataset.

engine : {‘netcdf4’, ‘scipy’, ‘pydap’, ‘h5netcdf’, ‘pynio’}, optional

Engine to use when reading files. If not provided, the default engine is chosen based on available dependencies, with a preference for ‘netcdf4’.
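
For example, to force a particular backend rather than relying on the dependency-based default (reusing the hypothetical example.nc from the first sketch):

    import xarray as xr

    # Explicitly select the netCDF4 backend; engine="scipy" would instead
    # restrict reading to netCDF3 files.
    da = xr.open_dataarray("example.nc", engine="netcdf4")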

chunks : int or dict, optional

If chunks is provided, it is used to load the new dataset into dask arrays.
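
A sketch of chunked (lazy) loading, assuming dask is installed and reusing the hypothetical example.nc from above; dict keys are dimension names, while a single int applies the same chunk size to every dimension:

    import xarray as xr

    # Load lazily as a dask array with chunks of up to 1000 along "x".
    da = xr.open_dataarray("example.nc", chunks={"x": 1000})
    print(da.chunks)  # populated once the variable is dask-backed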

lock : False, True or threading.Lock, optional

If chunks is provided, this argument is passed on to dask.array.from_array(). By default, a global lock is used when reading data from netCDF files with the netcdf4 and h5netcdf engines to avoid issues with concurrent access when using dask’s multithreaded backend.

cache : bool, optional

If True, cache data loaded from the underlying datastore in memory as NumPy arrays when accessed to avoid reading from the underlying datastore multiple times. Defaults to True unless you specify the chunks argument to use dask, in which case it defaults to False. Does not change the behavior of coordinates corresponding to dimensions, which always load their data from disk into a pandas.Index.

drop_variables : str or iterable, optional

A variable or list of variables to exclude from being parsed from the dataset. This may be useful to drop variables with problems or inconsistent values.
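
Since this function requires exactly one data variable, drop_variables can be used to discard extras so that one remains, as in this sketch (the file and variable names are hypothetical):

    import numpy as np
    import xarray as xr

    ds = xr.Dataset({"keep": ("x", np.arange(3)), "extra": ("x", np.ones(3))})
    ds.to_netcdf("two_vars.nc")

    # Opening directly would raise a ValueError (two data variables);
    # dropping "extra" leaves a single variable to return.
    da = xr.open_dataarray("two_vars.nc", drop_variables="extra")
    print(da.name)  # keep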

See also

open_dataset

Notes

This is designed to be fully compatible with DataArray.to_netcdf. Saving with DataArray.to_netcdf and then loading with this function will produce an identical result.

All parameters are passed directly to xarray.open_dataset. See that documentation for further details.
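
A sketch of that round trip, checked with xarray’s own testing helper (the file name is hypothetical):

    import numpy as np
    import xarray as xr

    original = xr.DataArray(np.arange(5.0), dims="t", name="signal")
    original.to_netcdf("roundtrip.nc")

    # assert_identical compares values, dims, names and attributes.
    restored = xr.open_dataarray("roundtrip.nc")
    xr.testing.assert_identical(original, restored)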