xarray.DataArray

class xarray.DataArray(data=<NA>, coords=None, dims=None, name=None, attrs=None, indexes=None, fastpath=False)[source]

N-dimensional array with labeled coordinates and dimensions.

DataArray provides a wrapper around numpy ndarrays that uses labeled dimensions and coordinates to support metadata-aware operations. The API is similar to that for the pandas Series or DataFrame, but DataArray objects can have any number of dimensions, and their contents have fixed data types.

Additional features over raw numpy arrays:

  • Apply operations over dimensions by name: x.sum('time').

  • Select or assign values by integer location (like numpy): x[:10] or by label (like pandas): x.loc['2014-01-01'] or x.sel(time='2014-01-01').

  • Mathematical operations (e.g., x - y) vectorize across multiple dimensions (known in numpy as “broadcasting”) based on dimension names, regardless of their original order.

  • Keep track of arbitrary metadata in the form of a Python dictionary: x.attrs

  • Convert to a pandas Series: x.to_series().

Getting items from or doing mathematical operations with a DataArray always returns another DataArray.
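For instance, the labeled operations above can be sketched as follows (the array, coordinate values, and variable names here are illustrative, not part of this reference):

```python
import numpy as np
import xarray as xr

# Illustrative 2-D array with named dimensions "time" and "space".
x = xr.DataArray(
    np.arange(6.0).reshape(2, 3),
    dims=["time", "space"],
    coords={"time": ["2014-01-01", "2014-01-02"]},
)

# Reduce over a dimension by name rather than by axis number.
totals = x.sum("time")

# Select by label (pandas-style) or by integer position (numpy-style).
first_day = x.sel(time="2014-01-01")
same_day = x[0]

# Arithmetic broadcasts by dimension name, regardless of axis order.
y = xr.DataArray(np.arange(3.0), dims=["space"])
diff = x - y  # y is broadcast across "time"
```

Each of these results is itself a DataArray, as noted above.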

Parameters
  • data (array_like) – Values for this array. Must be a numpy.ndarray, an ndarray-like object, or castable to an ndarray. If a self-described xarray or pandas object, attempts are made to use this array’s metadata to fill in other unspecified arguments. A view of the array’s data is used instead of a copy if possible.

  • coords (sequence or dict of array_like, optional) – Coordinates (tick labels) to use for indexing along each dimension. The following notations are accepted:

    • mapping {dimension name: array-like}

    • sequence of tuples that are valid arguments for xarray.Variable():

      • (dims, data)

      • (dims, data, attrs)

      • (dims, data, attrs, encoding)

    Additionally, it is possible to define a coord whose name does not match the dimension name, or a coord based on multiple dimensions, with one of the following notations:

    • mapping {coord name: DataArray}

    • mapping {coord name: Variable}

    • mapping {coord name: (dimension name, array-like)}

    • mapping {coord name: (tuple of dimension names, array-like)}

  • dims (hashable or sequence of hashable, optional) – Name(s) of the data dimension(s). Must be either a hashable (only for 1D data) or a sequence of hashables with length equal to the number of dimensions. If this argument is omitted, dimension names default to ['dim_0', ..., 'dim_n'].

  • name (str or None, optional) – Name of this array.

  • attrs (dict-like or None, optional) – Attributes to assign to the new instance. By default, an empty attribute dictionary is initialized.
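As a sketch of the coords notations listed above (the coordinate names "x2" and "labels" are illustrative, not part of the API):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(
    np.zeros((2, 3)),
    dims=["x", "time"],
    coords={
        # {dimension name: array-like} -- an indexed dimension coordinate.
        "time": ["a", "b", "c"],
        # {coord name: (dimension name, array-like)} -- a non-dimension coord.
        "x2": ("x", [10, 20]),
        # {coord name: (tuple of dimension names, array-like)} -- a coord
        # based on multiple dimensions.
        "labels": (("x", "time"), np.ones((2, 3))),
    },
)
```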

Examples

Create data:

>>> np.random.seed(0)
>>> temperature = 15 + 8 * np.random.randn(2, 2, 3)
>>> lon = [[-99.83, -99.32], [-99.79, -99.23]]
>>> lat = [[42.25, 42.21], [42.63, 42.59]]
>>> time = pd.date_range("2014-09-06", periods=3)
>>> reference_time = pd.Timestamp("2014-09-05")

Initialize a DataArray with multiple dimensions:

>>> da = xr.DataArray(
...     data=temperature,
...     dims=["x", "y", "time"],
...     coords=dict(
...         lon=(["x", "y"], lon),
...         lat=(["x", "y"], lat),
...         time=time,
...         reference_time=reference_time,
...     ),
...     attrs=dict(
...         description="Ambient temperature.",
...         units="degC",
...     ),
... )
>>> da
<xarray.DataArray (x: 2, y: 2, time: 3)>
array([[[29.11241877, 18.20125767, 22.82990387],
        [32.92714559, 29.94046392,  7.18177696]],

       [[22.60070734, 13.78914233, 14.17424919],
        [18.28478802, 16.15234857, 26.63418806]]])
Coordinates:
    lon             (x, y) float64 -99.83 -99.32 -99.79 -99.23
    lat             (x, y) float64 42.25 42.21 42.63 42.59
  * time            (time) datetime64[ns] 2014-09-06 2014-09-07 2014-09-08
    reference_time  datetime64[ns] 2014-09-05
Dimensions without coordinates: x, y
Attributes:
    description:  Ambient temperature.
    units:        degC

Find out where the coldest temperature was:

>>> da.isel(da.argmin(...))
<xarray.DataArray ()>
array(7.18177696)
Coordinates:
    lon             float64 -99.32
    lat             float64 42.21
    time            datetime64[ns] 2014-09-08
    reference_time  datetime64[ns] 2014-09-05
Attributes:
    description:  Ambient temperature.
    units:        degC
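Continuing in the same spirit, a minimal sketch of label-based selection and reduction on a similarly constructed array (repeated standalone here so it can run on its own):

```python
import numpy as np
import pandas as pd
import xarray as xr

np.random.seed(0)
da = xr.DataArray(
    15 + 8 * np.random.randn(2, 2, 3),
    dims=["x", "y", "time"],
    coords={"time": pd.date_range("2014-09-06", periods=3)},
)

# Label-based selection along the indexed "time" dimension.
day_one = da.sel(time="2014-09-06")  # shape (2, 2); "time" is dropped

# Reduce over a named dimension; the remaining dims are kept.
mean_over_time = da.mean("time")
```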
__init__(data=<NA>, coords=None, dims=None, name=None, attrs=None, indexes=None, fastpath=False)[source]

Initialize self. See help(type(self)) for accurate signature.

Methods

__init__([data, coords, dims, name, attrs, …])

Initialize self.

all([dim, axis])

Reduce this DataArray’s data by applying all along some dimension(s).

any([dim, axis])

Reduce this DataArray’s data by applying any along some dimension(s).

argmax([dim, axis, keep_attrs, skipna])

Index or indices of the maximum of the DataArray over one or more dimensions.

argmin([dim, axis, keep_attrs, skipna])

Index or indices of the minimum of the DataArray over one or more dimensions.

argsort([axis, kind, order])

Returns the indices that would sort this array.

assign_attrs(*args, **kwargs)

Assign new attrs to this object.

assign_coords([coords])

Assign new coordinates to this object.

astype(dtype, *[, order, casting, subok, …])

Copy of the xarray object, with data cast to a specified type.

bfill(dim[, limit])

Fill NaN values by propagating values backward.

broadcast_equals(other)

Two DataArrays are broadcast equal if they are equal after broadcasting them against each other such that they have the same dimensions.

broadcast_like(other[, exclude])

Broadcast this DataArray against another Dataset or DataArray.

chunk([chunks, name_prefix, token, lock])

Coerce this array’s data into a dask array with the given chunks.

clip([min, max, keep_attrs])

Return an array whose values are limited to [min, max].

close()

Release any resources linked to this object.

coarsen([dim, boundary, side, coord_func, …])

Coarsen object.

combine_first(other)

Combine two DataArray objects, with union of coordinates.

compute(**kwargs)

Manually trigger loading of this array’s data from disk or a remote source into memory and return a new array.

conj()

Complex-conjugate all elements.

conjugate()

Return the complex conjugate, element-wise.

copy([deep, data])

Returns a copy of this array.

count([dim, axis])

Reduce this DataArray’s data by applying count along some dimension(s).

cumprod([dim, axis, skipna])

Apply cumprod along some dimension of DataArray.

cumsum([dim, axis, skipna])

Apply cumsum along some dimension of DataArray.

cumulative_integrate([coord, datetime_unit])

Integrate cumulatively along the given coordinate using the trapezoidal rule.

curvefit(coords, func[, reduce_dims, …])

Curve fitting optimization for arbitrary functions.

diff(dim[, n, label])

Calculate the n-th order discrete difference along given axis.

differentiate(coord[, edge_order, datetime_unit])

Differentiate the array with the second order accurate central differences.

dot(other[, dims])

Perform dot product of two DataArrays along their shared dims.

drop([labels, dim, errors])

Backward compatible method based on drop_vars and drop_sel

drop_duplicates(dim[, keep])

Returns a new DataArray with duplicate dimension values removed.

drop_isel([indexers])

Drop index positions from this DataArray.

drop_sel([labels, errors])

Drop index labels from this DataArray.

drop_vars(names, *[, errors])

Returns an array with dropped variables.

dropna(dim[, how, thresh])

Returns a new array with dropped labels for missing values along the provided dimension.

equals(other)

True if two DataArrays have the same dimensions, coordinates and values; otherwise False.

expand_dims([dim, axis])

Return a new object with an additional axis (or axes) inserted at the corresponding position in the array shape.

ffill(dim[, limit])

Fill NaN values by propagating values forward.

fillna(value)

Fill missing values in this object.

from_cdms2(variable)

Convert a cdms2.Variable into an xarray.DataArray

from_dict(d)

Convert a dictionary into an xarray.DataArray

from_iris(cube)

Convert an iris.cube.Cube into an xarray.DataArray

from_series(series[, sparse])

Convert a pandas.Series into an xarray.DataArray.

get_axis_num(dim)

Return axis number(s) corresponding to dimension(s) in this array.

get_index(key)

Get an index for a dimension, with fall-back to a default RangeIndex

groupby(group[, squeeze, restore_coord_dims])

Returns a GroupBy object for performing grouped operations.

groupby_bins(group, bins[, right, labels, …])

Returns a GroupBy object for performing grouped operations.

head([indexers])

Return a new DataArray whose data is given by the first n values along the specified dimension(s).

identical(other)

Like equals, but also checks the array name and attributes, and attributes on all coordinates.

idxmax([dim, skipna, fill_value, keep_attrs])

Return the coordinate label of the maximum value along a dimension.

idxmin([dim, skipna, fill_value, keep_attrs])

Return the coordinate label of the minimum value along a dimension.

integrate([coord, datetime_unit, dim])

Integrate along the given coordinate using the trapezoidal rule.

interp([coords, method, assume_sorted, kwargs])

Multidimensional interpolation of variables.

interp_like(other[, method, assume_sorted, …])

Interpolate this object onto the coordinates of another object, filling out of range values with NaN.

interpolate_na([dim, method, limit, …])

Fill in NaNs by interpolating according to different methods.

isel([indexers, drop, missing_dims])

Return a new DataArray whose data is given by integer indexing along the specified dimension(s).

isin(test_elements)

Tests each value in the array for whether it is in test_elements.

isnull([keep_attrs])

Test each value in the array for whether it is a missing value.

item(*args)

Copy an element of an array to a standard Python scalar and return it.

load(**kwargs)

Manually trigger loading of this array’s data from disk or a remote source into memory and return this array.

map_blocks(func[, args, kwargs, template])

Apply a function to each block of this DataArray.

max([dim, axis, skipna])

Reduce this DataArray’s data by applying max along some dimension(s).

mean([dim, axis, skipna])

Reduce this DataArray’s data by applying mean along some dimension(s).

median([dim, axis, skipna])

Reduce this DataArray’s data by applying median along some dimension(s).

min([dim, axis, skipna])

Reduce this DataArray’s data by applying min along some dimension(s).

notnull([keep_attrs])

Test each value in the array for whether it is not a missing value.

pad([pad_width, mode, stat_length, …])

Pad this array along one or more dimensions.

persist(**kwargs)

Trigger computation in constituent dask arrays

pipe(func, *args, **kwargs)

Apply func(self, *args, **kwargs)

polyfit(dim, deg[, skipna, rcond, w, full, cov])

Least squares polynomial fit.

prod([dim, axis, skipna])

Reduce this DataArray’s data by applying prod along some dimension(s).

quantile(q[, dim, interpolation, …])

Compute the qth quantile of the data along the specified dimension.

query([queries, parser, engine, missing_dims])

Return a new data array indexed along the specified dimension(s), where the indexers are given as strings containing Python expressions to be evaluated against the values in the array.

rank(dim[, pct, keep_attrs])

Ranks the data.

reduce(func[, dim, axis, keep_attrs, keepdims])

Reduce this array by applying func along some dimension(s).

reindex([indexers, method, tolerance, copy, …])

Conform this object onto a new set of indexes, filling in missing values with fill_value.

reindex_like(other[, method, tolerance, …])

Conform this object onto the indexes of another object, filling in missing values with fill_value.

rename([new_name_or_name_dict])

Returns a new DataArray with renamed coordinates or a new name.

reorder_levels([dim_order])

Rearrange index levels using input order.

resample([indexer, skipna, closed, label, …])

Returns a Resample object for performing resampling operations.

reset_coords([names, drop])

Given names of coordinates, reset them to become variables.

reset_index(dims_or_levels[, drop])

Reset the specified index(es) or multi-index level(s).

roll([shifts, roll_coords])

Roll this array by an offset along one or more dimensions.

rolling([dim, min_periods, center, keep_attrs])

Rolling window object.

rolling_exp([window, window_type])

Exponentially-weighted moving window.

round(*args, **kwargs)

Round elements of this array to the given number of decimals.

searchsorted(v[, side, sorter])

Find indices where elements of v should be inserted into this array to maintain its sorted order.

sel([indexers, method, tolerance, drop])

Return a new DataArray whose data is given by selecting index labels along the specified dimension(s).

set_close(close)

Register the function that releases any resources linked to this object.

set_index([indexes, append])

Set DataArray (multi-)indexes using one or more existing coordinates.

shift([shifts, fill_value])

Shift this array by an offset along one or more dimensions.

sortby(variables[, ascending])

Sort object by labels or values (along an axis).

squeeze([dim, drop, axis])

Return a new object with squeezed data.

stack([dimensions])

Stack any number of existing dimensions into a single new dimension.

std([dim, axis, skipna])

Reduce this DataArray’s data by applying std along some dimension(s).

sum([dim, axis, skipna])

Reduce this DataArray’s data by applying sum along some dimension(s).

swap_dims([dims_dict])

Returns a new DataArray with swapped dimensions.

tail([indexers])

Return a new DataArray whose data is given by the last n values along the specified dimension(s).

thin([indexers])

Return a new DataArray whose data is given by every n-th value along the specified dimension(s).

to_cdms2()

Convert this array into a cdms2.Variable

to_dataframe([name, dim_order])

Convert this array and its coordinates into a tidy pandas.DataFrame.

to_dataset([dim, name, promote_attrs])

Convert a DataArray to a Dataset.

to_dict([data])

Convert this xarray.DataArray into a dictionary following xarray naming conventions.

to_index()

Convert this variable to a pandas.Index.

to_iris()

Convert this array into a iris.cube.Cube

to_masked_array([copy])

Convert this array into a numpy.ma.MaskedArray

to_netcdf(*args, **kwargs)

Write DataArray contents to a netCDF file.

to_pandas()

Convert this array into a pandas object with the same shape.

to_series()

Convert this array into a pandas.Series.

to_unstacked_dataset(dim[, level])

Unstack DataArray expanding to Dataset along a given level of a stacked coordinate.

transpose(*dims[, transpose_coords, …])

Return a new DataArray object with transposed dimensions.

unify_chunks()

Unify chunk size along all chunked dimensions of this DataArray.

unstack([dim, fill_value, sparse])

Unstack existing dimensions corresponding to MultiIndexes into multiple new dimensions.

var([dim, axis, skipna])

Reduce this DataArray’s data by applying var along some dimension(s).

weighted(weights)

Weighted operations.

where(cond[, other, drop])

Filter elements from this object according to a condition.

Attributes

T

attrs

Dictionary storing arbitrary metadata with this array.

chunks

Block dimensions for this array’s data or None if it’s not a dask array.

coords

Dictionary-like container of coordinate arrays.

data

The array’s data as a dask or numpy array

dims

Tuple of dimension names associated with this array.

dtype

encoding

Dictionary of format-specific settings for how this array should be serialized.

imag

indexes

Mapping of pandas.Index objects used for label based indexing.

loc

Attribute for location based indexing like pandas.

name

The name of this array.

nbytes

ndim

real

shape

size

sizes

Ordered mapping from dimension names to lengths.

values

The array’s data as a numpy.ndarray

variable

Low level interface to the Variable object for this DataArray.

xindexes

Mapping of xarray Index objects used for label based indexing.
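A brief sketch of how several of these attributes relate in practice (the array and coordinate labels below are illustrative):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(
    np.arange(6).reshape(2, 3),
    dims=["x", "time"],
    coords={"time": [10, 20, 30]},
)

# Shape-related attributes mirror the underlying numpy array.
dims = da.dims      # ('x', 'time')
sizes = da.sizes    # mapping from dimension name to length
shape = da.shape    # (2, 3)

# .values always yields a numpy.ndarray; .data may be a dask or numpy array.
arr = da.values

# .loc gives pandas-style label indexing along indexed dimensions.
row = da.loc[:, 10]  # all of "x" at time label 10
```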