
xarray.core.resample.DatasetResample.sum

DatasetResample.sum(dim=None, *, skipna=None, min_count=None, keep_attrs=None, **kwargs)

Reduce this Dataset’s data by applying sum along some dimension(s).

Parameters:
  • dim (str, Iterable of Hashable, "..." or None, default: None) – Name of dimension(s) along which to apply sum, e.g. dim="x" or dim=["x", "y"]. If None, will reduce over the Resample dimensions. If "...", will reduce over all dimensions.

  • skipna (bool or None, optional) – If True, skip missing values (as marked by NaN). By default, only skips missing values for float dtypes; other dtypes either do not have a sentinel missing value (int) or skipna=True has not been implemented (object, datetime64 or timedelta64).

  • min_count (int or None, optional) – The required number of valid values to perform the operation. If fewer than min_count non-NA values are present the result will be NA. Only used if skipna is set to True or defaults to True for the array’s dtype. Changed in version 0.17.0: if specified on an integer array and skipna=True, the result will be a float array.

  • keep_attrs (bool or None, optional) – If True, attrs will be copied from the original object to the new one. If False, the new object will be returned without attributes.

  • **kwargs (Any) – Additional keyword arguments passed on to the appropriate array function for calculating sum on this object’s data. These could include dask-specific kwargs like split_every.

Returns:

reduced (Dataset) – New Dataset with sum applied to its data and the indicated dimension(s) removed.

See also

numpy.sum, dask.array.sum, Dataset.sum

Resampling and grouped operations

User guide on resampling operations.

Notes

Use the flox package to significantly speed up resampling computations, especially with dask arrays. Xarray will use flox by default if it is installed. Pass flox-specific keyword arguments in **kwargs. The default is method="cohorts", which generalizes best; method="blockwise" might work better for your problem. See the flox documentation for more.

Non-numeric variables will be removed prior to reducing.

Examples

>>> da = xr.DataArray(
...     np.array([1, 2, 3, 0, 2, np.nan]),
...     dims="time",
...     coords=dict(
...         time=("time", pd.date_range("2001-01-01", freq="ME", periods=6)),
...         labels=("time", np.array(["a", "b", "c", "c", "b", "a"])),
...     ),
... )
>>> ds = xr.Dataset(dict(da=da))
>>> ds
<xarray.Dataset> Size: 120B
Dimensions:  (time: 6)
Coordinates:
  * time     (time) datetime64[ns] 48B 2001-01-31 2001-02-28 ... 2001-06-30
    labels   (time) <U1 24B 'a' 'b' 'c' 'c' 'b' 'a'
Data variables:
    da       (time) float64 48B 1.0 2.0 3.0 0.0 2.0 nan
>>> ds.resample(time="3ME").sum()
<xarray.Dataset> Size: 48B
Dimensions:  (time: 3)
Coordinates:
  * time     (time) datetime64[ns] 24B 2001-01-31 2001-04-30 2001-07-31
Data variables:
    da       (time) float64 24B 1.0 5.0 2.0

Use skipna to control whether NaNs are ignored.

>>> ds.resample(time="3ME").sum(skipna=False)
<xarray.Dataset> Size: 48B
Dimensions:  (time: 3)
Coordinates:
  * time     (time) datetime64[ns] 24B 2001-01-31 2001-04-30 2001-07-31
Data variables:
    da       (time) float64 24B 1.0 5.0 nan

Specify min_count for finer control over when NaNs are ignored.

>>> ds.resample(time="3ME").sum(skipna=True, min_count=2)
<xarray.Dataset> Size: 48B
Dimensions:  (time: 3)
Coordinates:
  * time     (time) datetime64[ns] 24B 2001-01-31 2001-04-30 2001-07-31
Data variables:
    da       (time) float64 24B nan 5.0 nan