Dataset.to_zarr(store=None, chunk_store=None, mode=None, synchronizer=None, group=None, encoding=None, compute=True, consolidated=False, append_dim=None)

Write dataset contents to a zarr group.


Experimental: The Zarr backend is new and experimental. Please report any unexpected behavior via GitHub issues.

  • store (MutableMapping, str or Path, optional) – Store or path to directory in file system.

  • chunk_store (MutableMapping, str or Path, optional) – Store or path to directory in file system only for Zarr array chunks. Requires zarr-python v2.4.0 or later.

  • mode ({"w", "w-", "a", None}, optional) – Persistence mode: "w" means create (overwrite if exists); "w-" means create (fail if exists); "a" means override existing variables (create if does not exist). If append_dim is set, mode can be omitted, as it is internally set to "a". Otherwise, mode defaults to "w-".

  • synchronizer (object, optional) – Array synchronizer providing locking for safe concurrent writes.

  • group (str, optional) – Group path. (a.k.a. path in zarr terminology.)

  • encoding (dict, optional) – Nested dictionary with variable names as keys and dictionaries of variable-specific encodings as values, e.g., {"my_variable": {"dtype": "int16", "scale_factor": 0.1}, ...}

  • compute (bool, optional) – If True, write immediately; otherwise return a dask.delayed.Delayed object that can be computed later to perform the write.

  • consolidated (bool, optional) – If True, apply zarr’s consolidate_metadata function to the store after writing.

  • append_dim (hashable, optional) – If set, the dimension along which the data will be appended. All other dimensions on overridden variables must remain the same size.




Zarr chunking behavior:

If chunks are specified in the encoding argument or in a DataArray's encoding attribute, those chunks are used. Otherwise, if a DataArray is backed by a dask array, it is written with its dask chunks. If no chunks are found by either route, Zarr uses its own heuristics to choose automatic chunk sizes.