pywgrib2_xr.open_dataset

pywgrib2_xr.open_dataset(filenames, template, chunks=None, preprocess=None, parallel=False, cache=None, save=False, invdir=None)[source]

Opens one or more files as a single dataset.

Parameters
  • filenames (str or sequence of str) – GRIB files to process.

  • template (Template) – Template describing the dataset structure. See pywgrib2_xr.make_template().

  • chunks (int or dict, optional) – Dictionary with keys given by dimension names and values given by chunk sizes. In general, these should divide the dimensions of each dataset. If an int, chunk every dimension by that size. By default, chunks are chosen so that the entire logical dataset is loaded into memory at once.

  • preprocess (callable, optional) – If provided, call this function on each dataset prior to concatenation. The file name from which each dataset was loaded is available as ds.encoding['source'].

  • parallel (bool, optional) – If True, the open and preprocess steps of this function will be performed in parallel using dask.delayed. Default is False.

  • cache (bool, optional) – If True, cache data loaded from the underlying datastore in memory as NumPy arrays when accessed, to avoid reading from the underlying datastore multiple times. Defaults to True unless you specify the chunks argument to use dask, in which case it defaults to False. Does not change the behavior of coordinates corresponding to dimensions, which always load their data from disk into a pandas.Index.

  • save (bool, optional) – If True, save inventory files alongside the data. Default is False.

  • invdir (str, optional) – Inventory location. If None, inventory files are collocated with the data files.

Returns

The newly created dataset.

Return type

xarray.Dataset
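A minimal sketch of how these parameters fit together. The preprocess hook below is a hypothetical example (not part of pywgrib2_xr) that tags each dataset with the file it came from via ds.encoding['source']; the file names and the make_template call are illustrative and assume GRIB files on disk.

```python
import xarray as xr


def tag_source(ds: xr.Dataset) -> xr.Dataset:
    """Hypothetical preprocess hook: record the originating file as an attribute.

    Called on each per-file dataset before concatenation; the source file
    name, when present, is stored by open_dataset in ds.encoding['source'].
    """
    src = ds.encoding.get("source", "unknown")
    return ds.assign_attrs(source_file=src)


# Illustrative usage (requires pywgrib2_xr and the GRIB files shown):
# import pywgrib2_xr as pywgrib2
#
# files = ["gfs.t00z.pgrb2.0p25.f000", "gfs.t00z.pgrb2.0p25.f003"]
# template = pywgrib2.make_template(files[0])
# ds = pywgrib2.open_dataset(
#     files,
#     template,
#     chunks={"time1": 1},   # one chunk per forecast step; dimension name is an assumption
#     preprocess=tag_source,
#     parallel=True,         # open/preprocess each file as a dask.delayed task
# )
```

With parallel=True, each file is opened and preprocessed lazily via dask, which helps when the file list is long; the chunks argument then controls how the concatenated dataset is partitioned for out-of-core computation.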