Archive

meme.archive.get(pv, from_time=None, to_time=None, timeout=5.0)

Gets history data from the archive service.

Parameters:
  • pv (str or list of str) – A PV (or list of PVs) to get history data for. The archive engine can perform processing on the data before returning it to you. To process data, you can ask for a PV wrapped in a processing function. For example, to bin the data into 3600 second-wide bins, and take the mean of each bin, ask for “mean_3600(YOUR:PV:NAME:HERE)”. For full documentation of all the processing operators, see the EPICS Archiver Appliance User Guide.

  • from_time (str or datetime, optional) – The start time for the data. Can be a string, like “1 hour ago” or “now”, or a python datetime object. If you use a datetime object, and do not specify UTC or GMT for the timezone, a timezone of ‘US/Pacific’ is implied.

  • to_time (str or datetime, optional) – The end time for the data. The same rules as from_time apply.

  • timeout (float, optional) – An amount of time to wait (in seconds) before cancelling the request. The default timeout is 5.0 seconds.
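
For example, a call using relative time strings and a call using a server-side processing operator might look like the following sketch (the PV names are hypothetical):

import meme.archive
from datetime import datetime

# Raw history for the last hour, using relative time strings:
history = meme.archive.get("SOME:PV:NAME", from_time="1 hour ago", to_time="now")

# Hourly means computed by the archive engine, using naive datetime objects
# (interpreted as US/Pacific, per the rules above):
binned = meme.archive.get("mean_3600(SOME:PV:NAME)",
                          from_time=datetime(2024, 1, 1, 8, 0),
                          to_time=datetime(2024, 1, 1, 17, 0))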

Returns:

A data structure with the following fields:

  • secondsPastEpoch (list of int): The UNIX timestamp for each history point.

  • values (list of float): The values for each history point.

  • nanoseconds (list of int): The number of nanoseconds past the timestamp for each history point.

  • severity (list of int): The severity value for the PV at each history point.

  • status (list of int): The status value for the PV at each history point.

If more than one PV was requested, a list of dicts will be returned. Each item in the list has the following fields:

  • pvName (str): The PV that this structure represents.

  • value (dict): A dict with the data for the PV. Has the following fields:

    • value (dict): Holds the data for the PV. Same fields as the single-PV case, documented above.

    • labels (list of str): The names of the fields in the value structure.

Return type:

dict or list of dicts
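
The following sketch shows one way to read the returned structure for a single PV and for a list of PVs, assuming plain dict-style access as documented above (the PV names are hypothetical):

import meme.archive

# Single PV: the fields documented above are available directly.
data = meme.archive.get("SOME:PV:NAME", from_time="1 hour ago", to_time="now")
for seconds, value in zip(data["secondsPastEpoch"], data["values"]):
    print(seconds, value)

# Multiple PVs: a list of per-PV structures is returned.
multi = meme.archive.get(["PV:ONE", "PV:TWO"], from_time="1 hour ago", to_time="now")
for item in multi:
    points = item["value"]["value"]  # same fields as the single-PV case
    print(item["pvName"], len(points["values"]), "points")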

meme.archive.get_dataframe(*args, **kwargs)

Gets history data from the archive service as a pandas.DataFrame.

All arguments are the same as for meme.archive.get(). This is equivalent to:

meme.archive.convert_to_dataframe(meme.archive.get(*args, **kwargs))

Parameters:
  • pv (str or list of str) – A PV (or list of PVs) to get history data for. The archive engine can perform processing on the data before returning it to you. To process data, you can ask for a PV wrapped in a processing function. For example, to bin the data into 3600 second-wide bins, and take the mean of each bin, ask for “mean_3600(YOUR:PV:NAME:HERE)”. For full documentation of all the processing operators, see the EPICS Archiver Appliance User Guide.

  • from_time (str or datetime, optional) – The start time for the data. Can be a string, like “1 hour ago” or “now”, or a python datetime object. If you use a datetime object, and do not specify UTC or GMT for the timezone, a timezone of ‘US/Pacific’ is implied.

  • to_time (str or datetime, optional) – The end time for the data. The same rules as from_time apply.

  • timeout (float, optional) – An amount of time to wait (in seconds) before cancelling the request. The default timeout is 5.0 seconds.

Returns:

A pandas DataFrame object with a column for the values of each PV.

The DataFrame is indexed on timestamp, with nanosecond-level precision. All data is joined on the timestamps, and filled where there are gaps, so every column has data for every timestamp.

Return type:

pandas.DataFrame
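
For example, fetching several PVs directly into a DataFrame and then working with it using ordinary pandas operations might look like this sketch (the PV names are hypothetical, and the columns are assumed to be named after the PVs):

import meme.archive

df = meme.archive.get_dataframe(["PV:ONE", "PV:TWO"],
                                from_time="1 day ago", to_time="now")
print(df.head())                  # one column per PV, indexed by timestamp
print(df["PV:ONE"].describe())    # ordinary pandas operations from here on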

meme.archive.convert_to_dataframe(archive_data)

Converts archive data returned by meme.archive.get() into a pandas DataFrame.

Parameters:

archive_data (dict or list of dicts) – A dictionary, or a list of dictionaries, of archive data in the format returned by meme.archive.get().

Returns:

A pandas DataFrame object with a column for the values of each PV.

The DataFrame is indexed on timestamp, with nanosecond-level precision. All data is joined on the timestamps, and filled where there are gaps, so every column has data for every timestamp.

Return type:

pandas.DataFrame
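
A sketch of converting data that was already fetched with meme.archive.get() (the PV names are hypothetical):

import meme.archive

raw = meme.archive.get(["PV:ONE", "PV:TWO"], from_time="6 hours ago", to_time="now")
df = meme.archive.convert_to_dataframe(raw)
print(df.columns)    # one column per PV
print(df.index)      # timestamps with nanosecond-level precision

Converting separately like this keeps the raw structure available alongside the DataFrame, which is the main reason to call it instead of meme.archive.get_dataframe().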