atoti.AggregateCache
- final class atoti.AggregateCache
Aggregate cache of a Cube.
Example
>>> from dataclasses import replace
>>> table = session.create_table("Example", data_types={"id": "int"})
>>> cube = session.create_cube(table)
>>> m = cube.measures
There is a default cache:
>>> cube.aggregate_cache
AggregateCache(capacity=100, measures=None)
Increasing the capacity and only caching contributors.COUNT aggregates:
>>> cube.aggregate_cache = tt.AggregateCache(
...     capacity=200,
...     measures={m["contributors.COUNT"]},
... )
>>> cube.aggregate_cache
AggregateCache(capacity=200, measures=frozenset({m['contributors.COUNT']}))
Changing back to caching all the measures:
>>> cube.aggregate_cache = replace(cube.aggregate_cache, measures=None)
>>> cube.aggregate_cache
AggregateCache(capacity=200, measures=None)
Disabling caching but keeping sharing enabled:
>>> cube.aggregate_cache = replace(cube.aggregate_cache, capacity=0)
>>> cube.aggregate_cache
AggregateCache(capacity=0, measures=None)
Disabling caching and sharing:
>>> del cube.aggregate_cache
>>> print(cube.aggregate_cache)
None
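Caching can be re-enabled afterwards by assigning a new cache; a minimal sketch, where the capacity of 500 is an arbitrary illustrative value:
>>> cube.aggregate_cache = tt.AggregateCache(capacity=500, measures=None)
>>> cube.aggregate_cache
AggregateCache(capacity=500, measures=None)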
- capacity: int
The capacity of the cache.
If greater than 0, this value corresponds to the maximum number of {location: measure} pairs that the cache can hold.
If 0, caching is disabled but sharing stays enabled: concurrent queries will share their computed aggregates, but the aggregates will not be stored to be reused in later queries.