Array measures

atoti is designed to handle array data efficiently.

⚠️ This is an advanced tutorial; make sure to learn the basics first.

Loading arrays from CSV

atoti can load arrays from CSV files. The separator for array elements must be passed to the read_csv method, and it must differ from the separator used between CSV columns. All the arrays in a column must have the same length.
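As a sketch of the expected file layout (plain Python, not atoti; the row values below are hypothetical, modeled on the store shown in the next cell), columns are comma-separated while the elements of an array column use the ";" passed as array_sep:

```python
# Hypothetical CSV row: columns use "," while the trailing array column
# uses ";" between its elements (the value passed as array_sep).
csv_line = "0,Europe,France,Paris,3.469516,-0.465;10.587;68.436"
columns = csv_line.split(",")
pnl_array = [float(x) for x in columns[-1].split(";")]
print(pnl_array)  # [-0.465, 10.587, 68.436]
```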

[1]:
import atoti as tt

session = tt.create_session()
[2]:
store = session.read_csv(
    "data/arrays.csv", keys=["Trade ID"], store_name="Store with arrays", array_sep=";"
)
store.head()
[2]:
Continent Country City PnL PnL array Int array
Trade ID
0 Europe France Paris 3.469516 doubleVector[10]{-0.46511920664962714, ...} intVector[10]{1, ...}
1 Europe France Paris 92.851919 doubleVector[10]{10.587927935842618, ...} intVector[10]{11, ...}
2 Europe France Lyon 425.866214 doubleVector[10]{68.43655816283167, ...} intVector[10]{19, ...}
3 Europe France Bordeaux 454.127060 doubleVector[10]{-149.61978026139195, ...} intVector[10]{14, ...}
4 Europe UK London 187.015613 doubleVector[10]{11.449089651224922, ...} intVector[10]{13, ...}
[3]:
cube = session.create_cube(store, "Cube")

Default array aggregations

As with scalar measures, atoti provides the default SUM and MEAN aggregations on array measures. They are applied element-wise across the arrays.

[4]:
lvl = cube.levels
m = cube.measures
cube.query(m["PnL array.SUM"], levels=lvl["Continent"])
[4]:
PnL array.SUM
Continent
Asia doubleVector[10]{-14.696414629727794, ...}
Europe doubleVector[10]{6.711474315042935, ...}
[5]:
cube.query(m["PnL array.MEAN"], levels=lvl["Continent"])
[5]:
PnL array.MEAN
Continent
Asia doubleVector[10]{-2.939282925945559, ...}
Europe doubleVector[10]{1.1185790525071557, ...}
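The element-wise behavior can be sketched with NumPy (illustrative only, not how atoti computes it): aggregating two per-row arrays produces one array of the same length.

```python
import numpy as np

# Illustrative sketch (not atoti code): element-wise SUM and MEAN over
# two rows, each carrying one array.
arrays = np.array([[1.0, 2.0, 3.0], [5.0, 6.0, 7.0]])
print(arrays.sum(axis=0))   # element-wise sum: [6, 8, 10]
print(arrays.mean(axis=0))  # element-wise mean: [3, 4, 5]
```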

Additional array functions

Arithmetic operations

[6]:
m["PnL +10 array"] = m["PnL array.SUM"] + 10.0
m["PnL -10 array"] = m["PnL array.SUM"] - 10.0
m["PnL x10 array"] = m["PnL array.SUM"] * 10.0
m["PnL /10 array"] = m["PnL array.SUM"] / 10.0
cube.query(
    m["PnL +10 array"], m["PnL -10 array"], m["PnL x10 array"], m["PnL /10 array"]
)
[6]:
PnL +10 array PnL -10 array PnL x10 array PnL /10 array
0 doubleVector[10]{2.0150596853151406, ...} doubleVector[10]{-17.98494031468486, ...} doubleVector[10]{-79.8494031468486, ...} doubleVector[10]{-0.798494031468486, ...}

Sum, mean, min or max of all the array elements

[7]:
m["PnL sum"] = tt.array.sum(m["PnL array.SUM"])
m["PnL mean"] = tt.array.mean(m["PnL array.SUM"])
m["min PnL"] = tt.array.min(m["PnL array.SUM"])
m["max PnL"] = tt.array.max(m["PnL array.SUM"])
cube.query(
    m["PnL sum"], m["PnL mean"], m["min PnL"], m["max PnL"], levels=lvl["Continent"]
)
[7]:
PnL sum PnL mean min PnL max PnL
Continent
Asia 357.87 35.79 -222.82 232.61
Europe 49.10 4.91 -86.08 195.82
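On a plain Python list, these reductions correspond to the built-in functions below (an illustrative sketch with made-up values, not atoti code):

```python
# Illustrative sketch (not atoti code): reducing an array to a scalar.
values = [3.2, 1.0, 8.0, 9.0]
print(sum(values))                # total of the elements
print(sum(values) / len(values))  # mean of the elements
print(min(values), max(values))   # 1.0 9.0
```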

Length

[8]:
m["PnL array length"] = tt.array.len(m["PnL array.SUM"])
cube.query(m["PnL array length"])
[8]:
PnL array length
0 10

Variance and Standard Deviation

[9]:
m["PnL array variance"] = tt.array.var(m["PnL array.SUM"])
m["PnL array standard deviation"] = tt.array.std(m["PnL array.SUM"])
cube.query(
    m["PnL array variance"], m["PnL array standard deviation"], levels=lvl["Continent"]
)
[9]:
PnL array variance PnL array standard deviation
Continent
Asia 16,149.34 127.08
Europe 6,952.28 83.38
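As a reminder of the relationship between the two statistics, here is a NumPy sketch (not atoti code; note that NumPy defaults to the population variance with ddof=0, and atoti's convention may differ):

```python
import numpy as np

# Illustrative sketch (not atoti code): the standard deviation is the
# square root of the variance of the array elements.
values = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
print(values.var())  # 4.0
print(values.std())  # 2.0
```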

Sort

[10]:
m["Sorted PnL array"] = tt.array.sort(m["PnL array.SUM"])
cube.query(m["Sorted PnL array"], levels=lvl["Continent"])
[10]:
Sorted PnL array
Continent
Asia doubleVector[10]{-222.82147745688468, ...}
Europe doubleVector[10]{-86.07955036608166, ...}

Quantile

[11]:
m["95 quantile"] = tt.array.quantile(m["PnL array.SUM"], 0.95, mode="simple")
m["95 exc quantile"] = tt.array.quantile(m["PnL array.SUM"], 0.95, mode="exc")
m["95 inc quantile"] = tt.array.quantile(m["PnL array.SUM"], 0.95, mode="inc")
m["95 centered quantile"] = tt.array.quantile(m["PnL array.SUM"], 0.95, mode="centered")
cube.query(
    m["95 quantile"],
    m["95 exc quantile"],
    m["95 inc quantile"],
    m["95 centered quantile"],
    levels=[lvl["Continent"], lvl["Country"]],
)
[11]:
95 quantile 95 exc quantile 95 inc quantile 95 centered quantile
Continent Country
Asia China 195.22 256.01 201.30 256.01
India 22.35 25.27 22.64 25.27
Europe France 128.17 163.73 131.73 163.73
UK 112.24 146.72 115.69 146.72
[12]:
m["95 linear quantile"] = tt.array.quantile(
    m["PnL array.SUM"], 0.95, mode="inc", interpolation="linear"
)
m["95 lower quantile"] = tt.array.quantile(
    m["PnL array.SUM"], 0.95, mode="inc", interpolation="lower"
)
m["95 higher quantile"] = tt.array.quantile(
    m["PnL array.SUM"], 0.95, mode="inc", interpolation="higher"
)
m["95 nearest quantile"] = tt.array.quantile(
    m["PnL array.SUM"], 0.95, mode="inc", interpolation="nearest"
)
m["95 midpoint quantile"] = tt.array.quantile(
    m["PnL array.SUM"], 0.95, mode="inc", interpolation="midpoint"
)
cube.query(
    m["95 linear quantile"],
    m["95 lower quantile"],
    m["95 higher quantile"],
    m["95 nearest quantile"],
    m["95 midpoint quantile"],
)
[12]:
95 linear quantile 95 lower quantile 95 higher quantile 95 nearest quantile 95 midpoint quantile
0 299.29 290.08 306.82 306.82 298.45
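The interpolation names happen to match NumPy's quantile methods, which can serve as a rough reference for how they pick a value when the quantile falls between two elements (a sketch, not atoti's implementation; the method parameter requires NumPy ≥ 1.22, older versions call it interpolation):

```python
import numpy as np

# Illustrative sketch (not atoti code): compare interpolation methods on
# a small sorted array where the 0.95 quantile falls between 30 and 40.
values = np.array([10.0, 20.0, 30.0, 40.0])
for method in ["linear", "lower", "higher", "nearest", "midpoint"]:
    print(method, np.quantile(values, 0.95, method=method))
# lower -> 30.0, higher -> 40.0, midpoint -> 35.0
```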

n greatest / n lowest

Returns an array with the n greatest/lowest values of another array.

[13]:
m["Top 3 PnL array"] = tt.array.n_greatest(m["PnL array.SUM"], 3)
m["Bottom 2 PnL array"] = tt.array.n_lowest(m["PnL array.SUM"], 2)
cube.query(m["Top 3 PnL array"], m["Bottom 2 PnL array"])
[13]:
Top 3 PnL array Bottom 2 PnL array
0 doubleVector[3]{290.0808017776343, ...} doubleVector[2]{-135.20079639258478, ...}
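On a plain Python list, the same result can be sketched with the standard library's heapq module (illustrative only, not atoti code):

```python
import heapq

# Illustrative sketch (not atoti code): n greatest / n lowest elements.
values = [3.2, 1.0, 8.0, 9.0, 4.5, 7.0, 6.0, 18.0]
print(heapq.nlargest(3, values))   # [18.0, 9.0, 8.0]
print(heapq.nsmallest(2, values))  # [1.0, 3.2]
```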

nth greatest value / nth lowest value

Returns the nth greatest or lowest value of an array.

[14]:
m["3rd greatest PnL"] = tt.array.nth_greatest(m["PnL array.SUM"], 3)
m["2nd lowest PnL"] = tt.array.nth_lowest(m["PnL array.SUM"], 2)
cube.query(m["3rd greatest PnL"], m["2nd lowest PnL"])
[14]:
3rd greatest PnL 2nd lowest PnL
0 195.35 -135.20
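A plain-Python sketch of the same idea (not atoti code): sort, then pick the element at position n − 1.

```python
# Illustrative sketch (not atoti code): nth greatest / nth lowest value.
values = [3.2, 1.0, 8.0, 9.0, 4.5, 7.0, 6.0, 18.0]
print(sorted(values, reverse=True)[2])  # 3rd greatest: 8.0
print(sorted(values)[1])                # 2nd lowest: 3.2
```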

Element at index

Extract the element at a given index:

[15]:
m["First element"] = m["PnL array.SUM"][0]
cube.query(m["First element"], m["PnL array.SUM"])
[15]:
First element PnL array.SUM
0 -7.98 doubleVector[10]{-7.984940314684859, ...}

With the create_parameter_hierarchy function, it is possible to create a hierarchy corresponding to the indices of the array. This hierarchy can then be used to “slice” the array and create a measure that depends on the selected index.

[16]:
cube.create_parameter_hierarchy("Index", list(range(0, 10)))
m["PnL at index"] = m["PnL array.SUM"][lvl["Index"]]
cube.query(m["PnL at index"], levels=lvl["Index"])
[16]:
PnL at index
Index
0 -7.98
1 -36.82
2 -135.20
3 30.37
4 195.35
5 306.82
6 96.07
7 -308.90
8 290.08
9 -22.82
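Conceptually, this slicing amounts to a per-member lookup into the aggregated array, which can be sketched in plain Python (not atoti code; the values below are hypothetical):

```python
# Illustrative sketch (not atoti code): slicing an array measure by an
# index hierarchy is a per-member lookup into the aggregated array.
pnl_array = [-7.98, -36.82, -135.20, 30.37]
pnl_at_index = {i: pnl_array[i] for i in range(len(pnl_array))}
print(pnl_at_index[2])  # -135.2
```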

You can also build non-integer hierarchies and map each member to its index in the hierarchy using the index_measure argument:

[17]:
from datetime import date, timedelta

cube.create_parameter_hierarchy(
    "Dates",
    [date(2020, 1, 1) + timedelta(days=x) for x in range(0, 10)],
    index_measure="Date index",
)
m["PnL at date"] = m["PnL array.SUM"][m["Date index"]]
cube.query(m["Date index"], m["PnL at date"], levels=lvl["Dates"])
[17]:
Date index PnL at date
Dates
2020-01-01 0 -7.98
2020-01-02 1 -36.82
2020-01-03 2 -135.20
2020-01-04 3 30.37
2020-01-05 4 195.35
2020-01-06 5 306.82
2020-01-07 6 96.07
2020-01-08 7 -308.90
2020-01-09 8 290.08
2020-01-10 9 -22.82

When the indices need to be in an arbitrary order or range, they can also be provided manually as a list.

[18]:
cube.create_parameter_hierarchy(
    "Custom dates",
    [date(2020, 1, 1) + timedelta(days=x) for x in range(0, 10)],
    indices=[9, 8, 7, 6, 5, 0, 1, 2, 3, 4],
    index_measure="Custom date index",
)
m["PnL at custom date"] = m["PnL array.SUM"][m["Custom date index"]]
cube.query(m["Custom date index"], m["PnL at custom date"], levels=lvl["Custom dates"])
[18]:
Custom date index PnL at custom date
Custom dates
2020-01-01 9 -22.82
2020-01-02 8 290.08
2020-01-03 7 -308.90
2020-01-04 6 96.07
2020-01-05 5 306.82
2020-01-06 0 -7.98
2020-01-07 1 -36.82
2020-01-08 2 -135.20
2020-01-09 3 30.37
2020-01-10 4 195.35
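The member-to-index mapping built above can be sketched in plain Python (not atoti code): each hierarchy member is paired with its manually provided index rather than its position.

```python
from datetime import date, timedelta

# Illustrative sketch (not atoti code): map each hierarchy member to a
# manually provided index instead of its position in the list.
members = [date(2020, 1, 1) + timedelta(days=x) for x in range(10)]
indices = [9, 8, 7, 6, 5, 0, 1, 2, 3, 4]
index_of = dict(zip(members, indices))
print(index_of[date(2020, 1, 6)])  # 0
```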

Array slices

Extract a slice of the array:

[19]:
m["First 2 elements"] = m["PnL array.SUM"][0:2]
cube.query(m["First 2 elements"], m["PnL array.SUM"])
[19]:
First 2 elements PnL array.SUM
0 doubleVector[2]{-7.984940314684859, ...} doubleVector[10]{-7.984940314684859, ...}

Loading a DataFrame with lists

atoti can load a pandas DataFrame containing NumPy arrays and Python lists.

[20]:
import pandas as pd
import numpy as np

df = pd.DataFrame(
    {
        "Index": [0, 1, 2],
        "NumPy array": [
            np.array([3.2, 1.0, 8, 9, 4.5, 7, 6, 18]),
            np.array([4.2, 4.0, 4, 9, 4.5, 8, 7, 8]),
            np.array([12, 1.0, 8, 9, 4.5, 7, 6, 18]),
        ],
        "Python list": [
            [3.2, 1.0, 8, 9, 4.5, 7, 6, 18],
            [4.2, 4.0, 4, 9, 4.5, 8, 7, 8],
            [12, 1.0, 8, 9, 4.5, 7, 6, 18],
        ],
    }
)
df
[20]:
Index NumPy array Python list
0 0 [3.2, 1.0, 8.0, 9.0, 4.5, 7.0, 6.0, 18.0] [3.2, 1.0, 8, 9, 4.5, 7, 6, 18]
1 1 [4.2, 4.0, 4.0, 9.0, 4.5, 8.0, 7.0, 8.0] [4.2, 4.0, 4, 9, 4.5, 8, 7, 8]
2 2 [12.0, 1.0, 8.0, 9.0, 4.5, 7.0, 6.0, 18.0] [12, 1.0, 8, 9, 4.5, 7, 6, 18]
[21]:
pd_store = session.read_pandas(df, "pandas")
pd_store
[21]:
  • pandas
    • Index
      • key: False
      • nullable: True
      • type: long
    • NumPy array
      • key: False
      • nullable: True
      • type: atoti_numpy_double[][ ]
    • Python list
      • key: False
      • nullable: True
      • type: atoti_list_double[][,]
[22]:
pd_store.head()
[22]:
Index NumPy array Python list
0 0 doubleVector[8]{3.2, ...} doubleVector[8]{3.2, ...}
1 1 doubleVector[8]{4.2, ...} doubleVector[8]{4.2, ...}
2 2 doubleVector[8]{12.0, ...} doubleVector[8]{12.0, ...}