0.7.0 (August 16, 2022)#
Added#
- DirectQuery plugins.
- Ability to change the `default_value` of a `Column` (see the sketch after this list).
- `AggregateProvider` to speed up queries by pre-aggregating some measures on specified levels.
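A minimal sketch of changing a column's `default_value`; the table and column names are hypothetical, and setting the attribute on an existing column is how the new ability is read here:

```python
import atoti as tt

session = tt.Session()
table = session.create_table(
    "Sales",
    types={"Product": "String", "Quantity": "int"},
    # Default values can still be provided at creation time...
    default_values={"Quantity": 0},
)

# ...and, new in 0.7.0, the default value of an existing column can be changed.
table["Quantity"].default_value = -1
```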
Functions#
User interface#
- Ability to hide and show table columns.
Changed#
- `Table` is automatically partitioned for improved default performance.
User interface#
- Upgraded Atoti UI to 5.0.15.
Functions and methods#
The following changes are BREAKING.
- The signatures of the `atoti.agg` functions have been clarified.
- The signatures of the functions creating measures in the `atoti`, `atoti.array`, and `atoti.math` modules, and of the `Session.read_*()` and `Table.load_*()` methods, have been changed to make more parameters positional-only.
- `read_csv()` and `atoti.Table.load_csv()`'s `separator` and `process_quotes` parameters default to `","` and `True` (respectively). `None` can still be passed to force the inference of these arguments in a preliminary partial read of the CSV file.
- `read_csv()` and `atoti.Table.load_csv()` attempt to parse dates in the ISO 8601 format only. To parse dates in other formats, specify their pattern in the `date_patterns` argument (see the sketch after this list).
- `date_shift()` results are not impacted by conditions and filters when executing a query (same behavior as `shift()`).
- `atoti.scope.cumulative()` with a time period window matches pandas and Excel's behavior (issue #396):

  ```diff
   scope=tt.scope.cumulative(
       l["datetime"],
  -    window=("-1D", None),
  +    window=("-2D", None)
   )
  ```

- `delete_scenario()`'s `scenario` parameter has been renamed to `name`.
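For example, with the new defaults, a CSV whose dates are not in ISO 8601 format needs an explicit pattern; a minimal sketch with a hypothetical file and column names:

```python
import atoti as tt

session = tt.Session()

# `separator` and `process_quotes` now default to "," and True;
# pass None to force their inference from a preliminary partial read of the file.
table = session.read_csv(
    "sales.csv",
    table_name="Sales",
    keys=["Date", "Product"],
    # Dates such as "31/12/2022" are no longer parsed automatically:
    # their pattern must be passed explicitly.
    date_patterns={"Date": "dd/MM/yyyy"},
)
```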
Other#
The following changes are BREAKING.
- Different `Table`s with `Column`s of the same name can be joined (issue #655). See the sketch after this list.
  - Saved drillthrough widgets must be recreated as their MDX query has changed to use both the table and column names to uniquely reference a field.
  - `atoti_plus.security.Restrictions` must specify both the table and column names using a tuple.
- The `DataType` class has been replaced with a `Literal` type. The main data types still have constants in `atoti.type`. Nullable types have been removed; set the `default_value` to `None` instead for non-numeric types:

  ```diff
  - session.create_table("Example", types={"String": tt.type.NULLABLE_STRING})
  + session.create_table("Example", types={"String": "String"}, default_values={"String": None})
  ```

- Default values for temporal `Column`s have been changed from `"N/A"` to actual temporal values. See `default_value` for an exhaustive list.
- Passing an `atoti.config.user_content_storage_config.UserContentStorageConfig` pointing to a local H2 database will no longer automatically migrate the database to the H2 v2 format during session startup. To keep using such a database with Atoti 0.7.0, downgrade to Atoti 0.6.6 and start a session configured with this local H2 database so that it is migrated to the v2 format.
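A hedged sketch of the join change (issue #655), assuming the mapping-based `Table.join()` signature of this release; the file, table, and column names are made up:

```python
import atoti as tt

session = tt.Session()
sales = session.read_csv("sales.csv", table_name="Sales", keys=["ID"])
products = session.read_csv("products.csv", table_name="Products", keys=["Name"])

# Both tables can now contain a column with the same name (for instance "Category"):
# fields are uniquely identified by their table name and column name.
sales.join(products, mapping={"Product": "Name"})
```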
Deprecated#
- Scope factory functions `atoti.scope.cumulative()`, `atoti.scope.origin()`, and `atoti.scope.siblings()`. Instantiate their corresponding class instead: `CumulativeScope`, `OriginScope`, and `SiblingsScope` (respectively). See the sketch after this list.
- `value()` function. Use `atoti.agg.single_value()` instead:

  ```diff
  - m["City.VALUE"] = tt.value(table["City"])
  + m["City.VALUE"] = tt.agg.single_value(table["City"])
  ```

- `date_shift()`'s `offset` and `atoti.scope.cumulative()`'s `window` values missing a duration designator (`"P"`).
- `atoti.Cube.query()`, `atoti_query.QueryCube.query()`, and `atoti.Cube.explain_query()`'s `condition` parameter has been deprecated and renamed to `filter`.
- Passing a `Level` to `shift()`'s `on` parameter. Pass a `Hierarchy` instead.
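A hedged sketch of replacing a scope factory function with its class; the table, column, and measure names are hypothetical, and the `CumulativeScope` arguments are assumed to mirror the deprecated factory:

```python
import atoti as tt

session = tt.Session()
table = session.create_table(
    "Sales",
    types={"Date": "LocalDate", "Quantity": "int"},
    keys=["Date"],
)
cube = session.create_cube(table)
l, m = cube.levels, cube.measures

# Deprecated: scope=tt.scope.cumulative(l["Date"])
m["Quantity cumulative"] = tt.agg.sum(
    m["Quantity.SUM"],
    scope=tt.CumulativeScope(l["Date"]),
)
```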
Removed#
The following removals are BREAKING.
- Ability to create a `Hierarchy` on a nullable `Column`. Set the column's `default_value` to something other than `None` before creating the `Hierarchy` instead.
- Implicit required levels. Measures created from `value()`, `Level`, or conditions implicitly added an `OriginScope` on the corresponding level(s) in subsequent calls to `atoti.agg` functions. This undocumented behavior has been removed so these implicit scopes will have to be defined explicitly. For example:

  ```diff
   table = session.create_table(
       "example",
       types={
           "Product": "String",
           "Date": "LocalDate",
           "Price": "int",
       },
       keys=["Product", "Date"],
   )
   cube = session.create_cube(table)
   l, m = cube.levels, cube.measures
   price_column = table["Price"]
  -m["Price"] = tt.agg.sum(tt.value(price_column))
  +m["Price"] = tt.agg.sum(
  +    tt.agg.single_value(price_column),
  +    # Levels corresponding to the key columns of `price_column`'s table.
  +    scope=tt.OriginScope(l["Product"], l["Date"]),
  +)
  ```

  Note: The example above illustrates the required changes but is contrived: `tt.agg.sum(price_column)` would be better in both cases.

  Set the `ATOTI_REQUIRED_LEVELS_WARNING` environment variable to `"True"` to be warned of the creation of measures that used to have required levels (see the sketch after this list).
- `value()`'s `levels` parameter. Use a combination of `atoti.agg.single_value()` and `where()` instead:

  ```diff
  - m["City.VALUE"] = tt.value(table["City"], levels=[l["Country"]])
  + m["City.VALUE"] = tt.where(
  +     l["Country"] != None,
  +     tt.agg.single_value(table["City"]),
  + )
  ```

- Parameter `hierarchized_columns` of table creation methods (`create_table()`, `read_pandas()`, `atoti.Session.read_spark()`, `read_csv()`, `read_parquet()`, `atoti.Session.read_numpy()`, and `atoti.Session.read_sql()`). Create hierarchies in batch with `atoti.hierarchies.Hierarchies.update` instead:

  ```diff
   table = session.create_table(
       "Product",
       types={"Date": "LocalDate", "Product": "String", "Quantity": "double"},
       keys={"Date"},
  -    hierarchized_columns=["Date", "Product"],
   )
   cube = session.create_cube(table, mode="manual")
  +cube.hierarchies.update({name: {name: table[name]} for name in ["Date", "Product"]})
  ```

- `atoti.__version__`. Use `importlib.metadata`'s `version()` like for any other library instead:

  ```diff
   import atoti as tt
  +from importlib.metadata import version

  -atoti_version = tt.__version__
  +atoti_version = version(tt.__name__)
  ```

- `date_shift()`'s `offset` and `atoti.scope.cumulative()`'s `window` support of quarter units. Use `"3M"` instead of `"1Q"`.
- Support for remote content servers. Configure the user content storage with a JDBC `atoti.UserContentStorageConfig.url` instead.
- The custom `Exception` classes have been made private.
- `atoti.Table.loading_report` has been made private.
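The warning mentioned in the implicit required levels item can be enabled through the environment; setting the variable before importing atoti is an assumption made here to be safe about when it is read:

```python
import os

# Assumption: set before atoti is imported so the library sees it wherever it is read.
os.environ["ATOTI_REQUIRED_LEVELS_WARNING"] = "True"

import atoti as tt  # noqa: E402

session = tt.Session()
# Measures that used to rely on implicit required levels will now trigger a warning
# when they are created.
```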
Previously deprecated#
- `create_session()` and `open_query_session()`. Instantiate `Session` and `atoti_query.QuerySession` (respectively) instead.
- `atoti_query.QuerySession`'s `name` attribute.
- `atoti.query` module. Instead, import functions and classes from `atoti` if installed or from `atoti-query` otherwise.
- `atoti.query.create_basic_authentication` and `atoti.query.create_token_authentication`. Use `atoti_query.BasicAuthentication` and `atoti_query.TokenAuthentication` instead.
- `atoti.level.Level.comparator`:
  - Instead of `atoti.comparator.ASCENDING` and `atoti.comparator.DESCENDING`, use `NaturalOrder`.
  - Instead of `atoti.comparator.first_members()`, use `CustomOrder`.
- Passing timeouts as instances of `int`. Use `datetime.timedelta` instead.
- `create_parameter_simulation()`'s `measure_name` and `default_value` parameters. Use the `measures` parameter instead.
- Context value keys `queriesResultLimit.intermediateSize` and `queriesResultLimit.transientSize` in `shared_context`, `query()`, and `query_mdx()`. Use `queriesResultLimit.intermediateLimit` and `queriesResultLimit.transientLimit` (respectively) instead (see the sketch after this list).
- `atoti_kafka.create_deserializer()` and `atoti.Table.load_kafka()`'s `deserializer` parameter. The only supported records are JSON objects with keys matching the `Table`'s columns.
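A sketch of the renamed context value keys set through a cube's shared context; the table and limit values are arbitrary:

```python
import atoti as tt

session = tt.Session()
table = session.create_table("Sales", types={"Product": "String", "Quantity": "int"})
cube = session.create_cube(table)

# Removed keys: queriesResultLimit.intermediateSize and queriesResultLimit.transientSize.
cube.shared_context["queriesResultLimit.intermediateLimit"] = 1_000_000
cube.shared_context["queriesResultLimit.transientLimit"] = 10_000_000
```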
Config#
- `atoti.config.session_config.SessionConfig.aws` and `atoti.config.session_config.SessionConfig.azure`. Use `read_csv()`, `atoti.Table.load_csv()`, `read_parquet()`, or `atoti.Table.load_parquet()`'s `client_side_encryption` parameter instead.
- `atoti_aws.create_aws_key_pair()` and `atoti_aws.create_aws_kms_config()`. Instantiate `atoti_aws.AwsKeyPair` and `atoti_aws.AwsKmsConfig` (respectively) instead.
- `atoti_azure.create_azure_key_pair()`. Instantiate `atoti_azure.AzureKeyPair` instead.
- `Session`'s `certificate_authority` parameter. Use `HttpsConfig`'s `certificate_authority` parameter instead.
- `atoti.LdapConfig.role_mapping` and `atoti.OidcConfig.role_mapping`. Use `atoti_plus.security.LdapSecurity.role_mapping` and `atoti_plus.security.OidcSecurity.role_mapping` (respectively) instead.
- `atoti.LoggingConfig.file_path`. Use `atoti.LoggingConfig.destination` instead (see the sketch after this list).
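A hedged sketch of the logging change, assuming `LoggingConfig` is passed through `Session`'s `logging` parameter; the path is arbitrary:

```python
import atoti as tt

# Removed: tt.LoggingConfig(file_path="./atoti/server.log")
session = tt.Session(
    logging=tt.LoggingConfig(destination="./atoti/server.log"),
)
```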
Fixed#
- `drop()`'s deletion of `None` values (issue #610).