atoti.Session.read_spark()

Session.read_spark(dataframe, /, *, table_name, keys=(), partitioning=None, default_values={})

Read a Spark DataFrame into a table.

Parameters:
  • dataframe (object) – The DataFrame to load.

  • table_name (str) – The name of the table to create.

  • keys (Collection[str]) –

    The columns that will become keys of the table.

    Inserting a row whose key values match those of an existing row replaces the existing row with the new one.

    Key columns cannot have None as their default_value.

  • partitioning (str | None) –

    The description of how the data will be split across partitions of the table.

    Default rules:

    • Only non-joined tables are automatically partitioned.

    • Tables are automatically partitioned by hashing their key columns. If there are no key columns, all the dictionarized columns are hashed.

    • Joined tables can only use a sub-partitioning of the table referencing them.

    • Automatic partitioning is done modulo the number of available cores.

    Example

    hash4(country) splits the data across 4 partitions based on the hash of each row's country value.

  • default_values (Mapping[str, ConstantValue | None]) – Mapping from column name to column default_value.

Return type:

Table