atoti.Session.read_spark()#

Session.read_spark(dataframe, /, *, table_name, keys=frozenset({}), partitioning=None, default_values={})#

Read a Spark DataFrame into a table.

Parameters:
  • dataframe (object) – The DataFrame to load.

  • table_name (str) – The name of the table to create.

  • keys (Set[str] | Sequence[str]) –

    The columns that will become keys of the table.

    If a Set is given, the table keys will be ordered to match the order of the table's columns.

  • partitioning (str | None) –

    The description of how the data will be split across partitions of the table.

    Default rules:

    • Only non-joined tables are automatically partitioned.

    • Tables are automatically partitioned by hashing their key columns. If there are no key columns, all the dictionarized columns are hashed.

    • Joined tables can only use a sub-partitioning of the table referencing them.

    • Automatic partitioning is done modulo the number of available cores.

    Example

    modulo4(country) splits the data across 4 partitions based on the country column’s dictionarized value.

  • default_values (Mapping[str, bool | int | float | date | datetime | time | Sequence[bool] | Sequence[int] | Sequence[float] | Sequence[str] | str | None]) – Mapping from column name to that column's default value.
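To make the partitioning rules above concrete, here is a minimal plain-Python sketch (independent of atoti and Spark) of how a modulo4(country) partitioning could assign rows. It assumes, for illustration only, that dictionarization maps each distinct value to an integer index in order of first appearance; atoti's internal dictionary encoding may differ.

```python
def dictionarize(values):
    """Map each distinct value to an integer code, in order of first
    appearance, mimicking a column dictionary."""
    codes = {}
    return [codes.setdefault(v, len(codes)) for v in values]


def modulo_partition(values, partitions=4):
    """Assign each row to a partition: dictionarized value modulo
    the number of partitions (here 4, as in modulo4(country))."""
    return [code % partitions for code in dictionarize(values)]


countries = ["France", "Germany", "France", "Italy", "Spain", "Poland"]
# France→0, Germany→1, Italy→2, Spain→3, Poland→4;
# modulo 4 gives partitions [0, 1, 0, 2, 3, 0].
print(modulo_partition(countries))
```

Rows sharing a dictionarized value always land in the same partition, which is what makes key-based queries partition-local.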

Return type:

Table
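A minimal usage sketch, assuming pyspark is installed and a local Spark session can be started; the DataFrame contents, the "Sales" table name, and the "City" key column are hypothetical illustrations, not part of the API:

```python
import atoti as tt
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
# Hypothetical sales data.
dataframe = spark.createDataFrame(
    [("Paris", 100.0), ("Berlin", 80.0)],
    schema=["City", "Price"],
)

session = tt.Session()
# Load the Spark DataFrame into an atoti table keyed on "City".
table = session.read_spark(
    dataframe,
    table_name="Sales",
    keys=["City"],
)
print(table.head())
```

Because keys is passed as a Sequence here, the key order is exactly ["City"]; passing a Set instead would order the keys to match the table's columns, as noted above.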