atoti.CsvLoad#
- final class atoti.CsvLoad#
The description of a CSV file load.
Example
>>> import csv
>>> from pathlib import Path
>>> file_path = directory / "largest-cities.csv"
>>> with open(file_path, "w") as csv_file:
...     writer = csv.writer(csv_file)
...     writer.writerows(
...         [
...             ("city", "area", "country", "population"),
...             ("Tokyo", "Kantō", "Japan", 14_094_034),
...             ("Johannesburg", "Gauteng", "South Africa", 4_803_262),
...             ("Madrid", "Community of Madrid", "Spain", 3_223_334),
...         ]
...     )
Using columns to drop the population column and rename and reorder the remaining ones:

>>> csv_load = tt.CsvLoad(
...     file_path,
...     columns={"city": "City", "area": "Region", "country": "Country"},
... )
>>> session.tables.infer_data_types(csv_load)
{'City': 'String', 'Region': 'String', 'Country': 'String'}
Creating a table and loading data into it from a headerless CSV file:
>>> file_path = directory / "largest-cities-headerless.csv"
>>> with open(file_path, "w") as csv_file:
...     writer = csv.writer(csv_file)
...     writer.writerows(
...         [
...             ("Tokyo", "Kantō", "Japan", 14_094_034),
...             ("Johannesburg", "Gauteng", "South Africa", 4_803_262),
...             ("Madrid", "Community of Madrid", "Spain", 3_223_334),
...         ]
...     )
>>> csv_load = tt.CsvLoad(
...     file_path,
...     columns=["City", "Area", "Country", "Population"],
... )
>>> data_types = session.tables.infer_data_types(csv_load)
>>> data_types
{'City': 'String', 'Area': 'String', 'Country': 'String', 'Population': 'int'}
>>> table = session.create_table(
...     "Example",
...     data_types=data_types,
...     keys={"Country"},
... )
>>> table.load(csv_load)
>>> table.head().sort_index()
                      City                 Area  Population
Country
Japan                Tokyo                Kantō    14094034
South Africa  Johannesburg              Gauteng     4803262
Spain               Madrid  Community of Madrid     3223334
See also
The other DataLoad implementations.
- array_separator: str | None = None#
The character separating array elements.
If not None, any field containing this separator will be parsed as an array.
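To picture the behavior: with array_separator=";", a field such as "300.5;400.0;350.0" is loaded as a numeric array rather than a string. Here is a minimal, atoti-free sketch of that splitting, using a hypothetical "prices" column (the file layout and column names are illustrative, not from the atoti documentation):

```python
import csv
import io

# Hypothetical CSV content: the "prices" column stores arrays whose
# elements are joined with ";" (the configured array separator).
raw = "product,prices\nTV,300.5;400.0;350.0\nphone,600.0;700.0\n"

rows = list(csv.DictReader(io.StringIO(raw)))

# With array_separator=";", a loader would split such fields into arrays:
prices = [[float(x) for x in row["prices"].split(";")] for row in rows]
print(prices)  # [[300.5, 400.0, 350.0], [600.0, 700.0]]
```

In CsvLoad itself, passing array_separator=";" would be enough; the splitting is done by the loader.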
- client_side_encryption: ClientSideEncryptionConfig | None = None#
- columns: Mapping[str, str] | Sequence[str] = {}#
The collection used to name, rename, or filter the CSV file columns.
- If an empty collection is passed, the CSV file must have a header and its column names must match the Table column names.
- If a non-empty Mapping is passed, the CSV file must have a header and the mapping keys must be column names of the CSV file. Columns of the CSV file absent from the mapping keys will not be loaded. The mapping values correspond to the Table column names. The other attributes of this class accepting column names expect to be passed values of this mapping, not keys.
- If a non-empty Sequence is passed, the CSV file must not have a header and the sequence elements are used as the Table column names, in the order of the columns in the CSV file.
- date_patterns: Mapping[str, str] = {}#
A column name to date pattern mapping that can be used when the built-in date parsers fail to recognize the formatted dates in the CSV file.
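As a hedged sketch: suppose a "Date" column stores day-first dates such as "21/09/2024", which an ISO-oriented built-in parser would not recognize. The mapping below uses a Java-style pattern (dd/MM/yyyy); the exact pattern dialect is an assumption here, so check the atoti documentation. The strptime call is only a pure-Python sanity check of the sample data, not what CsvLoad does internally:

```python
from datetime import datetime

# Hypothetical sample values from a "Date" column in the CSV file.
sample_dates = ["21/09/2024", "05/01/2023"]

# Mapping that would be passed to CsvLoad (pattern syntax assumed Java-style):
date_patterns = {"Date": "dd/MM/yyyy"}

# Validate the samples with the equivalent Python strptime format:
parsed = [datetime.strptime(d, "%d/%m/%Y").date() for d in sample_dates]
print(parsed)
```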
- path: Path | str#
The path to the CSV file to load.
.gz, .tar.gz, and .zip files containing compressed CSV(s) are also supported.
The path can also be a glob pattern (e.g. "path/to/directory/*.csv").
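For example, with one CSV file per month, a single glob pattern could load them all at once. This sketch only demonstrates which files such a pattern matches (the directory layout and file names are hypothetical):

```python
import tempfile
from pathlib import Path

# Hypothetical layout: one CSV per month in the same directory.
directory = Path(tempfile.mkdtemp())
for month in ("2024-01", "2024-02", "2024-03"):
    (directory / f"sales-{month}.csv").write_text("city,amount\nTokyo,10\n")

# A pattern like this could be passed to CsvLoad instead of a single file path:
pattern = f"{directory}/sales-*.csv"
matched = sorted(directory.glob("sales-*.csv"))
print(len(matched))  # 3
```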
- process_quotes: bool | None = True#
Whether double quotes should be processed to follow the official CSV specification:
- True: each field may or may not be enclosed in double quotes (however, some programs, such as Microsoft Excel, do not use double quotes at all). If fields are not enclosed in double quotes, then double quotes may not appear inside the fields. A double quote appearing inside a field must be escaped by preceding it with another double quote. Fields containing line breaks, double quotes, or commas should be enclosed in double quotes.
- False: all double quotes within a field will be treated as regular characters, following Excel's behavior. In this mode, fields are expected not to be enclosed in double quotes and cannot contain line breaks.
- None: the behavior will be inferred in a preliminary partial load.