PySpark Data Sources
Data sources supported by PySpark
Spark data sources
Spark supports various data sources. Some core data sources are built into Spark, while others are developed and maintained by the community. In this post, I am going to explain the core data sources supported by PySpark.
Here is the list of core data sources supported by Spark:
- CSV
- JSON
- Parquet
- ORC
- JDBC/ODBC connections
- Plain text files
Data sources API
Reading Data
The foundation of reading data in Apache Spark is DataFrameReader, which is an interface to load a dataframe from an external data source. We can access it through the read attribute of the Spark session.
spark.read
Once we have a DataFrameReader, we can specify several values based on the type of data source:
- format
- schema
- read mode
- series of options
All of these methods return the DataFrameReader, so they can be chained.
format, schema and options are self-explanatory, and the example below gives a better idea of how they are used. Read mode is the important one to understand so that you can choose an appropriate mode while creating a dataframe.
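As a quick illustration, here is a minimal sketch of the reader pattern with a format, an explicit schema, and a couple of options; the path and the schema are placeholders, not a real dataset.

# minimal sketch of the DataFrameReader pattern; path and schema are placeholders
df = (spark.read
      .format("csv")
      .schema("id INT, name STRING")   # explicit schema as a DDL string (optional)
      .option("header", True)          # a format-specific option
      .option("mode", "PERMISSIVE")    # read mode, explained below
      .load("/some/path/file.csv"))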
Read Modes
Reading from an external data source can run into malformed data. The read mode specifies what Spark should do when it encounters a malformed record. There are three read modes, shown in the sketch after this list.
- permissive -> default
- failFast
- dropMalformed
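As a rough sketch, the same (placeholder) file can be read with each of the three modes; only the behaviour on malformed records differs.

# permissive (default): malformed fields become null and, if configured, the raw
# record is kept in the column named by columnNameOfCorruptRecord
df_permissive = spark.read.option("mode", "PERMISSIVE").csv("/some/path/file.csv")

# failFast: throw an exception as soon as a malformed record is encountered
df_failfast = spark.read.option("mode", "FAILFAST").csv("/some/path/file.csv")

# dropMalformed: silently drop the malformed records
df_dropped = spark.read.option("mode", "DROPMALFORMED").csv("/some/path/file.csv")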
Writing Data
The write operation is very similar to the read operation, except that we use DataFrameWriter instead of DataFrameReader. Since we are writing a dataframe, the DataFrameWriter is accessed on the dataframe itself.
df.write
Once we have a DataFrameWriter, we can specify several values based on the type of data source:
- format
- path
- save mode
- series of options
There is also a set of concepts which are important to know while writing a file-based data source. I will cover these topics in the future as they may need a dedicated post.
- partitionBy
- bucketBy
- sortBy
Similar to read modes, we have write modes when we want to write data. There are four write modes, shown in the sketch after this list.
- append
- overwrite
- ignore
- errorIfExists -> default
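Putting the writer pieces together, here is a minimal sketch; the output path is a placeholder and overwrite is just one possible choice of save mode.

# errorIfExists (the default) would fail if the path already exists,
# so we explicitly choose overwrite here
(df.write
   .format("csv")
   .mode("overwrite")
   .option("header", True)
   .save("/some/output/path"))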
Data sources quick summary
CSV
CSV is a commonly used data format: a comma-separated file in which every line represents a new record. CSV is a tricky file format and runs into a lot of issues in production systems because of the assumptions it requires. To make it flexible, the CSV format provides a lot of options, which I am listing below from the official documentation.
Property Name | Default | Meaning | Scope |
---|---|---|---|
sep | , | Sets a separator for each field and value. This separator can be one or more characters. | read/write |
encoding | UTF-8 | For reading, decodes the CSV files by the given encoding type. For writing, specifies encoding (charset) of saved CSV files. CSV built-in functions ignore this option. | read/write |
quote | " | Sets a single character used for escaping quoted values where the separator can be part of the value. For reading, if you would like to turn off quotations, you need to set not null but an empty string. For writing, if an empty string is set, it uses u0000 (null character). | read/write |
quoteAll | false | A flag indicating whether all values should always be enclosed in quotes. Default is to only escape values containing a quote character. | write |
escape | \ | Sets a single character used for escaping quotes inside an already quoted value. | read/write |
escapeQuotes | true | A flag indicating whether values containing quotes should always be enclosed in quotes. Default is to escape all values containing a quote character. | write |
comment | | Sets a single character used for skipping lines beginning with this character. By default, it is disabled. | read |
header | false | For reading, uses the first line as names of columns. For writing, writes the names of columns as the first line. Note that if the given path is a RDD of Strings, this header option will remove all lines same with the header if exists. CSV built-in functions ignore this option. | read/write |
inferSchema | false | Infers the input schema automatically from data. It requires one extra pass over the data. CSV built-in functions ignore this option. | read |
enforceSchema | true | If it is set to true, the specified or inferred schema will be forcibly applied to datasource files, and headers in CSV files will be ignored. If the option is set to false, the schema will be validated against all headers in CSV files in the case when the header option is set to true. Field names in the schema and column names in CSV headers are checked by their positions taking into account spark.sql.caseSensitive. Though the default value is true, it is recommended to disable the enforceSchema option to avoid incorrect results. CSV built-in functions ignore this option. | read |
ignoreLeadingWhiteSpace | false (for reading), true (for writing) | A flag indicating whether or not leading whitespaces from values being read/written should be skipped. | read/write |
ignoreTrailingWhiteSpace | false (for reading), true (for writing) | A flag indicating whether or not trailing whitespaces from values being read/written should be skipped. | read/write |
nullValue | | Sets the string representation of a null value. Since 2.0.1, this nullValue param applies to all supported types including the string type. | read/write |
nanValue | NaN | Sets the string representation of a non-number value. | read |
positiveInf | Inf | Sets the string representation of a positive infinity value. | read |
negativeInf | -Inf | Sets the string representation of a negative infinity value. | read |
dateFormat | yyyy-MM-dd | Sets the string that indicates a date format. Custom date formats follow the formats at Datetime Patterns. This applies to date type. | read/write |
timestampFormat | yyyy-MM-dd’T’HH:mm:ss[.SSS][XXX] | Sets the string that indicates a timestamp format. Custom date formats follow the formats at Datetime Patterns. This applies to timestamp type. | read/write |
timestampNTZFormat | yyyy-MM-dd’T’HH:mm:ss[.SSS] | Sets the string that indicates a timestamp without timezone format. Custom date formats follow the formats at Datetime Patterns. This applies to timestamp without timezone type, note that zone-offset and time-zone components are not supported when writing or reading this data type. | read/write |
maxColumns | 20480 | Defines a hard limit of how many columns a record can have. | read |
maxCharsPerColumn | -1 | Defines the maximum number of characters allowed for any given value being read. By default, it is -1 meaning unlimited length | read |
mode | PERMISSIVE | Allows a mode for dealing with corrupt records during parsing. It supports the following case-insensitive modes. Note that Spark tries to parse only required columns in CSV under column pruning. Therefore, corrupt records can be different based on required set of fields. This behavior can be controlled by spark.sql.csv.parser.columnPruning.enabled (enabled by default). PERMISSIVE: when it meets a corrupted record, puts the malformed string into a field configured by columnNameOfCorruptRecord, and sets malformed fields to null. To keep corrupt records, an user can set a string type field named columnNameOfCorruptRecord in an user-defined schema. If a schema does not have the field, it drops corrupt records during parsing. A record with less/more tokens than schema is not a corrupted record to CSV. When it meets a record having fewer tokens than the length of the schema, sets null to extra fields. When the record has more tokens than the length of the schema, it drops extra tokens. DROPMALFORMED: ignores the whole corrupted records. This mode is unsupported in the CSV built-in functions. FAILFAST: throws an exception when it meets corrupted records. | read |
columnNameOfCorruptRecord | (value of spark.sql.columnNameOfCorruptRecord configuration) | Allows renaming the new field having malformed string created by PERMISSIVE mode. This overrides spark.sql.columnNameOfCorruptRecord. | read |
multiLine | false | Parse one record, which may span multiple lines, per file. CSV built-in functions ignore this option. | read |
charToEscapeQuoteEscaping | escape or \0 | Sets a single character used for escaping the escape for the quote character. The default value is escape character when escape and quote characters are different, \0 otherwise. | read/write |
samplingRatio | 1.0 | Defines fraction of rows used for schema inferring. CSV built-in functions ignore this option. | read |
emptyValue | (for reading), "" (for writing) | Sets the string representation of an empty value. | read/write |
locale | en-US | Sets a locale as language tag in IETF BCP 47 format. For instance, this is used while parsing dates and timestamps. | read |
lineSep | \r, \r\n and \n (for reading), \n (for writing) | Defines the line separator that should be used for parsing/writing. Maximum length is 1 character. CSV built-in functions ignore this option. | read/write |
unescapedQuoteHandling | STOP_AT_DELIMITER | Defines how the CsvParser will handle values with unescaped quotes. STOP_AT_CLOSING_QUOTE: If unescaped quotes are found in the input, accumulate the quote character and proceed parsing the value as a quoted value, until a closing quote is found. BACK_TO_DELIMITER: If unescaped quotes are found in the input, consider the value as an unquoted value. This will make the parser accumulate all characters of the current parsed value until the delimiter is found. If no delimiter is found in the value, the parser will continue accumulating characters from the input until a delimiter or line ending is found. STOP_AT_DELIMITER: If unescaped quotes are found in the input, consider the value as an unquoted value. This will make the parser accumulate all characters until the delimiter or a line ending is found in the input. SKIP_VALUE: If unescaped quotes are found in the input, the content parsed for the given value will be skipped and the value set in nullValue will be produced instead. RAISE_ERROR: If unescaped quotes are found in the input, a TextParsingException will be thrown. | read |
compression | (none) | Compression codec to use when saving to file. This can be one of the known case-insensitive shorten names (none, bzip2, gzip, lz4, snappy and deflate). CSV built-in functions ignore this option. | write |
spark.read.format("csv").option("mode", "FAILFAST").option("inferSchema", True).option("path","/some/path/file.csv").load()
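The corresponding write is symmetrical; the separator and output path below are only illustrative choices.

df.write.format("csv").mode("overwrite").option("sep", "\t").save("/some/path/output-csv")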
JSON
The JSON file format is stable, as it allows you to append new records to a file. Since JSON has structure, Spark can make assumptions about it, and fewer options are associated with this format.
Property Name | Default | Meaning | Scope |
---|---|---|---|
timeZone | (value of spark.sql.session.timeZone configuration) | Sets the string that indicates a time zone ID to be used to format timestamps in the JSON datasources or partition values. The following formats of timeZone are supported: Region-based zone ID: It should have the form ‘area/city’, such as ‘America/Los_Angeles’. Zone offset: It should be in the format ‘(+|-)HH:mm’, for example ‘-08:00’ or ‘+01:00’. Also ‘UTC’ and ‘Z’ are supported as aliases of ‘+00:00’. Other short names like ‘CST’ are not recommended to use because they can be ambiguous. | read/write |
primitivesAsString | false | Infers all primitive values as a string type. | read |
prefersDecimal | false | Infers all floating-point values as a decimal type. If the values do not fit in decimal, then it infers them as doubles. | read |
allowComments | false | Ignores Java/C++ style comment in JSON records. | read |
allowUnquotedFieldNames | false | Allows unquoted JSON field names. | read |
allowSingleQuotes | true | Allows single quotes in addition to double quotes. | read |
allowNumericLeadingZero | false | Allows leading zeros in numbers (e.g. 00012). | read |
allowBackslashEscapingAnyCharacter | false | Allows accepting quoting of all character using backslash quoting mechanism. | read |
mode | PERMISSIVE | Allows a mode for dealing with corrupt records during parsing. PERMISSIVE: when it meets a corrupted record, puts the malformed string into a field configured by columnNameOfCorruptRecord, and sets malformed fields to null. To keep corrupt records, an user can set a string type field named columnNameOfCorruptRecord in an user-defined schema. If a schema does not have the field, it drops corrupt records during parsing. When inferring a schema, it implicitly adds a columnNameOfCorruptRecord field in an output schema. DROPMALFORMED: ignores the whole corrupted records. This mode is unsupported in the JSON built-in functions. FAILFAST: throws an exception when it meets corrupted records. | read |
columnNameOfCorruptRecord | (value of spark.sql.columnNameOfCorruptRecord configuration) | Allows renaming the new field having malformed string created by PERMISSIVE mode. This overrides spark.sql.columnNameOfCorruptRecord. | read |
dateFormat | yyyy-MM-dd | Sets the string that indicates a date format. Custom date formats follow the formats at datetime pattern. This applies to date type. | read/write |
timestampFormat | yyyy-MM-dd’T’HH:mm:ss[.SSS][XXX] | Sets the string that indicates a timestamp format. Custom date formats follow the formats at datetime pattern. This applies to timestamp type. | read/write |
timestampNTZFormat | yyyy-MM-dd’T’HH:mm:ss[.SSS] | Sets the string that indicates a timestamp without timezone format. Custom date formats follow the formats at Datetime Patterns. This applies to timestamp without timezone type, note that zone-offset and time-zone components are not supported when writing or reading this data type. | read/write |
multiLine | false | Parse one record, which may span multiple lines, per file. JSON built-in functions ignore this option. | read |
allowUnquotedControlChars | false | Allows JSON Strings to contain unquoted control characters (ASCII characters with value less than 32, including tab and line feed characters) or not. | read |
encoding | Detected automatically when multiLine is set to true (for reading), UTF-8 (for writing) | For reading, allows to forcibly set one of standard basic or extended encoding for the JSON files. For example UTF-16BE, UTF-32LE. For writing, Specifies encoding (charset) of saved json files. JSON built-in functions ignore this option. | read/write |
lineSep | \r, \r\n, \n (for reading), \n (for writing) | Defines the line separator that should be used for parsing. JSON built-in functions ignore this option. | read/write |
samplingRatio | 1.0 | Defines fraction of input JSON objects used for schema inferring. | read |
dropFieldIfAllNull | false | Whether to ignore column of all null values or empty array/struct during schema inference. | read |
locale | en-US | Sets a locale as language tag in IETF BCP 47 format. For instance, locale is used while parsing dates and timestamps. | read |
allowNonNumericNumbers | true | Allows JSON parser to recognize set of “Not-a-Number” (NaN) tokens as legal floating number values. +INF: for positive infinity, as well as alias of +Infinity and Infinity. -INF: for negative infinity, alias -Infinity. NaN: for other not-a-numbers, like result of division by zero. | read |
compression | (none) | Compression codec to use when saving to file. This can be one of the known case-insensitive shorten names (none, bzip2, gzip, lz4, snappy and deflate). JSON built-in functions ignore this option. | write |
ignoreNullFields | (value of spark.sql.jsonGenerator.ignoreNullFields configuration) | Whether to ignore null fields when generating JSON objects. | write |
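For completeness, a minimal JSON read/write sketch; the paths are placeholders, and Spark expects line-delimited JSON unless the multiLine option is enabled.

# read line-delimited JSON (one object per line); set multiLine for pretty-printed files
df = spark.read.format("json").option("mode", "FAILFAST").load("/some/path/file.json")

# write the dataframe back out as line-delimited JSON
df.write.format("json").mode("overwrite").save("/some/path/output-json")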
Parquet
- An open-source, column-oriented data format
- Provides storage optimizations for analytics workloads
- Saves storage space
- Allows reading individual columns
- Works great with Spark
- Supports complex types
- The schema is built into Parquet files
Property Name | Default | Meaning | Scope |
---|---|---|---|
datetimeRebaseMode | (value of spark.sql.parquet.datetimeRebaseModeInRead configuration) | The datetimeRebaseMode option allows to specify the rebasing mode for the values of the DATE, TIMESTAMP_MILLIS, TIMESTAMP_MICROS logical types from the Julian to Proleptic Gregorian calendar. Currently supported modes are: EXCEPTION: fails in reads of ancient dates/timestamps that are ambiguous between the two calendars. CORRECTED: loads dates/timestamps without rebasing. LEGACY: performs rebasing of ancient dates/timestamps from the Julian to Proleptic Gregorian calendar. | read |
int96RebaseMode | (value of spark.sql.parquet.int96RebaseModeInRead configuration) | The int96RebaseMode option allows to specify the rebasing mode for INT96 timestamps from the Julian to Proleptic Gregorian calendar. Currently supported modes are: EXCEPTION: fails in reads of ancient INT96 timestamps that are ambiguous between the two calendars. CORRECTED: loads INT96 timestamps without rebasing. LEGACY: performs rebasing of ancient timestamps from the Julian to Proleptic Gregorian calendar. | read |
mergeSchema | (value of spark.sql.parquet.mergeSchema configuration) | Sets whether we should merge schemas collected from all Parquet part-files. This will override spark.sql.parquet.mergeSchema. | read |
compression | snappy | Compression codec to use when saving to file. This can be one of the known case-insensitive shorten names (none, uncompressed, snappy, gzip, lzo, brotli, lz4, and zstd). This will override spark.sql.parquet.compression.codec. | write |
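Because the schema travels with the files, a minimal Parquet sketch needs no schema or inferSchema option; the paths below are placeholders.

df = spark.read.format("parquet").load("/some/path/file.parquet")
df.write.format("parquet").mode("overwrite").save("/some/path/output-parquet")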
ORC
- Similar to Parquet
- Designed for Hadoop workloads
Property Name | Default | Meaning | Scope |
---|---|---|---|
mergeSchema | false | sets whether we should merge schemas collected from all ORC part-files. This will override spark.sql.orc.mergeSchema. The default value is specified in spark.sql.orc.mergeSchema. | read |
compression | snappy | compression codec to use when saving to file. This can be one of the known case-insensitive shorten names (none, snappy, zlib, lzo, zstd and lz4). This will override orc.compress and spark.sql.orc.compression.codec. | write |
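The ORC API mirrors Parquet; only the format name changes. Paths are placeholders.

df = spark.read.format("orc").load("/some/path/file.orc")
df.write.format("orc").mode("overwrite").save("/some/path/output-orc")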
Text Files
- For text files, each line becomes a record.
Property Name | Default | Meaning | Scope |
---|---|---|---|
wholetext | false | If true, read each file from input path(s) as a single row. | read |
lineSep | \r, \r\n, \n (for reading), \n (for writing) | Defines the line separator that should be used for reading or writing. | read/write |
compression | (none) | Compression codec to use when saving to file. This can be one of the known case-insensitive shorten names (none, bzip2, gzip, lz4, snappy and deflate). | write |
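A minimal text sketch; reading produces a single string column named value, and writing requires the dataframe to have exactly one string column. Paths are placeholders.

df = spark.read.format("text").load("/some/path/file.txt")   # one row per line, column "value"
df.select("value").write.format("text").mode("overwrite").save("/some/path/output-text")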
Conclusion
In this post, I tried to cover the read and write operations for the core Spark data sources. I have not covered SQL data sources, as I am planning to use them first and then create a blog post based on my learning. In a subsequent post, I will cover advanced I/O concepts for Apache Spark.