Reading and Writing HDFS Avro Data
Use the PXF HDFS Connector to read and write Avro-format data. This section describes how to use PXF to read and write Avro data in HDFS, including how to create, query, and insert into an external table that references an Avro file in the HDFS data store.
Note: PXF does not support reading or writing compressed Avro files.
Prerequisites
Ensure that you have met the PXF Hadoop Prerequisites before you attempt to read data from HDFS.
Working with Avro Data
Apache Avro is a data serialization framework where the data is serialized in a compact binary format. Avro specifies that data types be defined in JSON. Avro format data has an independent schema, also defined in JSON. An Avro schema, together with its data, is fully self-describing.
Data Type Mapping
Avro supports both primitive and complex data types.
To represent Avro primitive data types in Greenplum Database, map data values to Greenplum Database columns of the same type.
Avro supports complex data types including arrays, maps, records, enumerations, and fixed types. Map top-level fields of these complex data types to the Greenplum Database TEXT type. While Greenplum Database does not natively support these types, you can create Greenplum Database functions or application code to extract or further process subcomponents of these complex data types; a brief SQL sketch follows the table below.
The following table summarizes external mapping rules for Avro data.
| Avro Data Type | PXF/Greenplum Data Type |
|---|---|
| boolean | boolean |
| bytes | bytea |
| double | double |
| float | real |
| int | int or smallint |
| long | bigint |
| string | text |
| Complex type: Array, Map, Record, or Enum | text, with delimiters inserted between collection items, mapped key-value pairs, and record data. |
| Complex type: Fixed | bytea (supported for read operations only). |
| Union | Follows the above conventions for primitive or complex data types, depending on the union; must contain 2 elements, one of which must be null. |
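While Greenplum Database does not natively parse the delimited text that PXF returns for complex types, standard string functions can extract subcomponents. The following is a minimal sketch, using the pxf_hdfs_avro table and its fmap map field from the reading example later in this section, that pulls one map value out of text such as {kate:10,santosh:4}:

```sql
-- Minimal sketch: extract the value stored under the map key 'kate' from
-- the delimited text that PXF returns for an Avro map field. The
-- pxf_hdfs_avro table and its fmap column are created in the reading
-- example later in this section.
SELECT username,
       substring(fmap FROM 'kate:([0-9]+)')::bigint AS kate_count
FROM pxf_hdfs_avro;
```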
Avro Schemas and Data
Avro schemas are defined using JSON, and composed of the same primitive and complex types identified in the data type mapping section above. Avro schema files typically have a .avsc suffix.
Fields in an Avro schema file are defined via an array of objects, each of which is specified by a name and a type.
An Avro data file contains the schema and a compact binary representation of the data. Avro data files typically have the .avro suffix.
You can specify an Avro schema on both read and write operations to HDFS. You can provide either a binary *.avro file or a JSON-format *.avsc file for the schema file:
| External Table Type | Schema Specified? | Description |
|---|---|---|
| readable | yes | PXF uses the specified schema; this overrides the schema embedded in the Avro data file. |
| readable | no | PXF uses the schema embedded in the Avro data file. |
| writable | yes | PXF uses the specified schema. |
| writable | no | PXF creates the Avro schema based on the external table definition. |
When you provide a schema file to PXF, it must either reside in the same location on each Greenplum Database segment host, or on the Hadoop file system. PXF first searches for an absolute file path on the Greenplum segment hosts. If PXF does not find the schema file there, it searches for the file relative to the PXF classpath. If PXF cannot find the schema file locally, it searches for the file on HDFS.
The $PXF_CONF/conf directory is in the PXF classpath. PXF can locate an Avro schema file that you add to this directory on every Greenplum Database segment host.
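For example, assuming hypothetical segment host names sdw1 and sdw2, you might stage a schema file on every host with scp; replace $PXF_CONF with the actual PXF configuration directory path on the remote hosts:

```shell
$ # Hypothetical host names; repeat for each Greenplum segment host.
$ # Replace $PXF_CONF with the actual PXF configuration directory path.
$ scp /tmp/avro_schema.avsc gpadmin@sdw1:$PXF_CONF/conf/
$ scp /tmp/avro_schema.avsc gpadmin@sdw2:$PXF_CONF/conf/
```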
See Writing Avro Data for additional schema considerations when writing Avro data to HDFS.
Creating the External Table
Use the hdfs:avro profile to read or write Avro-format data in HDFS. The following syntax creates a Greenplum Database readable or writable external table that references such a file:
```sql
CREATE [WRITABLE] EXTERNAL TABLE <table_name>
    ( <column_name> <data_type> [, ...] | LIKE <other_table> )
LOCATION ('pxf://<path-to-hdfs-file>?PROFILE=hdfs:avro[&SERVER=<server_name>][&<custom-option>=<value>[...]]')
FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import'|'pxfwritable_export')
[DISTRIBUTED BY (<column_name> [, ... ] ) | DISTRIBUTED RANDOMLY];
```
The specific keywords and values used in the CREATE EXTERNAL TABLE command are described in the table below.
| Keyword | Value |
|---|---|
| <path-to-hdfs-file> | The absolute path to the directory or file in the HDFS data store. |
| PROFILE | The PROFILE keyword must specify hdfs:avro. |
| SERVER=<server_name> | The named server configuration that PXF uses to access the data. Optional; PXF uses the default server if not specified. |
| <custom-option> | <custom-option>s are discussed below. |
| FORMAT 'CUSTOM' | Use FORMAT 'CUSTOM' with (FORMATTER='pxfwritable_export') (write) or (FORMATTER='pxfwritable_import') (read). |
| DISTRIBUTED BY | If you want to load data from an existing Greenplum Database table into the writable external table, consider specifying the same distribution policy or <column_name> on both tables; doing so avoids extra motion of data between segments during the load operation (see the sketch following this table). |
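For example, if you plan to load a source table distributed by an id column into the writable external table, mirroring that distribution policy might look like the following sketch (the table and column names are hypothetical):

```sql
-- Hypothetical sketch: match the source table's distribution key (id)
-- so the INSERT ... SELECT load avoids redistributing rows between segments.
CREATE WRITABLE EXTERNAL TABLE sales_avro_out (id int, amount float8)
  LOCATION ('pxf://data/pxf_examples/sales.avro?PROFILE=hdfs:avro')
  FORMAT 'CUSTOM' (FORMATTER='pxfwritable_export')
  DISTRIBUTED BY (id);
```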
For complex types, the PXF hdfs:avro profile inserts default delimiters between collection items and values before display. You can use non-default delimiter characters by identifying values for specific hdfs:avro custom options in the CREATE EXTERNAL TABLE command.
The hdfs:avro profile supports the following <custom-option>s:
| Option Keyword | Description |
|---|---|
| COLLECTION_DELIM | The delimiter character(s) placed between entries in a top-level array, map, or record field when PXF maps an Avro complex data type to a text column. The default is the comma (,) character. (Read) |
| MAPKEY_DELIM | The delimiter character(s) placed between the key and value of a map entry when PXF maps an Avro complex data type to a text column. The default is the colon (:) character. (Read) |
| RECORDKEY_DELIM | The delimiter character(s) placed between the field name and value of a record entry when PXF maps an Avro complex data type to a text column. The default is the colon (:) character. (Read) |
| SCHEMA | The absolute path to the Avro schema file on the segment host or on HDFS, or the relative path to the schema file on the segment host. (Read and Write) |
| IGNORE_MISSING_PATH | A Boolean value that specifies the action to take when <path-to-hdfs-file> is missing or invalid. The default value is false; PXF returns an error in this situation. When the value is true, PXF ignores missing path errors and returns an empty fragment. |
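For example, the following sketch (the schema file name is hypothetical) directs PXF to apply a schema file that you copied to $PXF_CONF/conf on every segment host, and to return an empty result set rather than an error if the HDFS path is missing:

```sql
-- Sketch: SCHEMA given as a path relative to the PXF classpath (the file
-- name is hypothetical), combined with IGNORE_MISSING_PATH=true.
CREATE EXTERNAL TABLE pxf_avro_custom (id bigint, username text)
  LOCATION ('pxf://data/pxf_examples/pxf_avro.avro?PROFILE=hdfs:avro&SCHEMA=my_schema.avsc&IGNORE_MISSING_PATH=true')
  FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');
```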
Example: Reading Avro Data
The examples in this section operate on Avro data with the following record schema (field name and data type):
- id - long
- username - string
- followers - array of string
- fmap - map of long
- relationship - enumerated type
- address - record comprised of street number (int), street name (string), and city (string)
You create an Avro schema and data file, and then create a readable external table to read the data.
Create Schema
Perform the following operations to create an Avro schema to represent the example schema described above.
1. Create a file named avro_schema.avsc:

   ```shell
   $ vi /tmp/avro_schema.avsc
   ```

2. Copy and paste the following text into avro_schema.avsc:

   ```json
   {
     "type": "record",
     "name": "example_schema",
     "namespace": "com.example",
     "fields": [
       {
         "name": "id",
         "type": "long",
         "doc": "Id of the user account"
       },
       {
         "name": "username",
         "type": "string",
         "doc": "Name of the user account"
       },
       {
         "name": "followers",
         "type": {"type": "array", "items": "string"},
         "doc": "Users followers"
       },
       {
         "name": "fmap",
         "type": {"type": "map", "values": "long"}
       },
       {
         "name": "relationship",
         "type": {
           "type": "enum",
           "name": "relationshipEnum",
           "symbols": ["MARRIED", "LOVE", "FRIEND", "COLLEAGUE", "STRANGER", "ENEMY"]
         }
       },
       {
         "name": "address",
         "type": {
           "type": "record",
           "name": "addressRecord",
           "fields": [
             {"name": "number", "type": "int"},
             {"name": "street", "type": "string"},
             {"name": "city", "type": "string"}
           ]
         }
       }
     ],
     "doc": "A basic schema for storing messages"
   }
   ```
Create Avro Data File (JSON)
Perform the following steps to create a sample Avro data file conforming to the above schema.
1. Create a text file named pxf_avro.txt:

   ```shell
   $ vi /tmp/pxf_avro.txt
   ```

2. Enter the following data into pxf_avro.txt:

   ```json
   {"id":1, "username":"john","followers":["kate", "santosh"], "relationship": "FRIEND", "fmap": {"kate":10,"santosh":4}, "address":{"number":1, "street":"renaissance drive", "city":"san jose"}}
   {"id":2, "username":"jim","followers":["john", "pam"], "relationship": "COLLEAGUE", "fmap": {"john":3,"pam":3}, "address":{"number":9, "street":"deer creek", "city":"palo alto"}}
   ```

   The sample data uses a comma (,) to separate top-level records and a colon (:) to separate map key/values and record field name/values.

3. Convert the text file to Avro format. There are various ways to perform the conversion, both programmatically and via the command line. In this example, we use the Java Avro tools; the avro-tools-1.9.1.jar file resides in the current directory:

   ```shell
   $ java -jar ./avro-tools-1.9.1.jar fromjson --schema-file /tmp/avro_schema.avsc /tmp/pxf_avro.txt > /tmp/pxf_avro.avro
   ```

   The generated Avro binary data file is written to /tmp/pxf_avro.avro.

4. Copy the generated Avro file to HDFS:

   ```shell
   $ hdfs dfs -put /tmp/pxf_avro.avro /data/pxf_examples/
   ```
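Optionally, you can spot-check the binary file before you create an external table over it. For example, the Avro tools tojson command (using the same avro-tools jar as above) dumps the records back as JSON:

```shell
$ java -jar ./avro-tools-1.9.1.jar tojson /tmp/pxf_avro.avro
```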
Reading Avro Data
Perform the following operations to create and query an external table that references the pxf_avro.avro
file that you added to HDFS in the previous section. When creating the table:
- Use the PXF default server.
- Map the top-level primitive fields,
id
(type long) andusername
(type string), to their equivalent Greenplum Database types (bigint and text). - Map the remaining complex fields to type text.
- Explicitly set the record, map, and collection delimiters using the
hdfs:avro
profile custom options.
1. Use the hdfs:avro profile to create a queryable external table from the pxf_avro.avro file:

   ```sql
   postgres=# CREATE EXTERNAL TABLE pxf_hdfs_avro(id bigint, username text, followers text, fmap text, relationship text, address text)
                LOCATION ('pxf://data/pxf_examples/pxf_avro.avro?PROFILE=hdfs:avro&COLLECTION_DELIM=,&MAPKEY_DELIM=:&RECORDKEY_DELIM=:')
                FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');
   ```

2. Perform a simple query of the pxf_hdfs_avro table:

   ```sql
   postgres=# SELECT * FROM pxf_hdfs_avro;
   ```

   ```
    id | username |   followers    |        fmap         | relationship |                      address
   ----+----------+----------------+---------------------+--------------+---------------------------------------------------
     1 | john     | [kate,santosh] | {kate:10,santosh:4} | FRIEND       | {number:1,street:renaissance drive,city:san jose}
     2 | jim      | [john,pam]     | {pam:3,john:3}      | COLLEAGUE    | {number:9,street:deer creek,city:palo alto}
   (2 rows)
   ```

   The simple query of the external table shows the components of the complex type data separated with the delimiters specified in the CREATE EXTERNAL TABLE call.

3. Process the delimited components in the text columns as necessary for your application. For example, the following command uses the Greenplum Database internal string_to_array function to convert entries in the followers field to a text array column in a new view:

   ```sql
   postgres=# CREATE VIEW followers_view AS
     SELECT username, address,
            string_to_array(substring(followers FROM 2 FOR (char_length(followers) - 2)), ',')::text[] AS followers
     FROM pxf_hdfs_avro;
   ```

4. Query the view to filter rows based on whether a particular follower appears in the view:

   ```sql
   postgres=# SELECT username, address FROM followers_view WHERE followers @> '{john}';
   ```

   ```
    username |                   address
   ----------+---------------------------------------------
    jim      | {number:9,street:deer creek,city:palo alto}
   ```
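Because the view casts followers to a native text array, other Greenplum Database array functions also apply. For example, unnest expands the array into one row per follower:

```sql
postgres=# SELECT username, unnest(followers) AS follower FROM followers_view;
```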
Writing Avro Data
When you create a writable external table to write data to an Avro file, each table row is an Avro record and each table column is an Avro field.
If you do not specify a SCHEMA file, PXF generates a schema for the Avro file based on the Greenplum Database external table definition. PXF assigns the name of the external table column to the Avro field name. Because Avro has a null type and Greenplum external tables do not support the NOT NULL column qualifier, PXF wraps each data type in an Avro union of the mapped type and null. For example, for a writable external table column that you define with the Greenplum Database text data type, PXF generates the following schema element:

```json
["string", "null"]
```
PXF returns an error if you provide a schema that does not include a union of the field data type with null, and PXF encounters a NULL data field.
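For example, a user-provided write schema for a hypothetical two-column (id int, username text) external table might wrap each field type in a union with null as follows (the record and field names are illustrative):

```json
{
  "type": "record",
  "name": "example_write_schema",
  "fields": [
    {"name": "id",       "type": ["int", "null"]},
    {"name": "username", "type": ["string", "null"]}
  ]
}
```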
PXF supports writing only Avro primitive data types. It does not support writing complex types to Avro:

- When you specify a SCHEMA file in the LOCATION, the schema must include only primitive data types.
- When PXF generates the schema, it writes any complex type that you specify in the writable external table column definition to the Avro file as a single Avro string type. For example, if you write an array of integers, PXF converts the array to a string, and you must read this data with a Greenplum text-type column.
Example: Writing Avro Data
In this example, you create an external table that writes to an Avro file on HDFS, letting PXF generate the Avro schema. After you insert some data into the file, you create a readable external table to query the Avro data.
The Avro file that you create and read in this example includes the following fields:
- id: int
- username: text
- followers: text[]
Example procedure:
1. Create the writable external table:

   ```sql
   postgres=# CREATE WRITABLE EXTERNAL TABLE pxf_avrowrite(id int, username text, followers text[])
                LOCATION ('pxf://data/pxf_examples/pxfwrite.avro?PROFILE=hdfs:avro')
                FORMAT 'CUSTOM' (FORMATTER='pxfwritable_export');
   ```

2. Insert some data into the pxf_avrowrite table:

   ```sql
   postgres=# INSERT INTO pxf_avrowrite VALUES (33, 'oliver', ARRAY['alex','frank']);
   postgres=# INSERT INTO pxf_avrowrite VALUES (77, 'lisa', ARRAY['tom','mary']);
   ```

   PXF uses the external table definition to generate the Avro schema.

3. Create an external table to read the Avro data that you just inserted into the table:

   ```sql
   postgres=# CREATE EXTERNAL TABLE read_pxfwrite(id int, username text, followers text)
                LOCATION ('pxf://data/pxf_examples/pxfwrite.avro?PROFILE=hdfs:avro')
                FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');
   ```

   Notice that the followers column is of type text.

4. Read the Avro data by querying the read_pxfwrite table:

   ```sql
   postgres=# SELECT * FROM read_pxfwrite;
   ```

   ```
    id | username |  followers
   ----+----------+--------------
    77 | lisa     | {tom,mary}
    33 | oliver   | {alex,frank}
   (2 rows)
   ```

   followers is a single string composed of the text array elements that you inserted into the table.