Introduction to PXF
The Greenplum Platform Extension Framework (PXF) provides connectors that enable you to access data stored in sources external to your Greenplum Database deployment. These connectors map an external data source to a Greenplum Database external table definition. When you create the Greenplum Database external table, you identify the external data store and the format of the data via a server name and a profile name that you provide in the command.
You can query the external table via Greenplum Database, leaving the referenced data in place. Or, you can use the external table to load the data into Greenplum Database for higher performance.
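Both patterns can be sketched with a couple of hypothetical statements. The table name `ext_sales`, its columns, and the native table `sales` are assumptions for illustration; the external table itself would be created with the `pxf` protocol syntax shown later in this topic.

```sql
-- Query the external data in place; the referenced data never moves:
SELECT region, SUM(amount)
  FROM ext_sales
 GROUP BY region;

-- Or load the external data into a native Greenplum table
-- for higher performance on repeated access:
CREATE TABLE sales AS
  SELECT * FROM ext_sales
  DISTRIBUTED BY (region);
```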
Your Greenplum Database deployment consists of a master node and multiple segment hosts. A single PXF agent process on each Greenplum Database segment host allocates a worker thread for each segment instance on a segment host that participates in a query against an external table. The PXF agents on multiple segment hosts communicate with the external data store in parallel.
Connector is a generic term that encapsulates the implementation details required to read from or write to an external data store. PXF provides built-in connectors to Hadoop (HDFS, Hive, HBase), object stores (Azure, Google Cloud Storage, Minio, S3), and SQL databases (JDBC).
A PXF server is a named configuration for Hadoop or an object store connector. A server definition provides the information required for PXF to access an external data source. This configuration information is data-store-specific, and may include server location and access credentials.
The default PXF server is named default; when configured, it provides the information required for PXF to access Hadoop services, including HDFS, Hive, and HBase.
The Greenplum Database administrator will configure at least one server definition for each object store that they will permit Greenplum Database users to access, and will publish the available server names as appropriate.
Finally, a PXF profile is a named mapping identifying a specific data format supported by a specific external data store. PXF supports text, Avro, JSON, RCFile, Parquet, SequenceFile, and ORC data formats, and provides several built-in profiles as discussed in the following section.
PXF implements a Greenplum Database protocol named pxf that you can use to create an external table that references data in an external data store. The syntax for a CREATE EXTERNAL TABLE command that specifies the pxf protocol follows:
```sql
CREATE [WRITABLE] EXTERNAL TABLE <table_name>
    ( <column_name> <data_type> [, ...] | LIKE <other_table> )
LOCATION('pxf://<path-to-data>?PROFILE=<profile_name>[&SERVER=<server_name>][&<custom-option>=<value>[...]]')
FORMAT '[TEXT|CSV|CUSTOM]' (<formatting-properties>);
```
The LOCATION clause in a CREATE EXTERNAL TABLE statement specifying the pxf protocol is a URI. This URI identifies the path to, or other information describing, the location of the external data. For example, if the external data store is HDFS, the <path-to-data> identifies the absolute path to a specific HDFS file. If the external data store is Hive, <path-to-data> identifies a schema-qualified Hive table name.
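The two cases can be sketched as follows. The table names, column lists, and profile names (HdfsTextSimple, Hive) here are illustrative assumptions; consult the profile documentation for the names supported by your PXF version.

```sql
-- HDFS: <path-to-data> is the absolute path to a file (or directory)
CREATE EXTERNAL TABLE ext_hdfs_csv (id int, name text)
  LOCATION ('pxf://data/pxf_examples/names.csv?PROFILE=HdfsTextSimple')
  FORMAT 'TEXT' (DELIMITER ',');

-- Hive: <path-to-data> is a schema-qualified Hive table name
CREATE EXTERNAL TABLE ext_hive_sales (id int, total numeric)
  LOCATION ('pxf://default.sales_info?PROFILE=Hive')
  FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');
```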
You use the query portion of the URI, introduced by the question mark (?), to identify the PXF server and profile names.
PXF may require additional information to read or write certain data formats. You provide profile-specific information using the optional <custom-option>=<value> component of the LOCATION string, and formatting information via the <formatting-properties> component of the string. The custom options and formatting properties supported by a specific profile vary; they are identified in the usage documentation for the profile.
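For instance, a query-string might name both a non-default server and a custom option. Everything in this sketch is a placeholder: the server name s3srv, the profile name, and the custom option are hypothetical, standing in for whatever your administrator has configured and your profile supports.

```sql
-- Hypothetical: read CSV data from an object store through a
-- server configuration named "s3srv", passing one custom option.
CREATE EXTERNAL TABLE ext_s3_sales (id int, total numeric)
  LOCATION ('pxf://my-bucket/sales/?PROFILE=s3:text&SERVER=s3srv&<custom-option>=<value>')
  FORMAT 'CSV';
```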
| Keyword | Value and Description |
|---------|-----------------------|
| <path-to-data> | A directory, file name, wildcard pattern, table name, etc. The syntax of <path-to-data> is dependent upon the external data source. |
| PROFILE=<profile_name> | The profile that PXF uses to access the data. PXF supports profiles that access text, Avro, JSON, RCFile, Parquet, SequenceFile, and ORC data in Hadoop services, object stores, and other SQL databases. |
| SERVER=<server_name> | The named server configuration that PXF uses to access the data. The default server is a Hadoop server named default. |
| <custom-option>=<value> | Additional options and their values supported by the profile or the server. |
| FORMAT <value> | PXF profiles support the TEXT, CSV, and CUSTOM formats. |
| <formatting-properties> | Formatting properties supported by the profile; for example, the field delimiter for text data. |
Note: When you create a PXF external table, you cannot use the HEADER option in your formatter specification.
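The WRITABLE keyword in the syntax above creates an external table you can write to with INSERT, which is how PXF exports Greenplum data to an external store. A minimal sketch, assuming a hypothetical HDFS output path, source table customers, and an illustrative profile name:

```sql
-- Hypothetical writable external table targeting an HDFS directory
CREATE WRITABLE EXTERNAL TABLE ext_out (id int, name text)
  LOCATION ('pxf://data/pxf_examples/out?PROFILE=HdfsTextSimple')
  FORMAT 'TEXT' (DELIMITER ',');

-- Export rows from a native table to the external store
INSERT INTO ext_out SELECT id, name FROM customers;
```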