Introduction to PXF
The Greenplum Platform Extension Framework (PXF) provides connectors that enable you to access data stored in sources external to your Greenplum Database deployment. These connectors map an external data source to a Greenplum Database external table definition. When you create the Greenplum Database external table, you identify the external data store and the format of the data via a server name and a profile name that you provide in the command.
You can query the external table via Greenplum Database, leaving the referenced data in place. Or, you can use the external table to load the data into Greenplum Database for higher performance.
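As a sketch of these two access patterns, the statements below assume a hypothetical external table named pxf_sales has already been created with the pxf protocol; the table and column names are illustrative only:

```sql
-- Query the external data in place, leaving it in the external store
SELECT location, total_sales
FROM   pxf_sales
WHERE  month = 'Jan';

-- Or load the external data into a native Greenplum Database table
-- for higher performance on repeated queries
CREATE TABLE sales_local AS
SELECT * FROM pxf_sales;
```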
PXF supports the Red Hat Enterprise Linux 64-bit 7.x, CentOS 64-bit 7.x, and Ubuntu 18.04 LTS operating system platforms.
PXF does not bundle cURL and instead loads the system-provided library. PXF requires cURL version 7.29.0 or newer. The officially-supported cURL for the CentOS 6.x and Red Hat Enterprise Linux 6.x operating systems is version 7.19.*. Greenplum Database 6 does not support running PXF on CentOS 6.x or RHEL 6.x due to this limitation.
PXF supports Java 8 and Java 11.
PXF bundles all of the Hadoop JAR files on which it depends, and supports the following Hadoop component versions:
- Hadoop version 2.9.2
- Hive version 1.2.2
- HBase version 1.3.2
Your Greenplum Database deployment consists of a master node and multiple segment hosts. A single PXF agent process on each Greenplum Database segment host allocates a worker thread for each segment instance on a segment host that participates in a query against an external table. The PXF agents on multiple segment hosts communicate with the external data store in parallel.
About Connectors, Servers, and Profiles
Connector is a generic term that encapsulates the implementation details required to read from or write to an external data store. PXF provides built-in connectors to Hadoop (HDFS, Hive, HBase), object stores (Azure, Google Cloud Storage, Minio, S3), and SQL databases (via JDBC).
A PXF Server is a named configuration for a connector. A server definition provides the information required for PXF to access an external data source. This configuration information is data-store-specific, and may include server location, access credentials, and other relevant properties.
The Greenplum Database administrator will configure at least one server definition for each external data store that they will allow Greenplum Database users to access, and will publish the available server names as appropriate.
You specify a SERVER=<server_name> setting when you create the external table to identify the server configuration from which PXF obtains the configuration and credentials to access the external data store.
The default PXF server is named default (reserved). When configured, the default server provides the location and access information for the external data source in the absence of a SERVER=<server_name> setting.
Finally, a PXF profile is a named mapping identifying a specific data format or protocol supported by a specific external data store. PXF supports text, Avro, JSON, RCFile, Parquet, SequenceFile, and ORC data formats, and the JDBC protocol, and provides several built-in profiles as discussed in the following section.
Creating an External Table
PXF implements a Greenplum Database protocol named pxf that you can use to create an external table that references data in an external data store. The syntax for a CREATE EXTERNAL TABLE command that specifies the pxf protocol follows:
CREATE [WRITABLE] EXTERNAL TABLE <table_name>
    ( <column_name> <data_type> [, ...] | LIKE <other_table> )
LOCATION('pxf://<path-to-data>?PROFILE=<profile_name>[&SERVER=<server_name>][&<custom-option>=<value>[...]]')
FORMAT '[TEXT|CSV|CUSTOM]' (<formatting-properties>);
The LOCATION clause in a CREATE EXTERNAL TABLE statement specifying the pxf protocol is a URI. This URI identifies the path to, or other information describing, the location of the external data. For example, if the external data store is HDFS, the <path-to-data> identifies the absolute path to a specific HDFS file. If the external data store is Hive, <path-to-data> identifies a schema-qualified Hive table name.
You use the query portion of the URI, introduced by the question mark (?), to identify the PXF server and profile names.
PXF may require additional information to read or write certain data formats. You provide profile-specific information using the optional <custom-option>=<value> component of the LOCATION string, and formatting information via the <formatting-properties> component of the string. The custom options and formatting properties supported by a specific profile vary; they are identified in the usage documentation for the profile.
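For example, a minimal readable external table definition might look like the following sketch. The HDFS path, table name, and column list are hypothetical; hdfs:text is one of the built-in Hadoop profiles, and the SERVER=default setting could be omitted because the default server is assumed:

```sql
CREATE EXTERNAL TABLE pxf_sales (
    location    text,
    month       text,
    num_orders  int,
    total_sales numeric(10,2)
)
LOCATION ('pxf://data/pxf_examples/sales.csv?PROFILE=hdfs:text&SERVER=default')
FORMAT 'CSV' (DELIMITER ',');
```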
| Keyword | Value and Description |
|---------|-----------------------|
| <path-to-data> | A directory, file name, wildcard pattern, table name, etc. The syntax of <path-to-data> is dependent upon the external data source. |
| PROFILE=<profile_name> | The profile that PXF uses to access the data. PXF supports profiles that access text, Avro, JSON, RCFile, Parquet, SequenceFile, and ORC data in Hadoop services, object stores, and other SQL databases. |
| SERVER=<server_name> | The named server configuration that PXF uses to access the data. PXF uses the default server if not specified. |
| <custom-option>=<value> | Additional options and their values supported by the profile or the server. |
| FORMAT <value> | PXF profiles support the TEXT, CSV, and CUSTOM formats. |
| <formatting-properties> | Formatting properties supported by the profile; for example, the FORMATTER or delimiter. |
Note: When you create a PXF external table, you cannot use the HEADER option in your formatter specification.
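The WRITABLE form of the command follows the same syntax. The sketch below, with hypothetical table and path names and an assumed hdfs:text profile, creates a writable external table and inserts a row into it; writable external tables accept INSERT but cannot be queried with SELECT:

```sql
CREATE WRITABLE EXTERNAL TABLE pxf_sales_out (
    location    text,
    month       text,
    num_orders  int,
    total_sales numeric(10,2)
)
LOCATION ('pxf://data/pxf_examples/sales_out?PROFILE=hdfs:text')
FORMAT 'CSV' (DELIMITER ',');

-- Write a row through PXF to the external data store
INSERT INTO pxf_sales_out VALUES ('Frankfurt', 'Mar', 777, 3956.98);
```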
Other PXF Features
Certain PXF connectors and profiles support filter pushdown and column projection. Refer to the following topics for detailed information about this support: