About PXF Filter Pushdown
PXF supports filter pushdown. When filter pushdown is enabled, the constraints from the `WHERE` clause of a `SELECT` query can be extracted and passed to the external data source for filtering. This process can improve query performance, and can also reduce the amount of data that is transferred to Greenplum Database.
You enable or disable filter pushdown for all external table protocols, including `pxf`, by setting the `gp_external_enable_filter_pushdown` server configuration parameter. The default value of this configuration parameter is `on`; set it to `off` to disable filter pushdown. For example:

```sql
SHOW gp_external_enable_filter_pushdown;
SET gp_external_enable_filter_pushdown TO 'on';
```
Note: Some external data sources do not support filter pushdown, and filter pushdown may not be supported for certain data types or operators. If a query accesses a data source that does not support filter pushdown for the query constraints, the query is instead executed without filter pushdown (the data is filtered after it is transferred to Greenplum Database).
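If you suspect that pushdown is causing a problem for a particular query, you can disable it for the current session only and compare results. A minimal sketch (the external table name `sales_ext` and its columns are hypothetical):

```sql
-- Disable filter pushdown for this session only; the WHERE clause
-- is then applied by Greenplum Database after the data is transferred.
SET gp_external_enable_filter_pushdown TO 'off';
SELECT * FROM sales_ext WHERE amount > 100;

-- Restore the default behavior.
SET gp_external_enable_filter_pushdown TO 'on';
```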
PXF filter pushdown can be used with these data types (connector- and profile-specific):

- `NUMERIC` (not available with the S3 connector when using S3 Select, nor with the `hive` profile when accessing `STORED AS Parquet` data)
- `TIMESTAMP` (available only with the JDBC connector, the S3 connector when using S3 Select, the `hive:orc` profiles, and the `hive` profile when accessing `STORED AS ORC` data)
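For example, predicates on these data types in a query against a PXF external table may be pushed down to the external source. A sketch (the table and column names are hypothetical; whether the predicates are actually pushed down depends on the connector and profile backing the table):

```sql
-- Both predicates reference pushdown-eligible data types.
SELECT order_id, total
FROM   pxf_orders_ext
WHERE  total > 99.99                         -- NUMERIC comparison
  AND  created_at >= '2024-01-01 00:00:00';  -- TIMESTAMP comparison
```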
PXF accesses data sources using profiles exposed by different connectors, and filter pushdown support is determined by the specific connector implementation. The following PXF profiles support some aspects of filter pushdown, as well as the comparison and logical operators listed below:
| Profile | `<`, `>`, `<=`, `>=`, `=`, `<>` | `LIKE` | `IS [NOT] NULL` | `IN` | `AND` | `OR` | `NOT` |
|---|---|---|---|---|---|---|---|
| `*:orc` (all except `hive:orc`) | Y 1,3 | N | Y 1,3 | Y 1,3 | Y 1,3 | Y 1,3 | Y 1,3 |
| `s3:parquet` and `s3:text` with S3 Select | Y | N | Y | Y | Y | Y | Y |
| `jdbc` | Y | Y 4 | Y | Y | Y | Y | Y |
| `hive:rc`, `hive` (accessing `STORED AS RCFile`) | Y 2 | N | Y | Y | Y, Y 2 | Y, Y 2 | Y |
| `hive:orc`, `hive` (accessing `STORED AS ORC`) | Y, Y 2 | N | Y | Y | Y, Y 2 | Y, Y 2 | Y |
| `hive` (accessing `STORED AS Parquet`) | Y, Y 2 | N | N | Y | Y, Y 2 | Y, Y 2 | Y |
| `hive:orc` and `VECTORIZE=true` | Y 2 | N | N | N | Y 2 | Y 2 | N |
1 PXF applies the predicate, rather than the remote system, reducing CPU usage and the memory footprint.
2 PXF supports partition pruning based on partition keys.
3 PXF filtering is based on file-level, stripe-level, and row-level ORC statistics.
4 The PXF `jdbc` profile supports the `LIKE` operator only for `TEXT` fields.
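To illustrate the table, consider an external table backed by the `hive:orc` profile: the first query below uses only operators that the profile can push down, while the `LIKE` predicate in the second cannot be pushed down and is evaluated by Greenplum Database after the data is transferred. (The table and column names are hypothetical.)

```sql
-- hive:orc supports pushdown of comparison, IN, AND, OR, and NOT:
SELECT * FROM pxf_hive_orc_ext
WHERE  region IN ('east', 'west') AND qty > 10;

-- hive:orc does not push down LIKE; this predicate is applied
-- by Greenplum Database after the rows are transferred:
SELECT * FROM pxf_hive_orc_ext
WHERE  region LIKE 'ea%';
```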
PXF does not support filter pushdown for any profile not mentioned in the table above, including `*:avro`, `*:AvroSequenceFile`, `*:SequenceFile`, `*:json`, `*:text`, `*:csv`, and `*:text:multi`.
To summarize, all of the following criteria must be met for filter pushdown to occur:
- You enable external table filter pushdown by setting the `gp_external_enable_filter_pushdown` server configuration parameter to `on`.
- The Greenplum Database protocol that you use to access the external data source must support filter pushdown. The `pxf` external table protocol supports pushdown.
- The external data source that you are accessing must support pushdown. For example, HBase and Hive support pushdown.
- For queries on external tables that you create with the `pxf` protocol, the underlying PXF connector must also support filter pushdown. For example, the PXF Hive, HBase, and JDBC connectors support pushdown, as do the PXF connectors that support reading ORC and Parquet data.
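Putting these criteria together, a minimal end-to-end sketch follows; the Hive table name, column definitions, and external table name are assumptions for illustration:

```sql
-- 1. Ensure filter pushdown is enabled for the session.
SET gp_external_enable_filter_pushdown TO 'on';

-- 2. Create an external table using the pxf protocol and a connector
--    profile that supports pushdown (hive:orc in this sketch).
CREATE EXTERNAL TABLE sales_orc_ext (id int, amount numeric, sale_date date)
  LOCATION ('pxf://default.sales_orc?PROFILE=hive:orc')
  FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');

-- 3. Query with supported predicates; PXF passes them to the
--    connector, which filters rows before transferring them.
SELECT id, amount FROM sales_orc_ext WHERE amount > 1000;
```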
Refer to Hive Partition Pruning for more information about Hive support for this feature.