The following table describes some errors you may encounter while using PXF:
| Error message | Discussion |
|---------------|------------|
| Protocol "pxf" does not exist | **Cause**: The `pxf` extension is not created (enabled) in the database.<br>**Solution**: Create (enable) the PXF extension for the database as described in the PXF Enable Procedure. |
| Invalid URI pxf://<path-to-data>: missing options section | **Cause**: The `LOCATION` URI does not include the profile or other required options.<br>**Solution**: Provide the profile and required options in the URI when you submit the `CREATE EXTERNAL TABLE` command. |
| PXF server error : Input path does not exist: hdfs://<namenode>:8020/<path-to-file> | **Cause**: The HDFS file that you specified in <path-to-file> does not exist.<br>**Solution**: Provide the path to an existing HDFS file. |
| PXF server error : NoSuchObjectException(message:<schema>.<hivetable> table not found) | **Cause**: The Hive table that you specified with <schema>.<hivetable> does not exist.<br>**Solution**: Provide the name of an existing Hive table. |
| PXF server error : Failed connect to localhost:5888; Connection refused (<segment-id> slice<N> <segment-host>:<port> pid=<process-id>) | **Cause**: The PXF Service is not running on <segment-host>.<br>**Solution**: Restart PXF on <segment-host>. |
| PXF server error: Permission denied: user=<user>, access=READ, inode="<filepath>":-rw------- | **Cause**: The Greenplum Database user that executed the PXF operation does not have permission to access the underlying Hadoop service (HDFS or Hive).<br>**Solution**: See Configuring the Hadoop User, User Impersonation, and Proxying. |
| PXF server error: PXF service could not be reached. PXF is not running in the tomcat container | **Cause**: The PXF server has not been updated and restarted after an upgrade.<br>**Solution**: Ensure that the PXF server has been updated and restarted on all hosts. |
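For the first error above, the fix is to create the extension in each affected database. A minimal sketch, assuming a hypothetical database named `testdb` and superuser privileges:

```sql
-- Run in each database that will access external data through PXF.
-- The database name "testdb" is an assumption for illustration;
-- connect to your own database before running this.
CREATE EXTENSION pxf;
```

See the PXF Enable Procedure referenced above for the complete, version-specific steps.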
Most PXF error messages include a HINT that you can use to resolve the error, or to collect more information to identify it.
Refer to the Logging topic for more information about logging levels, configuration, and the
pxf-service.log log file.
You use the PXF JDBC connector to access data stored in an external SQL database. Depending upon the JDBC driver, the driver may return an error if there is a mismatch between the default time zone set for the PXF Service and the time zone set for the external SQL database.
For example, if you use the PXF JDBC connector to access an Oracle database with a conflicting time zone, PXF logs an error similar to the following:
java.io.IOException: ORA-00604: error occurred at recursive SQL level 1 ORA-01882: timezone region not found
Should you encounter this error, you can set the default time zone for the PXF Service via the PXF_JVM_OPTS property in the $PXF_BASE/conf/pxf-env.sh configuration file. For example, to set the time zone:
export PXF_JVM_OPTS="<current_settings> -Duser.timezone=America/Chicago"
You can use the
PXF_JVM_OPTS property to set other Java options as well.
As described in previous sections, you must synchronize the updated PXF configuration to the Greenplum Database cluster and restart the PXF Service on each host.
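A sketch of that sync-and-restart step using the `pxf` command-line utility, assuming it is on your PATH and run from the Greenplum coordinator host:

```shell
# Copy the updated $PXF_BASE configuration from this host
# to every host in the Greenplum cluster.
pxf cluster sync

# Restart the PXF Service on all hosts so the new
# PXF_JVM_OPTS setting takes effect.
pxf cluster restart
```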
Greenplum Database supports partitioned tables, and permits exchanging a leaf child partition with a PXF external table.
When you read from a partitioned Greenplum table where one or more partitions is a PXF external table and there is no data backing the external table path, PXF returns an error and the query fails. This default PXF behavior is not optimal in the partitioned table case; an empty child partition is valid and should not cause a query on the parent table to fail.
The IGNORE_MISSING_PATH PXF custom option is a boolean that specifies the action to take when the external table path is missing or invalid. The default value is false; PXF returns an error when it encounters a missing path. If the external table is a child partition of a Greenplum table and you want PXF to ignore a missing path error, set this option to true.
For example, PXF ignores missing path errors generated from the following external table:
CREATE EXTERNAL TABLE ext_part_87 (id int, some_date date) LOCATION ('pxf://bucket/path/?PROFILE=s3:parquet&SERVER=s3&IGNORE_MISSING_PATH=true') FORMAT 'CUSTOM' (formatter = 'pxfwritable_import');
The IGNORE_MISSING_PATH custom option applies only to file-based profiles, including *:SequenceFile. This option is not available when the external table specifies a jdbc profile, or when reading from S3 using S3-Select.
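Such an external table is typically swapped into a partitioned table with an exchange operation. A sketch, assuming a hypothetical parent table `sales` partitioned by date; the exact EXCHANGE PARTITION syntax may vary with your Greenplum Database version:

```sql
-- Exchange the leaf partition covering 2023-01-01 with the PXF
-- external table defined earlier. WITHOUT VALIDATION skips scanning
-- the incoming data; the table and partition key are illustrative.
ALTER TABLE sales
    EXCHANGE PARTITION FOR ('2023-01-01'::date)
    WITH TABLE ext_part_87 WITHOUT VALIDATION;
```

With IGNORE_MISSING_PATH=true on the external table, a query on `sales` succeeds even when the S3 path backing this partition holds no data.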