Upgrading PXF When You Upgrade Greenplum 6

If you are using PXF in your current Greenplum Database 6.x installation, you must perform some PXF upgrade actions when you upgrade to a newer version of Greenplum Database 6.x.

The PXF upgrade procedure describes how to upgrade PXF in your Greenplum Database installation. This procedure uses PXF.from to refer to your currently installed PXF version and PXF.to to refer to the PXF version installed when you upgrade to the new version of Greenplum Database.

The PXF upgrade procedure has two parts. You perform one procedure before, and one procedure after, you upgrade to a new version of Greenplum Database:

Step 1: PXF Pre-Upgrade Actions

Perform this procedure before you upgrade to a new version of Greenplum Database:

  1. Log in to the Greenplum Database master node. For example:

    $ ssh gpadmin@<gpmaster>
    
  2. Identify and note the PXF.from version number. For example:

    gpadmin@gpmaster$ pxf version
    
  3. Stop PXF on each segment host as described in Stopping PXF.
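The stop can be run cluster-wide from the master host. A minimal sketch, assuming a default install under $GPHOME (adjust the path for your environment):

```shell
# Stop PXF on all segment hosts from the master host. The $GPHOME
# default below is an assumption; point PXF_BIN at your actual install.
PXF_BIN="${GPHOME:-/usr/local/greenplum-db}/pxf/bin/pxf"
if [ -x "$PXF_BIN" ]; then
    "$PXF_BIN" cluster stop
else
    echo "pxf binary not found at $PXF_BIN -- is GPHOME set?" >&2
fi
```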

  4. Upgrade to the new version of Greenplum Database and then continue your PXF upgrade with Step 2: Upgrading PXF.

Step 2: Upgrading PXF

After you upgrade to the new version of Greenplum Database, perform the following procedure to upgrade and configure the PXF.to software:

  1. Log in to the Greenplum Database master node. For example:

    $ ssh gpadmin@<gpmaster>
    
  2. If you installed the PXF rpm on your Greenplum 6 hosts:

    1. Copy the PXF extension files from the PXF installation directory to the new Greenplum 6 install directory:

      gpadmin@gpmaster$ pxf cluster register
      
    2. Start PXF on each segment host as described in Starting PXF.

    3. Exit this procedure.

  3. Initialize PXF on each segment host as described in Initializing PXF. You may choose to use your existing $PXF_CONF for the initialization.
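If you reuse your existing configuration, the initialization might look like the following sketch; the $PXF_CONF path shown is an assumption, so substitute the configuration directory you used for PXF.from:

```shell
# Re-initialize PXF cluster-wide, pointing PXF_CONF at the existing
# configuration directory so your server definitions are preserved.
# Both paths below are assumptions for a default install.
PXF_BIN="${GPHOME:-/usr/local/greenplum-db}/pxf/bin/pxf"
if [ -x "$PXF_BIN" ]; then
    PXF_CONF=/usr/local/greenplum-pxf "$PXF_BIN" cluster init
else
    echo "pxf binary not found at $PXF_BIN -- is GPHOME set?" >&2
fi
```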

  4. If you are upgrading from Greenplum Database version 6.1.x or earlier and you have configured any JDBC servers that access Kerberos-secured Hive, you must now add the hadoop.security.authentication property to the jdbc-site.xml file to explicitly identify use of the Kerberos authentication method. Perform the following for each of these server configurations:

    1. Navigate to the server configuration directory.
    2. Open the jdbc-site.xml file in the editor of your choice and uncomment or add the following property block to the file:

      <property>
          <name>hadoop.security.authentication</name>
          <value>kerberos</value>
      </property>
      
    3. Save the file and exit the editor.
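If you maintain several JDBC server configurations, you can locate the ones still missing the property before editing. A sketch, assuming $PXF_CONF points at your PXF configuration directory and each server lives under $PXF_CONF/servers/<name>/:

```shell
# Print the jdbc-site.xml files that do NOT yet declare the
# hadoop.security.authentication property (grep -L lists files
# without a match). The default PXF_CONF path is an assumption.
PXF_CONF="${PXF_CONF:-/usr/local/greenplum-pxf}"
for f in "$PXF_CONF"/servers/*/jdbc-site.xml; do
    [ -e "$f" ] || continue    # no JDBC servers configured
    grep -L "hadoop.security.authentication" "$f"
done
```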

  5. If you are upgrading from Greenplum Database version 6.7.x or earlier: The PXF Hive and HiveRC profiles now support column projection using column name-based mapping. If any of your existing PXF external tables specify one of these profiles and relied on column index-based mapping, you may need to drop and recreate those tables:

    1. Identify all PXF external tables that you created that specify a Hive or HiveRC profile.
    2. For each external table that you identify in step 1, examine the definitions of both the PXF external table and the referenced Hive table. If the column names of the PXF external table do not match the column names of the Hive table:

      1. Drop the existing PXF external table. For example:

        DROP EXTERNAL TABLE pxf_hive_table1;
        
      2. Recreate the PXF external table using the Hive column names. For example:

        CREATE EXTERNAL TABLE pxf_hive_table1 (hivecolname int, hivecolname2 text)
          LOCATION ('pxf://default.hive_table_name?PROFILE=Hive')
        FORMAT 'custom' (FORMATTER='pxfwritable_import');
        
      3. Review any SQL scripts that you may have created that reference the PXF external table, and update column names if required.
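To help with step 1 above (identifying candidate tables), you could query the catalog for external tables whose location string names a Hive profile. This is a sketch, not official tooling; in Greenplum 6 the LOCATION URIs are recorded in the pg_exttable catalog table:

```shell
# Build a catalog query that lists external tables whose PXF location
# names a Hive profile. Note the pattern also matches HiveText and
# HiveORC; filter the results to Hive and HiveRC, which are the
# profiles affected by this change. Review the query, then run it in
# each database, e.g.:  psql -d <dbname> -c "$QUERY"
QUERY="
SELECT c.relname, e.urilocation
FROM   pg_exttable e
JOIN   pg_class   c ON c.oid = e.reloid
WHERE  e.urilocation::text ILIKE '%profile=hive%';"
echo "$QUERY"
```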

  6. Synchronize the PXF configuration from the master host to the standby master and each Greenplum Database segment host. For example:

    gpadmin@gpmaster$ $GPHOME/pxf/bin/pxf cluster sync
    
  7. Start PXF on each segment host as described in Starting PXF.
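As with the earlier stop, the start can be run cluster-wide from the master host. A minimal sketch, assuming a default install under $GPHOME:

```shell
# Start PXF on all segment hosts from the master host. The $GPHOME
# default below is an assumption; point PXF_BIN at your actual install.
PXF_BIN="${GPHOME:-/usr/local/greenplum-db}/pxf/bin/pxf"
if [ -x "$PXF_BIN" ]; then
    "$PXF_BIN" cluster start
else
    echo "pxf binary not found at $PXF_BIN -- is GPHOME set?" >&2
fi
```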