Checking for Failed Segments

With mirroring enabled, segment instances can fail without interrupting service and without any obvious indication that a failure has occurred. You can verify the status of your system with the gpstate utility, which reports the status of each component of a Greenplum Database system: primary segments, mirror segments, master, and standby master.

To check for failed segments

  1. On the master host, run the gpstate utility with the -e option to show segment instances with error conditions:
    $ gpstate -e

    If the utility lists Segments with Primary and Mirror Roles Switched, those segments are not in their preferred role (the role assigned to them at system initialization). The system is then in a potentially unbalanced state, because some segment hosts may be running more active segments than is optimal for performance.

    A segment that displays a Config status of Down indicates that the corresponding mirror segment is down.

    See Recovering From Segment Failures for instructions to fix this situation.

  2. To get detailed information about failed segments, you can check the gp_segment_configuration catalog table. For example:
    $ psql postgres -c "SELECT * FROM gp_segment_configuration WHERE status='d';"
  3. For failed segment instances, note the host, port, preferred role, and data directory. This information will help determine the host and segment instances to troubleshoot.
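     For example, the query below selects just those fields for each down segment. The column names shown (hostname, port, preferred_role, datadir) reflect the Greenplum 6 gp_segment_configuration catalog; verify them against your release:
       $ psql postgres -c "SELECT hostname, port, preferred_role, datadir FROM gp_segment_configuration WHERE status='d';"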
  4. To show information about mirror segment instances, run:
    $ gpstate -m