Configuring Your Systems

Describes how to prepare your operating system environment for Greenplum Database software installation.

Perform the following tasks in order:

  1. Make sure your host systems meet the requirements described in Platform Requirements.
  2. Disable SELinux and firewall software.
  3. Set the required operating system parameters.
  4. Synchronize system clocks.
  5. Create the gpadmin account.

Unless noted, these tasks should be performed for all hosts in your Greenplum Database array (master, standby master, and segment hosts).

The Greenplum Database host naming convention for the master host is mdw and for the standby master host is smdw.

The segment host naming convention is sdwN where sdw is a prefix and N is an integer. For example, segment host names would be sdw1, sdw2 and so on. NIC bonding is recommended for hosts with multiple interfaces, but when the interfaces are not bonded, the convention is to append a dash (-) and number to the host name. For example, sdw1-1 and sdw1-2 are the two interface names for host sdw1.

For information about running Greenplum Database in the cloud see Cloud Services in the Pivotal Greenplum Partner Marketplace.

Important: When data loss is not acceptable for a Pivotal Greenplum Database system, Greenplum Database master and segment mirroring must be enabled in order for the cluster to be supported by Pivotal. Without mirroring, system and data availability is not guaranteed. Pivotal will make best efforts to restore a cluster in this case. For information about master and segment mirroring, see About Redundancy and Failover in the Greenplum Database Administrator Guide.
Note: For information about upgrading Pivotal Greenplum Database from a previous version, see the Greenplum Database Release Notes for the release that you are installing.
Note: Automating the configuration steps described in this topic and in Installing the Greenplum Database Software with a system provisioning tool, such as Ansible, Chef, or Puppet, can save time and ensure a reliable and repeatable Greenplum Database installation.

Disabling SELinux and Firewall Software

For all Greenplum Database host systems running RHEL or CentOS, SELinux must be disabled. Follow these steps:
  1. As the root user, check the status of SELinux:
    # sestatus
    SELinux status: disabled
  2. If SELinux is not disabled, disable it by editing the /etc/selinux/config file. As root, change the value of the SELINUX parameter in the config file as follows:
    SELINUX=disabled
  3. Reboot the system to apply any changes that you made to /etc/selinux/config and verify that SELinux is disabled.

For information about disabling SELinux, see the SELinux documentation.
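The edit in step 2 can be scripted. This is a minimal sketch that operates on a temporary copy of the config file so it can be tried safely; on a real host, run the sed command as root against /etc/selinux/config instead, then reboot.

```shell
# Sketch only: flip SELINUX to disabled in a copy of the config file.
# On a real host, target /etc/selinux/config instead of the temp copy.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg"
grep '^SELINUX=' "$cfg"   # prints: SELINUX=disabled
```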

You should also disable firewall software such as iptables (on systems such as RHEL 6.x and CentOS 6.x ), firewalld (on systems such as RHEL 7.x and CentOS 7.x), or ufw (on Ubuntu systems, disabled by default).

If you decide to enable iptables with Greenplum Database for security purposes, see Enabling iptables (Optional).

Follow these steps to disable iptables and firewalld:
  1. As the root user, check the status of iptables:
    # /sbin/chkconfig --list iptables

    If iptables is disabled, the command output is:

    iptables 0:off 1:off 2:off 3:off 4:off 5:off 6:off
  2. If necessary, execute this command as root to disable iptables:
    /sbin/chkconfig iptables off

    You will need to reboot your system after applying the change.

  3. For systems with firewalld, check the status of firewalld with the command:
    # systemctl status firewalld

    If firewalld is disabled, the command output is:

    * firewalld.service - firewalld - dynamic firewall daemon
       Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
       Active: inactive (dead)
  4. If necessary, execute these commands as root to disable firewalld:
    # systemctl stop firewalld.service
    # systemctl disable firewalld.service

For more information about configuring your firewall software, see the documentation for the firewall or your operating system.
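The per-platform commands above can be wrapped in a small helper. This is a hedged sketch, not part of the Greenplum tooling: it only prints the commands appropriate for the tooling found on the host, so you can review them before running them as root.

```shell
# Hypothetical helper: print (not run) the firewall-disable commands
# for whichever tooling is present on this host.
disable_firewall_cmds() {
  if command -v systemctl >/dev/null 2>&1; then
    echo "systemctl stop firewalld.service"
    echo "systemctl disable firewalld.service"
  elif command -v chkconfig >/dev/null 2>&1; then
    echo "chkconfig iptables off"
  else
    echo "ufw disable"
  fi
}
disable_firewall_cmds
```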

Setting the Greenplum Recommended OS Parameters

Greenplum requires that certain Linux operating system (OS) parameters be set on all hosts in your Greenplum Database system (masters and segments).

In general, the following categories of system parameters need to be altered:

  • Shared Memory - A Greenplum Database instance will not work unless the shared memory segment for your kernel is properly sized. Most default OS installations have the shared memory values set too low for Greenplum Database. On Linux systems, you must also disable the OOM (out of memory) killer. For information about Greenplum Database shared memory requirements, see the Greenplum Database server configuration parameter shared_buffers in the Greenplum Database Reference Guide.
  • Network - On high-volume Greenplum Database systems, certain network-related tuning parameters must be set to optimize network connections made by the Greenplum interconnect.
  • User Limits - User limits control the resources available to processes started by a user's shell. Greenplum Database requires a higher limit on the allowed number of file descriptors that a single process can have open. The default settings may cause some Greenplum Database queries to fail because they will run out of file descriptors needed to process the query.

Linux System Settings

  • Edit the /etc/hosts file and make sure that it includes the host names and all interface address names for every machine participating in your Greenplum Database system.
  • Set the following parameters in the /etc/sysctl.conf file and reload with sysctl -p:
    # kernel.shmall = _PHYS_PAGES / 2 # See Note 1
    kernel.shmall = 4000000000
    # kernel.shmmax = kernel.shmall * PAGE_SIZE # See Note 1
    kernel.shmmax = 500000000
    kernel.shmmni = 4096
    vm.overcommit_memory = 2
    vm.overcommit_ratio = 95 # See Note 2
    net.ipv4.ip_local_port_range = 10000 65535 # See Note 3
    kernel.sem = 500 2048000 200 40960
    kernel.sysrq = 1
    kernel.core_uses_pid = 1
    kernel.msgmnb = 65536
    kernel.msgmax = 65536
    kernel.msgmni = 2048
    net.ipv4.tcp_syncookies = 1
    net.ipv4.conf.default.accept_source_route = 0
    net.ipv4.tcp_max_syn_backlog = 4096
    net.ipv4.conf.all.arp_filter = 1
    net.core.netdev_max_backlog = 10000
    net.core.rmem_max = 2097152
    net.core.wmem_max = 2097152
    vm.swappiness = 10
    vm.zone_reclaim_mode = 0
    vm.dirty_expire_centisecs = 500
    vm.dirty_writeback_centisecs = 100
    vm.dirty_background_ratio = 0 # See Note 5
    vm.dirty_ratio = 0
    vm.dirty_background_bytes = 1610612736
    vm.dirty_bytes = 4294967296
    Note: The listed sysctl.conf parameters are for performance in a wide variety of environments. However, the settings might require changes in specific situations. These are additional notes about some of the sysctl.conf parameters.
    1. kernel.shmall sets the total number of shared memory pages that can be used system wide. kernel.shmmax sets the maximum size of a single shared memory segment in bytes. Set kernel.shmall and kernel.shmmax values based on your system's physical memory and page size. In general, the value for both parameters should be one half of the system physical memory. You can calculate the parameter values with the operating system variables _PHYS_PAGES and PAGE_SIZE.
      kernel.shmall = ( _PHYS_PAGES / 2)
      kernel.shmmax = ( _PHYS_PAGES / 2) * PAGE_SIZE
      You can run these two commands in a terminal window on your system to calculate the values for kernel.shmall and kernel.shmmax. The getconf command returns the value of an operating system variable.
      $ echo $(expr $(getconf _PHYS_PAGES) / 2) 
      $ echo $(expr $(getconf _PHYS_PAGES) / 2 \* $(getconf PAGE_SIZE))
    2. When vm.overcommit_memory is 2, you specify a value for vm.overcommit_ratio. For information about calculating the value for vm.overcommit_ratio when using resource queue-based resource management, see the Greenplum Database server configuration parameter gp_vmem_protect_limit in the Greenplum Database Reference Guide. If you are using resource group-based resource management, tune the operating system vm.overcommit_ratio as necessary. If your memory utilization is too low, increase the vm.overcommit_ratio value; if your memory or swap usage is too high, decrease the value.
    3. To avoid port conflicts between Greenplum Database and other applications when initializing Greenplum Database, do not specify Greenplum Database ports in the range specified by the operating system parameter net.ipv4.ip_local_port_range. For example, if net.ipv4.ip_local_port_range = 10000 65535, you could set the Greenplum Database base port numbers to these values.
      PORT_BASE = 6000
      MIRROR_PORT_BASE = 7000

      For information about the port ranges that are used by Greenplum Database, see gpinitsystem.

    5. Azure deployments require that Greenplum Database not use port 65330. Add the following line to sysctl.conf:

      net.ipv4.ip_local_reserved_ports=65330

      For additional requirements and recommendations for cloud deployments, see Greenplum Database Cloud Technical Recommendations.

    5. For host systems with more than 64GB of memory, these settings are recommended:
      vm.dirty_background_ratio = 0
      vm.dirty_ratio = 0
      vm.dirty_background_bytes = 1610612736 # 1.5GB
      vm.dirty_bytes = 4294967296 # 4GB
      For host systems with 64GB of memory or less, remove vm.dirty_background_bytes and vm.dirty_bytes and set the two ratio parameters to these values:
      vm.dirty_background_ratio = 3
      vm.dirty_ratio = 10
    6. Increase vm.min_free_kbytes to ensure PF_MEMALLOC requests from network and storage drivers are easily satisfied. This is especially critical on systems with large amounts of system memory. The default value is often far too low on these systems. Use this awk command to set vm.min_free_kbytes to a recommended 3% of system physical memory:
      awk 'BEGIN {OFMT = "%.0f";} /MemTotal/ {print "vm.min_free_kbytes =", $2 * .03;}' /proc/meminfo >> /etc/sysctl.conf

      Do not set vm.min_free_kbytes to higher than 5% of system memory as doing so might cause out of memory conditions.
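The shared memory calculation described in Note 1 can be combined into one short script. This sketch writes the computed values to a temporary file for review; appending them to /etc/sysctl.conf and running sysctl -p on the real host is left to you.

```shell
# Compute the recommended kernel.shmall and kernel.shmmax (half of
# physical memory) and write them to a temp file for review.
pages=$(getconf _PHYS_PAGES)
pagesz=$(getconf PAGE_SIZE)
shmall=$(( pages / 2 ))
shmmax=$(( shmall * pagesz ))
out=$(mktemp)
printf 'kernel.shmall = %s\nkernel.shmmax = %s\n' "$shmall" "$shmmax" > "$out"
cat "$out"
```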

  • Set the following parameters in the /etc/security/limits.conf file:
    * soft nofile 524288
    * hard nofile 524288
    * soft nproc 131072
    * hard nproc 131072

    For Red Hat Enterprise Linux (RHEL) and CentOS systems, parameter values in the /etc/security/limits.d/90-nproc.conf file (RHEL/CentOS 6) or /etc/security/limits.d/20-nproc.conf file (RHEL/CentOS 7) override the values in the limits.conf file. Ensure that any parameters in the override file are set to the required value. The Linux module pam_limits sets user limits by reading the values from the limits.conf file and then from the override file. For information about PAM and user limits, see the documentation on PAM and pam_limits.

    Execute the ulimit -u command on each segment host to display the maximum number of processes that are available to each user. Validate that the return value is 131072.

  • XFS is the preferred file system on Linux platforms for data storage. The following XFS mount options are recommended:

    rw,nodev,noatime,nobarrier,inode64

    See the manual page (man) for the mount command for more information about using that command (man mount opens the man page).

    The XFS options can also be set in the /etc/fstab file. This example entry from an fstab file specifies the XFS options.

    /dev/data /data xfs nodev,noatime,nobarrier,inode64 0 0
  • Each disk device file should have a read-ahead (blockdev) value of 16384.

    To verify the read-ahead value of a disk device:

    # /sbin/blockdev --getra devname

    For example:

    # /sbin/blockdev --getra /dev/sdb

    To set blockdev (read-ahead) on a device (the value is a count of 512-byte sectors):

    # /sbin/blockdev --setra value devname

    For example:

    # /sbin/blockdev --setra 16384 /dev/sdb

    See the manual page (man) for the blockdev command for more information about using that command (man blockdev opens the man page).

    Note: The blockdev --setra command is not persistent; it must be run every time the system reboots. How to run the command will vary based on your system, but you must ensure that the read-ahead setting is applied every time the system reboots.
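One way to persist the setting on systemd-based hosts is a oneshot unit that runs at boot. The unit name and /dev/sdb device below are assumptions for illustration only; this sketch writes the unit to a temporary path so you can review it before installing it under /etc/systemd/system.

```shell
# Hypothetical systemd unit: reapply read-ahead at boot. The device
# (/dev/sdb) and unit contents are placeholders; adjust for your disks.
unit=$(mktemp)
cat > "$unit" <<'EOF'
[Unit]
Description=Set disk read-ahead for Greenplum data devices
After=local-fs.target

[Service]
Type=oneshot
ExecStart=/sbin/blockdev --setra 16384 /dev/sdb

[Install]
WantedBy=multi-user.target
EOF
cat "$unit"
# To install (as root): copy to /etc/systemd/system/greenplum-readahead.service,
# then run: systemctl daemon-reload && systemctl enable greenplum-readahead.service
```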
  • The Linux disk I/O scheduler for disk access supports different policies, such as CFQ, AS, and deadline.

    The deadline scheduler option is recommended. To specify a scheduler until the next system reboot, run the following:

    # echo schedulername > /sys/block/devname/queue/scheduler

    For example:

    # echo deadline > /sys/block/sdb/queue/scheduler
    Note: Setting the disk I/O scheduler policy with the echo command is not persistent; you must ensure it is run each time the system reboots. How to run the command will vary based on your system.

    One method to set the I/O scheduler policy at boot time is with the elevator kernel parameter. Add the parameter elevator=deadline to the kernel command in the file /boot/grub/grub.conf, the GRUB boot loader configuration file. This is an example kernel command from a grub.conf file on RHEL 6.x or CentOS 6.x. The command is on multiple lines for readability.

    kernel /vmlinuz-2.6.18-274.3.1.el5 ro root=LABEL=/
        elevator=deadline crashkernel=128M@16M  quiet console=tty1
        console=ttyS1,115200 panic=30 transparent_hugepage=never 
        initrd /initrd-2.6.18-274.3.1.el5.img
    To specify the I/O scheduler at boot time on systems that use grub2 such as RHEL 7.x or CentOS 7.x, use the system utility grubby. This command adds the parameter when run as root.
    # grubby --update-kernel=ALL --args="elevator=deadline"

    After adding the parameter, reboot the system.

    This grubby command displays kernel parameter settings.
    # grubby --info=ALL

    For more information about the grubby utility, see your operating system documentation. If the grubby command does not update the kernels, see the Note at the end of the section.

  • Disable Transparent Huge Pages (THP). RHEL 6.0 or higher enables THP by default. THP degrades Greenplum Database performance. One way to disable THP on RHEL 6.x is by adding the parameter transparent_hugepage=never to the kernel command in the file /boot/grub/grub.conf, the GRUB boot loader configuration file. This is an example kernel command from a grub.conf file. The command is on multiple lines for readability:
    kernel /vmlinuz-2.6.18-274.3.1.el5 ro root=LABEL=/
        elevator=deadline crashkernel=128M@16M  quiet console=tty1
        console=ttyS1,115200 panic=30 transparent_hugepage=never 
        initrd /initrd-2.6.18-274.3.1.el5.img
    On systems that use grub2 such as RHEL 7.x or CentOS 7.x, use the system utility grubby. This command adds the parameter when run as root.
    # grubby --update-kernel=ALL --args="transparent_hugepage=never"

    After adding the parameter, reboot the system.

    For Ubuntu systems, install the hugepages package and execute this command as root:
    # hugeadm --thp-never
    This cat command checks the state of THP. The output indicates that THP is disabled.
    $ cat /sys/kernel/mm/*transparent_hugepage/enabled
    always [never]

    For more information about Transparent Huge Pages or the grubby utility, see your operating system documentation. If the grubby command does not update the kernels, see the Note at the end of the section.

  • Disable IPC object removal on RHEL 7.2, CentOS 7.2, or Ubuntu. The default systemd setting RemoveIPC=yes removes IPC connections when non-system user accounts log out. This causes the Greenplum Database utility gpinitsystem to fail with semaphore errors. Perform one of the following to avoid this issue.
    • When you add the gpadmin operating system user account to the master node in Creating the Greenplum Administrative User, create the user as a system account.
    • Disable RemoveIPC. Set this parameter in /etc/systemd/logind.conf on the Greenplum Database host systems.

      RemoveIPC=no

      The setting takes effect after restarting the systemd-logind service or rebooting the system. To restart the service, run this command as the root user.

      service systemd-logind restart
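The RemoveIPC change can be scripted. This sketch edits a temporary copy of logind.conf so it is safe to try anywhere: it uncomments and flips the setting if present, or appends it if absent. On a real host, target /etc/systemd/logind.conf as root and restart systemd-logind afterward.

```shell
conf=$(mktemp)
# Simulated stock logind.conf content (the real file ships the setting
# commented out).
printf '#RemoveIPC=yes\n' > "$conf"
if grep -q '^#\?RemoveIPC=' "$conf"; then
  sed -i 's/^#\?RemoveIPC=.*/RemoveIPC=no/' "$conf"
else
  echo 'RemoveIPC=no' >> "$conf"
fi
grep '^RemoveIPC=' "$conf"   # prints: RemoveIPC=no
```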
  • Certain Greenplum Database management utilities, including gpexpand, gpinitsystem, and gpaddmirrors, use secure shell (SSH) connections between systems to perform their tasks. In large Greenplum Database deployments, cloud deployments, or deployments with a large number of segments per host, these utilities may exceed the hosts' maximum threshold for unauthenticated connections. When this occurs, you receive errors such as: ssh_exchange_identification: Connection closed by remote host.

    To increase this connection threshold for your Greenplum Database system, update the SSH MaxStartups configuration parameter in one of the /etc/ssh/sshd_config or /etc/sshd_config SSH daemon configuration files.

    If you specify MaxStartups using a single integer value, you identify the maximum number of concurrent unauthenticated connections. For example:
    MaxStartups 200
    If you specify MaxStartups using the "start:rate:full" syntax, you enable random early connection drop by the SSH daemon. start identifies the maximum number of unauthenticated SSH connection attempts allowed. Once start number of unauthenticated connection attempts is reached, the SSH daemon refuses rate percent of subsequent connection attempts. full identifies the maximum number of unauthenticated connection attempts after which all attempts are refused. For example:
    MaxStartups 10:30:200
    Restart the SSH daemon after you update MaxStartups. For example, on a CentOS 6 system, run the following command as the root user:
    # service sshd restart

    For detailed information about SSH configuration options, refer to the SSH documentation for your Linux distribution.
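Updating MaxStartups can be scripted in the same way as the other configuration edits in this topic. This sketch operates on a temporary copy of sshd_config so it can be tried safely; on a real host, target /etc/ssh/sshd_config as root and restart sshd afterward.

```shell
cfg=$(mktemp)
# Simulated sshd_config content; the stock file ships MaxStartups
# commented out.
printf '#MaxStartups 10:30:100\n' > "$cfg"
if grep -q '^#\?MaxStartups' "$cfg"; then
  sed -i 's/^#\?MaxStartups.*/MaxStartups 10:30:200/' "$cfg"
else
  echo 'MaxStartups 10:30:200' >> "$cfg"
fi
grep '^MaxStartups' "$cfg"   # prints: MaxStartups 10:30:200
```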

Note: If the grubby command does not update the kernels of a RHEL 7.x or CentOS 7.x system, you can manually update all kernels on the system. For example, to add the parameter transparent_hugepage=never to all kernels on a system.
  1. Add the parameter to the GRUB_CMDLINE_LINUX line in the file /etc/default/grub.
    GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
    GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet transparent_hugepage=never"
  2. As root, run the grub2-mkconfig command to update the kernels.
    # grub2-mkconfig -o /boot/grub2/grub.cfg
  3. Reboot the system.

Synchronizing System Clocks

You should use NTP (Network Time Protocol) to synchronize the system clocks on all hosts that comprise your Greenplum Database system. For more information, see the NTP documentation.

NTP on the segment hosts should be configured to use the master host as the primary time source, and the standby master as the secondary time source. On the master and standby master hosts, configure NTP to point to your preferred time server.

To configure NTP

  1. On the master host, log in as root and edit the /etc/ntp.conf file. Set the server parameter to point to your data center's NTP time server.
  2. On each segment host, log in as root and edit the /etc/ntp.conf file. Set the first server parameter to point to the master host, and the second server parameter to point to the standby master host. For example:
    server mdw prefer
    server smdw
  3. On the standby master host, log in as root and edit the /etc/ntp.conf file. Set the first server parameter to point to the primary master host, and the second server parameter to point to your data center's NTP time server. For example:
    server mdw prefer
  4. On the master host, use the NTP daemon to synchronize the system clocks on all Greenplum hosts. For example, using gpssh:
    # gpssh -f hostfile_gpssh_allhosts -v -e 'ntpd'

Creating the Greenplum Administrative User

Create a dedicated operating system user account on each node to run and administer Greenplum Database. This user account is named gpadmin by convention.

Important: You cannot run the Greenplum Database server as root.

The gpadmin user must have permission to access the services and directories required to install and run Greenplum Database.

The gpadmin user on each Greenplum host must have an SSH key pair installed and be able to SSH from any host in the cluster to any other host in the cluster without entering a password or passphrase (called "passwordless SSH"). If you enable passwordless SSH from the master host to every other host in the cluster ("1-n passwordless SSH"), you can use the Greenplum Database gpssh-exkeys command-line utility later to enable passwordless SSH from every host to every other host ("n-n passwordless SSH").

You can optionally give the gpadmin user sudo privilege, so that you can easily administer all hosts in the Greenplum Database cluster as gpadmin using the sudo, ssh/scp, and gpssh/gpscp commands.

The following steps show how to set up the gpadmin user on a host, set a password, create an SSH key pair, and (optionally) enable sudo capability. These steps must be performed as root on every Greenplum Database cluster host. (For a large Greenplum Database cluster you will want to automate these steps using your system provisioning tools.)

Note: See Example Ansible Playbook for an example that shows how to automate the tasks of creating the gpadmin user and installing the Greenplum Database software on all hosts in the cluster.
  1. Create the gpadmin group and user.
    Note: If you are installing Greenplum Database on RHEL 7.2 or CentOS 7.2 and want to disable IPC object removal by creating the gpadmin user as a system account, provide both the -r option (create the user as a system account) and the -m option (create a home directory) to the useradd command. On Ubuntu systems, you must use the -m option with the useradd command to create a home directory for a user.
    This example creates the gpadmin group, creates the gpadmin user as a system account with a home directory and as a member of the gpadmin group, and creates a password for the user.
    # groupadd gpadmin
    # useradd gpadmin -r -m -g gpadmin
    # passwd gpadmin
    New password: <changeme>
    Retype new password: <changeme>
    Note: Make sure the gpadmin user has the same user id (uid) and group id (gid) numbers on each host to prevent problems with scripts or services that use them for identity or permissions. For example, backing up Greenplum databases to some networked file systems or storage appliances could fail if the gpadmin user has different uid or gid numbers on different segment hosts. When you create the gpadmin group and user, you can use the groupadd -g option to specify a gid number and the useradd -u option to specify the uid number. Use the command id gpadmin to see the uid and gid for the gpadmin user on the current host.
  2. Switch to the gpadmin user and generate an SSH key pair for the gpadmin user.
    $ su gpadmin
    $ ssh-keygen -t rsa -b 4096
    Generating public/private rsa key pair.
    Enter file in which to save the key (/home/gpadmin/.ssh/id_rsa):
    Created directory '/home/gpadmin/.ssh'.
    Enter passphrase (empty for no passphrase):
    Enter same passphrase again:
    At the passphrase prompts, press Enter so that SSH connections will not require entry of a passphrase.
  3. (Optional) Grant sudo access to the gpadmin user.
    On Red Hat or CentOS, run visudo and uncomment the %wheel group entry.
    %wheel        ALL=(ALL)       NOPASSWD: ALL

    Make sure you uncomment the line that has the NOPASSWD keyword.

    Add the gpadmin user to the wheel group with this command.

    # usermod -aG wheel gpadmin
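After the key pair exists, 1-n passwordless SSH is a matter of copying the master host's gpadmin public key to every other host. This dry-run sketch only prints the ssh-copy-id commands so you can review them; the host names are assumptions that follow the naming convention earlier in this topic, and the host file here is a temporary stand-in for your real host list (one host per line).

```shell
# Dry run: print the key-distribution commands for review. Replace the
# host list with your actual standby master and segment hosts.
hostfile=$(mktemp)
printf 'smdw\nsdw1\nsdw2\n' > "$hostfile"
while read -r host; do
  echo "ssh-copy-id gpadmin@$host"
done < "$hostfile"
```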