Steps to Remove a Node from Deepgreen/Greenplum

Introduction:

Neither Greenplum nor Deepgreen officially documents a method or recommendation for removing nodes, but in practice it can be done. Because of the uncertainty involved, removing a node can easily lead to other problems, so take a backup first and proceed with caution. The specific steps are as follows:

1. Check the current state of the database (12 segment instances)

[gpadmin@sdw1 ~]$ gpstate
20170816:12:53:25:097578 gpstate:sdw1:gpadmin-[INFO]:-Starting gpstate with args:
20170816:12:53:25:097578 gpstate:sdw1:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 4.3.99.00 build Deepgreen DB'
20170816:12:53:25:097578 gpstate:sdw1:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.2.15 (Greenplum Database 4.3.99.00 build Deepgreen DB) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.9.2 20150212 (Red Hat 4.9.2-6) compiled on Jul  6 2017 03:04:10'
20170816:12:53:25:097578 gpstate:sdw1:gpadmin-[INFO]:-Obtaining Segment details from master...
20170816:12:53:25:097578 gpstate:sdw1:gpadmin-[INFO]:-Gathering data from segments...
..
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-Greenplum instance status summary
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-----------------------------------------------------
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-   Master instance                                = Active
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-   Master standby                                 = No master standby configured
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-   Total segment instance count from metadata     = 12
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-----------------------------------------------------
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-   Primary Segment Status
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-----------------------------------------------------
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-   Total primary segments                         = 12
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-   Total primary segment valid (at master)        = 12
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-   Total primary segment failures (at master)     = 0
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-   Total number of postmaster.pid files missing   = 0
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-   Total number of postmaster.pid files found     = 12
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs missing    = 0
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs found      = 12
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-   Total number of /tmp lock files missing        = 0
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-   Total number of /tmp lock files found          = 12
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-   Total number postmaster processes missing      = 0
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-   Total number postmaster processes found        = 12
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-----------------------------------------------------
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-   Mirror Segment Status
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-----------------------------------------------------
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-   Mirrors not configured on this array
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-----------------------------------------------------

2. Back up the database in parallel

Use the gpcrondump command to back up the database. The backup itself is not covered in detail here; consult the documentation if you are unfamiliar with it.
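
As a rough illustration only (the database name mydb below is a placeholder, and the available options depend on your build; check gpcrondump --help), a full parallel dump can be triggered like this:

[gpadmin@sdw1 ~]$ gpcrondump -x mydb    # -x names the database to dump; "mydb" is a placeholder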

3. Shut down the database

[gpadmin@sdw1 ~]$ gpstop -M fast
20170816:12:54:10:097793 gpstop:sdw1:gpadmin-[INFO]:-Starting gpstop with args: -M fast
20170816:12:54:10:097793 gpstop:sdw1:gpadmin-[INFO]:-Gathering information and validating the environment...
20170816:12:54:10:097793 gpstop:sdw1:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20170816:12:54:10:097793 gpstop:sdw1:gpadmin-[INFO]:-Obtaining Segment details from master...
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-Greenplum Version: 'postgres (Greenplum Database) 4.3.99.00 build Deepgreen DB'
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:---------------------------------------------
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-Master instance parameters
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:---------------------------------------------
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-   Master Greenplum instance process active PID   = 31250
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-   Database                                       = template1
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-   Master port                                    = 5432
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-   Master directory                               = /hgdata/master/hgdwseg-1
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-   Shutdown mode                                  = fast
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-   Timeout                                        = 120
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-   Shutdown Master standby host                   = Off
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:---------------------------------------------
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-Segment instances that will be shutdown:
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:---------------------------------------------
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-   Host   Datadir                     Port    Status
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg0    25432   u
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg1    25433   u
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg2    25434   u
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg3    25435   u
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg4    25436   u
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg5    25437   u
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg6    25438   u
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg7    25439   u
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg8    25440   u
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg9    25441   u
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg10   25442   u
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg11   25443   u

Continue with Greenplum instance shutdown Yy|Nn (default=N):
> y
20170816:12:54:12:097793 gpstop:sdw1:gpadmin-[INFO]:-There are 0 connections to the database
20170816:12:54:12:097793 gpstop:sdw1:gpadmin-[INFO]:-Commencing Master instance shutdown with mode='fast'
20170816:12:54:12:097793 gpstop:sdw1:gpadmin-[INFO]:-Master host=sdw1
20170816:12:54:12:097793 gpstop:sdw1:gpadmin-[INFO]:-Detected 0 connections to database
20170816:12:54:12:097793 gpstop:sdw1:gpadmin-[INFO]:-Using standard WAIT mode of 120 seconds
20170816:12:54:12:097793 gpstop:sdw1:gpadmin-[INFO]:-Commencing Master instance shutdown with mode=fast
20170816:12:54:12:097793 gpstop:sdw1:gpadmin-[INFO]:-Master segment instance directory=/hgdata/master/hgdwseg-1
20170816:12:54:13:097793 gpstop:sdw1:gpadmin-[INFO]:-Attempting forceful termination of any leftover master process
20170816:12:54:13:097793 gpstop:sdw1:gpadmin-[INFO]:-Terminating processes for segment /hgdata/master/hgdwseg-1
20170816:12:54:13:097793 gpstop:sdw1:gpadmin-[INFO]:-No standby master host configured
20170816:12:54:13:097793 gpstop:sdw1:gpadmin-[INFO]:-Commencing parallel segment instance shutdown, please wait...
20170816:12:54:13:097793 gpstop:sdw1:gpadmin-[INFO]:-0.00% of jobs completed
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:-100.00% of jobs completed
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:-----------------------------------------------------
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:-   Segments stopped successfully      = 12
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:-   Segments with errors during stop   = 0
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:-----------------------------------------------------
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:-Successfully shutdown 12 of 12 segment instances
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:-Database successfully shutdown with no errors reported
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:-Cleaning up leftover gpmmon process
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:-No leftover gpmmon process found
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:-Cleaning up leftover gpsmon processes
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:-No leftover gpsmon processes on some hosts. not attempting forceful termination on these hosts
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:-Cleaning up leftover shared memory

4. Start the database in admin (master-only) mode

[gpadmin@sdw1 ~]$ gpstart -m
20170816:12:54:40:098061 gpstart:sdw1:gpadmin-[INFO]:-Starting gpstart with args: -m
20170816:12:54:40:098061 gpstart:sdw1:gpadmin-[INFO]:-Gathering information and validating the environment...
20170816:12:54:40:098061 gpstart:sdw1:gpadmin-[INFO]:-Greenplum Binary Version: 'postgres (Greenplum Database) 4.3.99.00 build Deepgreen DB'
20170816:12:54:40:098061 gpstart:sdw1:gpadmin-[INFO]:-Greenplum Catalog Version: '201310150'
20170816:12:54:40:098061 gpstart:sdw1:gpadmin-[INFO]:-Master-only start requested in configuration without a standby master.

Continue with master-only startup Yy|Nn (default=N):
> y
20170816:12:54:41:098061 gpstart:sdw1:gpadmin-[INFO]:-Starting Master instance in admin mode
20170816:12:54:42:098061 gpstart:sdw1:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20170816:12:54:42:098061 gpstart:sdw1:gpadmin-[INFO]:-Obtaining Segment details from master...
20170816:12:54:42:098061 gpstart:sdw1:gpadmin-[INFO]:-Setting new master era
20170816:12:54:42:098061 gpstart:sdw1:gpadmin-[INFO]:-Master Started...

5. Log in to the database in utility mode

[gpadmin@sdw1 ~]$ PGOPTIONS="-c gp_session_role=utility" psql -d postgres
psql (8.2.15)
Type "help" for help.

6. Delete the segment (here the last segment, dbid=13 / content=11, is removed so that the remaining content IDs stay contiguous)

postgres=# select * from gp_segment_configuration;
 dbid | content | role | preferred_role | mode | status | port  | hostname | address | replication_port | san_mounts
------+---------+------+----------------+------+--------+-------+----------+---------+------------------+------------
    1 |      -1 | p    | p              | s    | u      |  5432 | sdw1     | sdw1    |                  |
    2 |       0 | p    | p              | s    | u      | 25432 | sdw1     | sdw1    |                  |
    3 |       1 | p    | p              | s    | u      | 25433 | sdw1     | sdw1    |                  |
    4 |       2 | p    | p              | s    | u      | 25434 | sdw1     | sdw1    |                  |
    5 |       3 | p    | p              | s    | u      | 25435 | sdw1     | sdw1    |                  |
    6 |       4 | p    | p              | s    | u      | 25436 | sdw1     | sdw1    |                  |
    7 |       5 | p    | p              | s    | u      | 25437 | sdw1     | sdw1    |                  |
    8 |       6 | p    | p              | s    | u      | 25438 | sdw1     | sdw1    |                  |
    9 |       7 | p    | p              | s    | u      | 25439 | sdw1     | sdw1    |                  |
   10 |       8 | p    | p              | s    | u      | 25440 | sdw1     | sdw1    |                  |
   11 |       9 | p    | p              | s    | u      | 25441 | sdw1     | sdw1    |                  |
   12 |      10 | p    | p              | s    | u      | 25442 | sdw1     | sdw1    |                  |
   13 |      11 | p    | p              | s    | u      | 25443 | sdw1     | sdw1    |                  |
(13 rows)
postgres=# set allow_system_table_mods='dml';
SET
postgres=# delete from gp_segment_configuration where dbid=13;
DELETE 1
postgres=# select * from gp_segment_configuration;
 dbid | content | role | preferred_role | mode | status | port  | hostname | address | replication_port | san_mounts
------+---------+------+----------------+------+--------+-------+----------+---------+------------------+------------
    1 |      -1 | p    | p              | s    | u      |  5432 | sdw1     | sdw1    |                  |
    2 |       0 | p    | p              | s    | u      | 25432 | sdw1     | sdw1    |                  |
    3 |       1 | p    | p              | s    | u      | 25433 | sdw1     | sdw1    |                  |
    4 |       2 | p    | p              | s    | u      | 25434 | sdw1     | sdw1    |                  |
    5 |       3 | p    | p              | s    | u      | 25435 | sdw1     | sdw1    |                  |
    6 |       4 | p    | p              | s    | u      | 25436 | sdw1     | sdw1    |                  |
    7 |       5 | p    | p              | s    | u      | 25437 | sdw1     | sdw1    |                  |
    8 |       6 | p    | p              | s    | u      | 25438 | sdw1     | sdw1    |                  |
    9 |       7 | p    | p              | s    | u      | 25439 | sdw1     | sdw1    |                  |
   10 |       8 | p    | p              | s    | u      | 25440 | sdw1     | sdw1    |                  |
   11 |       9 | p    | p              | s    | u      | 25441 | sdw1     | sdw1    |                  |
   12 |      10 | p    | p              | s    | u      | 25442 | sdw1     | sdw1    |                  |
(12 rows)

7. Delete the filespace entry

postgres=# select * from pg_filespace_entry;
 fsefsoid | fsedbid |        fselocation
----------+---------+---------------------------
     3052 |       1 | /hgdata/master/hgdwseg-1
     3052 |       2 | /hgdata/primary/hgdwseg0
     3052 |       3 | /hgdata/primary/hgdwseg1
     3052 |       4 | /hgdata/primary/hgdwseg2
     3052 |       5 | /hgdata/primary/hgdwseg3
     3052 |       6 | /hgdata/primary/hgdwseg4
     3052 |       7 | /hgdata/primary/hgdwseg5
     3052 |       8 | /hgdata/primary/hgdwseg6
     3052 |       9 | /hgdata/primary/hgdwseg7
     3052 |      10 | /hgdata/primary/hgdwseg8
     3052 |      11 | /hgdata/primary/hgdwseg9
     3052 |      12 | /hgdata/primary/hgdwseg10
     3052 |      13 | /hgdata/primary/hgdwseg11
(13 rows)
postgres=#  delete from pg_filespace_entry where fsedbid=13;
DELETE 1
postgres=# select * from pg_filespace_entry;
 fsefsoid | fsedbid |        fselocation
----------+---------+---------------------------
     3052 |       1 | /hgdata/master/hgdwseg-1
     3052 |       2 | /hgdata/primary/hgdwseg0
     3052 |       3 | /hgdata/primary/hgdwseg1
     3052 |       4 | /hgdata/primary/hgdwseg2
     3052 |       5 | /hgdata/primary/hgdwseg3
     3052 |       6 | /hgdata/primary/hgdwseg4
     3052 |       7 | /hgdata/primary/hgdwseg5
     3052 |       8 | /hgdata/primary/hgdwseg6
     3052 |       9 | /hgdata/primary/hgdwseg7
     3052 |      10 | /hgdata/primary/hgdwseg8
     3052 |      11 | /hgdata/primary/hgdwseg9
     3052 |      12 | /hgdata/primary/hgdwseg10
(12 rows)

8. Exit admin mode and start the database normally

[gpadmin@sdw1 ~]$ gpstop -m
20170816:12:56:52:098095 gpstop:sdw1:gpadmin-[INFO]:-Starting gpstop with args: -m
20170816:12:56:52:098095 gpstop:sdw1:gpadmin-[INFO]:-Gathering information and validating the environment...
20170816:12:56:52:098095 gpstop:sdw1:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20170816:12:56:52:098095 gpstop:sdw1:gpadmin-[INFO]:-Obtaining Segment details from master...
20170816:12:56:52:098095 gpstop:sdw1:gpadmin-[INFO]:-Greenplum Version: 'postgres (Greenplum Database) 4.3.99.00 build Deepgreen DB'
20170816:12:56:52:098095 gpstop:sdw1:gpadmin-[INFO]:-There are 0 connections to the database
20170816:12:56:52:098095 gpstop:sdw1:gpadmin-[INFO]:-Commencing Master instance shutdown with mode='smart'
20170816:12:56:52:098095 gpstop:sdw1:gpadmin-[INFO]:-Master host=sdw1
20170816:12:56:52:098095 gpstop:sdw1:gpadmin-[INFO]:-Commencing Master instance shutdown with mode=smart
20170816:12:56:52:098095 gpstop:sdw1:gpadmin-[INFO]:-Master segment instance directory=/hgdata/master/hgdwseg-1
20170816:12:56:53:098095 gpstop:sdw1:gpadmin-[INFO]:-Attempting forceful termination of any leftover master process
20170816:12:56:53:098095 gpstop:sdw1:gpadmin-[INFO]:-Terminating processes for segment /hgdata/master/hgdwseg-1
[gpadmin@sdw1 ~]$ gpstart
20170816:12:57:02:098112 gpstart:sdw1:gpadmin-[INFO]:-Starting gpstart with args:
20170816:12:57:02:098112 gpstart:sdw1:gpadmin-[INFO]:-Gathering information and validating the environment...
20170816:12:57:02:098112 gpstart:sdw1:gpadmin-[INFO]:-Greenplum Binary Version: 'postgres (Greenplum Database) 4.3.99.00 build Deepgreen DB'
20170816:12:57:02:098112 gpstart:sdw1:gpadmin-[INFO]:-Greenplum Catalog Version: '201310150'
20170816:12:57:02:098112 gpstart:sdw1:gpadmin-[INFO]:-Starting Master instance in admin mode
20170816:12:57:03:098112 gpstart:sdw1:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20170816:12:57:03:098112 gpstart:sdw1:gpadmin-[INFO]:-Obtaining Segment details from master...
20170816:12:57:03:098112 gpstart:sdw1:gpadmin-[INFO]:-Setting new master era
20170816:12:57:03:098112 gpstart:sdw1:gpadmin-[INFO]:-Master Started...
20170816:12:57:03:098112 gpstart:sdw1:gpadmin-[INFO]:-Shutting down master
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:---------------------------
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-Master instance parameters
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:---------------------------
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-Database                 = template1
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-Master Port              = 5432
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-Master directory         = /hgdata/master/hgdwseg-1
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-Timeout                  = 600 seconds
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-Master standby           = Off
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:---------------------------------------
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-Segment instances that will be started
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:---------------------------------------
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-   Host   Datadir                     Port
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg0    25432
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg1    25433
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg2    25434
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg3    25435
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg4    25436
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg5    25437
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg6    25438
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg7    25439
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg8    25440
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg9    25441
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg10   25442

Continue with Greenplum instance startup Yy|Nn (default=N):
> y
20170816:12:57:07:098112 gpstart:sdw1:gpadmin-[INFO]:-Commencing parallel segment instance startup, please wait...
.......
20170816:12:57:14:098112 gpstart:sdw1:gpadmin-[INFO]:-Process results...
20170816:12:57:14:098112 gpstart:sdw1:gpadmin-[INFO]:-----------------------------------------------------
20170816:12:57:14:098112 gpstart:sdw1:gpadmin-[INFO]:-   Successful segment starts                                            = 11
20170816:12:57:14:098112 gpstart:sdw1:gpadmin-[INFO]:-   Failed segment starts                                                = 0
20170816:12:57:14:098112 gpstart:sdw1:gpadmin-[INFO]:-   Skipped segment starts (segments are marked down in configuration)   = 0
20170816:12:57:14:098112 gpstart:sdw1:gpadmin-[INFO]:-----------------------------------------------------
20170816:12:57:14:098112 gpstart:sdw1:gpadmin-[INFO]:-
20170816:12:57:14:098112 gpstart:sdw1:gpadmin-[INFO]:-Successfully started 11 of 11 segment instances
20170816:12:57:14:098112 gpstart:sdw1:gpadmin-[INFO]:-----------------------------------------------------
20170816:12:57:14:098112 gpstart:sdw1:gpadmin-[INFO]:-Starting Master instance sdw1 directory /hgdata/master/hgdwseg-1
20170816:12:57:15:098112 gpstart:sdw1:gpadmin-[INFO]:-Command pg_ctl reports Master sdw1 instance active
20170816:12:57:15:098112 gpstart:sdw1:gpadmin-[INFO]:-No standby master configured.  skipping...
20170816:12:57:15:098112 gpstart:sdw1:gpadmin-[INFO]:-Database successfully started

9. Restore the removed node's backup file into the current database with psql

psql -d postgres -f xxxx.sql  # the restore process itself is not covered in detail here
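
As a rough sketch only: with a gpcrondump backup, each segment writes its own dump file under its data directory (by default something like <segment_data_dir>/db_dumps/<YYYYMMDD>/; the exact path and file names depend on your dump options). The file belonging to the removed segment can then be replayed through the master, for example:

[gpadmin@sdw1 ~]$ ls /hgdata/primary/hgdwseg11/db_dumps/    # location is an assumption; depends on gpcrondump options
[gpadmin@sdw1 ~]$ gunzip -c /hgdata/primary/hgdwseg11/db_dumps/xxxx/gp_dump_xxxx.gz | psql -d postgres    # file name is a placeholder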

Notes:

1) In this article, only the data from the removed node is restored.

2) Running this procedure in reverse can add the removed node back, but restoring the data is time-consuming, roughly comparable to rebuilding the database and restoring it from scratch.
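
For reference, a minimal sketch of the reverse catalog change, assuming the segment's data directory /hgdata/primary/hgdwseg11 still exists and reusing the values from the catalog listings above. This would again have to be done in utility mode (steps 4-5), direct catalog inserts are just as unsupported as the deletes, and the segment's data would still need to be restored separately:

postgres=# set allow_system_table_mods='dml';
postgres=# insert into gp_segment_configuration (dbid, content, role, preferred_role, mode, status, port, hostname, address) values (13, 11, 'p', 'p', 's', 'u', 25443, 'sdw1', 'sdw1');   -- values taken from the listing in step 6
postgres=# insert into pg_filespace_entry (fsefsoid, fsedbid, fselocation) values (3052, 13, '/hgdata/primary/hgdwseg11');   -- values taken from the listing in step 7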
