Author: Yang Wen
DBA responsible for customer project requirements and maintenance; works with a range of databases including, but not limited to, MySQL, Redis, Cassandra, GreenPlum, ClickHouse, Elastic, and TDSQL.
Source: original contribution
* Produced by the 爱可生 (ActionSky) open-source community. Original content may not be used without authorization; for reprints, please contact the editor and credit the source.
During a routine inspection, the customer found that although Greenplum was running normally, some segments were in an abnormal state. Greenplum stores each piece of data on a primary segment and a mirror segment; when a primary becomes abnormal, its mirror is activated automatically. To keep the data highly available, however, the failed segments should still be repaired promptly.
[gpadmin@master ~]$ psql -c "select * from gp_segment_configuration order by content asc, dbid;"
dbid|content|role|preferred_role|mode|status|port|hostname|address|datadir
------+---------+------+----------------+------+--------+-------+----------+----------+-----------------------------------
44|-1|p|p|s|u|5432|master|master|/greenplum/gpdata/master/gpseg-1
45|-1|m|m|s|u|5432|standby|standby|/greenplum/gpdata/master/gpseg-1
2|0|p|p|s|u|55000|data01|data01|/greenplum/gpdata/primary/gpseg0
11|0|m|m|s|u|56000|data02|data02|/greenplum/gpdata/mirror/gpseg0
3|1|p|p|s|u|55001|data01|data01|/greenplum/gpdata/primary/gpseg1
12|1|m|m|s|u|56001|data02|data02|/greenplum/gpdata/mirror/gpseg1
4|2|p|p|s|u|55002|data01|data01|/greenplum/gpdata/primary/gpseg2
13|2|m|m|s|u|56002|data02|data02|/greenplum/gpdata/mirror/gpseg2
5|3|p|p|s|u|55000|data02|data02|/greenplum/gpdata/primary/gpseg3
14|3|m|m|s|u|56000|data03|data03|/greenplum/gpdata/mirror/gpseg3
6|4|p|p|s|u|55001|data02|data02|/greenplum/gpdata/primary/gpseg4
15|4|m|m|s|u|56001|data03|data03|/greenplum/gpdata/mirror/gpseg4
7|5|p|p|s|u|55002|data02|data02|/greenplum/gpdata/primary/gpseg5
16|5|m|m|s|u|56002|data03|data03|/greenplum/gpdata/mirror/gpseg5
8|6|p|p|s|u|55000|data03|data03|/greenplum/gpdata/primary/gpseg6
17|6|m|m|s|u|56000|data01|data01|/greenplum/gpdata/mirror/gpseg6
9|7|p|p|s|u|55001|data03|data03|/greenplum/gpdata/primary/gpseg7
18|7|m|m|s|u|56001|data01|data01|/greenplum/gpdata/mirror/gpseg7
10|8|p|p|s|u|55002|data03|data03|/greenplum/gpdata/primary/gpseg8
19|8|m|m|s|u|56002|data01|data01|/greenplum/gpdata/mirror/gpseg8
21|9|m|p|s|d|55000|data04|data04|/greenplum/gpdata/primary/gpseg9
30|9|p|m|s|u|56000|data05|data05|/greenplum/gpdata/mirror/gpseg9
22|10|m|p|s|d|55001|data04|data04|/greenplum/gpdata/primary/gpseg10
31|10|p|m|s|u|56001|data05|data05|/greenplum/gpdata/mirror/gpseg10
23|11|m|p|s|d|55002|data04|data04|/greenplum/gpdata/primary/gpseg11
32|11|p|m|s|u|56002|data05|data05|/greenplum/gpdata/mirror/gpseg11
24|12|m|p|s|d|55000|data05|data05|/greenplum/gpdata/primary/gpseg12
27|12|p|m|s|u|56000|data04|data04|/greenplum/gpdata/mirror/gpseg12
25|13|m|p|s|d|55001|data05|data05|/greenplum/gpdata/primary/gpseg13
28|13|p|m|s|u|56001|data04|data04|/greenplum/gpdata/mirror/gpseg13
26|14|m|p|s|d|55002|data05|data05|/greenplum/gpdata/primary/gpseg14
29|14|p|m|s|u|56002|data04|data04|/greenplum/gpdata/mirror/gpseg14
33|15|m|p|s|d|55003|data01|data01|/greenplum/gpdata/primary/gpseg15
39|15|p|m|s|u|56003|data02|data02|/greenplum/gpdata/mirror/gpseg15
34|16|m|p|s|d|55003|data02|data02|/greenplum/gpdata/primary/gpseg16
40|16|p|m|s|u|56003|data03|data03|/greenplum/gpdata/mirror/gpseg16
35|17|m|p|s|d|55003|data03|data03|/greenplum/gpdata/primary/gpseg17
41|17|p|m|s|u|56003|data04|data04|/greenplum/gpdata/mirror/gpseg17
36|18|m|p|s|d|55003|data04|data04|/greenplum/gpdata/primary/gpseg18
42|18|p|m|s|u|56003|data05|data05|/greenplum/gpdata/mirror/gpseg18
37|19|m|p|s|d|55003|data05|data05|/greenplum/gpdata/primary/gpseg19
38|19|p|m|s|u|56003|data01|data01|/greenplum/gpdata/mirror/gpseg19
(42 rows)
As the output shows, 11 of the 42 segment instances are in the down state (status = d).
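To spot this without scanning the whole table, the status column can be filtered mechanically. A minimal sketch: on a live cluster the file would be produced with `psql -At -c "select dbid,content,role,preferred_role,mode,status from gp_segment_configuration"`; the two sample rows and the /tmp path below are only for illustration.

```shell
# Abbreviated sample of unaligned (-At) gp_segment_configuration output:
# dbid|content|role|preferred_role|mode|status
cat > /tmp/segs.txt <<'EOF'
2|0|p|p|s|u
21|9|m|p|s|d
EOF
# status is the 6th pipe-delimited field; 'd' means the instance is down
awk -F'|' '$6 == "d"' /tmp/segs.txt | wc -l
```

The same filter in SQL is simply `select count(*) from gp_segment_configuration where status = 'd';`.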
Next, inspect the data files inside each of the down segments' data directories (only one segment is shown here for comparison):
[gpadmin@data02 gpseg16]$ pwd
/greenplum/gpdata/primary/gpseg16
[gpadmin@data02 gpseg16]$ ls
base                pg_hba.conf    pg_serial     pg_utilitymodedtmredo
fts_probe_file.bak  pg_ident.conf  pg_snapshots  PG_VERSION
global              pg_log         pg_stat       pg_xlog
internal.auto.conf  pg_logical     pg_stat_tmp   postgresql.auto.conf
pg_clog             pg_multixact   pg_subtrans   postgresql.conf
pg_distributedlog   pg_notify      pg_tblspc     postmaster.opts
pg_dynshmem         pg_replslot    pg_twophase
All of the down segments are missing the postmaster.pid file.
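This check can be scripted across every segment directory on a host instead of inspecting them one by one. A minimal sketch, where the hypothetical /tmp/demo layout stands in for a real base directory such as /greenplum/gpdata/primary:

```shell
# Report segment data directories that lack a postmaster.pid.
# /tmp/demo stands in for the real base directory (e.g. /greenplum/gpdata/primary).
mkdir -p /tmp/demo/gpseg0 /tmp/demo/gpseg1
touch /tmp/demo/gpseg0/postmaster.pid   # simulate a healthy segment
for d in /tmp/demo/gpseg*; do
  [ -f "$d/postmaster.pid" ] || echo "missing postmaster.pid: $d"
done
```

On a real host the loop would run over `/greenplum/gpdata/primary/gpseg*` (and the mirror directory), optionally via `gpssh` to cover all hosts at once.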
For a clearer comparison, look at a segment that is in a normal state:
[gpadmin@data01 gpseg1]$ pwd
/greenplum/gpdata/primary/gpseg1
[gpadmin@data01 gpseg1]$ ls
base                pg_hba.conf    pg_serial     pg_utilitymodedtmredo
fts_probe_file.bak  pg_ident.conf  pg_snapshots  PG_VERSION
global              pg_log         pg_stat       pg_xlog
internal.auto.conf  pg_logical     pg_stat_tmp   postgresql.auto.conf
pg_clog             pg_multixact   pg_subtrans   postgresql.conf
pg_distributedlog   pg_notify      pg_tblspc     postmaster.opts
pg_dynshmem         pg_replslot    pg_twophase   postmaster.pid
[gpadmin@data01 gpseg1]$ cat postmaster.pid
20517
/greenplum/gpdata/primary/gpseg1
1652025705
55001
/tmp
*
 55001001393219
Note: it is often claimed that simply restarting the cluster fixes this problem. In practice a restart is not guaranteed to resolve it, and it interrupts the business workload, so we recover the segments online instead.
[gpadmin@master ~]$ gprecoverseg -o ./recover
20251127:22:10:22:020909 gprecoverseg:master:gpadmin-[INFO]:-Starting gprecoverseg with args: -o ./recover
20251127:22:10:22:020909 gprecoverseg:master:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 6.7.0 build commit:2fbc274bc15a19b5de3c6e44ad5073464cd4f47b'
20251127:22:10:22:020909 gprecoverseg:master:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 9.4.24 (Greenplum Database 6.7.0 build commit:2fbc274bc15a19b5de3c6e44ad5073464cd4f47b) on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 6.4.0, 64-bit compiled on Apr 16 2025 02:24:06'
20251127:22:10:22:020909 gprecoverseg:master:gpadmin-[INFO]:-Obtaining Segment details from master...
20251127:22:10:22:020909 gprecoverseg:master:gpadmin-[INFO]:-Configuration file output to ./recover successfully.
[gpadmin@master ~]$ ls
gpAdminLogs  recover
[gpadmin@master ~]$ more recover
data04|55000|/greenplum/gpdata/primary/gpseg9
data04|55001|/greenplum/gpdata/primary/gpseg10
data04|55002|/greenplum/gpdata/primary/gpseg11
data05|55000|/greenplum/gpdata/primary/gpseg12
data05|55001|/greenplum/gpdata/primary/gpseg13
data05|55002|/greenplum/gpdata/primary/gpseg14
data01|55003|/greenplum/gpdata/primary/gpseg15
data02|55003|/greenplum/gpdata/primary/gpseg16
data03|55003|/greenplum/gpdata/primary/gpseg17
data04|55003|/greenplum/gpdata/primary/gpseg18
data05|55003|/greenplum/gpdata/primary/gpseg19
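Each line of the generated file has the form <hostname>|<port>|<data directory>, one per failed primary. Before feeding it back to gprecoverseg it can be worth a quick sanity check that no line got truncated while editing; a small sketch over a sample copy of the file (the two sample lines mirror the output above):

```shell
# Verify every line of the recover file has exactly three pipe-delimited fields.
cat > /tmp/recover <<'EOF'
data04|55000|/greenplum/gpdata/primary/gpseg9
data04|55001|/greenplum/gpdata/primary/gpseg10
EOF
awk -F'|' 'NF != 3 { bad++ } END { print bad+0 " malformed lines" }' /tmp/recover
```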
[gpadmin@master ~]$ gprecoverseg -i ./recover -F
(most of the output is omitted; one prompt needs to be confirmed)
Continue with segment recovery procedure Yy|Nn (default=N):
> y
[gpadmin@master ~]$ psql -c "select * from gp_segment_configuration order by content asc, dbid;"
dbid|content|role|preferred_role|mode|status|port|hostname|address|datadir
------+---------+------+----------------+------+--------+-------+----------+----------+-----------------------------------
44|-1|p|p|s|u|5432|master|master|/greenplum/gpdata/master/gpseg-1
45|-1|m|m|s|u|5432|standby|standby|/greenplum/gpdata/master/gpseg-1
2|0|p|p|s|u|55000|data01|data01|/greenplum/gpdata/primary/gpseg0
11|0|m|m|s|u|56000|data02|data02|/greenplum/gpdata/mirror/gpseg0
3|1|p|p|s|u|55001|data01|data01|/greenplum/gpdata/primary/gpseg1
12|1|m|m|s|u|56001|data02|data02|/greenplum/gpdata/mirror/gpseg1
4|2|p|p|s|u|55002|data01|data01|/greenplum/gpdata/primary/gpseg2
13|2|m|m|s|u|56002|data02|data02|/greenplum/gpdata/mirror/gpseg2
5|3|p|p|s|u|55000|data02|data02|/greenplum/gpdata/primary/gpseg3
14|3|m|m|s|u|56000|data03|data03|/greenplum/gpdata/mirror/gpseg3
6|4|p|p|s|u|55001|data02|data02|/greenplum/gpdata/primary/gpseg4
15|4|m|m|s|u|56001|data03|data03|/greenplum/gpdata/mirror/gpseg4
7|5|p|p|s|u|55002|data02|data02|/greenplum/gpdata/primary/gpseg5
16|5|m|m|s|u|56002|data03|data03|/greenplum/gpdata/mirror/gpseg5
8|6|p|p|s|u|55000|data03|data03|/greenplum/gpdata/primary/gpseg6
17|6|m|m|s|u|56000|data01|data01|/greenplum/gpdata/mirror/gpseg6
9|7|p|p|s|u|55001|data03|data03|/greenplum/gpdata/primary/gpseg7
18|7|m|m|s|u|56001|data01|data01|/greenplum/gpdata/mirror/gpseg7
10|8|p|p|s|u|55002|data03|data03|/greenplum/gpdata/primary/gpseg8
19|8|m|m|s|u|56002|data01|data01|/greenplum/gpdata/mirror/gpseg8
21|9|m|p|s|u|55000|data04|data04|/greenplum/gpdata/primary/gpseg9
30|9|p|m|s|u|56000|data05|data05|/greenplum/gpdata/mirror/gpseg9
22|10|m|p|s|u|55001|data04|data04|/greenplum/gpdata/primary/gpseg10
31|10|p|m|s|u|56001|data05|data05|/greenplum/gpdata/mirror/gpseg10
23|11|m|p|s|u|55002|data04|data04|/greenplum/gpdata/primary/gpseg11
32|11|p|m|s|u|56002|data05|data05|/greenplum/gpdata/mirror/gpseg11
24|12|m|p|s|u|55000|data05|data05|/greenplum/gpdata/primary/gpseg12
27|12|p|m|s|u|56000|data04|data04|/greenplum/gpdata/mirror/gpseg12
25|13|m|p|s|u|55001|data05|data05|/greenplum/gpdata/primary/gpseg13
28|13|p|m|s|u|56001|data04|data04|/greenplum/gpdata/mirror/gpseg13
26|14|m|p|s|u|55002|data05|data05|/greenplum/gpdata/primary/gpseg14
29|14|p|m|s|u|56002|data04|data04|/greenplum/gpdata/mirror/gpseg14
33|15|m|p|s|u|55003|data01|data01|/greenplum/gpdata/primary/gpseg15
39|15|p|m|s|u|56003|data02|data02|/greenplum/gpdata/mirror/gpseg15
34|16|m|p|s|u|55003|data02|data02|/greenplum/gpdata/primary/gpseg16
40|16|p|m|s|u|56003|data03|data03|/greenplum/gpdata/mirror/gpseg16
35|17|m|p|s|u|55003|data03|data03|/greenplum/gpdata/primary/gpseg17
41|17|p|m|s|u|56003|data04|data04|/greenplum/gpdata/mirror/gpseg17
36|18|m|p|s|u|55003|data04|data04|/greenplum/gpdata/primary/gpseg18
42|18|p|m|s|u|56003|data05|data05|/greenplum/gpdata/mirror/gpseg18
37|19|m|p|s|u|55003|data05|data05|/greenplum/gpdata/primary/gpseg19
38|19|p|m|s|u|56003|data01|data01|/greenplum/gpdata/mirror/gpseg19
(42 rows)
All segment instances are now back in the up state.
[gpadmin@master ~]$ psql
psql (9.4.24)
Type "help" for help.
postgres=# \c test
You are now connected to database "test" as user "gpadmin".
test=# select gp_segment_id, count(*) from test_yw group by gp_segment_id;
 gp_segment_id | count
---------------+-------
1|384
13|396
14|403
9|429
10|376
16|364
12|389
6|414
0|426
15|426
3|404
19|411
4|409
2|393
8|410
18|407
7|407
11|420
5|346
17|386
(20 rows)
test=# \q
Every segment instance holds data and responds normally.
A closer look, however, reveals one remaining issue: although every segment is up, some roles are inverted. Instances whose preferred role is primary are currently acting as mirrors, and vice versa (role differs from preferred_role in the output above).
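These segments can be listed directly from the catalog with `select * from gp_segment_configuration where role <> preferred_role;`. The same filter applied offline to saved unaligned output looks like this (the three sample rows are abbreviated and illustrative):

```shell
# Fields: dbid|content|role|preferred_role -- flag rows where they differ.
cat > /tmp/roles.txt <<'EOF'
2|0|p|p
21|9|m|p
30|9|p|m
EOF
awk -F'|' '$3 != $4 { print "content " $2 ": role=" $3 " preferred=" $4 }' /tmp/roles.txt
```

Any rows this reports are exactly the segments that the rebalance step below will swap back to their preferred roles.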
[gpadmin@master ~]$ gprecoverseg -r
(most of the output is omitted; one prompt needs to be confirmed)
Continue with segment rebalance procedure Yy|Nn (default=N):
> y
[gpadmin@master ~]$ psql -c "select * from gp_segment_configuration order by content asc, dbid;"
dbid|content|role|preferred_role|mode|status|port|hostname|address|datadir
------+---------+------+----------------+------+--------+-------+----------+----------+-----------------------------------
44|-1|p|p|s|u|5432|master|master|/greenplum/gpdata/master/gpseg-1
45|-1|m|m|s|u|5432|standby|standby|/greenplum/gpdata/master/gpseg-1
2|0|p|p|s|u|55000|data01|data01|/greenplum/gpdata/primary/gpseg0
11|0|m|m|s|u|56000|data02|data02|/greenplum/gpdata/mirror/gpseg0
3|1|p|p|s|u|55001|data01|data01|/greenplum/gpdata/primary/gpseg1
12|1|m|m|s|u|56001|data02|data02|/greenplum/gpdata/mirror/gpseg1
4|2|p|p|s|u|55002|data01|data01|/greenplum/gpdata/primary/gpseg2
13|2|m|m|s|u|56002|data02|data02|/greenplum/gpdata/mirror/gpseg2
5|3|p|p|s|u|55000|data02|data02|/greenplum/gpdata/primary/gpseg3
14|3|m|m|s|u|56000|data03|data03|/greenplum/gpdata/mirror/gpseg3
6|4|p|p|s|u|55001|data02|data02|/greenplum/gpdata/primary/gpseg4
15|4|m|m|s|u|56001|data03|data03|/greenplum/gpdata/mirror/gpseg4
7|5|p|p|s|u|55002|data02|data02|/greenplum/gpdata/primary/gpseg5
16|5|m|m|s|u|56002|data03|data03|/greenplum/gpdata/mirror/gpseg5
8|6|p|p|s|u|55000|data03|data03|/greenplum/gpdata/primary/gpseg6
17|6|m|m|s|u|56000|data01|data01|/greenplum/gpdata/mirror/gpseg6
9|7|p|p|s|u|55001|data03|data03|/greenplum/gpdata/primary/gpseg7
18|7|m|m|s|u|56001|data01|data01|/greenplum/gpdata/mirror/gpseg7
10|8|p|p|s|u|55002|data03|data03|/greenplum/gpdata/primary/gpseg8
19|8|m|m|s|u|56002|data01|data01|/greenplum/gpdata/mirror/gpseg8
21|9|p|p|s|u|55000|data04|data04|/greenplum/gpdata/primary/gpseg9
30|9|m|m|s|u|56000|data05|data05|/greenplum/gpdata/mirror/gpseg9
22|10|p|p|s|u|55001|data04|data04|/greenplum/gpdata/primary/gpseg10
31|10|m|m|s|u|56001|data05|data05|/greenplum/gpdata/mirror/gpseg10
23|11|p|p|s|u|55002|data04|data04|/greenplum/gpdata/primary/gpseg11
32|11|m|m|s|u|56002|data05|data05|/greenplum/gpdata/mirror/gpseg11
24|12|p|p|s|u|55000|data05|data05|/greenplum/gpdata/primary/gpseg12
27|12|m|m|s|u|56000|data04|data04|/greenplum/gpdata/mirror/gpseg12
25|13|p|p|s|u|55001|data05|data05|/greenplum/gpdata/primary/gpseg13
28|13|m|m|s|u|56001|data04|data04|/greenplum/gpdata/mirror/gpseg13
26|14|p|p|s|u|55002|data05|data05|/greenplum/gpdata/primary/gpseg14
29|14|m|m|s|u|56002|data04|data04|/greenplum/gpdata/mirror/gpseg14
33|15|p|p|s|u|55003|data01|data01|/greenplum/gpdata/primary/gpseg15
39|15|m|m|s|u|56003|data02|data02|/greenplum/gpdata/mirror/gpseg15
34|16|p|p|s|u|55003|data02|data02|/greenplum/gpdata/primary/gpseg16
40|16|m|m|s|u|56003|data03|data03|/greenplum/gpdata/mirror/gpseg16
35|17|p|p|s|u|55003|data03|data03|/greenplum/gpdata/primary/gpseg17
41|17|m|m|s|u|56003|data04|data04|/greenplum/gpdata/mirror/gpseg17
36|18|p|p|s|u|55003|data04|data04|/greenplum/gpdata/primary/gpseg18
42|18|m|m|s|u|56003|data05|data05|/greenplum/gpdata/mirror/gpseg18
37|19|p|p|s|u|55003|data05|data05|/greenplum/gpdata/primary/gpseg19
38|19|m|m|s|u|56003|data01|data01|/greenplum/gpdata/mirror/gpseg19
(42 rows)
Now every segment is up and its current role matches its preferred role; the cluster state is fully correct.
Looking again at the data directory of a previously failed segment, the missing postmaster.pid file is back, and a recovery.done file has appeared as well:
[gpadmin@data02 gpseg16]$ pwd
/greenplum/gpdata/primary/gpseg16
[gpadmin@data02 gpseg16]$ ls
backup_label.old        pg_clog            pg_stat_tmp
base                    pg_distributedlog  pg_subtrans
fts_probe_file.bak      pg_dynshmem        pg_tblspc
global                  pg_hba.conf        pg_twophase
gpexpand.pid            pg_ident.conf      pg_utilitymodedtmredo
gpexpand.status         pg_log             PG_VERSION
gpexpand.status_detail  pg_logical         pg_xlog
gpmetrics               pg_multixact       postgresql.auto.conf
gpperfmon               pg_notify          postgresql.conf
gpsegconfig_dump        pg_replslot        postgresql.conf.bak
gpssh.conf              pg_serial          postmaster.opts
internal.auto.conf      pg_snapshots       postmaster.pid
internal.auto.conf.bak  pg_stat            recovery.done
[gpadmin@data02 gpseg16]$ more postmaster.pid
19572
/greenplum/gpdata/primary/gpseg16
1669556066
55003
/tmp
*
 55003001327680
[gpadmin@data02 gpseg16]$ more recovery.done
standby_mode = 'on'
primary_conninfo = 'user=gpadmin host=data03 port=56003 sslmode=prefer sslcompression=1 krbsrvname=postgres application_name=gp_walreceiver'
primary_slot_name = 'internal_wal_replication_slot'
Check the data one more time:
[gpadmin@master ~]$ psql -d test -c "select gp_segment_id, count(*) from test_yw group by gp_segment_id;"
As before, the data on every segment instance is intact and normal.