Enabling Data Checksums in PostgreSQL


Normally, data checksums are enabled while initializing the PostgreSQL cluster, like this:

$ /usr/pgsql-14/bin/initdb -D /var/lib/pgsql/14/data/  --data-checksums

In this article we will look at enabling data checksums after the PostgreSQL cluster has already been initialized.

First we confirm that checksums are not enabled:

postgres=# show data_checksums ;
 data_checksums 
----------------
 off
(1 row)

# systemctl stop postgresql.service

--On RHEL, CentOS
$ /usr/pgsql-14/bin/pg_checksums --enable -D /var/lib/pgsql/14/data/
$ pg_controldata -D /var/lib/pgsql/14/data/ | grep checksum
# systemctl start postgresql.service
postgres=# show data_checksums ;
 data_checksums 
----------------
 on
(1 row)

--On Ubuntu, Debian
-- (pg_checksums ships with the postgresql-14 server package, so no extra package install is needed)
$ /usr/lib/postgresql/14/bin/pg_checksums --enable -D /var/lib/postgresql/14/main/
$ /usr/lib/postgresql/14/bin/pg_controldata -D /var/lib/postgresql/14/main/ | grep checksum
# systemctl start postgresql.service

postgres=# show data_checksums ;
 data_checksums 
----------------
 on
(1 row)

Disabling checksums on a PostgreSQL server (the server must be stopped):
pg_checksums --disable -D /var/lib/pgsql/14/data/
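pg_checksums can also verify existing checksums while the cluster is shut down; checking is its default mode of operation. A sketch, using the same paths as the examples above:

```shell
# verify checksums on a stopped cluster (--check is the default action)
/usr/pgsql-14/bin/pg_checksums --check -D /var/lib/pgsql/14/data/
```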

Taking Backups with pgBackRest


pgBackRest is an open source backup tool developed by CrunchyData. With pgBackRest, backups can be taken in two ways: onto the same server that hosts the PostgreSQL database being backed up, or onto a dedicated backup server that collects backups of PostgreSQL databases located on various remote servers. Both approaches are shown below:

  • Backing up to the server's own local storage
  • Backing up to a dedicated remote server
  • Incremental and differential backups
  • Restoring from a backup
  • Point-in-Time Recovery (PITR)
  • Some pgBackRest commands and configuration options

Backing up to the server's own local storage

We can install the pgBackRest packages from the PostgreSQL yum repository. To back up to the local storage of the server where PostgreSQL is installed, we set up pgBackRest as follows:

# yum install pgbackrest
# su - postgres

$ pgbackrest version
pgBackRest 2.32

The pgBackRest configuration file is located at /etc/pgbackrest.conf by default. We edit this file and add our own configuration:
[global]: this section defines the repository where backups and WAL segment archives are stored.
[mydb_stanza]: this section specifies where the database to be backed up is located, how the backup is taken, its retention period, and other properties.

# vi /etc/pgbackrest.conf
[global]
repo1-path=/var/lib/pgsql/12/backups
log-level-console=info
log-level-file=debug
start-fast=y

[mydb_stanza]
pg1-path=/var/lib/pgsql/12/data
repo1-retention-full=1
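Backups in the repository can optionally be encrypted as well; a hedged sketch of the extra [global] options (the passphrase below is a placeholder you must generate yourself):

```ini
[global]
repo1-cipher-type=aes-256-cbc
# placeholder: use a long random passphrase of your own
repo1-cipher-pass=CHANGE-ME-long-random-passphrase
```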

We enable archiving in the postgresql.conf file:

# vi /var/lib/pgsql/12/data/postgresql.conf
archive_mode = on
archive_command = 'pgbackrest --stanza=mydb_stanza archive-push %p'

# systemctl restart postgresql-12.service

Here, WAL segments are shipped to the archive with pgBackRest's archive-push command.
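Each file shipped by archive-push is a WAL segment whose 24-character hexadecimal name encodes a timeline ID, a log number, and a segment number. A small shell sketch of how such a name breaks down (the example name is made up):

```shell
# decode the three fields packed into a WAL segment file name
wal=000000010000000A000000FF   # made-up example name
timeline=${wal:0:8}            # timeline ID
log=${wal:8:8}                 # log (high bits of the LSN)
seg=${wal:16:8}                # segment number
echo "timeline=$timeline log=$log seg=$seg"
```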

Now let's create the stanza and check the configuration:

$ pgbackrest --stanza=mydb_stanza stanza-create
$ pgbackrest --stanza=mydb_stanza check

We take a full backup of the database as follows:

$ pgbackrest --stanza=mydb_stanza --type=full backup
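In practice these backup commands are usually scheduled rather than run by hand. A possible crontab for the postgres user, as a sketch (the timings are arbitrary; the stanza name is the one configured above):

```shell
# weekly full backup on Sunday night, differential backups on other nights
30 1 * * 0   pgbackrest --stanza=mydb_stanza --type=full backup
30 1 * * 1-6 pgbackrest --stanza=mydb_stanza --type=diff backup
```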

Backing up to a dedicated remote server

Let's look at another example where we back up from one server to another. The server being backed up must allow passwordless SSH access from the backup server. We set up passwordless SSH between the postgres users of the backup server and the database servers, and adjust SELinux and firewall settings according to security policy. Since this example is only a test, we disable SELinux and the firewall:

### on both servers
# systemctl stop firewalld  && systemctl disable firewalld && setenforce 0 && sed -i 's/enforcing/disabled/g' /etc/selinux/config
$ ssh-keygen
--From the backup server to the DB servers
$ ssh-copy-id postgres@mydb1
$ ssh-copy-id postgres@mydb2
$ ssh-copy-id postgres@mydb3
--From the DB servers to the backup server
$ ssh-copy-id postgres@backupsrv

We install and configure pgBackRest on the backup server. As the example below shows, we plan three stanzas here, i.e. backups of three different PostgreSQL databases:

# yum install pgbackrest
# vi /etc/pgbackrest.conf

[mydb1-stanza]
repo1-path=/backups/mydb1
pg1-host=192.168.1.100
repo1-host-user=postgres
pg1-path=/var/lib/pgsql/12/data
pg1-port=5432
compress=n
repo1-retention-full=1
start-fast=y
stop-auto=y
process-max=8
log-level-console=info
log-level-file=detail

[mydb2-stanza]
repo1-path=/backups/mydb2
pg1-host=192.168.1.101
repo1-host-user=postgres
pg1-path=/var/lib/pgsql/12/data
pg1-port=5432
compress=n
repo1-retention-full=1
start-fast=y
stop-auto=y
process-max=8
log-level-console=info
log-level-file=detail

[mydb3-stanza]
repo1-path=/backups/mydb3
pg1-host=192.168.1.103
repo1-host-user=postgres
pg1-path=/var/lib/pgsql/12/data
pg1-port=5432
compress=n
repo1-retention-full=1
start-fast=y
stop-auto=y
process-max=8
log-level-console=info
log-level-file=detail

Here:
process-max=8 sets the number of processes that run in parallel.
pg1-host=192.168.X.X is the IP address of the database server to be backed up.
Now we log in to the server hosting our mydb1 database, install pgbackrest, and edit the configuration file as below. Here repo1-host is the IP address of our backup server (the server where the backups will be stored):

[root@mydb1~]# yum install pgbackrest
[root@mydb1~]# vi /etc/pgbackrest.conf
[global]
repo1-host=192.168.1.200
repo1-host-user=postgres
log-level-console=info
log-level-file=detail
compress=n

[global:archive-push]
process-max=4
archive-async=y
log-level-console=info
log-level-file=info

[mydb1-stanza]
pg1-path=/var/lib/pgsql/12/data
pg1-port=5432

Now we make our changes to postgresql.conf on the servers to be backed up:

[root@mydb1 ~]# vim /var/lib/pgsql/12/data/postgresql.conf 
archive_mode = on
archive_command = 'pgbackrest --stanza=mydb1-stanza archive-push %p' 

We repeat the steps performed on mydb1 above in the same way on mydb2 and mydb3.

With the command 'pgbackrest --stanza=mydb1-stanza archive-push %p' we tell pgBackRest which stanza (i.e. which database's backup configuration) to use, and the archive-push command copies the WAL file into the archive. PostgreSQL replaces %p with the path of the WAL file to be archived before running the command.
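To make the %p substitution concrete, here is a tiny simulation of how the server expands archive_command before executing it (the WAL path is made up; PostgreSQL normally passes a path relative to the data directory):

```shell
# simulate PostgreSQL's %p expansion in archive_command (illustration only)
archive_command='pgbackrest --stanza=mydb1-stanza archive-push %p'
wal_path='pg_wal/000000010000000000000001'   # made-up relative WAL path
expanded=${archive_command//%p/$wal_path}
echo "$expanded"
```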

Now we create the stanza on our backup server:

[postgres@backupsrv~]$ pgbackrest --stanza=mydb1-stanza stanza-create

[postgres@backupsrv ~]$ cd /backups/mydb1
[postgres@backupsrv mydb1]$ ls
archive  backup
[postgres@backupsrv mydb1]$ cd backup/
[postgres@backupsrv backup]$ ls
mydb1-stanza
[postgres@backupsrv backup]$ cd mydb1-stanza/
[postgres@backupsrv mydb1-stanza]$ ls
backup.info  backup.info.copy

We restart the PostgreSQL instances we want to back up:

[root@mydb1~]# systemctl restart postgresql-12.service

We take a full backup of the database as follows:

## to take a full backup
[postgres@backupsrv]$ pgbackrest --stanza=mydb1-stanza --type=full backup

In addition, you can take differential and incremental backups of the database as follows.

## to take an incremental backup
[postgres@backupsrv]$ pgbackrest --stanza=mydb1-stanza --type=incr backup

## to take a differential backup
[postgres@backupsrv]$ pgbackrest --stanza=mydb1-stanza --type=diff backup

## to view information about backups
[postgres@backupsrv]$ pgbackrest --stanza=mydb1-stanza info

Restoring from a backup

Now let's say our mydb1 database has crashed and we want to restore from backup:

[root@mydb1 ~]# systemctl stop postgresql-12.service
[postgres@mydb1 ~]$  rm -rf /var/lib/pgsql/12/data/
[postgres@mydb1 ~]$ mkdir -m 700 /var/lib/pgsql/12/data


We restore the database from the most recent backup:

[postgres@mydb1 ~]$ pgbackrest --stanza=mydb1-stanza restore

If we have several backups, we can restore a specific one by specifying its name:

[postgres@mydb1 ]$ pgbackrest --stanza=mydb1-stanza --set=20210218-033202F restore
[root@mydb1 ~]# systemctl start postgresql-12.service
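For a full backup, the set name passed to --set encodes the backup's start timestamp plus a one-letter type suffix (F = full; diff and incr sets append to this label). A small shell sketch that unpacks one:

```shell
# unpack a pgbackrest full-backup label: YYYYMMDD-HHMMSS plus a type suffix
label="20210218-033202F"
datepart=${label%%-*}        # 20210218
rest=${label#*-}             # 033202F
timepart=${rest%?}           # 033202
suffix=${rest#"$timepart"}   # F
echo "${datepart:0:4}-${datepart:4:2}-${datepart:6:2} ${timepart:0:2}:${timepart:2:2}:${timepart:4:2} type=$suffix"
```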

Our PostgreSQL server may contain several databases, and we may want to restore only one of them from the backup. For example, below we restore only the testdb database:

# systemctl stop postgresql-12.service

$ pgbackrest --stanza=mydb1-stanza --delta --db-include=testdb restore

# systemctl start postgresql-12.service

Point-in-Time Recovery (PITR)

Now let's look at point-in-time recovery. If we take incremental backups of our database and need to roll back to a particular moment in time, PITR lets us restore the database to the required point as follows:

[root@mydb1 ~]# systemctl stop postgresql-12.service
[postgres@mydb1 ~]$  rm -rf /var/lib/pgsql/12/data/
[postgres@mydb1 ~]$ mkdir -m 700 /var/lib/pgsql/12/data

[postgres@mydb1 ~]$ pgbackrest --stanza=mydb1-stanza --type=time --target='2020-05-17 03:31:00' restore

Deleting stanzas and backups

To delete the backups and the stanza, we first stop pgBackRest processes for it:

$ pgbackrest --stanza=stanza-name --log-level-console=info stop

Then we delete the stanza and its backups with the command below. If there are several stanzas, we can use the --stanza parameter to filter the result by stanza name.

$ pgbackrest --stanza=stanza-name --log-level-console=info stanza-delete --force
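Backups are normally expired automatically according to the retention settings, but expiry can also be triggered by hand; a sketch (the stanza name is a placeholder):

```shell
# prune backups and archives that fall outside the configured retention
pgbackrest --stanza=stanza-name --log-level-console=info expire
```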

Upgrading PostgreSQL with pg_upgrade


In this article we will look at how to upgrade from PostgreSQL 12 to 13 using pg_upgrade.
First we install PostgreSQL 13 on the server where PostgreSQL 12 is running:

# dnf update -y
# dnf install -y postgresql13-server
# /usr/pgsql-13/bin/postgresql-13-setup initdb
# systemctl enable postgresql-13

We compare the configuration files of the old and the new cluster:

# diff /var/lib/pgsql/12/data/postgresql.conf /var/lib/pgsql/13/data/postgresql.conf
# diff /var/lib/pgsql/12/data/pg_hba.conf /var/lib/pgsql/13/data/pg_hba.conf

We check the clusters. When this command is run with the --check argument, no data is changed; it only tests whether the upgrade will succeed. If a problem would occur during the upgrade, one of the items in the test output will show FATAL, as below.

/usr/pgsql-13/bin/pg_upgrade \
--old-datadir=/var/lib/pgsql/12/data/ \
--new-datadir=/var/lib/pgsql/13/data/ \
--old-bindir=/usr/pgsql-12/bin \
--new-bindir=/usr/pgsql-13/bin \
--old-options '-c config_file=/var/lib/pgsql/12/data/postgresql.conf' \
--new-options '-c config_file=/var/lib/pgsql/13/data/postgresql.conf' \
--check


[postgres@pgtest01 ~]$ /usr/pgsql-13/bin/pg_upgrade --old-datadir=/var/lib/pgsql/12/data/ --new-datadir=/var/lib/pgsql/13/data/ --old-bindir=/usr/pgsql-12/bin --new-bindir=/usr/pgsql-13/bin --old-options '-c config_file=/var/lib/pgsql/12/data/postgresql.conf' --new-options '-c config_file=/var/lib/pgsql/13/data/postgresql.conf' --check
Performing Consistency Checks
-----------------------------
Checking cluster versions ok
Checking database user is the install user ok
Checking database connection settings ok
Checking for prepared transactions ok
Checking for reg* data types in user tables ok
Checking for contrib/isn with bigint-passing mismatch ok
Checking for presence of required libraries fatal

Your installation references loadable libraries that are missing from the
new installation. You can add these libraries to the new installation,
or remove the functions using them from the old installation. A list of
problem libraries is in the file:
loadable_libraries.txt

[postgres@pgtest01 ~]$ more loadable_libraries.txt
could not load library "$libdir/rum": ERROR: could not access file "$libdir/rum": No such file or directory
In database: contrib_regression
In database: postgres
could not load library "$libdir/powa": ERROR: could not access file "$libdir/powa": No such file or directory
In database: powa
could not load library "$libdir/hypopg": ERROR: could not access file "$libdir/hypopg": No such file or directory
In database: pagila
could not load library "$libdir/pg_cron": ERROR: could not access file "$libdir/pg_cron": No such file or directory
In database: postgres
could not load library "$libdir/pgaudit": ERROR: could not access file "$libdir/pgaudit": No such file or directory
In database: postgres
could not load library "$libdir/postgis-3": ERROR: could not access file "$libdir/postgis-3": No such file or directory
In database: postgres
could not load library "$libdir/pg_qualstats": ERROR: could not access file "$libdir/pg_qualstats": No such file or directory
In database: powa
could not load library "$libdir/pg_background": ERROR: could not access file "$libdir/pg_background": No such file or directory
In database: postgres
could not load library "$libdir/plpgsql_check": ERROR: could not access file "$libdir/plpgsql_check": No such file or directory
In database: postgres
could not load library "$libdir/pg_stat_kcache": ERROR: could not access file "$libdir/pg_stat_kcache": No such file or directory
In database: powa
could not load library "$libdir/pg_wait_sampling": ERROR: could not access file "$libdir/pg_wait_sampling": No such file or directory
In database: postgres
could not load library "$libdir/extra_window_functions": ERROR: could not access file "$libdir/extra_window_functions": No such file or directory

These errors appear because some of the extensions installed in our PostgreSQL 12 database do not yet have their PostgreSQL 13 builds installed, while others have not yet released a version for 13 at all. To fix this, we install the PostgreSQL 13 builds of the extensions, and drop those that have no version for PG13 (as this also shows, since many extensions had not yet released PostgreSQL 13 builds, there is no need to rush to version 13):

[root@pgtest01 ~]# dnf install postgresql13-contrib postgresql13-devel -y
[root@pgtest01 ~]# yum install hypopg_13 -y
postgres=# drop extension pg_cron;
DROP EXTENSION
postgres=# drop extension rum CASCADE ;

[postgres@pgtest01 data]$ /usr/pgsql-13/bin/pg_upgrade --old-datadir=/var/lib/pgsql/12/data/ --new-datadir=/var/lib/pgsql/13/data/ --old-bindir=/usr/pgsql-12/bin --new-bindir=/usr/pgsql-13/bin --old-options '-c config_file=/var/lib/pgsql/12/data/postgresql.conf' --new-options '-c config_file=/var/lib/pgsql/13/data/postgresql.conf' --check
Performing Consistency Checks on Old Live Server
------------------------------------------------
Checking cluster versions ok
Checking database user is the install user ok
Checking database connection settings ok
Checking for prepared transactions ok
Checking for reg* data types in user tables ok
Checking for contrib/isn with bigint-passing mismatch ok
Checking for presence of required libraries ok
Checking database user is the install user ok
Checking for prepared transactions ok

# systemctl stop postgresql-12.service

[postgres@pgtest01 data]$ /usr/pgsql-13/bin/pg_upgrade --old-datadir=/var/lib/pgsql/12/data/ --new-datadir=/var/lib/pgsql/13/data/ --old-bindir=/usr/pgsql-12/bin --new-bindir=/usr/pgsql-13/bin --old-options '-c config_file=/var/lib/pgsql/12/data/postgresql.conf' --new-options '-c config_file=/var/lib/pgsql/13/data/postgresql.conf'
Performing Consistency Checks
-----------------------------
Checking cluster versions ok
Checking database user is the install user ok
Checking database connection settings ok
Checking for prepared transactions ok
Checking for reg* data types in user tables ok
Checking for contrib/isn with bigint-passing mismatch ok
Creating dump of global objects ok
Creating dump of database schemas
ok
Checking for presence of required libraries ok
Checking database user is the install user ok
Checking for prepared transactions ok

If pg_upgrade fails after this point, you must re-initdb the
new cluster before continuing.

Performing Upgrade
------------------
Analyzing all rows in the new cluster ok
Freezing all rows in the new cluster ok
Deleting files from new pg_xact ok
Copying old pg_xact to new server ok
Setting next transaction ID and epoch for new cluster ok
Deleting files from new pg_multixact/offsets ok
Copying old pg_multixact/offsets to new server ok
Deleting files from new pg_multixact/members ok
Copying old pg_multixact/members to new server ok
Setting next multixact ID and offset for new cluster ok
Resetting WAL archives ok
Setting frozenxid and minmxid counters in new cluster ok
Restoring global objects in the new cluster ok
Restoring database schemas in the new cluster
ok
Copying user relation files
ok
Setting next OID for new cluster ok
Sync data directory to disk ok
Creating script to analyze new cluster ok
Creating script to delete old cluster ok

Upgrade Complete
----------------
Optimizer statistics are not transferred by pg_upgrade so,
once you start the new server, consider running:
./analyze_new_cluster.sh

Running this script will delete the old cluster's data files:
./delete_old_cluster.sh

In the configuration files we swap the ports and adjust any parameters we need to match the old setup, then start the PostgreSQL 13 service:

# vim /var/lib/pgsql/13/data/postgresql.conf
# change "port = 5433" to "port = 5432"

# vim /var/lib/pgsql/12/data/postgresql.conf
# change "port = 5432" to "port = 5433"
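The same port edits can be scripted with sed. The sketch below performs the substitution on a throwaway temp file so it is safe to try; against the real files you would target the two postgresql.conf paths above:

```shell
# demonstrate the port edit on a temp file instead of a live postgresql.conf
tmp=$(mktemp)
echo "port = 5433" > "$tmp"
sed -i 's/^port = 5433/port = 5432/' "$tmp"
result=$(cat "$tmp")
echo "$result"
rm -f "$tmp"
```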

# systemctl start postgresql-13.service

We run the generated analyze_new_cluster script on the new cluster:

# su - postgres
$ ./analyze_new_cluster.sh

We remove the old cluster:

# yum remove postgresql*12*
# rm -rf /var/lib/pgsql/12/
# su - postgres
$ ./delete_old_cluster.sh

Monitoring PostgreSQL with Zabbix


First we will look at installing Zabbix, an open source monitoring system, and configuring it to monitor PostgreSQL:

 

We disable SELinux or add exceptions for it:

# setsebool -P httpd_can_connect_zabbix on
# setsebool -P httpd_can_network_connect_db on
or
# vim /etc/sysconfig/selinux
SELINUX=disabled

We install and configure Apache:

# yum -y install httpd
# systemctl status httpd.service
# systemctl --now enable httpd.service 

We configure the required repositories:

# yum -y install epel-release
# yum install http://rpms.remirepo.net/enterprise/remi-release-7.rpm

# yum-config-manager --disable remi-php54
# yum-config-manager --enable remi-php72

We install PHP:

# yum install php php-pear php-cgi php-common php-mbstring php-snmp php-gd php-pecl-mysql php-xml php-mysql php-gettext php-bcmath
# vim /etc/php.ini
date.timezone = Asia/Baku

We install the Zabbix repository and Zabbix itself:

# rpm -ivh https://repo.zabbix.com/zabbix/4.4/rhel/7/x86_64/zabbix-release-4.4-1.el7.noarch.rpm

# yum -y install zabbix-server-pgsql zabbix-web-pgsql

The latest repository can be obtained from https://repo.zabbix.com/zabbix/.

We install and configure PostgreSQL as the database for storing Zabbix data.
We install the required Linux packages:

# yum -y install bash-completion
# yum install https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm
# yum install postgresql12
# yum install postgresql12-server postgresql12-contrib
# export PGSETUP_INITDB_OPTIONS="--data-checksums"
# /usr/pgsql-12/bin/postgresql-12-setup initdb
# firewall-cmd --zone=dmz --add-port=5432/tcp --permanent
# firewall-cmd --zone=public --add-port=5432/tcp --permanent
# firewall-cmd --reload
# systemctl enable --now postgresql-12

# vi /var/lib/pgsql/.pgsql_profile
   export PATH=$PATH:/usr/pgsql-12/bin
# vi /var/lib/pgsql/12/data/postgresql.conf
listen_addresses = '*'
# vi /var/lib/pgsql/12/data/pg_hba.conf 
# systemctl restart postgresql-12.service
# sudo -iu postgres createuser --pwprompt zabbix
# sudo -iu postgres createdb -O zabbix -E Unicode -T template0 zabbix
# zcat /usr/share/doc/zabbix-server-pgsql-4.4.7/create.sql.gz | sudo -u zabbix psql zabbix

The PHP configuration for the Zabbix frontend is located in the /etc/httpd/conf.d/zabbix.conf file; we edit the file as follows:

# vim /etc/httpd/conf.d/zabbix.conf
 
php_value max_execution_time 300
php_value memory_limit 128M
php_value post_max_size 16M
php_value upload_max_filesize 2M
php_value max_input_time 300
php_value always_populate_raw_post_data -1
php_value date.timezone Asia/Baku

# systemctl restart httpd.service

We set up the database configuration for the Zabbix server:

# vim /etc/zabbix/zabbix_server.conf
DBHost=
DBName=zabbix
DBUser=zabbix
DBPassword=123654

We start the Zabbix server:

# systemctl status zabbix-server.service
# systemctl --now enable zabbix-server

We edit the firewall rules:

# firewall-cmd --add-service={http,https} --permanent
# firewall-cmd --add-port={10051/tcp,10050/tcp} --permanent
# firewall-cmd --reload
# systemctl restart httpd

Now we can start setting up Zabbix:

http://<your_zabbix_server_ip>/zabbix/

We log in with the default user Admin and password zabbix.

Now we need to install the Zabbix agent on the server we want to monitor. If you want to install the agent manually, the latest agent version can be downloaded from https://repo.zabbix.com/zabbix. Here we will install it automatically from the repository:

# rpm -ivh https://repo.zabbix.com/zabbix/4.4/rhel/7/x86_64/zabbix-release-4.4-1.el7.noarch.rpm
# yum install zabbix-agent
# systemctl --now enable zabbix-agent

Now let's set up the agent's configuration file:

# vim  /etc/zabbix/zabbix_agentd.conf

Server=10.14.33.103
ServerActive=10.14.33.103
Hostname=pgtest.localdomain

# systemctl restart zabbix-agent

There are many Zabbix templates available on the internet; in this example we will use the Zabbix template created by GBD (https://gitlab.com/gunduzbilisim/pgspotter):

# yum install git -y
# cd /etc/zabbix/
# git clone https://gitlab.com/gunduzbilisim/pgspotter.git
# chown -R zabbix: /etc/zabbix/pgspotter
# cp /etc/zabbix/pgspotter/zabbix_agentd.d/pgspotter.conf    /etc/zabbix/zabbix_agentd.d/pgspotter.conf

[root@pgtest zabbix]# sudo -iu postgres psql
postgres=# CREATE USER pgspotter IN ROLE pg_monitor;
postgres=# GRANT  USAGE   ON SCHEMA public  TO pgspotter ;

# vi /var/lib/pgsql/12/data/pg_ident.conf
# MAPNAME       SYSTEM-USERNAME         PG-USERNAME
monitoring      zabbix                  pgspotter

# vi /var/lib/pgsql/12/data/pg_hba.conf
local    all      pgspotter     ident             map=monitoring
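A quick way to confirm the ident map works, once pg_hba.conf has been reloaded (user and role names as configured above):

```shell
# connect as the zabbix OS user, mapped to the pgspotter database role
sudo -u zabbix psql -U pgspotter -d postgres -c 'SELECT 1;'
```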

Now we can log in to the Zabbix web interface, import the pgSpotter-PostgreSQL_Template.xml template file, add our hosts under Hosts, create screens and dashboards to our liking, and monitor our hosts.

 
 
 

Connecting from PostgreSQL to Oracle, MS SQL Server, and MySQL databases to query tables (FDW)


1. Connecting to Oracle

The oracle_fdw extension (https://github.com/laurenz/oracle_fdw) is used to connect from PostgreSQL to Oracle.
First, on the server hosting PostgreSQL, we install the Oracle Instant Client and test connecting to the Oracle database with SQL*Plus:

$ sudo rpm -ivh oracle-instantclient12.1-basic-12.1.0.2.0-1.x86_64.rpm
$ sudo rpm -ivh oracle-instantclient12.1-sqlplus-12.1.0.2.0-1.x86_64.rpm 
$ sudo rpm -ivh oracle-instantclient12.1-devel-12.1.0.2.0-1.x86_64.rpm

export CLIENT_HOME=/usr/lib/oracle/12.1/client64/
export LD_LIBRARY_PATH=$CLIENT_HOME/lib
export PATH=$PATH:$CLIENT_HOME/bin

$ sqlplus system/your_pass@DB_IP_unvani:1521/iktexdb

SQL> SHOW CON_NAME
CON_NAME
------------------------------
IKTEXDB

We install the required packages:

# yum makecache
# yum install gcc gcc-c++ -y
# yum install postgresql12-devel
# yum install centos-release-scl-rh
# yum install llvm-toolset-7-llvm llvm-toolset-7-clang
# pg_config --version
If you get an error saying this command does not exist, add the PATH variable to the .bash_profile file as below.
The pg_config version must match the PostgreSQL version; pg_config ships with the postgresql12-devel package.
-bash-4.2$ vi ~/.bash_profile
export CLIENT_HOME=/usr/lib/oracle/12.1/client64
export LD_LIBRARY_PATH=$CLIENT_HOME/lib
export PATH=$PATH:$CLIENT_HOME/bin
export PATH="$PATH:/usr/pgsql-12/bin"

There are two ways to install oracle_fdw:
1 – the PostgreSQL non-free repository
2 – GitHub
To install it from the PostgreSQL non-free repository:

[root@pgtest ~]# yum install https://yum.postgresql.org/non-free/12/redhat/rhel-7-x86_64/pgdg-redhat-nonfree-repo-42.0-6.noarch.rpm
[root@pgtest ~]# yum install oracle_fdw12

The second way is to download and build oracle_fdw from the git repository:

$ wget https://github.com/laurenz/oracle_fdw/archive/ORACLE_FDW_2_2_0.tar.gz


$ tar -zxvf ORACLE_FDW_2_2_0.tar.gz 
$ su -
# cd /var/lib/pgsql/oracle_fdw-ORACLE_FDW_2_2_0
# make 
# make install
# systemctl restart postgresql-12.service

testdb=# SELECT name,default_version FROM pg_available_extensions WHERE name='oracle_fdw' \gx

testdb=# CREATE EXTENSION oracle_fdw ;
CREATE EXTENSION

To view information about the installed extensions:

testdb=# SELECT name,default_version FROM pg_available_extensions WHERE name='oracle_fdw' \gx
-[ RECORD 1 ]---+-----------
name            | oracle_fdw
default_version | 1.1


testdb=# \dew
List of foreign-data wrappers
-[ RECORD 1 ]-------------------
Name      | oracle_fdw
Owner     | postgres
Handler   | oracle_fdw_handler
Validator | oracle_fdw_validator

Now we create the foreign server we will connect to. In the OPTIONS clause we specify the remote server's address, port number, and instance name:

testdb=# CREATE SERVER oracle_srv FOREIGN DATA WRAPPER oracle_fdw OPTIONS (dbserver '//192.168.100.199:1521/iktexdb');
CREATE SERVER

In this example, PostgreSQL's postgres user will connect to Oracle as the iktex user:

testdb=# CREATE USER MAPPING FOR postgres SERVER oracle_srv OPTIONS (user 'iktex',password '123654');
CREATE USER MAPPING

We import the definitions of the remote database's tables into a separate schema:

testdb=# CREATE SCHEMA oracle_iktex;
CREATE SCHEMA

We import two of the remote database's tables:

testdb=# IMPORT FOREIGN SCHEMA "IKTEX" LIMIT TO (TBL1,TBL2) FROM SERVER oracle_srv INTO oracle_iktex;
IMPORT FOREIGN SCHEMA

Now let's query the tables located in Oracle from PostgreSQL:

testdb=# \det oracle_iktex.*
      List of foreign tables
    Schema    | Table |   Server   
--------------+-------+------------
 oracle_iktex | tbl1  | oracle_srv
 oracle_iktex | tbl2  | oracle_srv
(2 rows)


testdb=# select * from oracle_iktex.tbl1;
 id |   ad   |  soyad  
----+--------+---------
  1 | Valeh  | Agayev
  2 | Vusal  | Hasanli
  4 | Teyyub | Aliyev
  5 | Rashad | Javadov
(4 rows)

Besides selecting this data, we can also run INSERT, UPDATE, and DELETE operations on it:

testdb=#  insert into oracle_iktex.tbl1 values (6,'Samir','Babayev');
INSERT 0 1
testdb=# delete from oracle_iktex.tbl1 where ad='Valeh';
DELETE 1

2. Connecting to MS SQL Server

The tds_fdw extension (https://github.com/tds-fdw/tds_fdw) is used to connect from PostgreSQL to MS SQL Server.

[root@pgtest ~]# yum install freetds
[root@pgtest ~]# vi /etc/freetds.conf
[testsqlserver]
    host = 192.168.8.111
    instance = TESTSQLSERVER
    database = testdb
    tds version = 7.0

[root@pgtest ~]# tsql -S testsqlserver -U sa -P 123654
$ sudo yum install epel-release 
$ sudo yum install freetds-devel 
$ sudo yum install gcc make wget 
# curl https://tds-fdw.github.io/yum/tds_fdw.repo -o /etc/yum.repos.d/tds_fdw.repo
# yum clean all
# yum repolist
# yum install postgresql-12-tds_fdw -y
testdb=# CREATE EXTENSION tds_fdw;
CREATE EXTENSION
postgres=# CREATE SERVER sqlserver_srv FOREIGN DATA WRAPPER tds_fdw OPTIONS (servername 'testsqlserver');
CREATE SERVER

We connect with the SQL Server superuser:

testdb=# CREATE USER MAPPING FOR postgres SERVER sqlserver_srv  OPTIONS (username 'sa',password '123654');
CREATE USER MAPPING
testdb=# CREATE SCHEMA sql2pg_schema;
CREATE SCHEMA

We import the whole schema from SQL Server into the PostgreSQL schema we just created:

postgres=# IMPORT FOREIGN SCHEMA sqlsrv_schema FROM SERVER sqlserver_srv INTO sql2pg_schema;
IMPORT FOREIGN SCHEMA
postgres=# \det sql2pg_schema.*
List of foreign tables
-[ RECORD 1 ]---------
Schema | sql2pg_schema
Table  | Table_1
Server | sqlserver_srv

postgres=# select * from sql2pg_schema."Table_1" ;
NOTICE:  tds_fdw: Query executed correctly
NOTICE:  tds_fdw: Getting results
     id     |     ad     |   soyad    
------------+------------+------------
 1          | Valeh      | Agayev    
 1          | Taleh       | Agayev    
(2 rows)

 

3. Connecting to MySQL

The mysql_fdw extension (https://github.com/EnterpriseDB/mysql_fdw) is used to connect from PostgreSQL to MySQL.

-bash-4.2$ sudo yum install mysql-devel gcc gcc-devel python-devel
-bash-4.2$ wget https://github.com/EnterpriseDB/mysql_fdw/archive/REL-2_5_3.tar.gz
-bash-4.2$ tar -xvf REL-2_5_3.tar.gz 

-bash-4.2$ cd mysql_fdw-REL-2_5_3/
-bash-4.2$ make USE_PGXS=1
-bash-4.2$ make USE_PGXS=1 install
testdb=# CREATE EXTENSION mysql_fdw;
CREATE EXTENSION
testdb=# CREATE SERVER mysql_srv FOREIGN DATA WRAPPER mysql_fdw OPTIONS (host 'localhost',port '3306');
CREATE SERVER

We will connect with MySQL's superuser:

testdb=# CREATE USER MAPPING FOR postgres SERVER mysql_srv OPTIONS (username 'root',password 'password');
CREATE USER MAPPING

A foreign table can also be created manually instead of importing a schema:

CREATE FOREIGN TABLE testcedvel(
     id int,
     name text)
SERVER mysql_srv
     OPTIONS (dbname 'testdb', table_name 'testcedvel');


 


			

Migrating from Oracle to PostgreSQL (with Ora2Pg)


Today I want to write about one of the most widely used tools for migrating from Oracle to PostgreSQL.
Ora2Pg is Free Software written and maintained by Gilles Darold.
In this example the Oracle database and the PostgreSQL database are running on the same server.
I will show both variants of migration:

Variant 1 – Oracle and PostgreSQL running on the same server.

We are using:

  • Oracle 12.1 database
  • PostgreSQL 11.5
  • Oracle Enterprise Linux 6.7
  • DBD-Oracle latest version
  • DBD-Pg latest version
  • DBI latest version
  • ora2pg latest version

Installing pre-requisites:

Download and install Perl :

[root@ora2pg ~]# yum install perl-DBD-Pg perl perl-devel perl-DBI perl-CPAN -y

Installation
Install Database independent interface for Perl (DBI module)

The Perl module DBD::Oracle will be used for connecting to an Oracle database from Perl DBI. That is why we must first install the DBI module.
Download the latest DBI module for Perl from CPAN (https://www.cpan.org/modules/by-module/DBI/).

[root@ora2pg ~]# mkdir install
[root@ora2pg ~]# cd install
[root@ora2pg install]# wget https://www.cpan.org/modules/by-module/DBI/DBI-1.642.tar.gz
[root@ora2pg install]# tar xvzf DBI-1.642.tar.gz
[root@ora2pg install]# cd DBI-1.642
[root@ora2pg DBI-1.642]# perl Makefile.PL
[root@ora2pg DBI-1.642]# make
[root@ora2pg DBI-1.642]# make install

 

Install the Oracle database driver for the DBI module (DBD-Oracle)

[root@ora2pg ~]# export ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/db_1
[root@ora2pg ~]# export LD_LIBRARY_PATH=/u01/app/oracle/product/12.1.0.2/db_1/lib
[root@ora2pg ~]# perl -MCPAN -e shell
. . . . . .
Would you like me to configure as much as possible automatically? [yes]
. . . . . .
cpan[1]> get DBD::Oracle
. . . . . .
cpan[2]> quit

[root@ora2pg ~]# cd ~/.cpan/build/DBD-Oracle-1.80-zGJNXY/
[root@ora2pg DBD-Oracle-1.80-zGJNXY]# perl Makefile.PL
[root@ora2pg DBD-Oracle-1.80-zGJNXY]# make
[root@ora2pg DBD-Oracle-1.80-zGJNXY]# make install

Install the Postgres database driver for the DBI module (DBD-Pg)

[root@ora2pg ~]# cd install
[root@ora2pg install]# rm -rf *
[root@ora2pg install]# wget https://cpan.metacpan.org/authors/id/T/TU/TURNSTEP/DBD-Pg-3.10.0.tar.gz
[root@ora2pg install]# tar xvzf DBD-Pg-3.10.0.tar.gz
[root@ora2pg install]# cd DBD-Pg-3.10.0
[root@ora2pg DBD-Pg-3.10.0]# perl Makefile.PL
[root@ora2pg DBD-Pg-3.10.0]# make
[root@ora2pg DBD-Pg-3.10.0]# make install

And finally install Ora2Pg

[root@ora2pg ~]# cd install
[root@ora2pg install]# rm -rf *
[root@ora2pg install]# wget https://sourceforge.net/projects/ora2pg/files/20.0/ora2pg-20.0.tar.bz2
[root@ora2pg install]# bzip2 -d ora2pg-20.0.tar.bz2
[root@ora2pg install]# tar xvf ora2pg-20.0.tar
[root@ora2pg install]# cd ora2pg-20.0
[root@ora2pg ora2pg-20.0]# perl Makefile.PL
[root@ora2pg ora2pg-20.0]# make && make install

Configure Ora2Pg

By default Ora2Pg looks for the /etc/ora2pg/ora2pg.conf configuration file. Create the configuration file from the default template and make the oracle user its owner:

[root@ora2pg ~]# cd /etc/ora2pg/
[root@ora2pg ora2pg]# cp ora2pg.conf.dist ora2pg.conf
[root@ora2pg ora2pg]# chown oracle:oinstall /etc/ora2pg/ora2pg.conf
[root@ora2pg ora2pg]# vi ora2pg.conf

First we must find the following four Oracle parameters and change them:

ORACLE_HOME /u01/app/oracle/product/12.1.0.2/db_1
#If you use SID
ORACLE_DSN dbi:Oracle:host=localhost;sid=inzmapdb;port=1521
#If you use service name
ORACLE_DSN dbi:Oracle://localhost:1521/inzmapdb
ORACLE_USER system
ORACLE_PWD oracle

To define which objects we want to export, we must add the object type names after the TYPE keyword. With the example below, all tables, views, grants, tablespaces and their data will be exported:

TYPE              TABLE VIEW GRANT TABLESPACE COPY

There is much more detailed information in the conf file; you can read it and configure the export for your needs.
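If you prefer one output file per object type instead of a single combined run, ora2pg's -c, -t and -o switches can be driven from a loop. The sketch below only echoes the commands so it can be dry-run without an Oracle client; remove the leading echo to execute the exports for real.

```shell
#!/bin/sh
# Dry run: print one ora2pg invocation per export type.
# Drop the leading "echo" to actually run the exports.
for t in TABLE VIEW GRANT TABLESPACE COPY; do
    echo ora2pg -c /etc/ora2pg/ora2pg.conf -t "$t" -o "${t}_output.sql"
done
```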

#——————————————————————————
# EXPORT SECTION (Export type and filters)
#——————————————————————————

# Type of export. Values can be the following keyword:
# TABLE Export tables, constraints, indexes, …
# PACKAGE Export packages
# INSERT Export data from table as INSERT statement
# COPY Export data from table as COPY statement
# VIEW Export views
# GRANT Export grants
# SEQUENCE Export sequences
# TRIGGER Export triggers
# FUNCTION Export functions
# PROCEDURE Export procedures
# TABLESPACE Export tablespace (PostgreSQL >= 8 only)
# TYPE Export user defined Oracle types
# PARTITION Export range or list partition (PostgreSQL >= v8.4)
# FDW Export table as foreign data wrapper tables
# MVIEW Export materialized view as snapshot refresh view
# QUERY Convert Oracle SQL queries from a file.
# KETTLE Generate XML ktr template files to be used by Kettle.
# DBLINK Generate oracle foreign data wrapper server to use as dblink.
# SYNONYM Export Oracle’s synonyms as views on other schema’s objects.
# DIRECTORY Export Oracle’s directories as external_file extension objects.
# LOAD Dispatch a list of queries over multiple PostgreSQl connections.
# TEST perform a diff between Oracle and PostgreSQL database.
# TEST_VIEW perform a count on both sides of rows returned by views

We can also define an output file name where exported data will be saved:

OUTPUT                    output.sql

We can also set the SCHEMA parameter to the source DB schema names:

# Oracle schema/owner to use
SCHEMA                    ORCL_SCHEMA_NAME1  ORCL_SCHEMA_NAME2     ORCL_SCHEMA_NAME3

The PG_SCHEMA parameter sets the target schema name:

PG_SCHEMA                postgresql_schema_name
 
 

# Export Oracle schema to PostgreSQL schema
EXPORT_SCHEMA      1

 

Also set the parameters below:

ORA_INITIAL_COMMAND commit
DROP_FKEY 1
DISABLE_SEQUENCE 1
DISABLE_TRIGGERS USER
TRUNCATE_TABLE 1

 

In this example I am not going deep into the configuration; you can read the manual and the config file and adapt them to your needs: http://ora2pg.darold.net/documentation.html

To copy the data from Oracle to PostgreSQL, we first export the objects under the specified schema from Oracle, then import them into PostgreSQL, and finally copy the data. In this example I will import only tables and their data:

    • Export  from Oracle
      [root@ora2pg ora2pg]# ora2pg -d
      [2019-09-29 20:51:01] Ora2Pg version: 20.0
      [2019-09-29 20:51:01] Trying to connect to database: dbi:Oracle:host=localhost;sid=inzmapdb;port=1521
      [2019-09-29 20:51:01] Isolation level: SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
      [2019-09-29 20:51:01] Looking forward functions declaration in schema AAP.
      [2019-09-29 20:51:01] Looking at package pkg_administrative_area...
      [2019-09-29 20:51:02] Looking at package PKG_CONSTANT...
      [2019-09-29 20:51:02] Looking at package PKG_SEARCH...
      [2019-09-29 20:51:03] Looking at package PKG_SECURITY...
      [2019-09-29 20:51:03] Looking at package PKG_USER...
      [2019-09-29 20:51:03] Retrieving table information...
      [2019-09-29 20:56:17] [1] Scanning table ACTION_LOG (2912 rows)...
      [2019-09-29 20:56:17] [2] Scanning table ADDRESS (899 rows)...
      [2019-09-29 20:56:17] [3] Scanning table ADMINISTRATIVE_AREA (899 rows)...
      [2019-09-29 20:56:17] [4] Scanning table AREA (900 rows)...
      [2019-09-29 20:56:17] [5] Scanning table AREA_ADDRESS (900 rows)...
      [2019-09-29 20:56:17] [6] Scanning table AREA_CONTACT (84 rows)...
      [2019-09-29 20:56:17] [7] Scanning table AREA_NUMBER (900 rows)...
      :::::::::output  trimmed::::::::
      [2019-09-29 20:56:23] Dumping table ACTION_LOG...
      [2019-09-29 20:56:23] Dumping table ADDRESS...
      [2019-09-29 20:56:23] Dumping table ADMINISTRATIVE_AREA...
      [2019-09-29 20:56:23] Dumping table AREA...
      [2019-09-29 20:56:23] Dumping table AREA_ADDRESS...
      [2019-09-29 20:56:23] Dumping table AREA_CONTACT...
      :::::::::output  trimmed::::::::
      2019-09-29 22:29:44] Trying to connect to database: dbi:Oracle:host=localhost;sid=inzmapdb;port=1521
      [2019-09-29 22:29:44] Isolation level: SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
      [2019-09-29 22:29:44] Retrieving partitions information...
      [2019-09-29 22:30:02] Looking how to retrieve data from ACTION_LOG...
      [2019-09-29 22:30:02] DEGUG: Query sent to Oracle: SELECT "ID","ACTION_NAME","ACTION_DATE","USERS_ID","IP_ADDRESS","HOST_NAME","ACTION_DATA","STATUS" FROM "AAP"."ACTION_LOG" a
      [2019-09-29 22:30:02] Fetching all data from ACTION_LOG tuples...
      [2019-09-29 22:30:02] DEBUG: number of rows 2912 extracted from table ACTION_LOG
      [2019-09-29 22:30:02] DEBUG: Formatting bulk of 10000 data (real: 2912 rows) for PostgreSQL.
      [2019-09-29 22:30:03] DEBUG: Creating output for 10000 tuples
      [2019-09-29 22:30:03] Dumping data from ACTION_LOG to file: output.sql
      [2019-09-29 22:30:03] Extracted records from table ACTION_LOG: total_records = 2912 (avg: 2912 recs/sec)[> ] 2912/113806 total rows (2.6%) - (1 sec., avg: 2912 recs/sec).
      [2019-09-29 22:30:03] Looking how to retrieve data from ADDRESS...
      [2019-09-29 22:30:03] DEGUG: Query sent to Oracle: SELECT "ID","NAME","STATUS" FROM "AAP"."ADDRESS" a
      [2019-09-29 22:30:03] Fetching all data from ADDRESS tuples...
      [2019-09-29 22:30:03] DEBUG: number of rows 984 extracted from table ADDRESS
      [2019-09-29 22:30:03] DEBUG: Formatting bulk of 10000 data (real: 984 rows) for PostgreSQL.
      [2019-09-29 22:30:03] DEBUG: Creating output for 10000 tuples
      [2019-09-29 22:30:03] Dumping data from ADDRESS to file: output.sql
      [2019-09-29 22:30:03] Extracted records from table ADDRESS: total_records = 984 (avg: 984 recs/sec)
      [2019-09-29 22:30:32] Total time to export data from 32 tables (0 partitions, 0 sub-partitions) and 113806 total rows: 48 wallclock secs (18.43 usr + 0.12 sys = 18.55 CPU)
      [2019-09-29 22:30:32] Speed average: 2370.96 rows/sec

    • Import to Postgres
      
      

      The import syntax is as below:

      psql -U postgresql_username -d pg_database_name < export_file_name.sql

 

[root@ora2pg ~]# su - postgres
-bash-4.1$ psql
postgres=# CREATE DATABASE testdb;
postgres=# \c testdb
testdb=# CREATE SCHEMA inzmapdb;
testdb=# \q

-bash-4.1$ cd /etc/ora2pg/
-bash-4.1$ psql -U postgres -d testdb < TABLE_output.sql
SET
SET
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
:::::::::output  trimmed::::::::
:::::::::output  trimmed::::::::
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE

-bash-4.1$ psql -U postgres -d testdb < COPY_output.sql
SET
SET
SET
SET
SET
SET
SET
SET
SET
:::::::::output  trimmed::::::::
SET
SET
SET
SET
SET
SET
SET
SET
SET
SET
SET
SET
SET
SET
WARNING: there is no transaction in progress
COMMIT

 

Now, let’s check imported data:

postgres=# \c testdb

testdb=# SELECT count(*) from inzmapdb.action_log ;
count
-------
2912
(1 row)

Later I will add more details, to be continued…

PostgreSQL 11 Streaming Replication/Hot Standby



To create PostgreSQL replication, we must first install PostgreSQL on the primary and standby servers.

On Primary and Standby:

[root@localhost ]# yum -y install epel-release
[root@localhost ]# yum -y install bash-completion
[root@localhost ]# yum -y install https://download.postgresql.org/pub/repos/yum/11/redhat/rhel-7-x86_64/pgdg-centos11-11-2.noarch.rpm
[root@localhost ]# yum install postgresql11-server
[root@localhost ]# yum install postgresql11-contrib

On Primary:

[root@localhost ]# /usr/pgsql-11/bin/postgresql-11-setup initdb
Initializing database ... OK

[root@localhost ]# systemctl enable postgresql-11.service
[root@localhost ]# systemctl start postgresql-11.service

The combination of hot standby and streaming replication makes the latest data inserted into the primary visible on the standby almost immediately. Now let's change postgresql.conf on the primary to create a hot standby with streaming replication:

[root@localhost ]# vi /var/lib/pgsql/11/data/postgresql.conf

wal_level = replica
max_wal_senders=10
wal_keep_segments=256
archive_mode=on
archive_command='/usr/pgsql-11/bin/syncwal.sh %p %f'
listen_addresses = '*'

Create bash script which will copy WAL files from primary to standby:

[root@localhost ]# vi /usr/pgsql-11/bin/syncwal.sh
#!/bin/bash
# Copy the completed WAL file (%p) to the standby's archive directory,
# keeping its file name (%f). A non-zero exit tells PostgreSQL to retry.
scp "$1" 192.168.2.129:/var/lib/pgsql/11/walarchive/"$2"
if [ $? != 0 ]; then
    echo "Archiver error: $1"
    exit 1
fi
exit 0

[root@localhost ~]# chown postgres: /usr/pgsql-11/bin/syncwal.sh
[root@localhost ~]# chmod 700 /usr/pgsql-11/bin/syncwal.sh
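To check the script's copy-or-fail contract without a standby host, you can rehearse it locally with cp standing in for scp; all paths and the WAL file name below are made up for illustration:

```shell
#!/bin/sh
# Local rehearsal of the archive_command contract: copy the WAL file,
# report an error and exit non-zero if the copy fails.
archive_dir=$(mktemp -d)
wal=$(mktemp)
printf 'wal-bytes' > "$wal"

# cp stands in for scp; PostgreSQL's %p/%f are simulated with variables.
if cp "$wal" "$archive_dir/000000010000000000000001"; then
    echo "archived"
else
    echo "Archiver error: $wal"
    exit 1
fi
rm -rf "$archive_dir" "$wal"
```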

Now we will create a special user for replication and revoke REPLICATION grant from “postgres” superuser to secure our replication.

[root@localhost bin]# su - postgres
-bash-4.2$ psql
postgres=# CREATE ROLE replicauser WITH REPLICATION LOGIN ENCRYPTED PASSWORD 'test';
CREATE ROLE 
postgres=# ALTER ROLE postgres NOREPLICATION;
ALTER ROLE

Then edit the pg_hba.conf file on the primary server and add the line below to give the walreceiver on the standby access:

[root@localhost bin]# vi /var/lib/pgsql/11/data/pg_hba.conf
host     replication         replicauser          192.168.2.129/32        md5

After the change, restart PostgreSQL:

[root@localhost ~]# systemctl restart postgresql-11.service
[root@localhost ~]# systemctl stop firewalld.service
[root@localhost ~]# systemctl disable firewalld.service

Disable SELinux or set it to permissive mode.

On standby:

[root@localhost ~]# mkdir -m 700 /var/lib/pgsql/11/walarchive
[root@localhost 11]# chown postgres: /var/lib/pgsql/11/walarchive

On each host, generate an SSH key as the postgres user and add the public key to the other host's authorized_keys file:

-bash-4.2$ ssh-keygen -t rsa
-bash-4.2$ touch ~/.ssh/authorized_keys
-bash-4.2$ chmod 600 ~/.ssh/authorized_keys

Take a base backup of the primary onto the standby:

-bash-4.2$ /usr/pgsql-11/bin/pg_basebackup -D /var/lib/pgsql/11/data/ -c fast -X fetch -P -Fp -R -h 192.168.2.113 -p 5432 -U replicauser
Password: ****
40952/40952 kB (100%), 1/1 tablespace

Change standby config file like below:

-bash-4.2$ vi /var/lib/pgsql/11/data/postgresql.conf
hot_standby = on
hot_standby_feedback=on

Then edit recovery.conf file and add lines below:

-bash-4.2$ vi /var/lib/pgsql/11/data/recovery.conf
standby_mode = 'on'
primary_conninfo = 'user=replicauser password=test host=192.168.2.113 port=5432 scram_channel_binding=''tls-unique'' sslmode=prefer sslcompression=0 krbsrvname=postgres target_session_attrs=any'
restore_command = 'cp /var/lib/pgsql/11/walarchive/%f %p'
archive_cleanup_command='/usr/pgsql-11/bin/pg_archivecleanup /var/lib/pgsql/11/walarchive %r'
trigger_file = '/var/lib/pgsql/11/data/finish.replication'
recovery_target_timeline = 'latest'

Start database and enable autostart:

[root@localhost 11]# systemctl start postgresql-11.service
[root@localhost 11]# systemctl enable postgresql-11.service

Now create a new database and some tables on the primary to test the standby; you will see them applied on the standby almost immediately.

 

Directory index full!


Today I encountered a short system hang in my 2-node RAC database. While investigating the cause of the problem, I saw the warning "Kernel: EXT4-fs warning (device sda1): ext4_dx_add_entry:2021: Directory index full!" in the /var/log/messages file.

Jan 22 14:45:09 mydb1 kernel: EXT4-fs warning (device sda1): ext4_dx_add_entry:2021: Directory index full!
Jan 22 14:45:09 mydb1 kernel: EXT4-fs warning (device sda1): ext4_dx_add_entry:2021: Directory index full!
Jan 22 14:45:09 mydb1 kernel: EXT4-fs warning (device sda1): ext4_dx_add_entry:2021: Directory index full!
Jan 22 14:45:16 mydb1 kernel: EXT4-fs warning (device sda1): ext4_dx_add_entry:2021: Directory index full!
Jan 22 14:45:17 mydb1 kernel: EXT4-fs warning (device sda1): ext4_dx_add_entry:2021: Directory index full!
Jan 22 14:45:17 mydb1 kernel: EXT4-fs warning (device sda1): ext4_dx_add_entry:2021: Directory index full!
Jan 22 14:45:17 mydb1 kernel: EXT4-fs warning (device sda1): ext4_dx_add_entry:2021: Directory index full!
Jan 22 14:45:17 mydb1 kernel: EXT4-fs warning (device sda1): ext4_dx_add_entry:2021: Directory index full!

The "Directory index full!" warning means there is a problem with inodes.
To check inode usage:

[oracle@esddb1 ~]$ df -i

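Since the interesting column in df -i output is IUse%, a small awk filter can flag filesystems above a threshold; the sample output below is made up for illustration:

```shell
#!/bin/sh
# Flag filesystems whose inode usage is at or above 20% (sample df -i data).
sample='Filesystem Inodes IUsed IFree IUse% Mounted
/dev/sda1 3276800 720896 2555904 22% /
/dev/sdb1 6553600 65536 6488064 1% /u01'
printf '%s\n' "$sample" | awk 'NR > 1 { sub(/%/, "", $5); if ($5 + 0 >= 20) print $6, $5 "%" }'
```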

Here we see that 22% of the inodes on the root filesystem are used.
Let's look at what the Red Hat knowledgebase says about this warning:

  • The ‘directory index full’ error will be seen if there are lots of files/directories in the filesystem so that the tree reaches its indexing limits and cannot keep track further.
  • There is a limit in ext4 of the directory structure, a directory on ext4 can have at most 64000 subdirectories.

I found that about 14 million *.aud files had been generated under the /u01/app/12.1.0.2/grid/rdbms/audit/ directory. The loop below counts the files in each subdirectory:

# for dir in `ls -1`; do echo $dir; find ./$dir -type f|wc -l; done

The cause of this problem is the audit files created for every connection made as the SYS user. These files exist for security compliance reasons, but old *.aud files are not needed by any Oracle ASM process and can be deleted without any impact on the system.
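Before deleting anything on a production host, the retention rule can be rehearsed in a scratch directory; this sketch assumes GNU touch/find:

```shell
#!/bin/sh
# Rehearse "-mtime +10": files older than 10 days are deleted, fresh ones kept.
tmp=$(mktemp -d)
touch -d "20 days ago" "$tmp/old1.aud" "$tmp/old2.aud"
touch "$tmp/new.aud"
find "$tmp" -maxdepth 1 -name '*.aud' -mtime +10 -delete -print
ls "$tmp"    # only new.aud remains
rm -rf "$tmp"
```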

You can clean old *.aud files like below:

[oracle@mydb1 ~]$ cd /u01/app/12.1.0.2/grid/rdbms/audit
[oracle@mydb1 audit]$ find /u01/app/12.1.0.2/grid/rdbms/audit -maxdepth 1 -name '*.aud' -mtime +10 -delete -print

After the cleanup, we can see that the percentage of free inodes increased and there are no more WARNING messages in the /var/log/messages file:

[oracle@mydb1 ~]$ df -i 
[oracle@mydb1 ~]$ less /var/log/messages

 

Data Pump job fails with error – ORA-31634: job already exists


Today, when I wanted to export a schema with Data Pump, I encountered the error ORA-31634: job already exists. I queried the DBA_DATAPUMP_JOBS view and saw that it listed 99 jobs in the NOT RUNNING state.

SELECT owner_name,
job_name,
operation,
job_mode,
state
FROM dba_datapump_jobs;

When we do not specify a job name for a Data Pump job, Oracle generates a default name, and Data Pump can generate at most 99 unique default names. When the job name already exists, or you are running many expdp jobs at the same time (more than 99), Data Pump cannot generate a unique name and you get this error. This can also happen when jobs are aborted, or when KEEP_MASTER=y is used, because the master table records stay behind.
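The 99-job ceiling follows from the default naming scheme visible in the DROP statements below: unnamed schema exports get names of the form SYS_EXPORT_SCHEMA_NN with NN running from 01 to 99. A quick shell illustration of that name space:

```shell
#!/bin/sh
# Enumerate the possible default export-job names; there are exactly 99.
i=1
while [ "$i" -le 99 ]; do
    printf 'SYS_EXPORT_SCHEMA_%02d\n' "$i"
    i=$((i + 1))
done | wc -l
```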

There are two solutions to this problem:

1) The first solution is to drop these orphaned master tables; use the result of the query below to generate the DROP statements:

SELECT 'DROP TABLE ' || owner_name || '.' || job_name || ' PURGE;'
FROM dba_datapump_jobs
WHERE state = 'NOT RUNNING';
DROP TABLE DP_USER.SYS_EXPORT_SCHEMA_01 PURGE;
…
...
DROP TABLE DP_USER.SYS_EXPORT_SCHEMA_99 PURGE; 

2) The second solution is to use a unique job name for each Data Pump job, like below:

expdp dp_user/password directory=DP_DIR dumpfile=backup.dmp logfile=logfile.log job_name=your_unique_job_name schemas=schema_name

Now you can run Data Pump jobs without any problem.

Hanging when dropping a non-default temp tablespace


Today I added a new temp tablespace to my database and made it the default temporary tablespace. When I then wanted to drop the old temp tablespace, the statement hung. After many unsuccessful attempts, I found the reason for the hang 🙂
If you have the same problem, please try the steps below:

The cause was a session that was still using the old temp tablespace (such sessions can be found by joining V$SESSION with V$SORT_USAGE). After killing that session, the drop operation completed successfully 🙂

alter system kill session '[sid],[serial#]' immediate;
