HOAB

History of a bug

Postgresql pg_upgrade failed : role <XXX> unknown

Written by gorki · No comments

Problem :

When migrating a database, pg_upgrade failed with “could not connect, role <XXX> is unknown”.

My database was originally created by another user (postgres).

Then it was transferred to another user (let's call it john).

Login authentication was password-only; for the upgrade, I switched the local authentication method to trust.
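
For reference, the kind of pg_hba.conf change I mean (a sketch, the exact columns depend on your file; switch back to the original method once the upgrade is done) :

# pg_hba.conf, local (Unix-socket) connections, during the upgrade only
# TYPE  DATABASE  USER  METHOD
local   all       all   trust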

The new destination database was created by john.

Solution :

pg_upgrade cannot work with : 

  • an old database whose superuser role is postgres
  • a new database whose superuser role is john

The “-U <username>” option is applied to both databases, so there is always one for which it is wrong.
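
To make it concrete, the pg_upgrade call has roughly this shape (placeholders, not the original command); the role given to -U is used to connect to both the old and the new cluster :

pg_upgrade -b <old_bindir> -B <new_bindir> \
           -d <old_datadir> -D <new_datadir> \
           -U john   # john does not exist in the old cluster, postgres does not exist in the new one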

So the fix is to rename the old database's superuser role to john (thanks to the internet).

  1. Start the old database; pg_upgrade gives the exact command in its log, but !
  • Be careful ! There is a hidden “-b” option which prevents any modification :) Remove it and all is OK.
"/home/myuser/postgresql-9.6.2/bin/pg_ctl" -w -D "/home/myuser/database-introscope-9.6" -o "-p 50432  -c listen_addresses='' -c unix_socket_permissions=0700 -c unix_socket_directories='/home/myuser'" start

Connect with : 

/home/myuser/postgresql-9.6.2/bin/psql -p 50432 -h /home/myuser -U postgres -d postgres

Check the super user role name : 

SELECT rolname FROM pg_roles WHERE oid = 10;

Create another superuser : 

CREATE ROLE spiderman SUPERUSER LOGIN PASSWORD 'moreSecurePass';

Quit, then reconnect as spiderman, still on the postgres database : 

/home/myuser/postgresql-9.6.2/bin/psql -p 50432 -h /home/myuser -U spiderman -d postgres

Rename the original role :

ALTER ROLE postgres RENAME TO john;

Check the connection :

/home/myuser/postgresql-9.6.2/bin/psql -p 50432 -h /home/myuser -U john -d postgres

Drop spiderman role : 

DROP ROLE spiderman;

I learned a few side tricks here : 

  • -h <path> : gives the directory of the Unix socket used for the connection
  • -b option : used for binary-upgrade mode with pg_ctl (I didn't see this option in the documentation)
  • The bootstrap superuser role has OID = 10

 

 

 

Relocatable postgresql

Written by gorki · No comments

Problem :

I need a relocatable / portable version of postgresql.

Disclaimer : if possible, use the OS distribution packages; they will be more up-to-date and safer.

That said : 

  • I downloaded the sources
  • Compiled them on my computer (Debian / Trixie / Sid)
  • Packaged it and ran it on the target server

Fail ! Obviously, but here is why : 

  1. Shared libraries are resolved from an absolute path
  2. The glibc version on my computer is more recent than the one on the target server

Solution :

Two problems here and a few useful commands :

  • ldd <program name>
    • Tells you whether the program is statically or dynamically linked
    • If dynamic, lists the libraries and the paths where they were found, for example (libpq is a library from postgres) :
 libpq.so.5 => /lib/x86_64-linux-gnu/libpq.so.5 (0x00007f29f9786000)
  • readelf -d <program name> : information used at execution time (the dynamic section) :
0x0000000000000001 (NEEDED)             Shared library: [libpq.so.5]
0x0000000000000001 (NEEDED)             Shared library: [libm.so.6]
0x0000000000000001 (NEEDED)             Shared library: [libc.so.6]
0x000000000000001d (RUNPATH)            Library runpath: [/home/myuser/projects/packaging/postgresql/output/lib]
0x000000000000000c (INIT)               0x6000
0x000000000000000d (FINI)               0x162a0
  • objdump -T <program name> | grep GLIBC : lists the GLIBC symbol versions the program depends on
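
Put together, the checks look roughly like this (a sketch; the binary path is illustrative) :

BIN=/home/myuser/projects/packaging/postgresql/output/bin/postgres
ldd "$BIN"                                            # static or dynamic ? which libraries, found where ?
readelf -d "$BIN" | grep -E 'NEEDED|RUNPATH|RPATH'    # required libraries and the embedded runpath
objdump -T "$BIN" | grep GLIBC                        # GLIBC symbol versions the binary depends on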

 

Once I had this information : 

  • I compiled postgresql on the target system : GLIBC 2.34 and 2.33 are not available there, so postgresql is built without linking against those symbol versions
  • I changed the runpath of the executables with these commands :
# extract the postgres sources, then from the source directory :
./configure --prefix=/home/myuser/postgresql/output --without-icu --without-readline --without-zlib --disable-rpath
export LD_RUN_PATH='$ORIGIN/../lib'   # embed a runpath relative to the location of each binary
make
make install

It generates a postgresql installation in /home/myuser/postgresql/output.

The readelf command now returns : 

 0x000000000000000f (RPATH)              Library rpath: [$ORIGIN/../lib]

So I was now able to package it.
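
Since the runpath is $ORIGIN/../lib, the only constraint is to keep bin/ and lib/ side by side in the archive; a minimal sketch (archive name is illustrative) :

cd /home/myuser/postgresql/output
tar czf postgresql-relocatable.tar.gz bin lib share include   # bin/../lib resolves wherever this is unpacked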

 

 

Fast browsing and DNS

Written by gorki · No comments

Problem :

I was surfing on some sites blocked by my DNS provider (no, not yggtorrent. Absolutely not).

So I used Firefox's DNS over HTTPS with NextDNS, sometimes slower than my provider's DNS but, well, not so bad.

Then, for some reason, I tried to host a local DNS resolver instead. Well, the previous setup really WAS slow.

Solution :

Unbound is a DNS resolver :

  • easy to install
  • it caches requests locally, saving a few ms on a lot of requests !
  • and it supports DNS over HTTPS, etc.

Setup is quite simple thanks to knowledge found on the internet :

Installation (from https://memo-linux.com/debian-installer-le-serveur-dns-unbound/) :

apt install unbound
cd /var/lib/unbound/ 
wget ftp://ftp.internic.net/domain/named.cache
mv named.cache root.hints && chown unbound:unbound root.hints
mv /etc/unbound/unbound.conf.d/root-auto-trust-anchor-file.conf /etc/unbound/unbound.conf.d/root-auto-trust-anchor-file.conf.original
mkdir /var/log/unbound
chown unbound: /var/log/unbound
# modify apparmor (see at the end)
systemctl restart unbound
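
A quick way to check that the local resolver answers, and to see the cache at work (dig comes from the dnsutils package; the domain is just an example) :

dig @127.0.0.1 debian.org | grep "Query time"   # first query : resolved through the root servers
dig @127.0.0.1 debian.org | grep "Query time"   # second query : answered from the cache, a few ms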

My configuration file :

server:
  statistics-interval: 0
  extended-statistics: yes
  statistics-cumulative: yes
  verbosity: 3
  interface: 127.0.0.1
  port: 53
  do-ip4: yes
  do-ip6: yes
  do-udp: yes
  do-tcp: yes
  access-control: 127.0.0.0/8 allow ## allow my own server
  access-control: 0.0.0.0/0 refuse ## refuse everything else on the Internet !
  auto-trust-anchor-file: "/var/lib/unbound/root.key"
  root-hints: "/var/lib/unbound/root.hints"
  hide-identity: yes
  hide-version: yes
  harden-glue: yes
  harden-dnssec-stripped: yes
  use-caps-for-id: yes
  cache-min-ttl: 3600
  cache-max-ttl: 86400
  prefetch: yes
  prefetch-key: yes
  num-threads: 6
  msg-cache-slabs: 16
  rrset-cache-slabs: 16
  infra-cache-slabs: 16
  key-cache-slabs: 16
  rrset-cache-size: 256m
  msg-cache-size: 128m
  so-rcvbuf: 1m
  unwanted-reply-threshold: 10000
  do-not-query-localhost: yes
  val-clean-additional: yes
  #use-syslog: yes
  #val-log-level: 2 (0: default, nothing, 2: full)
  logfile: /var/log/unbound/unbound.log
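
Before restarting, the file can be validated and the machine pointed at the local resolver (unbound-checkconf ships with the unbound package) :

unbound-checkconf /etc/unbound/unbound.conf   # syntax check
systemctl restart unbound
# then use 127.0.0.1 as resolver, e.g. "nameserver 127.0.0.1" in /etc/resolv.conf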

And an additional AppArmor configuration to be able to write to a dedicated log file :
(https://b4d.sablun.org/blog/2018-09-27-when-unbound-wont-write-logs/)

vim /etc/apparmor.d/local/usr.sbin.unbound

# Site-specific additions and overrides for usr.sbin.unbound.
# For more details, please see /etc/apparmor.d/local/README.
/var/log/unbound/unbound.log rw,
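
Then reload the profile so the local override is taken into account (apparmor_parser comes with AppArmor) :

apparmor_parser -r /etc/apparmor.d/usr.sbin.unbound
systemctl restart unbound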

 


Bash and the empty optional arguments on command line

Written by gorki · No comments

Problem :

Well, I know that having named parameters (“-file=”, etc.) is better.

But for a simple task, I wanted to call :

./mycommand arg1 arg2 '' '' arg5

And pass those parameters to a function… 

Solution :

Not so easy to find on the internet, but easy to do in the end ! 

So basically, as simple as : 

# Working solution : use an array
all_args=("$@");
myfunction "${all_args[@]}"

# Loop over parameters
for i in "${@}"; do
   echo "$i"
done
for i in "${all_args[@]}"; do
   echo "$i"
done

From the following test script :

#!/bin/bash

all_args=("$@");

myfunction() {
 arg1=$1
 arg2=$2
 arg3=${3:-'default3'}
 arg4=${4:-'default4'}
 arg5=${5:-'default5'}

 echo "arg1=$arg1"
 echo "arg2=$arg2"
 echo "arg3=$arg3"
 echo "arg4=$arg4"
 echo "arg5=$arg5"
}

echo "--------------- args hard-codede"
myfunction 1 2 "" "" yes
echo "--------------- explode array with quote"
myfunction $(printf '"%s" ' "${all_args[@]}")
echo "--------------- working just expand array"
myfunction "${all_args[@]}"

With the following command line : 

./test.sh 1 2 "" "" yes
--------------- args hard-coded
arg1=1
arg2=2
arg3=default3
arg4=default4
arg5=yes
--------------- explode array with quote
arg1="1"
arg2="2"
arg3=""
arg4=""
arg5="yes"
--------------- working just expand array
arg1=1
arg2=2
arg3=default3
arg4=default4
arg5=yes
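
One last detail : even with the array expansion, arg3 and arg4 still get their defaults, because ${3:-default} substitutes when the parameter is unset or empty. Dropping the colon keeps empty strings (same function, only the expansion changes) :

 arg3=${3-'default3'}   # without the colon : default only if $3 is unset, an empty "" is kept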

 

 

 

Cannot create GC thread but a lot of memory

Written by gorki · No comments

Problem :

When launching a JVM, I got the message "Cannot create GC thread. Out of system resources", even though there was :

  • Enough memory
  • Enough swap
  • Enough ulimit
  • Enough threads-max
  • Enough CPU

I even extended the PID limit...
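
For reference, the usual commands behind that checklist (a sketch, not copied from the original session) :

free -m                              # memory
swapon --show                        # swap
ulimit -u ; ulimit -n                # per-user process and open-file limits
cat /proc/sys/kernel/threads-max     # system-wide thread limit
cat /proc/sys/kernel/pid_max         # PID limit
nproc                                # CPUs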

Important (as it turns out at the end) : Debian version = 10.11

Solution :

After hours of googling, I found the usual suspects, but none of these solutions worked and none matched the numbers I had :

  • number of open files < ulimit -n
  • maximum process/tasks < ulimit -u

But in a thread, I found something that worked : UserTasksMax.
I'm running systemd, and I had around 10805 tasks running for my user.
And from https://manpages.debian.org/stretch/systemd/logind.conf.5.en.html :

UserTasksMax=

Sets the maximum number of OS tasks each user may run concurrently. This controls the TasksMax= setting of the per-user slice unit, see systemd.resource-control(5) for details. If assigned the special value "infinity", no tasks limit is applied. Defaults to 33%, which equals 10813 with the kernel's defaults on the host, but might be smaller in OS containers.

For my suspect PID (the one with a lot of threads) :

  • cat /proc/21890/status | grep Thread => 1 thread
  • ls /proc/21890/task | wc
  • confirmed by the usual command : ps -eLf | grep calrisk | wc

I had around 10805 threads running for a given JVM, very close to the 10813 limit.
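
The fix that follows is to raise (or disable) that per-user task limit; a minimal sketch, assuming systemd's logind.conf (the value is illustrative) :

# /etc/systemd/logind.conf, in the [Login] section
UserTasksMax=16384        # or "infinity" to remove the per-user task limit

systemctl restart systemd-logind   # then restart logind so new sessions pick it up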

Complete guide :

https://www.journaldufreenaute.fr/nombre-maximal-de-threads-par-processus-sous-linux/

The parameter is not present in all versions of the man page; it could grow up to 12288 in the latest versions.

To be checked !

 

 
