Problem :
Launching a JVM, I get the message: "Cannot create GC thread. Out of system resources"
- Enough memory
- Enough swap
- Enough ulimit
- Enough threads-max
- Enough CPU
- Even extended the PID limit...
Important (it matters at the end): Debian version = 10.11
Solution :
After hours of googling, I found the usual suspects.
But none of these solutions worked and none matched the numbers I had:
- number of open files < ulimit -n
- maximum processes/tasks < ulimit -u
But in a forum thread, I found something that worked: UserTasksMax.
I'm running systemd, and I had around 10805 tasks running for my user.
And from : https://manpages.debian.org/stretch/systemd/logind.conf.5.en.html
UserTasksMax=
Sets the maximum number of OS tasks each user may run concurrently. This controls the TasksMax= setting of the per-user slice unit, see systemd.resource-control(5) for details. If assigned the special value "infinity", no tasks limit is applied. Defaults to 33%, which equals 10813 with the kernel's defaults on the host, but might be smaller in OS containers.
For my suspect PID (a lot of files) :
- cat /proc/21890/status | grep Thread => 1 thread
- ls /proc/21890/task | wc
- confirmed by the usual command : ps -eLf | grep calrisk | wc
I had around 10805 threads running for a given JVM, very close to the limit.
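To confirm and raise the limit, something like this should work (a sketch; the slice name depends on your UID, and on some setups restarting systemd-logind affects active user sessions):
# Check the current per-user task limit
systemctl show -p TasksMax user-$(id -u).slice
# Raise (or remove) the limit in /etc/systemd/logind.conf, [Login] section:
#   UserTasksMax=infinity
# then apply it
sudo systemctl restart systemd-logind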
Complete guide :
https://www.journaldufreenaute.fr/nombre-maximal-de-threads-par-processus-sous-linux/
The parameter is not present in every version of the man page; the default could grow up to 12288 in more recent versions.
To be checked!
Problem :
Logitech Media Server is not supported anymore by Synology (boooo.)
Spotted in a number of forum threads:
- https://community.synology.com/enu/forum/1/post/141628
- https://www.avforums.com/threads/synology-and-lms.2332981/
- https://community.jeedom.com/t/synology-maj-package-perl-lms-squeezebox-hs/57576
- https://www.homecinema-fr.com/forum/source-dematerialisee-haute-fidelite-et-dac/logitech-squeezebox-t29927392-7185.html
I have a DS218play, ARMv8 based
Solution :
Instead of switching back to the old package or creating my own, I decided to install Docker.
I followed this thread : https://stackoverflow.com/questions/52520008/can-i-install-docker-on-arm8-based-synology-nas
Quite easy.
I also followed the script here : https://raw.githubusercontent.com/wdmomoxx/catdriver/master/install-docker.sh but I downloaded Docker from the official site instead.
I used this image : https://registry.hub.docker.com/r/lmscommunity/logitechmediaserver/#!
And ran this command. It is not perfect (logging, running as a daemon, starting the Docker image automatically... I will update this later).
#!/bin/bash
# Run LMS with host networking so player discovery keeps working,
# mount the music read-only and redirect all output to a log file on the NAS
docker run --network=host -it \
-v "/volume1/partage/docker/lms/config":"/config":rw \
-v "/volume1/music":"/music":ro \
-v "/volume1/partage/docker/lms/playlist":"/playlist":rw \
-v "/etc/localtime":"/etc/localtime":ro \
-v "/etc/TZ":"/etc/timezone":ro \
-v "/dev":"/dev" \
lmscommunity/logitechmediaserver &> /volume1/partage/docker/lms-docker.log
Added: --network=host (and removed the port mappings) and -v "/dev":"/dev".
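To avoid keeping a shell in the foreground, a detached variant with a restart policy should do the same job (a sketch; with --restart unless-stopped the Docker daemon restarts the container at boot, and logs go through docker logs instead of a file):
#!/bin/bash
# Same volumes as above, but detached and restarted automatically by Docker
docker run -d --name lms --restart unless-stopped --network=host \
-v "/volume1/partage/docker/lms/config":"/config":rw \
-v "/volume1/music":"/music":ro \
-v "/volume1/partage/docker/lms/playlist":"/playlist":rw \
-v "/etc/localtime":"/etc/localtime":ro \
-v "/etc/TZ":"/etc/timezone":ro \
-v "/dev":"/dev" \
lmscommunity/logitechmediaserver
# Follow the logs with: docker logs -f lms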
Problem :
I manage a dedicated server at OVH and I upgraded my Debian from jessie to buster. The upgrade went quite well (or so it seemed...) and I tried to restart.
The server came back from the reboot unreachable; fortunately OVH rescue mode allowed me to log in.
I checked the error logs and first lost myself in RAID error messages, but it was simpler than that.
Solution :
I checked the /etc/network/interfaces file: it was OK.
I checked the log files: clean. Reboot, check again, still OK, except that the network was unreachable for named.
I finally remembered that Debian switched to systemd in recent releases, so I tried to create the systemd networking files manually: too complicated, it was not working.
In rescue mode you only access your files as a mount point, so the usual commands such as systemctl do not work directly.
The solution was to chroot a shell :
- mkdir /mnt/md2
- mount /dev/md2 /mnt/md2
- chroot /mnt/md2 bash
- systemctl enable networking
And it worked...
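For reference, if systemctl complains inside the chroot, binding the virtual filesystems first usually helps (a sketch of the same steps; /dev/md2 is the root RAID device on my server, adjust to yours):
# Bind /dev, /proc and /sys before chrooting so systemctl and friends behave
mkdir -p /mnt/md2
mount /dev/md2 /mnt/md2
for fs in dev proc sys; do mount --bind /$fs /mnt/md2/$fs; done
chroot /mnt/md2 bash
systemctl enable networking
exit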
Now I have to check all the other services to be sure that everything is working...
Beginning with:
sudo apt-get update
sudo apt-get clean
sudo apt-get autoremove
sudo apt-get update && sudo apt-get upgrade
sudo dpkg --configure -a
Problem :
In two cases, I run the mount command via Ansible to mount a Samba share on two Linux clients.
In both cases, the mount fails with:
- either "CIFS VFS: validate protocol negotiate failed: -13"
- or "CIFS VFS: BAD_NETWORK_NAME"
Solution :
Both errors came from my Samba configuration on the server side.
The tests I ran to track down the cause:
- Mount the share from Windows (I got the same error: so the problem comes from the server)
- From the Linux clients: smbclient -L <monserveur> -A /path/to/mycredentials
- that worked in one case; in the other, the share name was wrong: got it! (cause no. 1)
- With the Samba user, I went into the shared directory to check that I really had the permissions
- and there it failed for the second case (negotiation failed)
After restoring the permissions for one and fixing my Ansible template for the other, everything works.
For reference, the configuration put in place:
[global]
workgroup = SAMBA
security = user
unix password sync = no
log file = /var/log/samba/log.%m
guest account = {{samba_user.user}}
force group = {{samba_user.group}}
force user = {{samba_user.user}}
create mode = 0660
directory mode = 0770
[myshare]
path={{samba_share.export_path}}
public=yes
valid users={{samba_user.user}}
writable=yes
browseable = yes
force create mode = 0660
force directory mode = 0770
And to allow users to connect with their own Unix account while still letting this generic user access the files:
# Ensure all files are owned by {{ samba_user.user }}
shell: "chown -R {{ samba_user.user }}:{{ samba_user.group }} {{samba_share.export_path}}"
# Ensure the setgid bit is present on all directories
shell: "find {{ samba_share.export_path }} -type d -exec chmod g+s {} +"
# Add a default rwx ACL for the owning group on {{ samba_share.export_path }}
shell: "setfacl -m d:g::rwx {{ samba_share.export_path }}"
# Add a default rwx ACL for the owning group on subdirectories
shell: "find {{ samba_share.export_path }} -type d -exec setfacl -m d:g::rwx {} +"
Thanks to them:
https://superuser.com/questions/381416/how-do-i-force-group-and-permissions-for-created-files-inside-a-specific-directo
https://lea-linux.org/documentations/Gestion_des_ACL
Problem :
I used robertdebock/ansible-role-tomcat to install a Tomcat instance with Ansible. It works well until I deploy an application on it: then the java process hangs at 100% system CPU.
Starting Tomcat by hand with the tomcat user, without systemd, works correctly.
Solution :
I suspected :
- SELinux
- Linux limits
- VM slow I/O
But after a while I ran strace :
- by modifying systemd configuration
- by modifying catalina.sh configuration
All I got was a simple futex wait...
And then I read the manual, as simple as :
strace -f -e trace=all -p <PID>
No need to trace from startup, and by default not everything is traced...
After that it was easy: the process was reading, recursively:
/proc/self/task/81569/cwd/proc/self/task/81569/cwd/proc/self/task/81569/cwd/proc/self/task/81569/cwd/proc/self/task/81569/cwd/proc/self/task/8156...
Just fixed the working_directory in the Ansible role, and everything works.
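In systemd terms, the fix boils down to giving the service a sane working directory; something like this in the generated unit (a sketch; the unit name and path depend on the role's variables, which I did not verify):
# /etc/systemd/system/tomcat.service (excerpt, assumed layout)
[Service]
WorkingDirectory=/opt/tomcat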
Issue reported here.