postgres constantly restarted inside container #541

Open

nhirshler opened this issue Dec 18, 2023 · 2 comments

@nhirshler

Container platform

No response

Version

15:1-20

OS version of the container image

RHEL 8

Bugzilla, Jira

No response

Description

Postgres seems to be restarting constantly inside the container; I see the following output in the Postgres log file.
I'm not sure what causes this constant restarting. If I connect to the database I can run SELECT and INSERT statements without any issues; however, these interruptions are creating a problem in a Node application that is trying to work with the database.
Any idea how to find the source of the problem?

2023-12-18 11:25:48.847 UTC [743] DEBUG: shmem_exit(0): 4 before_shmem_exit callbacks to make
2023-12-18 11:25:48.847 UTC [745] DEBUG: shmem_exit(0): 4 before_shmem_exit callbacks to make
2023-12-18 11:25:48.847 UTC [743] DEBUG: shmem_exit(0): 6 on_shmem_exit callbacks to make
2023-12-18 11:25:48.847 UTC [744] DEBUG: shmem_exit(0): 4 before_shmem_exit callbacks to make
2023-12-18 11:25:48.847 UTC [743] DEBUG: proc_exit(0): 2 callbacks to make
2023-12-18 11:25:48.847 UTC [743] DEBUG: exit(0)
2023-12-18 11:25:48.847 UTC [743] DEBUG: shmem_exit(-1): 0 before_shmem_exit callbacks to make
2023-12-18 11:25:48.847 UTC [743] DEBUG: shmem_exit(-1): 0 on_shmem_exit callbacks to make
2023-12-18 11:25:48.847 UTC [743] DEBUG: proc_exit(-1): 0 callbacks to make
2023-12-18 11:25:48.847 UTC [745] DEBUG: shmem_exit(0): 6 on_shmem_exit callbacks to make
2023-12-18 11:25:48.847 UTC [744] DEBUG: shmem_exit(0): 6 on_shmem_exit callbacks to make
2023-12-18 11:25:48.847 UTC [745] DEBUG: proc_exit(0): 2 callbacks to make
2023-12-18 11:25:48.847 UTC [745] DEBUG: exit(0)
2023-12-18 11:25:48.847 UTC [744] DEBUG: proc_exit(0): 2 callbacks to make
2023-12-18 11:25:48.847 UTC [744] DEBUG: exit(0)
2023-12-18 11:25:48.847 UTC [745] DEBUG: shmem_exit(-1): 0 before_shmem_exit callbacks to make
2023-12-18 11:25:48.847 UTC [744] DEBUG: shmem_exit(-1): 0 before_shmem_exit callbacks to make
2023-12-18 11:25:48.847 UTC [745] DEBUG: shmem_exit(-1): 0 on_shmem_exit callbacks to make
2023-12-18 11:25:48.847 UTC [744] DEBUG: shmem_exit(-1): 0 on_shmem_exit callbacks to make
2023-12-18 11:25:48.847 UTC [744] DEBUG: proc_exit(-1): 0 callbacks to make
2023-12-18 11:25:48.847 UTC [745] DEBUG: proc_exit(-1): 0 callbacks to make
2023-12-18 11:25:48.847 UTC [746] DEBUG: shmem_exit(0): 4 before_shmem_exit callbacks to make
2023-12-18 11:25:48.847 UTC [746] DEBUG: shmem_exit(0): 6 on_shmem_exit callbacks to make
2023-12-18 11:25:48.847 UTC [746] DEBUG: proc_exit(0): 2 callbacks to make
2023-12-18 11:25:48.847 UTC [746] DEBUG: exit(0)
2023-12-18 11:25:48.847 UTC [746] DEBUG: shmem_exit(-1): 0 before_shmem_exit callbacks to make
2023-12-18 11:25:48.847 UTC [746] DEBUG: shmem_exit(-1): 0 on_shmem_exit callbacks to make
2023-12-18 11:25:48.847 UTC [746] DEBUG: proc_exit(-1): 0 callbacks to make
2023-12-18 11:25:48.849 UTC [1] DEBUG: reaping dead processes
2023-12-18 11:25:48.849 UTC [1] DEBUG: server process (PID 744) exited with exit code 0
2023-12-18 11:25:48.849 UTC [1] DEBUG: server process (PID 743) exited with exit code 0
2023-12-18 11:25:48.849 UTC [1] DEBUG: server process (PID 745) exited with exit code 0
2023-12-18 11:25:48.849 UTC [1] DEBUG: server process (PID 746) exited with exit code 0
2023-12-18 11:25:48.849 UTC [1] DEBUG: reaping dead processes
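
A note on reading this output: the shmem_exit/proc_exit lines only appear because debug-level logging is enabled, and every backend in the excerpt exits with code 0, which is what a normal client disconnect looks like at that verbosity; the postmaster (PID 1) is merely reaping finished sessions, not restarting. A minimal way to check that, assuming you can exec into the container and connect as a superuser (the container name below is a placeholder):

# Placeholder container name; adjust to your deployment (docker exec works the same way).
podman exec -it postgresql-container bash

# Inside the container: check the current logging verbosity.
psql -c "SHOW log_min_messages;"

# Log connections and disconnections explicitly so each exiting backend can be
# matched to a client session, and drop the DEBUG noise back to the default.
psql -c "ALTER SYSTEM SET log_connections = on;"
psql -c "ALTER SYSTEM SET log_disconnections = on;"
psql -c "ALTER SYSTEM SET log_min_messages = 'warning';"
psql -c "SELECT pg_reload_conf();"

If the log then shows a matching disconnection entry for each of those PIDs, the server itself is not restarting, and the interruptions seen by the Node application are more likely coming from the client side (for example, a connection pool recycling connections).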

Reproducer

No response

@fila43
Member

fila43 commented Feb 28, 2024

Thank you for your report, but it's really hard to find the problem from the log alone. Would it be possible to provide a reproducer?

@Hipska

Hipska commented May 8, 2025

I feel like I have the same issue.

When executing commands in psql, I regularly get these messages:

WARNING:  terminating connection because of crash of another server process
DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
HINT:  In a moment you should be able to reconnect to the database and repeat your command.
server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.

There is nothing to see in the pod logs; this is what I saw in /var/lib/pgsql/data/userdata/log/postgresql-Thu.log:

2025-05-08 15:54:16.866 UTC [1] LOG:  server process (PID 9687) was terminated by signal 9: Killed
2025-05-08 15:54:16.866 UTC [1] LOG:  terminating any other active server processes
2025-05-08 15:54:16.967 UTC [9688] FATAL:  the database system is in recovery mode
2025-05-08 15:54:17.066 UTC [9691] FATAL:  the database system is in recovery mode
2025-05-08 15:54:17.067 UTC [9689] FATAL:  the database system is in recovery mode
2025-05-08 15:54:17.067 UTC [9690] FATAL:  the database system is in recovery mode
2025-05-08 15:54:17.267 UTC [1] LOG:  all server processes terminated; reinitializing
2025-05-08 15:54:17.374 UTC [9692] LOG:  database system was interrupted; last known up at 2025-05-08 15:53:27 UTC
2025-05-08 15:54:17.666 UTC [9696] FATAL:  the database system is in recovery mode
2025-05-08 15:54:17.666 UTC [9698] FATAL:  the database system is in recovery mode
2025-05-08 15:54:17.667 UTC [9695] FATAL:  the database system is in recovery mode
2025-05-08 15:54:17.768 UTC [9699] FATAL:  the database system is in recovery mode
2025-05-08 15:54:17.768 UTC [9697] FATAL:  the database system is in recovery mode
2025-05-08 15:54:17.969 UTC [9702] FATAL:  the database system is in recovery mode
2025-05-08 15:54:18.066 UTC [9701] FATAL:  the database system is in recovery mode
2025-05-08 15:54:18.166 UTC [9700] FATAL:  the database system is in recovery mode
2025-05-08 15:54:18.167 UTC [9703] FATAL:  the database system is in recovery mode
2025-05-08 15:54:18.167 UTC [9704] FATAL:  the database system is in recovery mode
2025-05-08 15:54:18.170 UTC [9706] FATAL:  the database system is in recovery mode
2025-05-08 15:54:18.170 UTC [9705] FATAL:  the database system is in recovery mode
2025-05-08 15:54:18.469 UTC [9692] LOG:  database system was not properly shut down; automatic recovery in progress
2025-05-08 15:54:18.566 UTC [9707] FATAL:  the database system is in recovery mode
2025-05-08 15:54:18.567 UTC [9708] FATAL:  the database system is in recovery mode
2025-05-08 15:54:18.666 UTC [9692] LOG:  redo starts at 0/1CFDAD8
2025-05-08 15:54:18.666 UTC [9692] LOG:  invalid record length at 0/1CFFF98: wanted 24, got 0
2025-05-08 15:54:18.666 UTC [9692] LOG:  redo done at 0/1CFFF70 system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.09 s
2025-05-08 15:54:18.668 UTC [9709] FATAL:  the database system is not yet accepting connections
2025-05-08 15:54:18.668 UTC [9709] DETAIL:  Consistent recovery state has not been yet reached.
2025-05-08 15:54:18.767 UTC [9710] FATAL:  the database system is not yet accepting connections
2025-05-08 15:54:18.767 UTC [9710] DETAIL:  Consistent recovery state has not been yet reached.
2025-05-08 15:54:18.770 UTC [9693] LOG:  checkpoint starting: end-of-recovery immediate wait
2025-05-08 15:54:18.777 UTC [9693] LOG:  checkpoint complete: wrote 7 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.002 s, sync=0.002 s, total=0.008 s; sync files=6, longest=0.001 s, average=0.001 s; distance=9 kB, estimate=9 kB
2025-05-08 15:54:18.868 UTC [1] LOG:  database system is ready to accept connections
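
The "terminated by signal 9: Killed" line at the top is the key detail: in a containerized setup this usually means the kernel OOM killer (or the pod's cgroup memory limit) killed a backend, after which the postmaster deliberately terminates all server processes and runs crash recovery, which is exactly the sequence in this log. A rough way to check, assuming the pod runs on Kubernetes/OpenShift (the pod name and namespace below are placeholders):

# Placeholder pod name and namespace; adjust to your deployment.
# Look for an OOMKilled last state and the configured memory limits
# (a kill of a child process is not always reflected in the container state).
kubectl describe pod postgresql-0 -n db

# Recent events in the namespace; OOM kills sometimes surface here or on the node object.
kubectl get events -n db --sort-by=.lastTimestamp | grep -i oom

# On the node itself (or from a node debug pod), the kernel log records which process was killed.
dmesg -T | grep -i 'killed process'

If a backend is indeed being OOM-killed, the usual remedies are raising the pod's memory limit or lowering the server's memory settings (shared_buffers, work_mem) so the combined footprint stays under the limit.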
