[bug]: LND crashed on postgres "out of shared memory" error #9541
Replies: 8 comments 10 replies
-
Is LND able to run consistently with this extra section in the config?
The shared memory error here is actually complaining about there not being enough locks available for a given transaction.
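The lock budget lives in postgresql.conf. A minimal sketch of the relevant settings follows; the numbers are illustrative assumptions, not a tested recommendation:

```
# postgresql.conf -- illustrative values, tune for your hardware.
# PostgreSQL sizes its shared lock table as
#   max_locks_per_transaction * (max_connections + max_prepared_transactions),
# so raising either of the first two enlarges it.
max_locks_per_transaction = 128   # default is 64
max_connections = 100             # default is 100
```

Both parameters require a postgres restart to take effect.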
-
AFAIK it is the other way around: it is out of shared memory because lnd uses 50 connections to the DB per default (Line 1573 in 27440e8). Especially during bootstrapping of the graph there are a lot of parallel transactions running, and more connections will only increase the memory usage. Postgres's best practice is for one client not to use more than 2x [number of cores] connections. Furthermore lnd uses the strictest isolation level, which is the safest but causes the most conflicting transactions, so a minimum amount of parallel transactions should be used.
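For reference, capping LND's pool below the 50-connection default is done in lnd.conf. A sketch, assuming the sample-lnd.conf section layout; the DSN and the value 16 are placeholders:

```
[db]
db.backend=postgres

[postgres]
# Placeholder DSN -- substitute your own credentials and database name.
db.postgres.dsn=postgresql://user:pass@localhost:5432/lnd
# Upper bound on pooled connections (default is 50).
db.postgres.maxconnections=16
```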
-
Thank you both for the reply 🙌 I will try to reduce the connection count. Is there some documentation about recommended lnd and postgresql parameters? I couldn't find any on github or in the docs.
-
@IanPasteur the error is not because of the connection count. It is because of the locks configuration; please raise both settings and restart postgres to see if the issue is resolved. The maxconnections value of 50 is just an upper limit and is typically not hit with KVDB access patterns. Even if you reduce the connection count you will still encounter that shared memory error eventually with a burst of DB activity. The serialization errors are expected and benign -- they are an artifact of the KVDB schema we use to store data in postgres and will be phased out once SQL schema migration paths are available.
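One way to check what the server is currently running with, and to raise the lock budget without hand-editing postgresql.conf (the value 128 is an example, and a postgres restart is still required):

```sql
-- Check the current values.
SHOW max_locks_per_transaction;
SHOW max_connections;

-- Raise the lock budget; takes effect only after a postgres restart.
ALTER SYSTEM SET max_locks_per_transaction = 128;
```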
-
Thanks for clearing this up. I noticed a higher connection count while browsing the RTL web interface, so I scaled it down. PostgreSQL is currently configured this way:
I will let you know how it works in a few days. PS: Are these tuning parameters recommended somewhere? Should I open a PR to the LND docs?
-
Here's a gist authored by @djkazic
-
Moving this thread to a discussion instead. We can open an issue if there is something to fix.
-
It is important to note that those tests were conducted on a PostgreSQL setup with 6 cores, 12 threads, and 32 GB of RAM, with 6 GB of shared memory allocated in the PostgreSQL settings. One cannot just throw these numbers around for someone using a Raspberry Pi as their PostgreSQL server.

Breaking It Down

Each connection an application makes to PostgreSQL can hold one active transaction at a time. However, having more parallel workers than the system can actually utilize is pointless. It won't improve performance but can increase lock contention, which may lead to longer transaction durations and higher conflict rates. This results in more transaction rollbacks, requiring application retries, ultimately decreasing overall performance.

Optimal Connection Limits

There is no single "best" number of connections for every workload, but for a high-conflict workload, running significantly more connections than the CPU cores can handle is counterproductive. A widely accepted guideline (not a strict rule) is to keep active connections around 2× the number of cores/threads. Additionally, most performance tests and online benchmarks assume the default isolation level.

The Impact of
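The arithmetic behind the lock discussion above can be sketched as follows. The capacity formula is the one PostgreSQL documents for its shared lock table; the example parameter values are assumptions matching stock defaults:

```python
# Capacity of PostgreSQL's shared lock table, per the postgres docs:
#   max_locks_per_transaction * (max_connections + max_prepared_transactions)
# The table is shared by ALL sessions; one transaction may hold far more
# than max_locks_per_transaction locks as long as the total fits.
def lock_table_capacity(max_locks_per_transaction: int,
                        max_connections: int,
                        max_prepared_transactions: int = 0) -> int:
    return max_locks_per_transaction * (
        max_connections + max_prepared_transactions)

# Stock defaults: 64 locks/txn average, 100 connections.
print(lock_table_capacity(64, 100))    # 6400 lock slots total
# Raising max_locks_per_transaction grows the whole table:
print(lock_table_capacity(128, 100))   # 12800
```

This is why raising either max_locks_per_transaction or max_connections relieves the "out of shared memory" error: both enlarge the shared table that a burst of lock-hungry transactions can otherwise exhaust.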
-
Background
I run LND with a PostgreSQL backend on a virtual server on server hardware. There are 4 GB of RAM dedicated to LND and PostgreSQL only (bitcoind runs on another virtual server). I have about 8 channels open and there is not much routing nor load at all.
Since the update to 0.18.5-beta the day before yesterday (not sure if it is related), LND has crashed twice with this error:
This "out of memory" error is strange, as there should be enough memory available (only about 25 % of the 4 GB is used on average). The Postgres memory configuration follows:
The whole database size is currently 270 MB.
There could be some issue with the db locks, as I found these errors in the postgres log:
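One way to see how close the lock table is to its limit while the errors occur (pg_locks is a standard PostgreSQL system view, so this is a generic diagnostic, not something LND-specific):

```sql
-- Count currently held or awaited locks, grouped by lock mode.
SELECT mode, count(*)
FROM pg_locks
GROUP BY mode
ORDER BY count(*) DESC;
```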
I also found another error in postgres log and I am not sure if it is related:
Similar errors were mentioned in #8049 and should be fixed already, so I am confused.
Your environment
lnd Version: 0.18.5-beta
Linux 6.1.0-31-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.128-1 (2025-02-07) x86_64 GNU/Linux
Bitcoin Core version v28.1.0
Steps to reproduce
I will provide more details if I am able to discover some steps to reproduce.
Expected behaviour
LND with postgres does not consume nor demand so much RAM under such low load.
Actual behaviour
Postgres returns "out of shared memory" and LND crashes.