
Postgres tuning max_connections

JSStuball · Published 2018-01-13 06:13:24Z

I have a multi-threaded process in which 36 threads write to the database randomly, each once every 10 s on average, and each thread is idle (sleeping) 99% of the time.

I am not sure whether this means I have 36 active connections or, because of the sleeping, effectively just one or two. Probably not relevant, but they all use the same username.

Should I reduce the max_connections option in the config file to 36, or to something small like 4 (to reflect that at any instant almost certainly no more than 4 threads are writing simultaneously)?

Edit: is it possible that it's implementation-dependent, i.e. that whether the connections are dropped while sleeping depends on how I wrote my Python code?
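It is indeed implementation-dependent. A minimal simulation can make the point without a real database (FakeConn below stands in for an actual driver connection such as one from psycopg2.connect(); the thread count mirrors the question): if each thread opens a connection at startup and holds it while sleeping, all 36 are open at once.

```python
# Illustrative simulation: whether connections persist depends on client code.
# FakeConn is a stand-in for a real driver connection, not a real library API.
import threading

N_THREADS = 36
open_count = 0
peak_open = 0
count_lock = threading.Lock()
barrier = threading.Barrier(N_THREADS)  # models "all threads idle at the same time"

class FakeConn:
    """Stand-in connection that tracks how many are open simultaneously."""
    def __init__(self):
        global open_count, peak_open
        with count_lock:
            open_count += 1
            peak_open = max(peak_open, open_count)

    def close(self):
        global open_count
        with count_lock:
            open_count -= 1

def persistent_worker():
    conn = FakeConn()   # opened once at thread start...
    barrier.wait()      # ...and still open while the thread "sleeps"
    conn.close()

threads = [threading.Thread(target=persistent_worker) for _ in range(N_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(peak_open)  # 36: every thread held a connection at the same moment
```

If each thread instead connected just before each write and closed the connection before sleeping, the peak would typically be far lower than 36 — which is exactly the ambiguity the question raises.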

Vao Tsun · Replied 2018-01-13 17:40:21Z

The neatest setup here would be to use PgBouncer for connection pooling: https://pgbouncer.github.io/config.html

default_pool_size = 4 would keep 4 permanent connections to Postgres, pooling your 36 client connections onto one of the four as sessions complete.
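A minimal pgbouncer.ini sketch of that setup (the host, database name, port, and auth file path below are illustrative assumptions, not values from the question):

```ini
; minimal pgbouncer.ini sketch -- values are illustrative
[databases]
; clients connect to "mydb" on the pooler; it forwards to Postgres on 5432
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = session
default_pool_size = 4    ; at most 4 server connections per database/user pair
max_client_conn = 100    ; all 36 threads can stay connected to the pooler
```

The 36 threads then point their connection strings at port 6432 instead of 5432, and Postgres itself never sees more than 4 connections from this pool.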

I'm recommending a pooler because whether a connection persists on the server depends on whether you disconnect or not. Also, zombie sessions would keep holding connections while your code opens new ones.

In short: to run a query you have to connect to the database as a user. If you run another transaction on the same session, you reuse the connection (provided you did not disconnect). You have to explicitly disconnect to close the session; if you fail to do so, the connection stays on the server, occupying one of the max_connections slots.
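What the pooler does can be sketched in pure Python: a queue.Queue of 4 tokens plays the role of PgBouncer's server-connection pool, and 36 client threads each borrow one for the duration of a write (the names and counts are illustrative).

```python
# Sketch of pooling behavior: 36 clients share 4 "server connections".
# Pure-Python simulation; queue.Queue stands in for PgBouncer's pool.
import queue
import threading

POOL_SIZE = 4
CLIENTS = 36

pool = queue.Queue()
for i in range(POOL_SIZE):
    pool.put(f"server-conn-{i}")   # the only connections Postgres ever sees

in_use = 0
peak_in_use = 0
lock = threading.Lock()

def client_write():
    global in_use, peak_in_use
    conn = pool.get()              # blocks until one of the 4 is free
    with lock:
        in_use += 1
        peak_in_use = max(peak_in_use, in_use)
    # ... the actual INSERT/UPDATE would run on `conn` here ...
    with lock:
        in_use -= 1
    pool.put(conn)                 # session complete: return it to the pool

threads = [threading.Thread(target=client_write) for _ in range(CLIENTS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(peak_in_use)  # never exceeds POOL_SIZE
```

However many clients pile up, the number of connections in use at once is capped by the pool size, which is why max_connections on the server can stay small.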

Also, from 9.6 on we have idle_in_transaction_session_timeout, which terminates a session whose transaction has been idle for longer than the configured period; this helps fight zombies.
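For example, it can be set cluster-wide without a restart (the 5-minute value here is illustrative, not a recommendation from this thread):

```sql
-- Kill sessions that sit idle inside a transaction for more than 5 minutes
-- (parameter available from PostgreSQL 9.6 onward)
ALTER SYSTEM SET idle_in_transaction_session_timeout = '5min';
SELECT pg_reload_conf();
```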

