Commit dbbcb1d
Grammar
1 parent 01ba340 commit dbbcb1d

1 file changed: 3 additions & 3 deletions

docs/pages/guides/pool-sizing.md
@@ -18,11 +18,11 @@ If the number of instances of your services which connect to your database is mo

### Vercel

-If you're running on Vercel with [fluid compute](https://vercel.com/kb/guide/efficiently-manage-database-connection-pools-with-fluid-compute), your serverless functions can handle multiple requests concurrently and stick around between invocations. In this case, you can treat it similarly to a traditional long-lived process and use a default-ish pool size of `10`. The pool will stay warm across requests and you'll get the benefits of connection reuse. You'll probably need to put pgBouncer (or some kinda pooler like what is offered w/ supabase, rds, gcp, etc) in front of your database as vercel worker count can grow quite a bit larger than the number of reasonable max connections postgres can handle.
+If you're running on Vercel with [fluid compute](https://vercel.com/kb/guide/efficiently-manage-database-connection-pools-with-fluid-compute), your serverless functions can handle multiple requests concurrently and stick around between invocations. In this case, you can treat it similarly to a traditional long-lived process and use a default-ish pool size of `10`. The pool will stay warm across requests and you'll get the benefits of connection reuse. You'll probably need to put pgBouncer (or some kind of pooler like what is offered with Supabase, RDS, GCP, etc.) in front of your database, as Vercel worker count can grow quite a bit larger than the number of reasonable max connections Postgres can handle.
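Not part of the commit, but the pattern the paragraph above describes could be sketched like this — a module-scoped pool reused across invocations. The `DATABASE_URL` variable and the Next.js-style `(req, res)` handler are illustrative assumptions, not from the guide:

```javascript
// Hypothetical sketch: a module-scoped pool on Vercel fluid compute.
// It is created once per instance and reused by every request that
// instance handles, so connections stay warm between invocations.
import pg from 'pg'

const pool = new pg.Pool({
  // Assumed env var; would typically point at pgBouncer or your
  // provider's pooler rather than directly at Postgres.
  connectionString: process.env.DATABASE_URL,
  // The "default-ish" size discussed above.
  max: 10,
})

export default async function handler(req, res) {
  // Reuses an idle pooled connection when one is available.
  const { rows } = await pool.query('SELECT now()')
  res.status(200).json(rows[0])
}
```

Note there is no `pool.end()` here: with fluid compute the instance outlives the request, so the pool is kept open, as with a traditional long-lived process.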

### Cloudflare workers

-In a fully stateless serverless environment like cloudflare workers where your worker is killed, suspended, moved to a new compute node, or shut down at the end of every request, you'll still probably be okay with a pool size `max` of `10` though you can lower it if you start hitting connection exhaustion limits on your pooler. In cloudflare the pooler is hyperdrive and in my experience it works fantastically at pooling w/ their workers setup. Make sure at the end of your serverless handler, after everything is done, you close the pool and dispose of the pool by calling `pool.end()`. Setting the pool to a size larger than 1 is still recommeded as things like tRPC and other server-side routing & request batching code could result in multiple independent queries executing at the same time. With a pool size of `1` you are turning what is "a few things at once" into all things waiting in line one after another on the one available client in the pool.
+In a fully stateless serverless environment like Cloudflare Workers where your worker is killed, suspended, moved to a new compute node, or shut down at the end of every request, you'll still probably be okay with a pool size `max` of `10`, though you can lower it if you start hitting connection exhaustion limits on your pooler. In Cloudflare the pooler is Hyperdrive, and in my experience it works fantastically with their workers setup. Make sure at the end of your serverless handler, after everything is done, you close and dispose of the pool by calling `pool.end()`. Setting the pool to a size larger than 1 is still recommended, as things like tRPC and other server-side routing & request batching code could result in multiple independent queries executing at the same time. With a pool size of `1` you are turning what is "a few things at once" into all things waiting in line one after another on the one available client in the pool.
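Again not part of the commit, but a hypothetical sketch of the per-request lifecycle the paragraph above describes — the `HYPERDRIVE` binding name is an assumption (it depends on your `wrangler` config), and the queries are placeholders:

```javascript
// Hypothetical sketch: a Cloudflare Worker that creates a pool per
// request through Hyperdrive and disposes of it with pool.end().
import pg from 'pg'

export default {
  async fetch(request, env, ctx) {
    const pool = new pg.Pool({
      // Assumed binding name; Hyperdrive exposes a pooled connection string.
      connectionString: env.HYPERDRIVE.connectionString,
      // max > 1 so batched/parallel queries (e.g. from tRPC) don't
      // serialize behind a single client.
      max: 10,
    })
    try {
      // Two independent queries can run concurrently on separate clients.
      const [a, b] = await Promise.all([
        pool.query('SELECT 1 AS one'),
        pool.query('SELECT 2 AS two'),
      ])
      return Response.json({ one: a.rows[0].one, two: b.rows[0].two })
    } finally {
      // Close and dispose of the pool after everything is done; the
      // worker may be suspended or killed once the request finishes.
      ctx.waitUntil(pool.end())
    }
  },
}
```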

## pg-bouncer, RDS-proxy, etc.

@@ -34,4 +34,4 @@ It's a bit of a complicated topic and doesn't have much impact on things until y

## Need help?

-In my career this has been the most error-prone thing related to running postgres & node. Particularly with the differences in various serverless providers (Cloudflare, Vercel, Lamda, etc...) versus a more traditional hosting. If you have any questions or need help please don't hesitate to email me at [brian.m.carlson@gmail.com](mailto:brian.m.carlson@gmail.com]) or reach out on GitHub.
+In my career, this has been the most error-prone thing related to running Postgres & Node, particularly with the differences in various serverless providers (Cloudflare, Vercel, Lambda, etc.) versus more traditional hosting. If you have any questions or need help, please don't hesitate to email me at [brian.m.carlson@gmail.com](mailto:brian.m.carlson@gmail.com) or reach out on GitHub.
