Table Observations partitioning? #1121
Dear all, my goal is to automatically import the new measurements from the RV Belgica into the STA, to give clients seamless access to all the data. But as the Observations table is already large, importing via HTTP POST operations becomes very slow, so I'm looking for optimisations. Would partitioning the table help? Thanks a lot for your answer, and if you have any other suggestions for keeping the system fast with a large amount of data, they would be most welcome!
Replies: 1 comment
Partitioning the table is certainly possible. I don't expect it to make much difference when it comes to write speed, but that will depend a bit on what the bottleneck is when importing your data. Partitioning helps by distributing read/write load over different physical disks, and possibly by reducing the size of the indexes needed for queries, but I doubt those are a problem in your case.

I suspect a substantial increase in efficiency can be had by using batch processing for the import, reducing the many separate POST requests required.

The most important limitation for PostgreSQL is the speed of the disks. Network disks are usually very slow; using SSDs close to the PostgreSQL server I can usually POST around 5000 Observations per second.

In your specific use case it might also be possible to get a good improvement by merging the FoI and Location tables, since I suspect the data in those two is mostly the same.
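One concrete way to batch is the optional SensorThings dataArray extension, which creates many Observations in a single request via the CreateObservations endpoint. Here is a minimal sketch in Python, assuming your service implements that extension; the base URL, Datastream id, and the sample values are placeholders for your own deployment:

```python
import requests

BASE_URL = "http://localhost:8080/FROST-Server/v1.1"  # placeholder, adjust to your service
DATASTREAM_ID = 1  # placeholder, the Datastream the Observations belong to

# One request carrying many Observations, instead of one POST per Observation.
payload = [{
    "Datastream": {"@iot.id": DATASTREAM_ID},
    "components": ["phenomenonTime", "result"],
    "dataArray@iot.count": 3,
    "dataArray": [
        ["2023-01-01T00:00:00Z", 20.1],
        ["2023-01-01T00:01:00Z", 20.3],
        ["2023-01-01T00:02:00Z", 20.2],
    ],
}]

resp = requests.post(f"{BASE_URL}/CreateObservations", json=payload)
resp.raise_for_status()
# The service answers with one self-link (or an error marker) per Observation.
print(resp.json())
```

Compared to posting each Observation separately, this cuts out most of the HTTP round trips and lets the server insert rows in larger transactions, which is usually where the time goes during bulk imports.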