Healthy number of tables in PostgreSQL Schema
I manage a Postgres + PostGIS database with two schemas of 200 and 120 tables.
I wonder whether this is fine, or whether I should reorganize the tables into more, smaller schemas.
So far we haven't had any problems of any kind with the database, apart from the annoying scrolling needed to reach the desired table in the QGIS file explorer.
Tags: postgis, postgresql, database
asked Aug 27 at 13:31 by guillermo_dangelo; edited Aug 27 at 13:52 by Damini Jain
2 Answers
Accepted answer:
The way you structure your data should reflect the way you intend to use it.
If the database is the backend of an application storing transactional data, a large number of tables is expected: you want the schema normalized to avoid duplicating data across tables.
If you are using it mainly for analysis (as the mention of QGIS suggests), it may be worth denormalizing into a smaller number of wider tables; searching for "data warehousing" and "star schema" will turn up more on this approach.
Ultimately, though, the variable you should be optimizing for is time: if restructuring the database would take longer than writing the occasional complicated query, stick with what you have.

answered Aug 27 at 15:32 by François Leblanc (edited Aug 27 at 17:30)
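If you do decide to regroup tables into more schemas later, note that moving a table between schemas in PostgreSQL is a catalog-only operation, so it is not a big commitment. A minimal sketch, using hypothetical schema and table names:

```sql
-- "inventory" and "parcels" are hypothetical names for illustration.
CREATE SCHEMA IF NOT EXISTS inventory;

-- Metadata-only change: the table's data files are not rewritten.
ALTER TABLE public.parcels SET SCHEMA inventory;
```

Queries that refer to the old qualified name (public.parcels) would need updating afterwards, or the new schema would have to be added to the search_path.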
Where a size is given as "No Limit," this means that PostgreSQL alone imposes no limit. The maximum size will be determined by other factors, such as operating system limits and the amount of available disk space or virtual memory.
PostgreSQL does not impose a limit on the total size of a database. Databases of 4 terabytes (TB) are reported to exist. A database of this size is more than sufficient for all but the most demanding applications.
Still, you may see some performance degradation associated with databases containing many tables. PostgreSQL may use a large number of files for storing the table data, and performance may suffer if the operating system does not cope well with many files in a single directory.
Reference: the "Limitations" page of the PostgreSQL documentation.

answered Aug 27 at 14:04 by Damini Jain
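To see how many on-disk relations each schema actually contributes, you can count ordinary tables per schema from the standard system catalogs (pg_class and pg_namespace). A sketch:

```sql
-- Count ordinary tables per schema; each table is at least one
-- file on disk, plus extra files for its indexes and TOAST data.
SELECT n.nspname AS schema,
       count(*)  AS table_count
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'                      -- ordinary tables only
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
GROUP BY n.nspname
ORDER BY table_count DESC;
```

At a few hundred tables per schema, the per-directory file-count concern quoted above is unlikely to matter on modern filesystems.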