Postgres Transaction OOM on 100k DDL statements

We execute approximately 100k DDL statements in a single transaction in PostgreSQL. During execution, the memory usage of the corresponding Postgres backend grows steadily, and once it can't acquire any more memory (usage rises from 10 MB to 2.2 GB on a machine with 3 GB of RAM), the OOM killer kills it with signal 9, which sends PostgreSQL into recovery mode.



BEGIN;

CREATE SCHEMA schema_1;
-- create table stmts - 714
-- alter table add pkey stmts - 714
-- alter table add constraint fkey stmts - 34
-- alter table add unique constraint stmts - 2
-- alter table alter column set default stmts - 9161
-- alter table alter column set not null stmts - 2405
-- alter table add check constraint stmts - 4
-- create unique index stmts - 224
-- create index stmts - 213

CREATE SCHEMA schema_2;
-- same DDL statements as schema_1, repeated up to schema_7
-- ...
-- ...
-- ...
CREATE SCHEMA schema_7;

COMMIT;


Including the CREATE SCHEMA statements, approximately 94,304 DDL statements are meant to be executed.



As per Transactional DDL in PostgreSQL:




Like several of its commercial competitors, one of the more advanced features of PostgreSQL is its ability to perform transactional DDL via its Write-Ahead Log design. This design supports backing out even large changes to DDL, such as table creation. You can't recover from an add/drop on a database or tablespace, but all other catalog operations are reversible.




We have even imported approximately 35 GB of data into PostgreSQL in a single transaction without any problem, so why does the Postgres connection require huge amounts of memory when executing thousands of DDL statements in a single transaction?



We can temporarily work around it by adding RAM or allocating swap, but the number of schemas created in a single transaction can grow to 50-60 (approximately 1M DDL statements), which would require 100+ GB of RAM or swap; that isn't feasible right now.



PostgreSQL version: 9.6.10



Is there any reason why executing lots of DDL statements requires more memory while DML statements do not? Don't both handle transactions by writing to the underlying WAL? So why is it different for DDL?



Reason for Single Transaction



We sync customers' entire databases from their premises (SQL Server) to the cloud (PostgreSQL). Each customer has a different number of databases. The process is: all data is exported as CSV from SQL Server and imported into PostgreSQL using temp tables, COPY and ON CONFLICT DO UPDATE. During this process, we treat each customer as a single database in PG, and each individual DB in the customer's SQL Server becomes a schema in that customer's PG DB.



Based on the CSV data, we create the schemas dynamically and import the data into them. Per our application design, the data in PG must be strictly consistent at any point in time and there must never be partial schemas / tables / data, so we had to do this in a single transaction. We also sync incrementally from the customer to the cloud DB every 3 minutes, so schema creation can happen either in the first sync or in an incremental sync, but the probability of creating this many schemas in the first sync itself is very high.
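
For context, the per-table import step looks roughly like the sketch below. The staging table, CSV path and conflict key are placeholders invented for illustration; the real definitions are generated from the CSV metadata.

BEGIN;

-- Hypothetical staging table matching the CSV layout
CREATE TEMP TABLE staging_customer (
    id         integer,
    name       text,
    updated_at timestamptz
) ON COMMIT DROP;

-- Bulk-load the CSV exported from SQL Server
COPY staging_customer FROM '/tmp/customer.csv' WITH (FORMAT csv, HEADER true);

-- Upsert into the target table in the customer's schema
INSERT INTO schema_1.customer (id, name, updated_at)
SELECT id, name, updated_at
FROM staging_customer
ON CONFLICT (id) DO UPDATE
SET name       = EXCLUDED.name,
    updated_at = EXCLUDED.updated_at;

COMMIT;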










postgresql transaction postgresql-performance postgresql-9.6 ddl

asked 5 hours ago (edited 2 hours ago) by The Coder

  • Is the CREATE DATABASE statement issued automatically as part of the first sync process (obviously not as part of the same transaction, because CREATE DATABASE cannot be executed inside a transaction block) or is it executed in a separate process? Related question (possibly a rewording of the previous one): how does the application become aware of a new customer/new database?
    – Andriy M, 4 hours ago






  • Can you modify the procedure that creates the DDL to get rid of the ALTER COLUMN statements by adjusting the CREATE TABLE statements? That would get rid of approximately 11.5K statements for the first schema alone (see the sketch after these comments).
    – ypercubeᵀᴹ, 4 hours ago







  • Independently, you could put each schema in a separate transaction.
    – ypercubeᵀᴹ, 4 hours ago






  • You can greatly reduce the number of DDL statements by packing many clauses into a single ALTER TABLE statement for the same table (also shown in the sketch after these comments) ...
    – Erwin Brandstetter, 4 hours ago










  • @AndriyM CREATE DATABASE is executed in a separate process. Customer creation is a separate process, too. We maintain customer info and connection properties in a distributed way (etcd).
    – The Coder, 3 hours ago
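
A sketch of what the two comments above suggest, with invented table and column names: declare defaults and NOT NULL inline in CREATE TABLE instead of issuing separate ALTER COLUMN statements, and pack the remaining constraints for a table into a single multi-clause ALTER TABLE.

-- Defaults and NOT NULL declared inline, removing the per-column ALTER statements
CREATE TABLE schema_1.orders (
    id         integer     NOT NULL,
    status     text        NOT NULL DEFAULT 'new',
    created_at timestamptz NOT NULL DEFAULT now()
);

-- Remaining constraints for the table packed into one ALTER TABLE
ALTER TABLE schema_1.orders
    ADD CONSTRAINT orders_pkey PRIMARY KEY (id),
    ADD CONSTRAINT orders_status_check CHECK (status <> '');

With the SET DEFAULT / SET NOT NULL statements folded into CREATE TABLE and the per-table alterations combined, each schema would need roughly one CREATE TABLE, at most one ALTER TABLE and the index statements per table, i.e. on the order of 1,900 statements instead of about 13,500.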

















1 Answer
A better idea entirely is to use a SQL Server FDW, which already has the logic to map Microsoft SQL Server data into PostgreSQL form (for example, BIT gets mapped to BOOLEAN).



Then, every three minutes (see the sketch after this list):



  • you import the foreign schema into last_fetch_schema

  • if last_fetch_schema is different from local_schema

    • you resync the schemas

  • you copy all of the data over with an INSERT INTO ... SELECT ... ON CONFLICT DO UPDATE, and you can select only the newest data

  • you drop the foreign schema last_fetch_schema
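
A minimal sketch of that loop, assuming a foreign server (here called mssql_server) has already been set up with a SQL Server FDW that supports IMPORT FOREIGN SCHEMA; the schema, table and column names are placeholders:

-- Pull the current SQL Server catalog into a scratch schema
CREATE SCHEMA last_fetch_schema;
IMPORT FOREIGN SCHEMA dbo FROM SERVER mssql_server INTO last_fetch_schema;

-- First load only: clone the structure and add the key that ON CONFLICT relies on
CREATE TABLE local_schema.foo (LIKE last_fetch_schema.foo);
ALTER TABLE local_schema.foo ADD PRIMARY KEY (id);

-- Every sync: upsert, optionally restricted to rows changed since the last run
INSERT INTO local_schema.foo
SELECT *
FROM last_fetch_schema.foo
WHERE updated_at > '2018-10-01 00:00:00'  -- last successful sync time, if rows are versioned
ON CONFLICT (id) DO UPDATE
SET name       = EXCLUDED.name,
    updated_at = EXCLUDED.updated_at;

-- Drop the scratch schema until the next fetch
DROP SCHEMA last_fetch_schema CASCADE;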

What do you gain?



  • On first load, you can simply use CREATE TABLE local.foo ( LIKE foreign.foo )

  • You can easily compare metadata differences

  • CSV loses type information and leaves you to infer it; an FDW can read the metadata catalog

  • Grabbing only the newest rows is very simple if the rows are versioned; you don't have to send the entire database anymore





answered 56 mins ago by Evan Carroll

  • That was a good suggestion, but not everyone's SQL Server is accessible over the internet. Customers have no restrictions on outbound connections, but most of them have difficulty configuring inbound connections (and that is just one issue; there are other cases too, like db / table / column creation / deletion / modification in an on-premise patch, etc.). Given the customer volume, this isn't really feasible / scalable for us.
    – The Coder, 2 mins ago










