SAN IOPS figure
Recently we've been having high disk latency issues, and we wanted to benchmark one of our mount points on a pre-production server to get an idea of how much IOPS, throughput, and latency our SAN is capable of. The infrastructure hardware is maintained by another team and we have no visibility into what they do.
We used diskspd to run a test with the parameters below.
Results
As you can see, the maximum IOPS at an 8 KB block size was around 5,800.
So my question is: for a SAN in a not-so-small IT shop, is that a good number? Should we be aiming for more? What is an average IOPS figure for entry-level and high-end SANs?
Adding more info based on follow-up questions from the answers.
1) Why an 8 KB block size? I read that read-ahead uses larger block sizes, but when I analysed the Bytes/Read counter it was around 8 KB, suggesting SQL Server is not able to do many read-aheads. Does fragmentation play a role here? (A query sketch for checking this follows after point 2.)
2) I did test with a smaller file size (2 GB) and the IOPS and throughput were much higher, suggesting that the path is not the bottleneck. I also tested with a larger block size, getting a throughput of around a terabyte.
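As a side note on point 1: one way to sanity-check the average read size SQL Server is actually issuing is to look at the per-file I/O statistics. This is only a sketch using the standard sys.dm_io_virtual_file_stats DMV (it is not from the original post, and the figures it returns are cumulative since the files were opened):

    -- Average bytes per read, per database file (cumulative since startup)
    SELECT  DB_NAME(vfs.database_id)                             AS database_name,
            vfs.file_id,
            vfs.num_of_reads,
            vfs.num_of_bytes_read / NULLIF(vfs.num_of_reads, 0)  AS avg_bytes_per_read
    FROM    sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    ORDER BY avg_bytes_per_read;

An average close to 8,192 bytes would back up the perfmon observation that most reads are single-page reads rather than read-ahead.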
Cheers
sql-server performance sql-server-2014 san
asked 1 hour ago by IOtester (new contributor); edited 25 mins ago
2 Answers
So you ran the diskspd tool with the following parameters (assembled into a single command below):
Discussing Parameters
- b8K : 8 KB block size (default: 64K)
- d60 : 60 seconds duration (default: 10 s)
- o32 : 32 outstanding I/O requests per target, per thread (default: 2)
- t8 : 8 threads per target
- h : disable software and hardware caching (deprecated: use -Sh instead)
- r : random I/O, aligned to the block size (an explicit alignment value can also be given)
- w0 : 0% writes, i.e. 100% reads
- L : measure latency statistics
- c200G : create a 200 GB test file
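Putting those switches together, the full invocation presumably looked something like the sketch below; only the switches come from the list above, and the target file path is a placeholder:

    :: Reconstructed from the switch list above; the target path is hypothetical
    diskspd.exe -b8K -d60 -o32 -t8 -h -r -w0 -L -c200G E:\MountPoint01\iotest.dat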
So even though there are some defaults, you have decided to use different values. Question: Do you get different results when using the default values? I'm asking because SQL Server Read-Ahead varies from 64 kB right up to 1024 kB.
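To make that comparison concrete, a follow-up run at the 64 KB default block size (everything else unchanged, path again a placeholder) might look like this:

    :: Same workload, but at the 64 KB default block size, to compare against the 8 KB run
    diskspd.exe -b64K -d60 -o32 -t8 -h -r -w0 -L -c200G E:\MountPoint01\iotest.dat

If throughput rises sharply while IOPS drop, the array is bandwidth-capable and the 8 KB figure is the more relevant limit for OLTP-style access.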
Further Questions to Consider
I know what you're going through, but you aren't giving us much to go on.
- How is the storage attached? (2 GBit / 4 GBit / 8 GBit / 16 GBit)
- What kinds of disk are in the SAN? (SSD / HDD / Hybrid / ...)
- ... and possibly other questions.
Yes, you stated you have no interaction with the server team, but any information you provide is a step in the right direction. Otherwise, it's just a best guess.
Answering Your Question(s).
So my question is, for a SAN in a not so small IT shop, is that a good number?
Possibly, but then again possibly not; that depends on your requirements and how your storage is configured. On an old IBM storage system, we were constantly seeing only 210 MB/s of throughput over an SVC with dual 2 Gbit/s SFPs attached to the server hardware. We should have been seeing figures around 420 MB/s. After reconfiguring the switch ports we were much nearer the second number than the previous 210 MB/s.
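(For context on those two numbers: a 2 Gbit/s Fibre Channel link delivers roughly 200-210 MB/s of payload after 8b/10b encoding overhead, so one active path tops out around 210 MB/s and two active paths around 420 MB/s.)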
Should we be aiming for more?
That depends on your requirements, your hardware and your configuration. Too many unknowns to give a definite answer.
What is an average IOPS figure for entry-level and high-end SANs?
Anywhere between 2,000 (entry level) and 25,000 (high end; as of 2015). According to some articles, IOPS no longer seem to be the number to look at.
Reference: Do IOPS Matter? Simple Answer, No
answered 1 hour ago by hot2use; edited 26 mins ago
IOPS and latency are two independent numbers.
5k IOPS seems decent, but the latency of 44 ms suggests (to me) that it is practically useless (when compared to a single disk's ~5 ms latency).
When a single SATA drive can manage a transfer rate of 100 MB/s, your 45 MB/s aggregated transfer rate suggests your SAN is overloaded.
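For what it's worth, the two figures quoted here are consistent with each other: throughput is simply IOPS times block size, so 5,800 IOPS x 8 KB ≈ 46,400 KB/s ≈ 45 MB/s.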
High-end SANs incorporate SSDs, which can push the aggregated IOPS (as measured by the SAN) into the millions. Your SAN team should be able to tell you what share of the IOPS/transfer rate your host is getting.
You definitely need to work with your SAN team to see whether the limitation is due to a lack of SAN resources or to a hardware/software problem.
Fix HW/SW problems
If you and the SAN team see that you are not getting all the performance you are supposed to get, there could be something wrong with your setup.
- Ensure you have updated drivers.
- Ensure you have updated BIOS/firmware on the:
  - motherboard
  - Fibre Channel card
  - Fibre Channel switches
  - SAN device
  - in one case, we had to update all of the HDDs' firmware
- Ensure multipathing is configured correctly (a quick command-line check is sketched after this list).
  - In one case of mine, the automatic fail-over was causing performance problems.
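As a starting point for the multipathing check on Windows, the mpclaim utility that ships with the MPIO feature can list the discovered paths and the load-balance policy. A sketch, assuming the built-in MPIO/DSM is in use:

    :: List MPIO-managed disks and the number of paths behind each one
    mpclaim -s -d
    :: Show path details and the load-balance policy for MPIO disk 0
    mpclaim -s -d 0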
answered 1 hour ago by Michael Kutz
It is even worse if you compare it to the latency of an SSD-based setup. My own hyperconverged cluster rarely goes over 1 ms, and that is mostly HDD (plus 2 large SSDs as cache per server).
TomTom, 35 mins ago