Performance of NVMe vs SCSI for Local SSDs in GCP using Container OS
In Google Cloud, I did a simple performance test comparing two "local SSD" drives attached to the same VM: the first attached as NVMe, the second as SCSI. I was expecting NVMe to be somewhat faster, but got a 5% performance hit instead:
        NVMe    SCSI
real   157.3   150.1
user   107.2   107.1
sys     21.6    22.2
The Google Compute Engine VM was running COS (Container-Optimized OS), and the Docker container itself was a busybox image running md5sum on the same 45 GB file. The results (averaged over 3 runs) are a bit puzzling: sys time is lower for NVMe, user time is about the same, but real time for NVMe is about 5% slower. The container was run with
docker run -v /mnt/disks/nvme:/tmp1 -v /mnt/disks/scsi:/tmp2 -it busybox
The test was executed with
time md5sum largefile
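One source of run-to-run noise in a test like this is the Linux page cache: after the first pass, part of the file may be served from RAM rather than from the device. A minimal sketch of a more controlled repetition is below; the file path and size are placeholders (the original test used a 45 GB file under /mnt/disks/nvme), and dropping caches requires root.

```shell
# Sketch: repeat the timing with the page cache flushed between passes,
# so each pass reads from the device rather than from RAM.
FILE="${1:-./largefile}"
# Create a small sample file if none exists; use a genuinely large file
# for a meaningful device-level measurement.
[ -f "$FILE" ] || dd if=/dev/zero of="$FILE" bs=1M count=8 2>/dev/null
for run in 1 2 3; do
  sync
  # Dropping caches requires root; ignore the failure if not permitted.
  sh -c 'echo 3 > /proc/sys/vm/drop_caches' 2>/dev/null || true
  time md5sum "$FILE"
done
```

Without the cache drop, the second and third runs can be much faster than the first, which skews any average over 3 runs.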
performance google-cloud-platform google-compute-engine
asked Nov 13 '18 at 0:39 by Yurik (edited Nov 13 '18 at 1:09)
1 Answer
I believe there was a recent improvement to the guest NVMe driver which might help with this. I heard that it ships by default on the latest Ubuntu images, but it may not be included in the COS distribution yet. The patch is available here.
FWIW, md5sum is also not meant as a storage performance benchmarking tool, so your results may not be very reproducible: it has CPU overhead (calculating the checksum), it runs on top of your local filesystem (which may or may not be fragmented), and the IO size it uses to read the data is unspecified. All of this adds variability to the test. If you want to do true IO benchmarking, Google's docs have a pretty good guide explaining how to use fio for that directly on top of local SSDs.
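As a starting point, a sequential-read fio job for the two mount points might look like the sketch below. The paths, file size, and runtime are examples only, not taken from Google's guide; direct=1 bypasses the page cache so the devices themselves are measured.

```ini
; seqread.fio - hypothetical job file; run with: fio seqread.fio
[global]
rw=read
bs=1M
iodepth=64
ioengine=libaio
direct=1
size=10G
runtime=60
time_based=1
group_reporting=1

[nvme]
filename=/mnt/disks/nvme/fio.dat

[scsi]
filename=/mnt/disks/scsi/fio.dat
```

Running both jobs under identical parameters gives per-device bandwidth and latency numbers that isolate the interface (NVMe vs SCSI) from filesystem and checksum overhead.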
answered Nov 13 '18 at 21:28 by Dan