What does the CUDA option do in Blender?
As I understand it, this option just makes your GPU render the scene along with your CPU, but I can't see any improvement when using it with the Cycles render engine. Can you tell me how to activate it correctly, or why it won't do anything?
cycles
asked Nov 10 at 21:18 by Ilja Zero (61)
Short answer: if you have a good graphics card, moving around in the rendered viewport and seeing the results will be much faster, and you can use that to your advantage. The result will look the same either way; only the rendering time changes.
– Eduardo Abreu, Nov 10 at 21:53
2 Answers
You're still using the CPU as your compute device in the render settings tab. So even though you've enabled CUDA, you haven't actually turned GPU rendering on. Once you switch the device to GPU, you'll see a noticeable speedup.
Bear in mind that you're limited by your GPU's memory, so, for example, a 1 GB card can only render a scene that fits in 1 GB of data. You might run into situations where the render fails because you've run out of memory, and you'll either have to a) reduce GPU memory usage by closing other programs that use your GPU and lowering the resolution of the textures/assets in your scene, or b) switch back to CPU rendering.
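For anyone who prefers to flip these settings from a script rather than the UI, here is a minimal sketch, assuming a Blender 2.80+ style Python API (in 2.79 the same settings live under bpy.context.user_preferences instead of bpy.context.preferences):

    import bpy

    # Tell Cycles to use the CUDA backend (NVIDIA cards).
    prefs = bpy.context.preferences.addons['cycles'].preferences
    prefs.compute_device_type = 'CUDA'

    # Refresh the device list and enable every CUDA device that was found.
    prefs.get_devices()
    for device in prefs.devices:
        device.use = (device.type == 'CUDA')

    # Make the current scene render on the GPU rather than the CPU.
    bpy.context.scene.cycles.device = 'GPU'

This is roughly the scripted equivalent of picking CUDA in the system preferences and setting Device to GPU Compute in the render settings.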
answered Nov 10 at 22:28 by Joseph Brandenburg (662)
CUDA uses the CUDA cores of your GPU to do the rendering. In short, they're stream processors, and they do not affect how the output render looks; in essence, there is no difference between them and your GPU. I don't think you can even use an NVIDIA card for rendering without going through its CUDA cores, for two reasons: 1. it would slow to molasses (analogous to running a PC off a USB 1.0 drive instead of a hard drive), and 2. Blender wouldn't know how. CUDA is specific to NVIDIA; OpenCL is used for AMD cards. If you have an NVIDIA card enabled, it will process the render; otherwise you're going to have to use your CPU. Do whichever is faster: I use a 635M, and for me it is faster to render on my Intel i5 3210 than on the graphics card. You can read more about this here: https://docs.blender.org/manual/en/latest/render/cycles/gpu_rendering.html
This reference is also good for explaining what CUDA cores are: https://www.lifewire.com/what-is-nvidia-cuda-834095
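To check which of these backends a particular machine actually exposes, a short sketch like the following can be run in Blender's Python console (again assuming the 2.80+ preferences API; attribute paths differ slightly in older releases):

    import bpy

    prefs = bpy.context.preferences.addons['cycles'].preferences

    # Refresh and print every compute device Cycles can see.
    # device.type is e.g. 'CUDA' for NVIDIA cards, 'OPENCL' for AMD, 'CPU' otherwise.
    prefs.get_devices()
    for device in prefs.devices:
        print(device.type, device.name, 'enabled' if device.use else 'disabled')

If nothing with type 'CUDA' or 'OPENCL' shows up, Cycles will fall back to CPU rendering regardless of the Device setting.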
edited Nov 10 at 23:12, answered Nov 10 at 21:22 by Lucidity of Power (3196)
Oh yeah, I enabled GPU Compute, and the same scene rendered in 33 seconds, which is great. But I don't understand: why is it showing only one yellow box, which indicates that it is using only one core? Can someone explain this to me?
– Ilja Zero, Nov 11 at 8:41
A GPU only has one core; it's a GPU. There is no multi-threaded processing in the same way that a CPU has multiple threads. This is why, to take advantage of a GPU, your tile size should be much bigger (think 10x or more): a GPU can process huge amounts of data simultaneously, even though only one core on the GPU does the work in the end. Think of it this way: a GPU has one processor with multiple pipelines into it, while a CPU has multiple cores but only two pipelines per core. That's the difference.
– Lucidity of Power, Nov 11 at 18:03