RTX3070 8GB or RTX3060 12GB: is the latter ever better for Enscape, Unreal Engine, etc?
1. Is a RTX3060 (3060, not 3060ti) with 12GB VRAM ever better than a RTX3070 8GB VRAM, for GPU rendering like Enscape or Unreal Engine? The RTX3060 12GB would be a fair bit slower in fillrate, but is it ever considered more balanced because of the VRAM?
I currently have just a GTX750ti 2GB in Enscape, and I usually run out of VRAM before I run out of fillrate, in scenes with really just an average amount of texture detail - but could it be that 2GB simply is more of a bottleneck for the amount of fillrate the GTX750ti has, than the 8GB is for the RTX3070? I guess for videogames 8GB is considered enough for RTX3070; is it different in archviz?
2. Do GPU-based renderers like Enscape, Lumion, Unreal Engine use system RAM for textures etc when VRAM is all used up, at the cost of slower performance? Or is VRAM a hard limit on the size of projects you can render? In my experience it seems the latter - with my 2GB GTX750ti in Enscape, elements in scenes simply disappear and don't get rendered when VRAM is all used up, but my impression could be wrong.
(I'm limiting to these 2 choices b/c RTX40xx will draw too much power for my PSU, and b/c of budget. System RAM is 16GB and CPU is Ryzen 1700x.)
Thanks!
More GPU does not make you a better architect/designer.
Yes, I know, hehe.
It depends on exactly what you are doing; there are tradeoffs. More graphics memory can improve performance in certain ways, but the 3070 trades its smaller 8 GB of video memory for more 'cores' (CUDA/tensor/etc.) than the 3060. The 12 GB gives you more memory headroom, so you may run into fewer issues with moving data between graphics memory and system memory (memory swapping). If your application isn't using more memory than the 8 GB provides, you'll benefit more from the extra cores. It depends on what is being done. With a game engine like Unreal Engine 4 or 5, it will come down to how you use the engine; there is still quite a lot of latitude in how an engine is used by any given game or app.
The benefits are determined by how a game or application is designed and programmed, and by how the software developers utilize the video card's features in the grand scheme of the application or game.
Many current-generation applications and games are moving toward using between 4 and 8 GB of video RAM; use of more than 8 GB is still relatively limited. Most software developers design around the amount of memory the largest group of their users has, and statistically that main group is in the 4 to 8 GB zone. Cards with 2 GB or less are progressively being replaced by the 4 to 8 GB range, while cards with more than 8 GB are often out of many users' price range, so developers optimize for 4 to 8 GB configurations and capitalize on GPU processing features rather than sheer memory size. That is where developers' time tends to be invested. Many applications and games simply don't exploit the larger RAM space on a video card as much as they exploit the GPU's 'cores'. My point is that the portion of video memory above 8 GB tends to be underutilized by many apps, just as some apps won't make optimal use of more than 8 CPU cores, so an 18-core CPU may go underused by a particular app. The OS can put the additional cores to work running other apps and processes, but a single app won't fully use more cores than its developer designed it to use.
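If you want to check empirically whether a scene is hitting the VRAM ceiling rather than the core count, NVIDIA's NVML library can report live memory use while Enscape or Unreal has a scene loaded. A minimal sketch, assuming the pynvml Python bindings (pip install nvidia-ml-py) and an NVIDIA driver are installed:

```python
# Poll GPU memory while a scene loads to see whether VRAM,
# rather than fill rate or core count, is the real bottleneck.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

try:
    for _ in range(30):  # sample once a second for ~30 seconds
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"VRAM used: {mem.used / 1024**3:.2f} / "
              f"{mem.total / 1024**3:.2f} GB "
              f"({100 * mem.used / mem.total:.0f}%)")
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()
```

If usage sits pinned at the card's total while scene elements drop out, you are seeing the hard-limit behavior described in the original question.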
Thanks, yes, I think I remember reading that game developers design (optimize) their games around the amount of VRAM most users have, while taking advantage of the CUDA processing cores, or perhaps shaders? Very interesting.
Do we know how this balance of VRAM versus GPU processing cores tends to work out in archviz GPU renderers such as Enscape or Unreal Engine? I guess it would depend partly on how the designer models their scene, but is 8GB typically the right amount of VRAM for archviz (are most architects satisfied with 8GB VRAM on something like a RTX3070)?
CUDA cores and shaders, yes. nVidia has its terms and AMD/ATI has its own equivalents under different names. Shaders and compute units allow GPUs to take on generic computing beyond the graphics work itself, which can further improve overall performance when work can be offloaded to the GPU, leaving the CPU to do what it does best. It really all depends on how the software uses those features.
Regarding Enscape (https://enscape3d.com/community/blog/knowledgebase/system-requirements/), the software is likely optimized for GPUs with 4 to 8 GB of video RAM. Regarding Unreal Engine, if you are still on UE4.xx it is probably also geared toward 4 to 8 GB. UE5 may capitalize on 12 GB of video memory, but as you said, it depends on how the scene is modeled and on the polygon counts and texture resolutions. Since architectural visualization is often not rendered in real time, and can spend more time on each frame, you can produce extremely sharp renderings at high resolutions even with just 1 GB of video RAM.
Hell, high quality rendering was being done on a few Amiga 4000 computers back in the 90s, though those renderings took quite some time. Playback of a finished rendering, for say a fly-through, requires very little memory. Even today, I would argue 12 GB isn't crucial; 8 GB would still be fairly sufficient for most architectural needs. For realtime rendering, raw GPU performance matters most for producing the ray tracing. The extra video memory may help if you have a massive scene and want to maximize realtime raytraced quality and frame rate. If you are not doing that much realtime raytracing of a large scene, you can pre-render the scene in non-realtime for what may be an even higher quality photorendering, given a little more time. It becomes a tradeoff: you don't need to go overboard spending a week rendering a scene if you can get a decent rendering in, say, a 3-5 minute timeframe. The 8 GB of memory may well be sufficient.
Note: on the 3060 card having the 12 GB. That GPU has noticeably fewer CUDA cores and shaders than the 3070, and while a factory-overclocked 3060 12 GB may run a slightly higher clock rate than a stock 3070, the 3070's extra cores will generally keep it ahead. I do believe most architectural rendering is done sufficiently with video cards in the 4 to 8 GB VRAM range. 12 GB VRAM may smooth things out, with more space and less swapping, but it will depend on what you are trying to do. If you mostly pre-render a scene into a fly-through clip, the amount of video RAM matters little for playback, because you generally only need enough video memory for 2-3 video frames (double or triple buffering). At 8K resolution and 32 bits per pixel, that is roughly 133 megabytes per frame, which means even 512 MB of video memory is more than sufficient.
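That per-frame figure is easy to verify with back-of-envelope arithmetic; a quick sketch (the resolution and buffer counts are just the examples from the paragraph above):

```python
# Framebuffer memory needed for playback of a pre-rendered clip.
# 32 bits per pixel = 4 bytes; double/triple buffering multiplies
# the single-frame cost.
def framebuffer_mb(width, height, bytes_per_pixel=4, buffers=1):
    return width * height * bytes_per_pixel * buffers / 1e6

# 8K UHD is 7680 x 4320 pixels:
print(f"{framebuffer_mb(7680, 4320):.0f} MB per frame")                   # ~133 MB
print(f"{framebuffer_mb(7680, 4320, buffers=3):.0f} MB triple-buffered")  # ~398 MB
```

Triple-buffered 8K playback still fits comfortably inside 512 MB, which is the point: playback demands almost nothing compared to rendering.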
Pre-rendering just needs enough memory for processing the render and its calculations, and in most cases even 4 GB is sufficient, because we don't render print-outs for large-format vinyl poster sheets in VRAM; their resolutions are far beyond video displays (although they are static versus, say, video clips). At 4800 dpi you are looking at an effective 1200 'pixels' per inch, which means a 36" x 48" sheet comes to 43,200 by 57,600 'pixels' in the rendered bitmap. For output at that size you just need hard drive space, and you stream the bytes to the printer, with enough of a data buffer to feed the printer's own memory efficiently.

So even a GeForce GTX 950 can produce a raytraced rendering that exceeds the quality of what an RTX 4090 can raytrace in realtime; the GTX 950 and your CPU will simply take longer to process it, which obviously isn't realtime during the render, though playback of the result would be. For architectural visualization, realtime photorealistic raytracing isn't crucially needed and historically wasn't done. Often you would just pre-render the scene as a fly-through and produce a video file to show the client, plus print-quality renderings for the boards, which is most certainly not something rendered in realtime. Yes, contemporary realtime raytracing is quite nice, but those frames would still have been considered somewhat low-resolution by large-format print standards 15-20 years ago.
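The large-format print numbers work out the same way, and they show why such a bitmap lives on disk rather than in VRAM; a sketch using the sheet size from the paragraph above:

```python
# Uncompressed bitmap size for a 36" x 48" sheet at an effective
# 1200 pixels per inch (a 4800 dpi printer lays multiple ink dots
# per image pixel).
ppi = 1200
width_px = 36 * ppi       # 43,200 px
height_px = 48 * ppi      # 57,600 px
bytes_per_pixel = 4       # 32-bit RGBA

size_gb = width_px * height_px * bytes_per_pixel / 1e9
print(f"{width_px} x {height_px} px = {size_gb:.2f} GB uncompressed")  # ~9.95 GB
```

At roughly 10 GB uncompressed, the file is streamed to the printer from storage; no consumer card's VRAM is expected to hold it.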
I've seen print renderings done on a Commodore Amiga 4000 back in 1995 that rival what even an RTX 4090 can render in realtime (a single video frame produced in 1/24th of a second or less, as in games). Granted, the Amiga would have spent days rendering that scene for print. It is only fairly recently that video cards can do realtime raytracing, meaning raytracing a scene in less than 1/24th of a second.
Either video card should be sufficient for most needs in architectural visualization, and the video RAM of either card is most likely sufficient for the archviz needs of most architects.
Thanks so much. Yes, I read that raytracing uses more VRAM, with people even on higher-end GPUs often needing to turn raytracing off. For my part, I hope to use realtime rendering (even if without raytracing) as much as possible in Enscape, so I will have to keep that in mind. Still reading through everything you wrote and explained, but I wanted to thank you.
Add to that, I also agree with what NS said.
PS: If you have a sufficiently large case (full ATX tower), you can upgrade the PSU to a 1000 to 1500 watt unit.
Standard personal computer equipment is designed for the standard electrical outlets found in residences and most offices, which in the US are rated at 115 to 125v @ 15A. That is why consumer PSUs top out around 1600 watts on a single outlet. Wiring and breakers are sized to the circuit, and household electrical service overall has had to grow well beyond what was fine in the 1950s and 60s, because modern 21st-century living puts so many more electrically powered devices, computing devices included, on the outlets. Still, an individual desktop PC is intended to run on a regular outlet, within the same sort of energy envelope as a typical vacuum, microwave, etc.
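The ~1600 watt ceiling falls out of simple circuit arithmetic; a sketch, using the common 80% continuous-load convention (that factor is my assumption here, not from the post above):

```python
# Power available from a standard US residential outlet.
volts = 120                # nominal US household voltage
amps = 15                  # standard residential breaker rating
continuous_factor = 0.8    # continuous loads kept to 80% of rating

peak_w = volts * amps                      # 1800 W absolute ceiling
continuous_w = peak_w * continuous_factor  # 1440 W sustained
print(f"peak {peak_w} W, continuous {continuous_w:.0f} W")
```

That 1440 W sustained figure is why consumer PSUs cluster at or below 1600 W, and why the very largest units really want a 20 A circuit (or 230 V mains) to deliver full output.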
Thanks. Yes, I'm trying to avoid upgrading the PSU (so not an all-out upgrade in that sense, just the GPU).
Depends on your GPU and everything else. Ideally, your peak power draw (the sum of the maximum wattage of every device and component in the computer; sometimes you have to calculate wattage from voltage and current, volts x amps = watts, roughly) should not exceed 75% of what your PSU is rated for. You may also need to pay attention to the wattage draw per voltage rail. A modern PC PSU outputs on multiple voltage levels, with 12v, 5v, and 3.3v lines as the three main rails, and there is a maximum current (amperage) for each; you can calculate the maximum wattage for each rail with the same volts x amps math. The rail wattages together may add up to a bit more than the PSU's rated total, but the cumulative peak wattage of all your components, if everything peaked at once, should not exceed 75% of the rated total, nor 70-75% of the wattage of any individual rail (e.g. 12v, 5v, 3.3v). Normally your computer will use less than half. If your PSU is rated at 1000w, aim your build at around 750w or less at maximum peak draw. Staying under 75% of your PSU's rating at peak tends to prolong its life.
Typically, your computer will draw only up to about a third of that at any moment in time. Your video card and CPU are among your most power-demanding parts; hard drives and DVD/Blu-ray drives hit their peak power only intermittently, when actually in use. This approach is based on basic electronics fundamentals, fairly simple math, and the figures are easy to find. If you bought from a typical brand (e.g. Dell), they usually fit the minimum-size supply their product needs, often only 50 to 100w above the requirement. I tend to size the PSU so that my maximum peak wattage doesn't exceed 50 to 60% of its rating; usually I choose a PSU that roughly doubles the theoretical maximum peak wattage of the build. That lets the PSU handle the load without stress: the closer a PSU runs to its rated maximum output, the more stress on its internal components and the sooner its eventual failure. So I hope your PSU is adequate for long-term use. If you built your PC from your own choice of motherboard, RAM, CPU, and video card, then you are probably using the PSU sizes commonly recommended by the various tech industry personalities, such as the gaming-tech reviewers.
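Putting that sizing method into numbers; the component wattages below are hypothetical placeholders for illustration, not measured figures for any specific build:

```python
# PSU headroom check: total peak component draw should stay under
# ~75% of the PSU rating (50-60% for extra longevity, per above).
# These wattages are illustrative examples only.
components_w = {
    "CPU (Ryzen 1700X class, peak)": 95,
    "GPU (RTX 3070 class, peak)": 220,
    "motherboard, RAM, fans": 60,
    "drives": 25,
}

psu_rating_w = 650
peak_total = sum(components_w.values())    # 400 W
load_fraction = peak_total / psu_rating_w  # ~0.62

print(f"peak draw {peak_total} W = {load_fraction:.0%} of a {psu_rating_w} W PSU")
if load_fraction > 0.75:
    print("Over the 75% guideline: consider a larger PSU.")
else:
    print("Within the guideline; more headroom means less stress.")
```

On these example numbers a 650 W supply lands around 62% at theoretical peak, inside the 75% guideline though above the more conservative 50-60% target.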
The recommended PSUs for gaming-grade PC builds (for playing those AAA titles) are close to my recommendation above, so they will usually be sufficient. If your PSU is at that level, just check the power consumption requirements. The 3060 and 3070 are fairly similar in draw, the 3070 using maybe about 50 watts more. For the cards you've given, most video card manufacturers recommend roughly a 550 to 650w PSU based on their example builds. I'd build in headroom beyond that: at 750 to 850w you should be fine depending on the rest of your build, and around 1000w you have margin to spare. Overall the two cards will likely have broadly similar power requirements; the 3060's GPU draws a little less, but its 12 gb of memory and clock specs narrow the gap somewhat. I'd go at least 150w above the video card manufacturer's or nvidia's recommended PSU size.
Thank you. I believe my PSU wattage is OK for a 3070; I did go by the conventional wisdom among hardware websites and tech industry personalities for gaming grade PCs as you mentioned, and hope I included enough extra wattage by doing so. But I will come back to refer to your post if it turns out I need to do more with this.