Archinect

My new computer is slower than my old computer

sire888

Greetings,

 

I have a concern. I recently put together a new computer tower with 16GB of RAM and a GeForce graphics card with 2GB of dedicated memory, among other specs. The problem is that it renders slower than my old computer, a laptop with 8GB of RAM and a GeForce graphics card with 1GB. What could be causing this?

 

Thanks ahead,

Sire

 
May 16, 15 4:16 pm
kickrocks

Either you picked out bad hardware or you are running faulty software.

Unless you provide more specific details, not much can be said.

May 15, 15 10:26 pm  · 
 · 
Non Sequitur

First guess: whatever model you're rendering is 4x more complex than your old ones, because you assumed a shiny new machine is a superstar.

Second guess: CPU size and speed.

Third guess: you don't know how to render efficiently.

May 15, 15 10:43 pm  · 
 · 

Do you have other programs running in the background, such as antivirus software, that are taking up more system resources than on your old computer? If your old computer didn't have these extras running in the background, that can be an issue. Additionally, if the new computer is running, say, Windows 8.1 and the old computer was running Windows XP or something, there may also be more CPU-intensive processes running in the background.

On top of that, are you running the same program for modeling and rendering? If it's different, the rendering routines may be entirely different and more CPU-intensive.

Keep in mind that not only different companies' software but also different versions of the same company's software will have different resource impacts. For example, the newest version of AutoCAD is probably going to run and render slower than AutoCAD from 10 years ago, simply because of all the added bells and whistles.

For example: if you wrote a program in BASIC with simple if-then comparisons checking only 5 options, and then wrote a program with a more complex if-then framework comparing 5,000 options, the one with 5,000 options to compare is going to take longer to process. Conceptually, a big part of these programs is a sophisticated framework of comparative logic. The more bells and whistles, the more processing it takes. On top of that, there is the memory footprint.
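
A rough illustrative sketch of that scaling point (not the actual programs being discussed), timing a simple linear comparison over 5 options versus 5,000 in Python:

import timeit

def pick(options, target):
    # Simple linear if/then-style comparison over the option list.
    for opt in options:
        if opt == target:
            return opt
    return None

small = list(range(5))        # a program comparing 5 options
large = list(range(5000))     # a program comparing 5,000 options

print(timeit.timeit(lambda: pick(small, -1), number=1000))
print(timeit.timeit(lambda: pick(large, -1), number=1000))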

The additional features add to the system resource load. I can do some tasks faster on an Amiga 1200 than on a Windows 8.1 machine. In fact, an Amiga 1200 boots in less time than my computer running Windows 8.1, but we would be comparing apples and oranges: very different resource footprints. Graphically, the Amiga 1200 is less intense and its operating system is orders of magnitude smaller. My C64 boots in a fraction of a second, yet we are talking about only an 8-kilobyte ROM-based kernal. A GUI-based OS put on ROM, with flash memory for settings and delivered on a cartridge, could boot in a fraction of a second compared to the minutes it takes to load a GUI OS from disk on a C64, but it would also be many times smaller, with much simpler 16-color graphics.

There are so many factors as to why your rendering is taking longer. 

Your newer computer probably has a bunch of stuff running in the background taking up CPU and memory resources, and you could be using different software (a different version, or a different company's product) that either has less efficient routines or is more bloated with features that consume more system resources.

If you can, shut down unneeded background tasks and, if possible, use the very same program you were using on the older computer. It will probably run faster.

However, only marginally, unless it capitalizes on the fact that you have more CPU cores, more GPU/CUDA cores (or equivalent), and more RAM. You need to apply some of the optimization you probably applied over the years on the old computer.

It is unlikely hardware is the issue. Your biggest issue is likely to reside in the realm of software.

I doubt anyone can diagnose this without seeing what is going on; you'll need a computer tech to check this stuff over at your computer.

May 16, 15 3:20 pm  · 
 · 
accesskb

What is your CPU? You could have a $5,000 graphics card and 100GB of RAM, but they would mean nothing if your CPU is a 1.1GHz Intel Pentium from 2005 xD

May 16, 15 7:58 pm  · 
 · 

See accesskb's answer. Your graphics card has nothing to do with rendering performance.

Also make sure your rendering software isn't limiting threads or RAM. 
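
A quick way to see what the OS itself reports, so you can compare it against what the renderer's settings claim to be using; a sketch assuming Python with the third-party psutil package installed:

import os
import psutil  # third-party: pip install psutil

print("Logical cores reported by the OS:", os.cpu_count())
vm = psutil.virtual_memory()
print("Total RAM (GB):", round(vm.total / 1024**3, 1))
print("Available RAM (GB):", round(vm.available / 1024**3, 1))
# If the renderer's own preferences show fewer threads or less RAM than
# this, it is capping itself somewhere in its settings.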

May 16, 15 10:02 pm  · 
 · 

Nick Weaver,

Careful with that notion. It can matter, but I would lean towards software, not hardware or the GPU.

I doubt it is a hardware issue, considering you have multi-core and generally faster computer hardware.

For your information, I am a software developer. Most of you are not or never have been.

We don't know if the old computer had an upgraded graphics card, but I doubt it in this particular case. Sometimes an older computer with a high-end graphics card can perform better in a particular piece of software than a newer computer with a mediocre integrated or low-end graphics card. The reason would be that the software in such a case is GPU-centric by design rather than CPU-centric: the architecture of a given program may be built around the GPU or around the CPU. If it uses a lot of hardware-accelerated graphics, the graphics routines are often GPU-centric and, these days, lean on CUDA cores. This is why video games, for example, often get their biggest performance boost from the GPU rather than the CPU, and from GPU RAM rather than main system RAM: the game is built to capitalize on GPU processing. Rendering routines are often written around the GPU using DirectX, OpenGL, and OpenCL. That is why it is called a Graphics Processing Unit and not a video display controller. Graphics routines are like display lists on the Atari 8-bit, but many generations more advanced and sophisticated; the idea is that the GPU takes on part of the processing instead of the CPU. These days we use hardware-accelerated graphics instead of a software renderer, although, amazingly, software still often has a setting for software rendering, and it may even be the default.

That is not likely the case here, though, given we are talking about a GeForce card with 1GB of video RAM on the older computer versus a 2GB GeForce card on the newer one.

I doubt the person would realistically buy a new computer with a weaker CPU than the older computer. I believe they are probably upgrading from a Core 2 Duo or Core 2 Quad, or something like that, to a quad- or six-core i5 or i7, which should be a faster CPU. So why would the rendering be slower? Perhaps the software being run is the reason.

It is unlikely to be hardware. The only thing I can think of would be slower RAM: if both computers use DDR3 and the older computer had faster (but smaller) DDR3 modules, that could have an impact, but I doubt it.

I believe the bundled antivirus/internet security/anti-malware suite is probably the biggest factor, because those programs tend to tax performance: they consume CPU cycles that would otherwise go to the task at hand, especially if the older computer didn't have them installed.
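
If you want to see what is actually eating CPU in the background, a rough sketch along these lines (again assuming the third-party psutil package) will list the hungriest processes:

import time
import psutil  # third-party: pip install psutil

# Prime the per-process CPU counters, wait, then sample.
procs = list(psutil.process_iter(['name']))
for p in procs:
    try:
        p.cpu_percent(None)
    except psutil.NoSuchProcess:
        pass
time.sleep(2)

usage = []
for p in procs:
    try:
        usage.append((p.cpu_percent(None), p.info['name']))
    except psutil.NoSuchProcess:
        pass

# Print the ten busiest processes since the sample started.
for pct, name in sorted(usage, reverse=True)[:10]:
    print(f"{pct:5.1f}%  {name}")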

Another thing to consider is the version of the particular 3D modeling/rendering tool, or the render settings. Are more CPU-intensive anti-aliasing and rendering routines being used? There are all sorts of possible reasons.

Also consider this: the older computer's 3D modeler/renderer may be set up to use hardware-accelerated rendering, while the new computer's could somehow be configured to use software rendering by default. Yes, amazingly, programs may still ship with a software rendering path.
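
One quick sanity check on an NVIDIA machine is simply asking the driver what it sees; a sketch that shells out to the standard nvidia-smi tool (assumed to be installed with the driver and on the PATH):

import subprocess

# nvidia-smi ships with the NVIDIA driver; if this fails, the driver
# (and therefore hardware acceleration) may not be set up correctly.
try:
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=name,memory.total,utilization.gpu",
         "--format=csv,noheader"],
        text=True,
    )
    print(out)
except (OSError, subprocess.CalledProcessError) as exc:
    print("Could not query the GPU:", exc)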

I agree with kickrocks' last sentence.

Another possibility is that the two machines are running completely different modeling/rendering programs from different companies, and, frankly put, the current software's rendering routines may simply not be as optimized. We don't know without more information, and the best way anyone will know is by looking at the computers and seeing what is going on.

May 16, 15 11:23 pm  · 
 · 

Put simply, it is likely software-related, not hardware-related, and nothing in the original post would lead me toward a hardware explanation.

May 16, 15 11:25 pm  · 
 · 
sameolddoctor

ALL THAT MATTERS FOR RENDERING IS CPU SPEED

May 16, 15 11:35 pm  · 
 · 
natematt

I mostly want to echo what others are saying: CPU is the name of the game with rendering. There is some software that can utilize graphics cards, but for most it's just brute CPU speed.

In that same line of thought, when it comes to CPUs it's really all about benchmarking. Look up the benchmark numbers for your old CPU and new CPU and get back to us if that's not it.
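
If you can't find published benchmark numbers, a crude do-it-yourself comparison is to run the exact same fixed chunk of CPU work on both machines and compare wall-clock time. A minimal sketch:

import time

def burn(n=2_000_000):
    # Fixed amount of floating-point busywork; identical on both machines.
    total = 0.0
    for i in range(1, n):
        total += (i ** 0.5) / i
    return total

start = time.perf_counter()
burn()
print("Seconds for one run:", round(time.perf_counter() - start, 2))
# Run this on the old laptop and the new desktop; note it only measures
# a single core, so it says nothing about core count or the GPU.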

 

May 17, 15 2:05 am  · 
 · 

sameolddoctor,

Not true, at least as you stated it. If you have ever written a computer program (and I am not talking about hello world in BASIC), you'll know. The CPU may sometimes be the factor. We know the CPU is faster because it is newer than the laptop's, and it is a desktop CPU rather than a laptop CPU.

The hardware is faster, period: higher clock frequency, more cache, a full desktop data path on the DDR, faster FSB, system bus, memory bus, etc.

The only hardware I can think of that might be slower is a lower-RPM hard drive (not likely, but possible) or slower RAM modules, which is highly doubtful.

A modern rendering engine, such as a contemporary raytrace renderer, is going to use hardware-accelerated graphics, and the GPU is a big part of that. If the software on the laptop was set to use hardware-accelerated graphics and the desktop's wasn't, that alone will make the difference in rendering speed. Brute CPU power may only make a 5% difference in performance, whereas a faster graphics card may literally double graphical rendering performance. If the software on the new desktop is only using software rendering while the software on the laptop is making use of hardware acceleration, the laptop will render faster by virtue of that setting.

There is a big difference between raytracing in GDI and raytracing with OpenGL and OpenCL.

May 17, 15 6:42 pm  · 
 · 

natematt,

The CPU is a part of it, but not the name of the game, because 3D renderers are not built like they were 15 years ago, when they used GDI. The CPU makes some difference, but the desktop is probably equal or superior to the laptop across the board, including the CPU, which means it should not perform any slower than the laptop. Laptop hard drives are generally 5400 RPM and desktop hard drives are normally 7200 RPM, so the desktop should do better on that front too.

In addition, a laptop uses a mobile-grade GPU, which generally performs worse than a desktop GPU, especially since a desktop delivers more power to the video card and allows for better graphics. The desktop CPU should also outperform the laptop's, not only because it is newer but because desktop CPUs are typically clocked at 3+ GHz and run at full performance all the time, whereas a laptop CPU is throttled down and operates at lower performance for the sake of battery life, heat emission, etc.

Since around 2005, 3D modelers and renderers have been designed to make use of hardware-accelerated graphics, and that setting may or may not be enabled by default. GPUs with CUDA cores are far better at rendering workloads than a CPU because they are engineered for it, not to mention the massive parallel processing going on. There is also blitter-style architecture integrated into graphics cards that allows rendering work to be processed efficiently and fast. After all, these 3D modeling and rendering tools take pages from high-end 3D game engines, which need high-performance, high-quality rendering in real time.

I personally think the issue is not a cpu vs gpu issue but that the issue at hand is software-related.

May 17, 15 6:59 pm  · 
 · 

PS: I'm not talking about using only the GPU for rendering anyway. Maya, for example, uses a hybrid GPU/CPU approach. CPU-only rendering isn't used unless you are running a pure software renderer, and that is SLOW.

GPU-accelerated rendering routines improve rendering performance drastically, especially when you also capitalize on CUDA or OpenCL. This is what new rendering tools use to get better graphics rendered faster. Yes, the CPU is used, but it is only part of the equation.

If you use a software-renderer mode, you will be doing all of the rendering work on the CPU except for outputting the raster bitmap. Not even a 12-core i7 doing software rendering would beat a single-core Pentium 4 at half the clock frequency that is making use of hardware-accelerated graphics.

May 17, 15 7:12 pm  · 
 · 
natematt
What commonly used arch rendering software renders with gpu?
May 17, 15 8:49 pm  · 
 · 
kickrocks

Iray/Mental Ray, V-Ray RT, Octane, Cinema 4D, LuxRender

May 17, 15 9:09 pm  · 
 · 
sameolddoctor

Richard, what I'm trying to say is that the GPU and RAM seem solid enough, but we've not heard anything about the CPU, which is under the most pressure when trying to render...

May 17, 15 9:54 pm  · 
 · 

Also... Maya, AutoCAD, Revit, and just about every renderer from the last 5-10 years. Sure, the CPU is involved in most of them, but ALL the common 3D modelers, renderers, and animation tools made these days utilize the GPU or render with the GPU. So do 90% of game engines.

natematt: Do you know what DirectX or OpenGL is? If a program uses one of them, then it uses the GPU. OpenCL adds to that with the ability to do even more on the GPU, with CUDA cores and their equivalent on ATI cards.

It is industry standard. 

Lightwave, Blender, Unity, Unreal, CryEngine, Maya, and many others: if the name is something you and several others have heard of, then the GPU is used. "Hardware-accelerated graphics" is another giveaway that a renderer uses the GPU. Sure, parts of the rendering process involve the CPU, but the GPU is now used. GPUs keep getting more advanced, with graphics processors whose performance is comparable to your CPU. Think about it: a modern video card has CUDA cores at around 1 GHz each, and 640 or over 1,000 of them, with streamlined instructions that can do graphics operations in one or two cycles where the CPU's instruction set might take 8 or 10 clock cycles, because it isn't designed to do those particular functions as efficiently. Using this, a render that would take 8 hours with a pure software renderer (the CPU doing all the 3D processing and rasterization, with the video card only displaying the result) can instead be done in, say, an hour or less. We improved rendering drastically through more advanced GPUs and their associated architecture. The PC world learned from the Amiga the benefit of offloading tasks from the CPU to specialized components: several components on the Amiga motherboard that were integral to its graphics processing were gradually integrated into the GPUs on PC video cards, and that kept expanding to make graphics processing ever more powerful. In a sense, a video card is basically a "graphics processing computer". They are far more than the simple register flip-flop "switches" that old CGA video controllers, and even early VGA controllers, were back in the 1980s.

Simply put, the CPU is only part of the equation these days; the GPU is the other part. The GPU, namely the graphics card, is a major part of any graphics processing, including rendering.
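
For a concrete feel of what "offloading work to the GPU" means, here is a minimal sketch using the third-party PyOpenCL package (pip install pyopencl; assumes an OpenCL driver is installed). It just adds two large arrays on whatever OpenCL device the system exposes, which is the same dispatch pattern renderers use for heavier kernels:

import numpy as np
import pyopencl as cl  # third-party: pip install pyopencl

a = np.random.rand(1_000_000).astype(np.float32)
b = np.random.rand(1_000_000).astype(np.float32)

ctx = cl.create_some_context()          # picks a GPU (or CPU) OpenCL device
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# Each work-item handles one element; thousands run concurrently on the GPU.
prog = cl.Program(ctx, """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()

prog.add(queue, a.shape, None, a_buf, b_buf, out_buf)
result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
print("Matches NumPy:", np.allclose(result, a + b))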

 

May 17, 15 11:02 pm  · 
 · 

sameolddoctor,

I'm following up through the posts.. bear with me.

Okay. I agree that the point you are getting at is a possibility, but a remote one. The new computer is a desktop and also has more main system memory, 16 GB. That also suggests the CPU is likely faster than the one in the old computer, which is a laptop. Remember, desktop CPUs of any given generation tend to be faster than laptop CPUs.

I agree with what was said earlier: there is not enough info to know precisely. Hardware is unlikely; I believe it is something more software-related than hardware.

May 17, 15 11:33 pm  · 
 · 
natematt

last 5-10 years

I think you are off base. A lot of the more popular software among architectural professionals has only started leveraging the GPU in the last 1-2 years. And most people are at least a few years behind on software, because that stuff is expensive.

 

May 18, 15 1:31 am  · 
 · 
kickrocks

Modeling is serial, which explains the dependence on faster processor clock speed. It took a while to get enough fast cores on a scalable GPU architecture, and programming parallelism isn't simple for generally non-parallel workloads that need to go A > B > C > D instead of A+B+R > E+F+T > D+G+I+Z. You can render like that no problem, but for a real-time model there'd be holes everywhere.
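
A toy illustration of that serial-chain versus independent-batches distinction, as a sketch in Python (render_tile here is a stand-in, not any real renderer's API):

from multiprocessing import Pool

def render_tile(tile_id):
    # Stand-in for rendering one independent image tile.
    total = 0.0
    for i in range(1, 200_000):
        total += (i ** 0.5) / (tile_id + i)
    return tile_id, total

def serial_chain(steps):
    # A > B > C > D: each step needs the previous result, so no parallelism.
    value = 1.0
    for step in steps:
        value = value * 1.0001 + step
    return value

if __name__ == "__main__":
    # Tiles are independent, so they can be spread across cores (or GPU cores).
    with Pool() as pool:
        results = pool.map(render_tile, range(16))
    print("Rendered", len(results), "tiles in parallel")
    print("Serial result:", serial_chain(range(1000)))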

May 18, 15 2:41 am  · 
 · 
toosaturated

OP needs to tell us more before we can help... Tell us about your CPU, hard drives, operating system, and what you are using to render. The CPU is the biggest factor in rendering, though there are newer rendering programs starting to rely on GPUs, like Octane, etc.

May 18, 15 10:42 am  · 
 · 
null pointer

This thread is useless.

If the OP actually gives a crap, he should post actual specs for the two computers for us to compare.

 

The end.

May 18, 15 10:47 am  · 
 · 

There hasn't been a commercial-grade architectural modeling/rendering package written in the past 5-10 years that is a software renderer. Modeling and rendering routines use parallelism to handle the parts that can be done in parallel, even though there are sequential aspects. Every GPU and CPU out there can process sequentially by nature. The point is that advanced mathematical rendering algorithms allow a lot of the computation to be done in parallel.

Kickrocks, you are only thinking of pure sequential, pure serial processing. It is a combination with parallelism. You're rendering millions of polygons, each with surface textures, light sources, etc. We capitalize on parallelism to handle multiple polygons at the same time, concurrently.

When you are dealing with 3D modeling and rendering, you are not dealing with pixels; you are dealing with data structures, sorting, and data manipulation. The only time those polygons and other data become pixels on the screen is in the rasterization step, when you produce a two-dimensional bitmap to output to a display, which is still a 2D bitmap matrix just as it has been for over 30 years.
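
A minimal sketch of that idea: geometry lives in data structures, and pixels only appear at the final rasterization step (a hypothetical flat-shaded triangle type, not any particular renderer's format):

from dataclasses import dataclass

import numpy as np

@dataclass
class Triangle:
    # Geometry is just data: three 2D screen-space vertices plus a shade value.
    v0: tuple
    v1: tuple
    v2: tuple
    shade: float

def rasterize(tri, width=64, height=64):
    # Only here does the data structure become pixels in a 2D bitmap.
    image = np.zeros((height, width), dtype=np.float32)
    xs, ys = zip(tri.v0, tri.v1, tri.v2)
    for y in range(int(min(ys)), int(max(ys)) + 1):
        for x in range(int(min(xs)), int(max(xs)) + 1):
            image[y, x] = tri.shade  # crude bounding-box fill, for illustration only
    return image

img = rasterize(Triangle((5, 5), (40, 10), (20, 50), shade=0.8))
print("Bitmap shape:", img.shape, "lit pixels:", int((img > 0).sum()))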

Rendering is algorithms and routines applied to data structures.

GPUs include custom-designed components engineered to process rendering more efficiently than relying strictly on the CPU.

The CPU is not the biggest factor in rendering, and even if it were, the new desktop is almost certainly going to have a faster CPU than the laptop. It not only has a full desktop-class quad- or six-core processor, probably clocked at 3 GHz or higher, it also has full-width DDR3 memory, which moves data at a faster rate than the laptop's SO-DIMMs, which have fewer pins IIRC. Laptop CPUs also run at lower clock speeds for TDP reasons and at lower performance because they don't have all the hardware features of a desktop CPU.

Like I said, we have progressively been using more and more GPU-oriented processing. We are talking about rendering, and real modeling and rendering programs like Maya, Lightwave, Rhino, etc. ALL use hardware-accelerated graphics, which means GPUs are used in the rendering process, even more so nowadays.

Blender and other programs, including the Unity and Unreal 4 game engines, already use it, and Blender is FREE to download.

There are three key hardware factors: CPU, GPU/graphics card, and RAM (system and graphics memory). The new desktop beats the laptop in all three areas unless you really skimped on the CPU, say by buying a desktop with an i3 or a dual-core when your laptop had a quad-core. You aren't likely to do that if you are buying a $130-$200 video card and 16 GB of RAM.

The person probably wasn't going to buy a desktop with fewer CPU cores and lower clock speed than the laptop already had. Let's move on and assume the person has some basic sense when purchasing a computer. RAM makes a difference in rendering because it is the workspace available for doing rendering at high speed; having to resort to virtual memory hurts performance dramatically, since mechanical seeks and transfers back and forth to a hard drive are a LOT slower than RAM or other chip memory.

The CPU is sometimes overrated these days. Again, time to move on. The OP's issue is highly unlikely to be hardware-related; it is likely to be software-related.

May 18, 15 11:21 am  · 
 · 
null pointer

"There hasn't been a commercial grade architecture modeling/rendering written in the past 5-10 years that is a software renderer"

Maxwell doesn't use CUDA or OpenCL.

The programmers have repeatedly complained about having to simplify calcs to the point where you have to rewrite tons of code just to work within the limited instruction set that CUDA and OpenCL allow for. They've been working on it for at least the last year and a half, and we have yet to see anything aside from the SIGGRAPH demo.

Maxwell is probably the most advanced renderer out there aside from Mental Ray.

Check yourself brah.

May 18, 15 11:29 am  · 
 · 

null pointer,

It's not likely hardware, because it is unlikely that a brand-new desktop with 16 GB of RAM, a desktop-grade GeForce card (probably a GeForce 750+ or 900-series card such as a GTX 960), a desktop-grade CPU, and desktop memory (which tends to have more pins and support higher FSB and memory bus clocks) would be the slower machine.

It is highly unlikely the cpu on the new desktop is weaker than on the older laptop.

The issue is probably software related vs. hardware related. 

May 18, 15 11:30 am  · 
 · 
kickrocks

I'm not sure what you're addressing. In practical applications, most of the programs are still dependent on clock speed and a few cores for the main modeling tasks. For rendering, it's far more efficient to offload to the GPU. Not sure what you're getting at.

May 18, 15 11:56 am  · 
 · 

Ok, so we are more or less agreeing. In fact, the original post is about rendering, which lends credence to my point, but it is probably not even a CPU or GPU issue. Sure, the specs of the hardware would only confirm that the issue is probably software-related, not hardware.

I agree that in modeling and such we are probably dealing with a bit of CPU, and even there the GPU is taking care of the graphics processing, but 3D wireframe modeling doesn't show an optically noticeable difference in performance because it happens more or less in real time, so we can't make out the difference with our eyes. Intense rendering does show it (non-realtime rendering, like most architectural rendering), or an FPS readout does (for video games). The stuff that makes higher-quality game rendering possible in real time also makes non-realtime rendering finish faster, with less wait to get a good result, since we want print-quality rendering done in less and less time and less downtime spent rendering. Instead of rendering for 8-48 hours to get the quality desired, you can get it done in 1 to 6 hours. Much better, eh? That's the idea. In the not too distant future we may even get high-end print-quality rendering done in real time (1/30th of a second or less per frame). That will come from faster CPUs and GPUs, and more and faster RAM.

However, my real point, if you follow it, is that the issue is likely not hardware-related but software-related, anywhere from drivers to software settings to other apps consuming CPU, memory, and other resources and ultimately causing the slowdown. There are a lot of software-related issues that can cause slower rendering on a new desktop, which is likely to have A) a faster CPU with as many or more cores, B) more RAM, and possibly faster RAM, C) a faster video card/GPU with more and possibly faster graphics memory, and D) a larger and possibly faster hard drive spinning at a higher RPM.

May 18, 15 12:52 pm  · 
 · 
kickrocks

What we're discussing is irrelevant to this topic, to be honest. It probably would have been better to open a new thread with devoted posts and information, so it would be useful for future reference. There are bound to be more people wondering, since CPU performance is stagnant despite the ridiculous static costs each generation, while GPUs only get faster and cheaper.

For all I know, the user is on an AMD A10 because it's a cheap quad core and bought a low-end card with 2GB of DDR3 because bigger is better. Just call it buyer incompetence until there's a reply.

May 18, 15 2:20 pm  · 
 · 

Well, we side-tracked on a point that was being pushed. How do you know the person has an AMD A10? PM? Why doesn't the person give out the hardware specs? Actually, the card would more likely have GDDR5, but what's in the laptop? A mobile AMD, or a Pentium dual-core or quad-core? Probably weaker, because it isn't a desktop processor.

Even an Intel mobile processor in a laptop with only 8 GB of memory is likely to outperform an AMD A10 desktop processor.

An Intel mobile i5 would be closer to a desktop A7, A8, or maybe A9, and the laptop probably has an i3-class mobile processor. Unless the laptop had very high CPU specs for its day, the CPU wouldn't likely be the issue; I doubt we would see rendering being appreciably different. The desktop's graphics card is probably a desktop-grade GeForce, likely in the 750 Ti range, and I highly doubt the mobile GPU in the laptop compares to, let alone considerably exceeds, the desktop card. It is likely weaker, and that will make a difference; even then rendering would be about equal in the worst case and likely faster on the desktop. Add that the desktop's hard drive probably spins at a higher RPM and there is twice as much RAM, so there is less virtual memory usage, which should also improve rendering.

Let's add that the desktop probably operates at a higher FSB speed and so forth.

I believe it is a software-related issue, unless there is clear, definitive information on the hardware indicating otherwise. The information we have is inconclusive, but if he/she is willing to state exactly what hardware and software are being used on both machines, we would have a much better chance of knowing exactly what the issue is.

We are talking about an older laptop, so unless it was considerably high-end, like a high-end gaming laptop, its overall system specs would likely perform slower across the board than the desktop, even with an AMD CPU. What model AMD A10 is it?

I could compare the CPU specs if we knew exactly which CPUs they are. Bottom line: it is more likely software than hardware, even if the desktop CPU is not high-end by today's standards.

I have custom-built computer hardware for years, and I doubt the issue is hardware so much as software, which includes software misconfigured by user error.

I wouldn't say buyer incompetence, but rather that it's inconclusive.

May 18, 15 3:36 pm  · 
 · 

In short: I lean towards software-related over hardware-related. However, I would say the cause is inconclusive without specific information on both the hardware and the software of the new desktop and the old laptop.

May 18, 15 3:38 pm  · 
 · 
toosaturated

I think we've said that in less than 30 words...

May 18, 15 4:04 pm  · 
 · 

I think I said that all along, and my point is that I lean towards software as the likely issue. My very first post on this topic implied it; it is also made explicit in its second sentence. Sometimes you have to search for that "Waldo" of a point.

That's a test of attention to detail, and of points made at length versus in a short sentence. It is there.

Again post #8.

Null pointer,

Maxwell Render uses OpenGL, and that means it is not a software renderer; it is a hardware-accelerated renderer. Not everything may use OpenCL or make optimal use of CUDA or ATI's Stream, OpenCL still being relatively new. But OpenGL is not new and is fairly mature, and although it is more CPU-oriented, it is still accelerated graphics and not a complete software renderer; that would be GDI or something like that, and no major software company makes a pure software renderer.

It is an unbiased renderer, and yes, it does involve GPU-accelerated graphics routines using things like OpenGL. It doesn't bias itself toward a particular video card vendor's technology such as NVIDIA or ATI/AMD.

Unbiased rendering doesn't actually mean that, of course, but the design architecture isn't exactly optimized for NVIDIA CUDA or ATI Stream. Sure, it is more CPU-oriented than some, but keep in mind it is not a "true software renderer". That sort of thing isn't done commercially anymore; a hobbyist might write one, or you might if you were targeting Java only, and even Java renderers use OpenGL. OpenGL != OpenCL; they are two different things, so don't confuse the two.

May 18, 15 6:43 pm  · 
 · 
null pointer

Wait wait, Maxwell only uses OpenGL when you use studio to assign materials and preview the scene in polygon mode. It does not render using OpenGL. I repeat, it does not render using OpenGL. It is definitely not a hardware renderer.

Stop pretending you know what you're talking about.

You don't.

 

Also, comparing OpenGL to OpenCL or CUDA is pretty dumb.

It's like comparing PHP to Assembly.

May 19, 15 9:01 am  · 
 · 
proto

"For your information, I am a software developer. Most of you are not or never have been."

 

wow, just wow, Richard

May 20, 15 4:02 pm  · 
 · 

null pointer, 

Okay, Maxwell is substantively a software renderer (one of very few commercial-grade renderers that are), but rendering with it will be slow regardless of the CPU. Most renderers used for architectural rendering do in fact use hardware or GPU-assisted rendering, because I can get substantially the graphic quality Maxwell provides in less time: the GPU gets the job done faster, and a nearly imperceptible improvement in rendering accuracy isn't enough to warrant additional days or weeks of rendering. I would need something like 10 computers with 6- or 8-core i7s to get the same quality of rendering in the same time frame as what I can do with, say, Lightwave or Maya using OpenGL rendering on an i5 with a modest NVIDIA GeForce 750 Ti.

Software rendering takes longer to process... plain and simple.

May 21, 15 4:52 am  · 
 · 

Octane is basically a competitor of Maxwell, and you can see right off the bat that it is multiple times faster.

In the case of needing 10 i7-caliber computers to compete with a quad-core i5 using a GPU renderer, there is the overhead of transferring data back and forth over the network, and to make up for that latency you may need even more computers, or else a motherboard with something like 10 quad-core i5 CPUs.

That's why GPU-accelerated rendering works great: it gets the rendering done in a more timely manner. If you are doing animation work, video games, and such, you want the rendering to be good but delivered on time, because a video frame that is seen for only 1/60th of a second doesn't warrant getting overzealous about absolutely precise rendering at 2400 ppi.

Even on print, it might not make any real difference on the final print because the printer isn't quite that uber.

A barely perceptible improvement isn't worth an extensive increase in the time needed to get the work done.

May 21, 15 5:24 am  · 
 · 
desertstone

Okay, so I've been following this conversation for a bit. I am using Blender to do an animation that is 720 frames long at 24 fps. I am using CPU rendering with an experimental render setting, since the software doesn't allow GPU rendering with the particular setup I'm using. Now, I have made certain that core parking is disabled on all my CPUs. My renders started out pretty fast, at an estimated 100 frames in a 24-hour period, yet they start to slow way down after about 10 to 20 frames. I am wondering if CPU throttling due to heat could be the issue. Any thoughts?


The system is a Dell workstation running Windows 7 (I have heard that Blender doesn't play well with Windows 10), with two quad-core CPUs for 8 threads total (8 cores, with 7 used in the software) and 16GB of RAM. Rendering at less than 100 samples.
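
One way to test the thermal-throttling theory is to log the CPU clock (and temperature, where the OS exposes it) while the render runs; a sketch assuming the third-party psutil package (note that sensors_temperatures() is not available on Windows, so on Windows 7 you may only get the frequency):

import time
import psutil  # third-party: pip install psutil

# Leave this running in a second window while Blender renders.
while True:
    freq = psutil.cpu_freq()
    line = f"load {psutil.cpu_percent(interval=None):5.1f}%"
    if freq:
        line += f"  clock {freq.current:7.1f} MHz"
    temps = getattr(psutil, "sensors_temperatures", lambda: {})()
    if temps:
        first = next(iter(temps.values()))[0]
        line += f"  temp {first.current:.0f}C"
    print(line)
    time.sleep(10)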

May 11, 18 1:43 am  · 
 · 
randomised

"Okay, so I've been following this conversation for a bit."

For three years?

May 11, 18 2:10 am  · 
 · 
Arachnid

I have the same case. My new computer has the following specs:
Core i7 3770
24GB RAM @ 1600MHz
GeForce GTX 1050, 1442MHz graphics clock
Samsung SSD 500GB

The old one:
Core i7 3770
8GB RAM @ 1333MHz
GeForce GTX 750, 1189MHz graphics clock
Samsung SSD 250GB

Both of these run Cinema 4D R19 with the same settings. I had the two computers render the same scene, and the old computer rendered it in 19 minutes while the new one took 32 minutes. I just don't get it. Any ideas?

Jan 31, 19 7:52 pm  · 
 · 
Non Sequitur

Both of those builds are almost identical. Same CPU, and the GPUs are old and very similar.

Jan 31, 19 9:09 pm  · 
 · 
sameolddoctor

It's the larger-capacity SSD

Jan 31, 19 9:25 pm  · 
 · 
curtkram

pagefile?
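
To check whether the render is actually spilling into the pagefile, a small sketch (the third-party psutil package assumed installed), run while the render is going:

import psutil  # third-party: pip install psutil

vm = psutil.virtual_memory()
sw = psutil.swap_memory()
print(f"RAM used:      {vm.percent:.0f}% of {vm.total / 1024**3:.1f} GB")
print(f"Pagefile used: {sw.percent:.0f}% of {sw.total / 1024**3:.1f} GB")
# Heavy pagefile use during a render usually means the scene plus other
# open programs don't fit in RAM, which slows rendering dramatically.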

Jan 31, 19 11:10 pm  · 
 · 
Arachnid

Yes, they are similar, but the hardware in the new computer consists of improvements by the manufacturers of the components, except for the CPU. What I failed to mention, however, is that the new computer has more programs installed. Could that be the problem?

Feb 1, 19 1:15 am  · 
 · 
birder

The CPU is the same, so you won't see much difference

Feb 1, 19 7:00 am  · 
 · 
Arachnid

Not only does the old computer have less RAM, its frequency is also lower than the new one's. Other significant hardware differences: the old computer only has 2 drives, the 250GB SSD running the system and a 7200RPM 1TB hard drive for storage. The new one has a 500GB SSD running the system, a 2TB hard drive, and another 1TB hard drive. The old has a 600W power supply and the new a 650W, both from EVGA. The software differences: the old computer has just Cinema 4D and the Adobe Suite installed, while the new has both of the above plus 3ds Max and ArchiCAD.

Feb 1, 19 9:34 am  · 
 · 
randomised

It's because of global warming/climate change since rising temperatures are inversely proportional to decreased computer performance.

Feb 1, 19 9:34 am  · 
 · 
Non Sequitur

funny... debating minor performance differences on budget PC builds.

Feb 1, 19 9:57 am  · 
 · 
