View Full Version : 3d Rendering



Domathoine
08-26-2008, 10:41 PM
http://helmer.sfe.se/

o.0

Cool as all hell.

redeemus
08-26-2008, 11:01 PM
OMG!!! wonder what FPS he gets on WoW :D

Skaara
08-26-2008, 11:08 PM
I want one.

Peotr
08-28-2008, 01:51 PM
OMG!!! wonder what FPS he gets on WoW :D

60 fps.



I want one.

Trust me, you don't. Unless, uh, you have some weather modeling software you've been dying to try, or you're really fucking addicted to folding, or something.


-- Peotr (Hates Distributed Computing, Clusters or Any Form of Cloud System)

Skaara
08-28-2008, 02:00 PM
Actually, I was thinking it would make a good server for an MMO similar to EVE Online. I could be wrong.

Rennys
08-28-2008, 03:13 PM
Lol, I bet he has trouble running WoW at 60 fps. It looks like he's using onboard graphics, so I'd say 10-15 fps. Anyway, however impressive it looks, it doesn't matter: that machine is built for heavy-duty parallelized CPU processing. Think scientific calculations for weather modeling, crypto, etc. Distributed computing has its uses, but the set of use cases where it pays off is small and depends heavily on the software being properly written to distribute the workload across the CPUs. Hell, most people have problems using the dual cores in their CPUs, much less 24 cores.

From my work (embedded systems), I know we have trouble utilizing two heterogeneous cores on a single chip without an incredibly cumbersome communication scheme. And keep in mind that my applications are "small" compared to others.
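To make the "properly written to distribute the workload" point concrete, here's a minimal sketch of spreading independent work units across cores with Python's multiprocessing. The simulate_cell function is a made-up stand-in for one chunk of a weather-model grid, not real modeling code:

```python
from multiprocessing import Pool

def simulate_cell(cell_id):
    # Stand-in for one independent chunk of a weather-model grid.
    # A real model would also need to exchange boundary data between
    # neighboring cells -- exactly the communication overhead that
    # makes this hard to do well across 24 cores.
    return cell_id * cell_id  # dummy work

if __name__ == "__main__":
    with Pool(processes=4) as pool:  # one worker per core
        results = pool.map(simulate_cell, range(24))
    print(results[:4])
```

This only scales cleanly because each cell is independent; the moment cells need each other's results, the communication scheme dominates the design.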

Talas
08-28-2008, 10:34 PM
Distributed rendering is a well-studied problem and is just as viable as scientific computing; many graphical effects are themselves driven by scientific computing. CPU-to-CPU communication is always tricky, but rendering is embarrassingly parallel, so you don't have to worry about much CPU-to-CPU traffic. You can add it, of course, but in its most naive form you can render with no inter-node communication at all and still get very good (scalable) speedup.
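The embarrassingly parallel form described above can be sketched like this: split the frame into tiles and render each one with no data from any other tile. The render_tile function and its placeholder shading are hypothetical:

```python
from multiprocessing import Pool

WIDTH, HEIGHT, TILE = 64, 64, 16  # toy frame split into 16x16 tiles

def render_tile(origin):
    # Shade one tile independently -- no tile reads another tile's
    # results, which is what makes this embarrassingly parallel.
    x0, y0 = origin
    # Placeholder shading; real code would trace rays here.
    return [(x0 + dx) ^ (y0 + dy) for dy in range(TILE) for dx in range(TILE)]

tiles = [(x, y) for y in range(0, HEIGHT, TILE) for x in range(0, WIDTH, TILE)]

if __name__ == "__main__":
    with Pool() as pool:
        shaded = pool.map(render_tile, tiles)  # one task per tile
    print(len(shaded), len(shaded[0]))  # 16 tiles, 256 pixels each
```

Since the tasks never talk to each other, throwing more cores (or more cluster nodes) at the tile list scales almost linearly.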

One of the bigger issues with distributed rendering is usually memory access. Managing only about 2 GB per core can be a tough challenge, depending on the number and size of your textures; it's not uncommon for a single frame of an animated movie to reference tens of gigabytes of textures. Addressability isn't the only concern, though: organizing access in a coherent fashion is another big hurdle, because memory access is not cheap.
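One common way to fit tens of gigabytes of textures into a ~2 GB per-core budget is to keep only a bounded set of texture tiles resident and evict the least-recently-used ones. A hypothetical sketch (load_tile and its sizes are made up) using Python's functools.lru_cache:

```python
from functools import lru_cache

@lru_cache(maxsize=256)          # resident budget, counted in tiles
def load_tile(texture_id, tile_x, tile_y):
    # Stand-in for a disk or network read of one texture tile.
    return bytes(64 * 64 * 4)    # 64x64 RGBA tile, 16 KB

# Incoherent access patterns thrash this cache; sorting shading work
# so nearby samples hit the same tiles keeps the hit rate high --
# the "coherent fashion" point above.
sample = load_tile("wood_diffuse", 3, 7)
print(len(sample), load_tile.cache_info().misses)  # 16384 1
```

The cache doesn't make memory cheap, it just makes the expensive loads rare enough to hide.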

You should also think of current GPUs as multi-processor machines. Their cores are simpler than a CPU's, but there are more than a hundred of them working in parallel with minimal communication. The NVIDIA 8000 line of GPUs, for example, has 128 stream processors.

Interestingly enough, one of the major inhibitors of real-time parallel software rendering is the draw rate to the screen. GPUs have direct access to video memory, but software implementations still have to shove all those pixels across the bus to video memory before the draw can occur. Real-time raytracing demos often quote render times without this transfer, because the transfer alone can cap the result at about 15-20 fps.
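A rough back-of-envelope for that pixel-transfer cost; the resolution here is an assumption for illustration, not a number from the thread:

```python
# Bus bandwidth needed just to move finished pixels to video memory.
width, height = 1600, 1200            # assumed frame resolution
bytes_per_frame = width * height * 4  # 32-bit RGBA pixels
fps = 20                              # the ~15-20 fps ceiling mentioned above
mib_per_sec = bytes_per_frame * fps / 2**20
# Prints raw bytes per frame and the sustained MiB/s the bus must carry.
print(bytes_per_frame, round(mib_per_sec, 1))  # 7680000 146.5
```

Sustaining ~146 MiB/s of readback-then-upload traffic every second, on top of the actual rendering, is exactly why the transfer alone can become the frame-rate ceiling.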

Berae
09-02-2008, 06:30 AM
WoW at 2000 fps! But only if those boxes are working on prerendering your game >.>

OMFG, sending your graphics calcs over a network and expecting even a single frame per second is silly :D