Author Topic: GPU fakekill physics.

According to the Bullet physics forums http://bulletphysics.org/Bullet/phpBB3/viewtopic.php?f=18&t=4067, version 3 of the SDK will support running the simulation on the GPU, handling many more objects at once.  I thought this would be an ideal way to do the fake-kill effect in Blockland, since many more bricks could be simulated while leaving the CPU with much less overhead.  At the same time, though, this might increase the graphics card load.  From the little I know, this kind of physics processing is done with pixel shaders and doesn't actually affect the polygon-based load on the graphics card. Of course, this would remain a client-side effect.  If the user had an older card (without pixel shaders), the regular method could be used instead.
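Roughly what I have in mind, just as a sketch: check the client's card once, then pick a backend.  The CPU fallback below is the normal Bullet setup, but the GPU branch is purely hypothetical, since the Bullet 3 GPU interface isn't out yet (the names in that branch are made up).

Code:
// Sketch only: choose a physics backend for the fake-kill bricks on the client.
// CPU path = standard Bullet world; GPU path = placeholder for whatever the
// Bullet 3 SDK eventually exposes for GPU simulation (not a real API today).

#include <btBulletDynamicsCommon.h>

btDiscreteDynamicsWorld* createCpuFakekillWorld()
{
    // Regular CPU-side Bullet world, same idea as the current fake-kill method.
    btDefaultCollisionConfiguration* config = new btDefaultCollisionConfiguration();
    btCollisionDispatcher* dispatcher = new btCollisionDispatcher(config);
    btBroadphaseInterface* broadphase = new btDbvtBroadphase();
    btSequentialImpulseConstraintSolver* solver = new btSequentialImpulseConstraintSolver();

    btDiscreteDynamicsWorld* world =
        new btDiscreteDynamicsWorld(dispatcher, broadphase, solver, config);
    world->setGravity(btVector3(0, 0, -9.8f)); // Torque-style z-up gravity
    return world;
}

void setupFakekillPhysics(bool clientHasShaderCapableGpu)
{
    if (clientHasShaderCapableGpu)
    {
        // Hypothetical: hand the brick simulation to a GPU pipeline once the
        // Bullet 3 SDK ships one -- more bricks, much less CPU overhead.
        // e.g. createGpuFakekillWorld();  // not a real Bullet call yet
    }
    else
    {
        // Older card without pixel shaders: fall back to the regular method.
        btDiscreteDynamicsWorld* world = createCpuFakekillWorld();
        world->stepSimulation(1.0f / 60.0f); // stepped every frame as usual
    }
}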

Is this a possibility, or is my information mistaken?

Blockland uses Torque, not SDK, so I don't think so.

Most of the community uses those expensive computers with super-powered CPUs, lots of RAM, and Windows 7 Ultimate, but integrated graphics.

It'd really make no sense to switch the Bullet physics to being calculated on GPUs. In my opinion it's a really stupid idea, since Blockland can't do that anyway and not everyone's GPU is good enough for it.

You're probably thinking of something like PhysX, which isn't what Blockland uses.

This is more of an engine alteration idea....

Blockland uses Torque, not SDK, so I don't think so.
Congratulations, you don't know what you're talking about!  SDK means software development kit, and in version 3 of the Bullet physics SDK (the physics engine BL uses), there will be GPU simulation support.