Author Topic: Dwarf Fortress Megathread! - Necro'd enough to count as a vampire

Well, my monk died. I'm going to go and look at the traits my Spearwoman has to see how to make a character that can hit all of the time.

Well, my monk died. I'm going to go and look at the traits my Spearwoman has to see how to make a character that can hit all of the time.
What killed him?

What killed him?
An axeman and a lasher. I killed one of them; I forget which one killed me.

I'm training my Spearwoman to swim across to another continent. I really should have given her one level.

I'm training my Spearwoman to swim across to another continent. I really should have given her one level.
Just make sure she isn't a dabbling/novice swimmer, for the love of god. Make sure she has good endurance.

Just make sure she isn't a dabbling/novice swimmer, for the love of god. Make sure she has good endurance.
Lol, I died, but I made sure to end the process.

Ambushing is great, if you're good enough you can seriously sit 3 tiles away from someone filling their back with arrows.

Ambushing is great, if you're good enough you can seriously sit 3 tiles away from someone filling their back with arrows.
Yeah, the wiki tells me to train swimming and ambushing at the same time. I am swimming very sneakily.

The GPU is much, much faster than the CPU at churning through thousands of small calculations, even when they aren't for rendering.
Dwarf Fortress does a huge number of small calculations for everything from pathfinding to water flow simulation to dwarven psychology. If all of that load were shifted from the CPU to the GPU, which is effectively sitting idle in this game, I would expect a huge boost in FPS.

As an example for clarification, take something like a graphics shader, which is a simple program that does a bit of math to modulate colours for every pixel on the screen, every frame. It can run on a graphics card thousands upon thousands of times per second, achieving hundreds of frames per second on the screen. If you write a similar shader to run on the CPU, you would be lucky to get maybe 0.5 frames per second.
Take the same idea and apply it to the simulation in the game. On newer graphics cards this is totally possible, and not exceedingly difficult to do.
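To make the shader comparison concrete, here's a rough sketch of that per-pixel math written as a CUDA kernel (the function and buffer names are made up for illustration, nothing to do with DF's actual renderer). The point is just that every pixel gets its own thread instead of one big CPU loop.

// Minimal sketch (not Dwarf Fortress code): a CUDA kernel doing a bit of
// per-pixel math, the same kind of work a fragment shader does. Every pixel
// is handled by its own thread, so an 800x600 image is ~480,000 tiny tasks
// running in parallel instead of one serial loop on the CPU.
#include <cuda_runtime.h>

__global__ void modulate_colors(unsigned char* rgba, int width, int height, float t)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int i = (y * width + x) * 4;              // RGBA, 4 bytes per pixel
    float pulse = 0.5f + 0.5f * __sinf(t);    // simple time-based modulation
    rgba[i + 0] = (unsigned char)(rgba[i + 0] * pulse);  // R
    rgba[i + 1] = (unsigned char)(rgba[i + 1] * pulse);  // G
    // B and A left untouched
}

// Host side: one thread per pixel, launched in 16x16 blocks.
void run_frame(unsigned char* dev_rgba, int w, int h, float t)
{
    dim3 block(16, 16);
    dim3 grid((w + 15) / 16, (h + 15) / 16);
    modulate_colors<<<grid, block>>>(dev_rgba, w, h, t);
}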

Sorry to rain on your parade, but I honestly don't think a GPU is faster than a CPU. A fast GPU runs at around 700 MHz, where a CPU can easily exceed 3000 MHz. GPUs do have their own RAM built onto the card, which increases speed and avoids interruptions, but turning a GPU into a mathematical processing system and having it outrun a CPU, whose sole purpose is to process that sort of thing, seems incredibly unlikely. Also, coding that into Dwarf Fortress would be incredibly time-consuming, buggy and hard. It would be far better if multithreading was introduced.

Sorry to rain on your parade, but I honestly don't think a GPU is faster than a CPU. A fast GPU runs at around 700 MHz, where a CPU can easily exceed 3000 MHz. GPUs do have their own RAM built onto the card, which increases speed and avoids interruptions, but turning a GPU into a mathematical processing system and having it outrun a CPU, whose sole purpose is to process that sort of thing, seems incredibly unlikely. Also, coding that into Dwarf Fortress would be incredibly time-consuming, buggy and hard. It would be far better if multithreading was introduced.
A GPU is a massively parallel processing unit. It crunches a lot of numbers, very fast.
The speed difference comes from the GPU having an enormously large number of transistors, many more than a CPU; that is where the large clock-speed difference comes from, but the GPU can still process numbers faster than the CPU.
A CPU is built for general-purpose operations. Instruction-wise it can do a lot more things than a GPU, but the fewer things a GPU can do, it does them a hell of a lot faster.

I would also like to say that this probably wouldn't be much more difficult to do than multithreading. If a game isn't designed with multithreading in mind from the ground up, it's a rather large undertaking to make it multithreaded.
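For a feel of what "crunching a lot of numbers in parallel" looks like in practice, here's a hypothetical sketch (the creature data and names are invented, not DF internals): one kernel launch applies the identical update to every element of an array, with each thread owning one element.

// Hypothetical sketch of the data-parallel pattern: the same tiny operation
// applied to different data, one thread per creature.
#include <cuda_runtime.h>

__global__ void tick_thirst(int* thirst, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        thirst[i] += 1;   // identical operation, different data
}

void tick_all_creatures(int* dev_thirst, int n)
{
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    tick_thirst<<<blocks, threads>>>(dev_thirst, n);
}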

A GPU is a massively parallel processing unit. It crunches a lot of numbers, very fast.
The speed difference comes from the GPU having an enormously large number of transistors, many more than a CPU; that is where the large clock-speed difference comes from, but the GPU can still process numbers faster than the CPU.
A CPU is built for general-purpose operations. Instruction-wise it can do a lot more things than a GPU, but the fewer things a GPU can do, it does them a hell of a lot faster.

I would also like to say that this probably wouldn't be much more difficult to do than multithreading. If a game isn't designed with multithreading in mind from the ground up, it's a rather large undertaking to make it multithreaded.


GPUs are able to process information fast due to their innate structure: they can chew through huge matrices thanks to their large number of transistors. That may seem like the way to go, but one would have to create a program specifically designed around large, uniform batches of work to realize the GPU's potential. GPU programming would only truly work well for pathfinding, and maybe liquids (if the way they were handled was altered). A rough sketch of how pathfinding-style work could map onto a GPU follows below.
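This is only an illustration of the idea, not how DF does pathfinding: each thread owns one map tile and relaxes its distance from its four neighbours, and the host re-launches the kernel until nothing changes, which flood-fills a distance field across the map. All names and the map layout here are assumptions.

// Hedged sketch: a data-parallel distance-field relaxation (Bellman-Ford style).
// 'dist' is assumed pre-initialised to 0 at the target tile and a large value
// everywhere else; 'walkable' marks passable tiles.
#include <cuda_runtime.h>

__global__ void relax_distances(const int* walkable, int* dist, int w, int h, int* changed)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;

    int i = y * w + x;
    if (!walkable[i]) return;

    int best = dist[i];
    if (x > 0     && dist[i - 1] + 1 < best) best = dist[i - 1] + 1;
    if (x < w - 1 && dist[i + 1] + 1 < best) best = dist[i + 1] + 1;
    if (y > 0     && dist[i - w] + 1 < best) best = dist[i - w] + 1;
    if (y < h - 1 && dist[i + w] + 1 < best) best = dist[i + w] + 1;

    if (best < dist[i]) {
        dist[i] = best;
        *changed = 1;   // host resets this and re-launches until it stays 0
    }
}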

Ambushing is great, if you're good enough you can seriously sit 3 tiles away from someone filling their back with arrows.
I know what you mean. I was sneaking and I made a dragon give in to pain by throwing stuff at him; he didn't even notice!

can you like, be a miner in adventure mode?

can you like, be a miner in adventure mode?
Sadly, no. There is some sort of program out there that lets you modify tiles though, forgot what it was.

GPUs are able to process information fast due to their innate structure: they can chew through huge matrices thanks to their large number of transistors. That may seem like the way to go, but one would have to create a program specifically designed around large, uniform batches of work to realize the GPU's potential. GPU programming would only truly work well for pathfinding, and maybe liquids (if the way they were handled was altered).
It's not just matrices.
You can run some simplified C++ quite effectively on the GPU. It's still massively parallel, so everything has to be doing the same operation with different data, but that actually fits quite well with Dwarf Fortress.
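As a last illustration of that fit, here's a rough data-parallel "water flow" style update, assuming a simple grid of water levels. This is a sketch of the pattern, not how DF actually models fluids, and every name in it is made up.

// Every tile runs the same small rule in parallel; only the data differs.
#include <cuda_runtime.h>

__global__ void spread_water(const int* level_in, int* level_out, int w, int h)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;

    int i = y * w + x;
    // Average with the four neighbours (clamped at the map edges).
    int sum = level_in[i], count = 1;
    if (x > 0)     { sum += level_in[i - 1]; ++count; }
    if (x < w - 1) { sum += level_in[i + 1]; ++count; }
    if (y > 0)     { sum += level_in[i - w]; ++count; }
    if (y < h - 1) { sum += level_in[i + w]; ++count; }
    level_out[i] = sum / count;
}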