Author Topic: is AI bot learning possible? how hard would it be?  (Read 1434 times)

things like this:
https://www.youtube.com/watch?v=qv6UVOQ0F44
https://www.youtube.com/watch?v=iakFfOmanJU
https://www.youtube.com/watch?v=xOCurBYI_gY
https://www.youtube.com/watch?v=YGJHR9Ovszs
https://www.youtube.com/watch?v=Q-WgQcnessA

but for Blockland bots: learning what skills to use and what kills them, rather than blundering endlessly into lava/things that will kill them; how to survive and what to expect over repeated generations of learning

Possible? Because if it is, it would be hella cool.

yes and no

no, in the sense that the amount of work and computation required for truly "smart" bots makes this nigh impossible (unless you greatly simplify and limit what it can learn)

yes, as in it's possible.

yes for AI bots that are built specifically for one task and break if you change any parameter of that task

no for the ones that can adapt to different situations

Possible yeah, doable no.

The difficulty jumps about a thousandfold from 2D NES games to 3D sandbox games. Simply watching for memory changes is out of the question for a game of this scale. So you introduce an image detection system (which is hard to optimise), and start hacking into the game's code to determine where objects/variables are (Which Badspot will kill you for).

And it's really easy to claim you developed a learning AI when all it does is run around randomly like a brainless monkey until it gets lucky. Trial and error over thousands of generations is extremely unimpressive, and I would hardly call it AI.


(Which Badspot will kill you for)
Nah. He generally doesn't give a crap about reverse engineering, and people have already figured out how to find where objects and bricks are on screen. It wouldn't be massively difficult to do this if you knew a lot about OpenGL, neural networks, and machine learning, but the reality is almost nobody here does.

Are you 'terrorist' in-game? Because he was talking about this in depth just before this topic got posted.

It's significantly easier when you only have two axes to work with. Blockland is not one of those games.

In terms of knowing where objects are, this would work best if the bot were part of the server rather than a connecting client. That way it could directly query the locations of objects and collision geometry.

Actually, no, it wouldn't be that hard to set such a thing up.
Give it the image to respond to and the basic controls.
It'll look at that, start generating the nodes and such, and learn. You just need to set the objective; maybe use score as its fitness?
It may take more than 24 hours, but it'd eventually learn to do things.
Oh, and the depth buffer. Give it the depth buffer too.
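For what it's worth, the setup being described here (screen pixels plus depth buffer in, controls out, score as fitness, evolved over generations) is basically neuroevolution. A minimal sketch of that loop, where `play_episode` is a hypothetical stand-in for actually running the bot in-game and returning its score:

```python
import random

# Each genome is a flat weight vector for a tiny one-layer policy
# mapping an observation (downscaled image + depth buffer) to a control.
INPUTS = 64      # stand-in observation size (image + depth values)
OUTPUTS = 4      # e.g. forward, back, left, right

def act(weights, observation):
    """Pick the control whose weighted sum over the observation is largest."""
    scores = []
    for o in range(OUTPUTS):
        row = weights[o * INPUTS:(o + 1) * INPUTS]
        scores.append(sum(w * x for w, x in zip(row, observation)))
    return scores.index(max(scores))

def play_episode(weights):
    """Hypothetical stand-in for an in-game run that returns a score.
    Here it just rewards genomes that pick action 0 on a fixed fake frame."""
    frame = [0.5] * INPUTS
    return 1.0 if act(weights, frame) == 0 else 0.0

def mutate(genome, rate):
    """Perturb a random fraction of the weights with Gaussian noise."""
    return [w + random.gauss(0, 0.5) if random.random() < rate else w
            for w in genome]

def evolve(generations=30, pop_size=20, mut_rate=0.1):
    pop = [[random.uniform(-1, 1) for _ in range(INPUTS * OUTPUTS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=play_episode, reverse=True)
        parents = pop[:pop_size // 4]              # keep the fittest quarter
        children = [mutate(random.choice(parents), mut_rate)
                    for _ in range(pop_size - len(parents))]
        pop = parents + children                   # elitism: parents survive
    return max(pop, key=play_episode)
```

Replace `play_episode` with a real "spawn bot, run for N seconds, read score" hook and this becomes the training loop; everything else (population, selection, mutation) stays the same.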

There's not really a finite 'goal' in Blockland to train towards. Yeah, there's "not dying", but that's not how these things work: selective breeding rewards progression towards a defined goal. If 'not dying' is the goal, then the simplest solution is to not move, which is most likely what it's going to do. If 'total distance travelled without death' is the goal, moving back and forth between two points is an elegant solution that all but guarantees survival. If 'total distance from origin point' is the goal (which is roughly the goal in the examples), then you would likely see progression if you set an end condition (e.g. each 'run' is timed for 5 minutes or ends on death), but that's not useful AI behavior. Your best bet for a pathfinding, death-avoiding AI would be to directly code a death-avoidance system mixed with A*.
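That "A* mixed with death avoidance" idea is straightforward to sketch: run ordinary A*, but make deadly cells prohibitively expensive so the planner routes around them. The grid and hazard layout below are made up for illustration; a real implementation would query the server for brick and hazard positions:

```python
import heapq

# Hypothetical map: '.' walkable, '#' wall, 'L' lava (deadly).
GRID = [
    "....L...",
    ".##.L.#.",
    "....L...",
    ".#......",
]

def neighbors(pos):
    r, c = pos
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] != "#":
            yield (nr, nc)

def step_cost(pos):
    # Death avoidance: make lava enormously expensive rather than
    # forbidden outright, so a path still exists if lava is the only way.
    return 1000 if GRID[pos[0]][pos[1]] == "L" else 1

def a_star(start, goal):
    def h(p):  # Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while open_heap:
        _, g, pos, path = heapq.heappop(open_heap)
        if pos == goal:
            return path
        for nxt in neighbors(pos):
            ng = g + step_cost(nxt)
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(open_heap, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None  # unreachable
```

On this map the cheapest route from (0, 0) to (0, 7) detours down through row 3, where there is a gap in the lava, instead of walking straight through it.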