Show Posts



Messages - Val

3
Drama / Re: Val - Why Isn't He Banned?
« on: May 17, 2018, 04:33:12 PM »
Hi, I'm sorry for the bragging. I hope my input over in Modification Help makes up for it.

4
This topic talked about the missing sequence issue and found it was caused by overflowing the packet buffers in a certain way. To briefly sum it up, it only happens when enough bytes have already been written to a buffer that another chunk of ~500 bytes overflows the 1024-byte size limit and the 1500-byte absolute limit. When this happens, the remaining data can't be sent, and the malformed packet causes a crash.

You describe your problem vaguely, but I'm certain the cause is similar to what was discussed in that thread. So the only fix is to wait for someone to do it in a DLL, or to let Badspot deal with it.

5
I wrote something to make Blockland take screenshots with tiled rendering, which means splitting the frustum into multiple sections, rendering each and stitching the final result together. It also sets the correct shadow frustum to make extra crispy shadows. So I guess that solves the resolution problem.
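
The tile math itself is nothing special; here's a rough sketch of carving the near-plane rectangle into a grid (illustrative only: the function name is made up, and actually feeding the per-tile planes back into the renderer is where the memory edits come in):

Code: [Select]
// Near-plane rectangle for tile (col, row) of a grid x grid split.
// fov is the full vertical FOV in degrees, aspect is width / height.
// Returns "left right top bottom" at the given near distance.
function getTileFrustum(%fov, %aspect, %nearDist, %grid, %col, %row)
{
    // Half-extents of the full near-plane rectangle.
    %top   = %nearDist * mTan(mDegToRad(%fov) / 2);
    %right = %top * %aspect;

    // Size of one tile on the near plane.
    %tileW = 2 * %right / %grid;
    %tileH = 2 * %top / %grid;

    %l = %col * %tileW - %right;
    %r = %l + %tileW;
    %t = %top - %row * %tileH;
    %b = %t - %tileH;

    return %l SPC %r SPC %t SPC %b;
}

Render each tile at full resolution and stitch the grid x grid images together afterwards.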

6
yo this is too rad. Just imagine if it were a client-sided add-on that gave you a map of the server in real time. Too bad that's next to impossible.
it's not really that impossible - if you had a script log newly created bricks and feed them into a live-running minimap maker, updating only the relevant chunks of the map should be pretty fast.

There is no need to generate the image externally if you tell Blockland to render orthographically. Surprisingly, orthographic rendering is as simple as a few memory edits and requires no DLL: (screenshot: default lighting)

Shadows are trickier because the frustums need to be adjusted to force the highest-quality resolution; not impossible with patching, but much easier with a DLL: (screenshot: shadows)

The disadvantage of using real-time rendering for this is the limited resolution, although it's the closest you'll get to how things actually look in-game. Ray tracing is better overall for customization, but I think it's dumb to reinvent the wheel when extremely powerful and configurable renderers already exist (Blender Cycles). It's smarter to write a proper blb/bls importer and let Blender do the heavy lifting. (something something abstraction)
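
As for the live-update idea in the quote, the logging half is easy in plain script; a rough sketch (assuming the usual fxDtsBrickData callbacks, with updateMinimapChunk as a stand-in for whatever regenerates the affected part of the image):

Code: [Select]
// Notify a hypothetical minimap updater whenever a brick is planted or removed.
package MinimapLogger
{
    function fxDtsBrickData::onPlant(%data, %brick)
    {
        Parent::onPlant(%data, %brick);
        updateMinimapChunk(getWords(%brick.getTransform(), 0, 2));
    }

    function fxDtsBrickData::onRemove(%data, %brick)
    {
        updateMinimapChunk(getWords(%brick.getTransform(), 0, 2));
        Parent::onRemove(%data, %brick);
    }
};
activatePackage(MinimapLogger);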

7
The color palette for a save is stored in the header as 64 RGBA colors, written as floats. Multiply those values by 255 and reconstruct the colorset from there. The only information you're missing then is the divisions and their names, but those aren't required to load a save.
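
If you want to do it in script, here's a rough sketch (it assumes the usual layout: the header line, a description line count plus that many lines, then the 64 color lines):

Code: [Select]
// Read the 64 header colors from a .bls file and print them as 0-255 RGBA.
function dumpSaveColors(%path)
{
    %file = new FileObject();
    %file.openForRead(%path);

    %file.readLine();                 // "This is a Blockland save file..."
    %descLines = %file.readLine();    // number of description lines
    for (%i = 0; %i < %descLines; %i++)
        %file.readLine();

    for (%i = 0; %i < 64; %i++)
    {
        %line = %file.readLine();     // "r g b a" as floats
        %r = mFloor(getWord(%line, 0) * 255);
        %g = mFloor(getWord(%line, 1) * 255);
        %b = mFloor(getWord(%line, 2) * 255);
        %a = mFloor(getWord(%line, 3) * 255);
        echo(%i SPC ":" SPC %r SPC %g SPC %b SPC %a);
    }

    %file.close();
    %file.delete();
}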

8
Modification Help / Re: Player::getDamageLocation
« on: March 25, 2018, 04:58:42 PM »
There was an old mod by Jookia (don't remember which) that ran each projectile hit through paintProjectile::onCollision, which calls ShapeBase::setTempColor, and compared node colors before and after to determine what area was hit. It's clever, but limited by the fact that setTempColor isn't very specific (e.g., the left leg and right leg aren't unique).

The two other "hit region" mods I can think of are this and a random script by Port, but both are fairly similar to what you made.

As far as a better solution goes, the "correct" way to do hit boxes/regions is to convert the world coordinates of an impact back into the target's model coordinates, i.e. reverse the model-to-world step of the rendering pipeline. An object's model coordinates are converted to world space by multiplying them by the object's transform (getTransform()), so to reverse this for a projectile impact in world space, you multiply that point by the inverse of the object's transform. I haven't seen any implementations of a matrix inverse function out there (besides ones that expand the homogeneous coordinates back into a 4x4 matrix, which is messy), so here's one:

Code: [Select]
function MatrixInverse(%m)
{
    // %m is "px py pz ax ay az angle"; negating the axis inverts the rotation.
    %inv_rot = vectorScale(getWords(%m, 3, 5), -1) SPC getWord(%m, 6);

    // Rotate the original position by the inverse rotation...
    %inv_pos_mat = MatrixMultiply("0 0 0" SPC %inv_rot, %m);

    // ...then negate it to get the inverse translation.
    return vectorScale(getWords(%inv_pos_mat, 0, 2), -1) SPC %inv_rot;
}

Then you simply do MatrixMulPoint(%inverse, %impact) and write some fun "is point in box" code. An easy way to get the boxes would be loading a properly scaled player model in Blender, since what you're looking at there is model space.
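
For completeness, the rest of the per-hit check might look something like this (the box extents are placeholders; measure the real ones off the model):

Code: [Select]
// Test whether a point lies inside an axis-aligned box ("x y z" strings).
function isPointInBox(%p, %min, %max)
{
    for (%i = 0; %i < 3; %i++)
    {
        if (getWord(%p, %i) < getWord(%min, %i) || getWord(%p, %i) > getWord(%max, %i))
            return false;
    }
    return true;
}

// Convert a world-space impact into the object's model space and name the region.
function getHitRegion(%obj, %impact)
{
    %local = MatrixMulPoint(MatrixInverse(%obj.getTransform()), %impact);

    // Placeholder boxes in model space.
    if (isPointInBox(%local, "-0.4 -0.4 1.3", "0.4 0.4 2.0"))
        return "head";
    if (isPointInBox(%local, "-0.6 -0.4 0.5", "0.6 0.4 1.3"))
        return "torso";
    return "legs";
}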

9
Help / Re: force disconnect when loading datablocks
« on: March 20, 2018, 11:05:29 PM »
actually @val im pretty sure the exact opposite is true. stuffing too many sequences into one dsq seems to cause a bunch of problems the bigger it gets. i had to drop TSShapeConstructor use entirely for my dueling swords as after adding the scimitar, half the animations were just straight up broken if i used a TSShapeConstructor + dsq.

Indeed, the sequence path and names are sent over the network as a string which is limited to 256 bytes, and further (for no reason) limited to 90 bytes for the shape path + filename + names combined. That's somewhere around a dozen names, so it's not surprising you ran into errors. Use the multi-sequence DSQs as a supplement, not the whole solution. And I'm sure you know this, but dropping TSShapeConstructor entirely means losing the ability to dynamically load animations onto models with similar skeletons.
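
If you want a quick sanity check before shipping a playertype, something like this adds up the strings that get sent (assuming the usual baseShape/sequenceN fields; the ~90-byte figure is the limit mentioned above):

Code: [Select]
// Rough estimate of how many bytes a TSShapeConstructor's path and sequence
// strings take on the wire. Keep the total under ~90 to stay out of trouble.
function estimateSequenceBytes(%db)
{
    %total = strlen(%db.baseShape);
    for (%i = 0; %db.sequence[%i] !$= ""; %i++)
        %total += strlen(%db.sequence[%i]);

    echo(%db.getName() SPC "uses roughly" SPC %total SPC "bytes of sequence strings");
    return %total;
}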

On another note, I dug into the source to see why this bug happens seemingly at random. Manually adding up the bytes of my friend's playertype sequence names didn't place it anywhere near the fixed packet size of 1023 (technically 1500), which meant a write error could only happen in the company of other events' and ghosts' data. There doesn't seem to be any check that the per-packet bitstream has room for another chunk of data before it gets stuffed in. The process per tick is something like this:

Server end:

1. Allocate a bitstream; the size is specified as 1023 but the write limit is 1500
2. Write headers and connection-related info
3. Write client-related data like the control object, move acks, etc.
4. Write events; before each event is written, the code checks whether the bitstream is full, using the 1023 limit
5. If not, write the event (without any check that the event even fits within the 1500 limit)
6. Write ghosts (even if the stream is full, although it should error out quickly)
7. Write the final buffer to the client. The size can even be over the 1023 limit (i.e., 1200 is valid)

Client end (simplified):

1. Receive the bytes; the number read equals the number sent, as mentioned
2. Read events until nothing more can be read (half-written events get unpacked too)
3. Interpret events (process)

What happens when more than 1500 bytes are written, as in the case of a TSShapeConstructor event stuffed into a nearly full stream? The stream sets an error flag, resulting in a null string (hence the blank sequence name in the OP), but absolutely nothing is done about it anywhere; the broken buffer just gets sent like any other piece of legitimate data. And there's supposed to be even more data after that, not to mention the ghost updates haven't been written yet either, but since the stream keeps erroring out the size doesn't increase, so it stays capped at 1500 bytes. The client then gets an incomplete buffer and is expected to just go with it (there's no error handling for this either, though the read functions should return 0). This sometimes caused a buffer overflow while I was testing it, and in the right scenario it will cause this exact missing sequence error with whatever unlucky model.

This is a rare but serious bug in the net code. It's easily fixable, even with a DLL!! It just needs some kind of check that an event actually fits in the stream before it gets written.

I made some test code to trigger the bug on my LAN server. I can trigger it every 5 joins or so, and like I said, some of those end in runtime errors. Result

10
Help / Re: force disconnect when loading datablocks
« on: March 19, 2018, 09:00:34 PM »
It's an error that happens when the server sends a TSShapeConstructor datablock. The shape path is sent along with the name of each sequence and its corresponding dsq filename; it has nothing to do with the cache. Notice how the sequence name itself is completely blank.

I had a friend who made a playertype with a massive number of sequences, and this started happening seemingly at random, which made it difficult to debug. But there are two[1][2] GarageGames topics that address this problem directly, and the issue is writing too many bytes per datablock packet, something that happens with craptons of sequences. An easy fix for add-on makers would be either shortening sequence/dsq file names or using that "stuff multiple sequences into one dsq" trick to substantially reduce the number of bytes written.

11
If you use the steam_appid.txt trick, you'll have to pass the correct command line when launching the exe or the popup will still appear. Create a shortcut to it, edit its properties, and add "ptlaaxobimwroe -steam" to the end of the target path. Add -noconsole if you want that too.
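
For example, the shortcut target ends up looking something like this (assuming a default Steam install path):

Code: [Select]
"C:\Program Files (x86)\Steam\steamapps\common\Blockland\Blockland.exe" ptlaaxobimwroe -steam -noconsole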

That said, a Steam-created shortcut requires Steam to be running in the first place, which is fine if you just plan to play the game. Some things, like the Linux dedicated server script, need to run the exe directly though, so it's up to you.

12
Due to the nature of Steam authentication, you can't directly run the exe since the API expects it to be launched from Steam. You can create a shortcut from Steam, though, if that's what you want.

13
Development / Re: 2018/03/16 - Blockland r1988
« on: March 17, 2018, 05:45:11 PM »
...

If C++11 features are what you're worried about, most of them should still work even if you manually link against an older runtime. Those features are specific to the language and how the compiler toolset implements them; how much each one relies on the actual runtime varies. Templated classes are defined purely in headers, stuff like lambdas is handled by the compiler, and the rest boils down to fundamentals that need the runtime to work properly, which implies the runtime doesn't have to change a whole lot since those are the basic pieces. So the idea is to resolve names to msvcp71.dll and msvcr71.dll instead of <insert runtime here>.dll. It should be as easy as swapping out the stub libraries for older ones in the Visual Studio installation, and maybe defining a few symbols here and there. Don't quote me on that.

Most DLLs I've seen are simple patches and hooks that don't really even need the standard library anyways. The exceptions are binder DLLs that provide an interface to an existing library and can use some standard functions, but even then, C-style ones might only use a handful of functions like malloc, free, memcpy, etc. So what I like to do is drop the standard library altogether and rewrite what I need from scratch. That way all dependencies are gone, and whatever I need can be queried explicitly from the PEB, which makes for a DLL with zero imports and very little overhead, both space- and speed-wise. It's definitely overkill, and redirecting which runtime it links to is better, but if we're only using 1% of the standard library anyways, why not go the extra mile and make it completely detached? It's not hard to write your own list, map, etc. classes either. Plus it's COOL!

In the end it's tempting to just statically link everything together so it "just works." I don't know why Microsoft made it so the 2005 toolset can't be used, but it makes for a headache. I might have to find time to test everything I've said and lay it out in a thread to standardize things; we need a new loader anyways.

14
Development / Re: 2018/03/16 - Blockland r1988
« on: March 16, 2018, 10:37:17 PM »
DLLs will still work so long as the authors used basic update-proofing measures like sigscanning. The only things that can break them are changes to the functions they use (so the scan fails), or modifications to hooked functions such that the arguments differ or new control flow breaks them, which is unlikely with these changes.

DLL loaders will break if they use something like modifying the imports table to load a bootstrap DLL. This is easily fixable with a tool that can change the PE header, though.

It might be time to reevaluate where DLL modifications are going before the existing problems keep multiplying. Many are made naively and should follow some rules: proper function hooking to maintain compatibility, linking against Blockland's runtime instead of shipping a billion different ones because authors use whatever Visual Studio they have (which also breaks Wine support), more control over when loading and unloading happen, etc.

15
Maybe try hacking the frustum slice distances via memory editing... an easy way would be setting 0x70CC28 to the lambda value from the nVidia paper Blockland's shadows are basically copied from, i.e. what the lambda/split_weight variable would be in updateSplitDist. Higher means the splits are shifted towards the camera; lower pushes them farther out.
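
For reference, the split distances come from the paper's "practical split scheme"; here it is in script form just to show what lambda does (the engine computes this internally in updateSplitDist, so this is purely illustrative):

Code: [Select]
// Practical split scheme from the PSSM paper: blend logarithmic and uniform
// splits. Higher lambda pushes the split planes toward the camera.
function getSplitDistances(%near, %far, %numSplits, %lambda)
{
    %result = %near;
    for (%i = 1; %i < %numSplits; %i++)
    {
        %f   = %i / %numSplits;
        %log = %near * mPow(%far / %near, %f);
        %lin = %near + (%far - %near) * %f;
        %result = %result SPC (%lambda * %log + (1 - %lambda) * %lin);
    }
    return %result SPC %far;
}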

Blockland's slices are already pretty spaced out compared to what you'd see in games like GTAV, but I guess with PCSS it's not really a priority to have crispier shadows anyways. See if messing with the slices does anything; otherwise it might be one of those things we have to "deal with" just like in GTAV, since any method to nicely blur the lines is likely beyond shader edits or simple hacks. Just one of those things to tweak until it's right, as we all know that's what we sign up for when messing with shaders anyhow :P

Also, the fact that a lot of this technology has been around forever makes you wonder why they didn't deck out the shaders with it when they were released. It looks like the original thread had some form of softened shadows too... but if their absence tells us anything, I'd guess the idea was scrapped because of some visual error or trade-off. I wonder what!!
