Descriptive Problem Summary:
This isn't so much a bug as a shortcoming of the way BYOND determines where objects physically are in the world and which objects the client should render. The behavior is painfully clunky and makes the transform feature all but unusable for anything other than screen objects, basic rotation, or scaling objects down. Currently, in EDO, we have to break beam attacks into a distinct object every 32 pixels and position each one manually, rather than animating a single object with one call, because objects can otherwise disappear. It's really quite frustrating.
This bug manifests itself in two ways:
1) When an object has been scaled up using transform, it disappears once your viewport leaves the normal bounds of the object's icon.
2) When an object's transform offset moves it beyond the boundaries of its tile, it disappears whenever the object's original physical bounds are out of view, even though the offset image should still be visible.
Code Snippet (if applicable) to Reproduce Problem:
http://files.byondhome.com/Ter13/transformbug_src.zip
ID:1822615
Apr 1 2015, 1:42 pm
Apr 1 2015, 2:37 pm
Yeah, there really isn't anything set up to handle large transforms just out of view. It's a difficult thing to handle properly. The code understands how big icons work (to an extent), but handling transforms on top of that is difficult. Even the big-icon handling for this kind of thing isn't perfect, as evidenced by another report.
Ideally we'd have the server approximate the transformed bounds, but barring that, perhaps expanding the range of boundary tiles would cover most cases.
I'm gonna pseudocode at you for a few minutes.
Basically, restructuring the map-sending algorithm would be the cleanest way to handle this entire thing, albeit probably a challenge at this late point in BYOND's lifetime. TL;DR: quadtrees.

#define CELL_SIZE 32

That's a really simple quadtree implementation. Let's talk about managing appearance footprints.

appearance ScheduleMessage basically just marks that the server needs to send messages to that client. At the end of each server tick, the messages are grouped up and sent out according to whatever algorithm is necessary. A client netmessage contains an entire node to update, meaning all of the turfs and movables in that node, plus all of the movables in that node's parent node, would be serialized and sent to the client on demand. A node netmessage refers to specific appearances within that node that need to be updated.

Now, when a client moves around the map, the nodes it is currently observing should be marked as observed by clients, based on their view size.

Here's the trouble: animate() might potentially pass through entire cells without notifying those cells of the update. This is not a trivial issue to solve. Honestly, the only way to really solve it would be to update the footprint each frame if necessary. Luckily, searching a quadtree for bound inclusion is rather trivial, so it would still be a lot better than what we're doing right now with per-tile granularity in the view.
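The original post's quadtree code didn't survive in this copy of the thread, but the idea it describes can be sketched roughly as follows. This is an illustrative minimal quadtree, not BYOND internals; `CELL_SIZE`, the class names, and the bounding-box convention are all assumptions:

```python
# Minimal point-region quadtree sketch for tracking movable "footprints" by
# bounding box. Leaves bottom out at CELL_SIZE, matching the #define above.

CELL_SIZE = 32  # finest leaf granularity, in pixels

class QuadNode:
    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size  # square region in pixels
        self.children = None                    # four sub-nodes once split
        self.items = []                         # (item, bbox) pairs stored here

    def _intersects(self, bbox):
        bx1, by1, bx2, by2 = bbox
        return not (bx2 < self.x or bx1 >= self.x + self.size or
                    by2 < self.y or by1 >= self.y + self.size)

    def insert(self, item, bbox):
        """Store item in every leaf its bounding box touches."""
        if not self._intersects(bbox):
            return
        if self.size <= CELL_SIZE:          # stop subdividing at the min cell size
            self.items.append((item, bbox))
            return
        if self.children is None:
            half = self.size // 2
            self.children = [QuadNode(self.x + dx, self.y + dy, half)
                             for dx in (0, half) for dy in (0, half)]
        for child in self.children:
            child.insert(item, bbox)

    def query(self, bbox, out=None):
        """Conservatively collect every item stored in a cell touching bbox."""
        if out is None:
            out = set()
        if not self._intersects(bbox):
            return out
        for item, _item_bbox in self.items:
            out.add(item)
        if self.children:
            for child in self.children:
                child.query(bbox, out)
        return out
```

A wide beam only has to be inserted once; any client view rectangle that overlaps its bounds finds it with a single `query`, which is the bound-inclusion search the post calls trivial.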
I think I have an idea on this. It won't go into the next release by any means and it may need a major version bump, but I have the concept in mind.
Right now, whenever a turf is large, or contains a large object or image that a player can see, it is marked as interesting, for that player only. Once the turf is so marked, it is sent regardless of its range, as are any objects on it. This system has not worn terribly well and has some flaws, as evidenced by another bug report.

What I think I should do instead is have a system that will mark any object as interesting if its possible extents or offsets, including images (visible or not) and animations, pass a certain threshold. Any interesting objects on the Z-level would be sent (at least when they changed) regardless of whether they were in view range, as there would hopefully be only a few of them.

This is not necessarily trivial to set up, but it would be much more compatible with our setup than trying to redo everything as quadtrees. Much as I appreciate the mathematical beauty of quadtrees, they're just not going to happen here. The metrics for this can be left fairly rough, just keeping it in terms of bounding boxes, since the impact of transforms on a bounding box can be calculated.
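The bounding-box metric mentioned above is cheap to compute: transform the four corners of the icon rectangle and take the axis-aligned extremes. A hedged sketch; the matrix layout (BYOND-style `x' = a*x + b*y + c`) and the threshold value are assumptions for illustration, not the actual implementation:

```python
# Flag an object as "interesting" when its transformed extents or offsets
# poke more than THRESHOLD pixels outside its original icon footprint.

THRESHOLD = 64  # pixels past the tile before an object is "interesting" (assumed)

def transformed_bbox(w, h, a, b, c, d, e, f):
    """Axis-aligned (min_x, min_y, max_x, max_y) of a w*h rect under the transform."""
    corners = [(0, 0), (w, 0), (0, h), (w, h)]
    xs = [a * x + b * y + c for x, y in corners]
    ys = [d * x + e * y + f for x, y in corners]
    return min(xs), min(ys), max(xs), max(ys)

def is_interesting(w, h, matrix):
    min_x, min_y, max_x, max_y = transformed_bbox(w, h, *matrix)
    return (min_x < -THRESHOLD or min_y < -THRESHOLD or
            max_x > w + THRESHOLD or max_y > h + THRESHOLD)
```

The rough metric the post asks for is exactly this: no per-pixel work, just four corner multiplies per changed transform.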
I still don't have a timetable for this, but because it ties in with the "graphical big icon glitch" thread I want to deal with it at some point. I am however rethinking the quadtree thing, because in hindsight I suspect that's probably the easiest way to handle this after all.
> I am however rethinking the quadtree thing, because in hindsight I suspect that's probably the easiest way to handle this after all.

It was actually the only reliable solution for every case I managed to think of. I know it's not easy, but if you are going to chase down quadtrees, I'd recommend rehashing the way map data is stored and sent over the network using a "chunk" system, so that larger chunks of data are sent across the network less often, and much of your sorting, view-sending, and rendering business can be heavily optimized in the process.

Since you don't have to wrangle with specific objects, you can determine turf/object/etc. update visibility based on the chunks the client is currently observing. This would cut out a ton of the backend processing you have to do in the form of checking every turf/object/image in the view of every client for updates every frame, and instead switch you over to a dirty-marking system that simply builds update lists relevant to all clients viewing a chunk and pushes them out to concerned clients at the end of the frame.

If an update occurs in a chunk with zero observers, throw it away. If an update occurs in a chunk with one or more observers, the same network message can be sent to every client regarding the updates to that chunk, rather than every client building its own series of messages by looping through all kinds of stuff. Rather than checking every object every frame, we just check every chunk sitting in the dirty queue, pipe updates to every concerned client, mark the chunk clean, purge the stored messages, and move on with our lives.

This could also heavily optimize the webclient by allowing it to push static batches of polygons to the GPU that only need to be updated with the chunk updates. No need for sorting or any of that business: just maintain a VBO and update it in response to changes on demand.
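The dirty-marking flow described above can be sketched in a few lines. All names here are illustrative assumptions, nothing based on BYOND's actual internals: updates land in a per-chunk dirty queue, and at end-of-tick each dirty chunk serializes its pending updates once and fans the same payload out to every observer.

```python
from collections import defaultdict

class ChunkMap:
    def __init__(self):
        self.observers = defaultdict(set)  # chunk id -> set of client ids
        self.dirty = defaultdict(list)     # chunk id -> pending updates

    def observe(self, client, chunk):
        self.observers[chunk].add(client)

    def mark_dirty(self, chunk, update):
        # Cheap: just queue the update; visibility is resolved at flush time.
        self.dirty[chunk].append(update)

    def flush(self):
        """End-of-tick: build one message per dirty chunk and fan it out."""
        sent = defaultdict(list)           # client id -> messages received
        for chunk, updates in self.dirty.items():
            watchers = self.observers.get(chunk, ())
            if watchers:
                message = (chunk, tuple(updates))  # serialized exactly once
                for client in watchers:
                    sent[client].append(message)
            # Chunks with zero observers: updates are simply thrown away.
        self.dirty.clear()
        return sent
```

The key property is in `flush`: the message is built once per chunk, not once per client, and unobserved chunks cost nothing beyond queuing.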
Just thought I'd mention: from a TCP standpoint, fewer, larger messages get data to the endpoint faster than many tiny ones. If you are sending a lot of data over a network, it's better to have big messages than a ton of small ones.
Each message sent into the buffer carries a little overhead, and a small write can also sit in the send buffer for a while, waiting for more data to come flooding in before it actually goes out. This is likely the cause of the hangup on world.Export(), given that world.Export() messages are going to be incredibly tiny.
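The batching idea can be illustrated by coalescing: instead of one send per tiny message, queue them and flush a single length-prefixed batch per tick. A hedged sketch; the framing format here is an arbitrary choice, not BYOND's wire protocol:

```python
import struct

class BatchedSender:
    def __init__(self, send_fn):
        self.send_fn = send_fn  # e.g. socket.sendall, paid once per flush
        self.pending = []

    def queue(self, payload: bytes):
        self.pending.append(payload)

    def flush(self):
        """Emit all queued messages as one buffer: [u32 length][payload]..."""
        if not self.pending:
            return
        buf = b"".join(struct.pack("!I", len(p)) + p for p in self.pending)
        self.pending.clear()
        self.send_fn(buf)  # one call instead of one per small message

def unpack(buf: bytes):
    """Split a batched buffer back into its individual payloads."""
    out, i = [], 0
    while i < len(buf):
        (n,) = struct.unpack_from("!I", buf, i)
        out.append(buf[i + 4 : i + 4 + n])
        i += 4 + n
    return out
```

Per-message overhead (framing, syscalls, and the chance of a small write lingering in the send buffer) is paid once per flush rather than once per message.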
Well the changes I'd need to make are server-side, and would in theory affect the DS client and webclient the exact same way (albeit through different routines still).
On the webclient side of things, I have very little control over the rendering process because it uses StageXL, albeit a modified version. It may be the case that a newer version of StageXL is faster overall, however, and one of the things I intend to look at in 509 is trying that.

The idea of chunk processing on the server end makes a lot of sense, though. It would be a dramatic time-saver for games with many players. I do have a moderate concern that minor movement/transform changes could trigger more recalculation than anyone cares about, but maybe not. A lot depends on the ultimate depth of the tree. My main goal would be to never let any quadtree leaf get smaller than, say, a quarter of the view size.
I actually wrote a map chunk management system in DM recently that breaks the map up into equally sized chunks, so that I could wean my AI off its over-reliance on areas.
I did this mostly in preparation for writing a navmesh solution to solve, once and for all, my headaches with pixel pathfinding in DM. But yeah, I found that the best results in terms of efficiency came from roughly 32x32 chunk sizes: things don't tend to span chunks too often, not too much data is stored in each chunk, and the client in my case is usually observing at most 4 chunks. An arbitrary minimum granularity of 16x16 or 32x32 for your leaves/chunks is probably for the best.
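The chunk bookkeeping described above reduces to integer division. A small sketch, assuming tile-unit coordinates and the 32-tile chunk size mentioned:

```python
CHUNK = 32  # tiles per chunk side

def chunk_of(x, y):
    """Chunk coordinates containing tile (x, y)."""
    return (x // CHUNK, y // CHUNK)

def chunks_in_view(x1, y1, x2, y2):
    """All chunks a view rectangle touches; small views touch at most 4."""
    cx1, cy1 = chunk_of(x1, y1)
    cx2, cy2 = chunk_of(x2, y2)
    return {(cx, cy) for cx in range(cx1, cx2 + 1)
                     for cy in range(cy1, cy2 + 1)}
```

A view smaller than a chunk can straddle at most one chunk boundary per axis, which is why an observer rarely watches more than 4 chunks at once.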