ID:2162075
Oct 19 2016, 3:01 pm
Resolved
As the title says, world.Export's persist doesn't actually keep the socket open; it closes right after the message is sent. Will provide more details if requested.
This is a bug with sockets not persisting, not with speed. They may lead to the same symptom, but one is an entirely different issue.
"one is an entirely different issue."

You definitely can't know this. It stands to reason that if there is a handshake blocking the desired message, continually re-shaking could be the root of the speed issues on repeated messages. But I dunno, just me, you know, thinking about stuff critically.
In response to Ter13
Ter13 wrote:
"one is an entirely different issue."

BYOND doesn't send multiple handshakes. The inability of Persist to work keeps it from using just one handshake, because it has to re-handshake on each reconnect.
Hello! Received @ (2.045); Delta: 0.082; time since accept: 0.059

Going to leave this here to see whether fixing Persist makes the delta time less significant.
Are you using persist to communicate with a BYOND world or an HTTP server?
Because of this: http://www.byond.com/docs/notes/415.html

world.Export() has an optional (currently undocumented) third argument: "flags". Currently the only flag is 1 (tentatively WORLD_PERSIST), e.g.:
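The example that originally followed "e.g.:" did not survive this scrape. A minimal sketch of what the flag usage might look like, assuming the tentative WORLD_PERSIST value of 1 and a hypothetical target address:

```dm
#define WORLD_PERSIST 1  // tentative flag value per the 415 notes; not a built-in define

proc/SendPing()
    // Address and topic string are illustrative only. The third argument
    // is the undocumented "flags" value; 1 asks BYOND to keep the
    // server-to-server socket open between messages.
    var/result = world.Export("byond://127.0.0.1:5000?ping", null, WORLD_PERSIST)
    world.log << "Export returned: [result]"
```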
Looking at the code, I'm seeing many ways that a server-to-server link can be closed when persist is true, but all of them are cases where bad data has been passed. The receiving server should never shut down its link unless it's gotten a bad message. The sending server will only shut down in response to a bad message, or a completed message when persist is false.

One thing that I think would help would be if you could show the code you're using for world.Export() here, and also how you're handling world/Topic() for those cases. That might shed some light and give me a better idea of what to test for.
In response to Lummox JR
Lummox JR wrote:
"Looking at the code, I'm seeing many ways that a server-to-server link can be closed when persist is true, but all of them are cases where bad data has been passed. The receiving server should never shut down its link unless it's gotten a bad message. The sending server will only shut down in response to a bad message, or a completed message when persist is false."

Those timed cases are from my own coded server, but it appears that regardless of whether I connect to DD or to my server, both close the socket a few lines of code (according to my debugger, anyway; mileage may vary) after "Double response in SRV2SRV_MSG".
In response to Somepotato
Ooh! That's useful info. While the double return actually shouldn't be a problem that can shut things down, I can see how that situation might come up in a persistent socket. That at least is something I can approach and fix.
It's not actually that error; it's just the closest "landmark" I could find to it. The socket is closed in the function where that error is spat out.
Lummox:
Based on my testing, a "receiver" with:

/world/Topic(a, b, c, d)

and a "sender" with:

/world/New()

will exchange the following packets (all values are hex):

s->r: [ 15] FE 01 00 00 DB 01 00 00 59 09 87 79 9A 77 F4 3C D3 2A E8 69 89 40 35 5E 37 CE C2 2A E8 50 2E 75

followed by the sender closing the connection.
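A minimal reconstruction of the test harness described above, with hypothetical address and message values, might look like:

```dm
// Receiver side: matches the /world/Topic(a, b, c, d) signature above.
world/Topic(a, b, c, d)
    world.log << "Topic: [a] from [b]"
    if(a == "ping")
        return "pong"
    return ..()

// Sender side: fires one Export from /world/New(), as described above.
world/New()
    ..()
    // 1 in the third slot is the tentative WORLD_PERSIST flag.
    var/result = world.Export("byond://127.0.0.1:5000?ping", null, 1)
    world.log << "Export returned: [result]"
```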
We know; I pointed him to where in the code (debugger-disassembled code, anyway) the sockets are being closed. It's something that shouldn't be happening, but is for some reason.
Oh dang. I just ran across a big ol' comment on this that says persistent connections were stopped in 487 because of a nasty bug (id:115176). Doh!
I believe, however, that the nasty bug in question has everything to do with the way Export connections handle requests, which is something I'm overhauling in this new build. So I'm going to try turning persist back on and running some tests.
I opened a can of worms here, but it looks like it was a can worth opening. I think I also figured out a lot of why world.Export() was crapping the bed in threaded mode.
3 years we've been asking about this problem.