ID:1906517
 
BYOND Version:508
Operating System:Windows 7 Ultimate 64-bit
Web Browser:Chrome 45.0.2454.15
Applies to:DM Language
Status: Open

Issue hasn't been assigned a status value.
Descriptive Problem Summary:
Setting client.preload_rsc to 0, even in the client definition, is not respected; clients still download resources.

Setting client.preload_rsc to a URL causes the client to download from that URL and simultaneously download resources from the server, defeating the point of using it.
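
For illustration, the compile-time settings in question (the URL here is just a placeholder, not the one actually tested):

client
    // Neither value behaves as expected:
    //   preload_rsc = 0    // clients still download resources from the server
    preload_rsc = "http://cdn.example.com/mygame.rsc"    // downloads from both the URL and the game server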

My hope when I went investigating this was to use a CDN to fix the issue of resource downloads taking forever for foreign users.

Setting preload_rsc to a URL will only prevent the main RSC from being downloaded as normal if preload_rsc is set globally at compile time. Changing it in client/New() won't do anything, because clients have already downloaded the RSC, or started to, by the time client/New() is called. I've never used preload_rsc = 0, so I can't speak to that.
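
Roughly, the timing problem being described (the URL is a placeholder):

client/New()
    // By the time this runs, the client has already begun (or finished)
    // downloading the RSC from the game server, so this has no effect.
    preload_rsc = "http://cdn.example.com/mygame.rsc"
    return ..()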
Setting it to a URL at compile time causes it to silently download both at the same time.

Use Clumsy to either throttle or lag your connection to the test server. Once the off-site download finishes, you'll notice the progress bar jump back and start showing the game server's download, but its position will indicate it has been downloading the whole time.
I'm hoping I can get to resource delivery issues fairly soon, and to that end I'd like to start brainstorming ideas on the most recent feature thread as to how preload_rsc or a var like it could be made to work in more helpful and reliable ways.
In response to MrStonedOne
MrStonedOne wrote:
Setting it to a URL at compile time causes it to silently download both at the same time.

Use Clumsy to either throttle or lag your connection to the test server. Once the off-site download finishes, you'll notice the progress bar jump back and start showing the game server's download, but its position will indicate it has been downloading the whole time.

Really? Weird.
It actually seems preload_rsc, browse_rsc(), browse(), etc. are all fubar. browse_rsc() doesn't respect existing files and will redownload them even when the files are the same.
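
For reference, the sort of call that triggers the redownload (the file name is just an example):

    // Pushes a resource into the client's cache; repeated calls appear to
    // resend the file even when the client already has an identical copy.
    usr << browse_rsc('icons/logo.png', "logo.png")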
Bumping; we really want to set up a CDN to host resources so we can relax some of the restrictions on asset sizes without international users getting slow downloads.

They already have slow download issues now because of the send-one-chunk, wait-for-a-reply, send-another-chunk behavior, which causes high-ping connections to download slowly even when the pipe is big enough for faster downloads.
Sorry that this has gotten pushed to the back burner. It's really overdue. I'll put it on my look-at list.
You've obviously got other stuff you're working on right now to get 511 out the door, but I just want to check in on this. If preload_rsc is no longer working correctly (and it doesn't seem to be), it takes a huge toll on larger servers.

This was such a huge problem for NEStalgia in the past that now, as a matter of routine, I always ensure that the standalone client for NEStalgia gets updated every time there is a change to the RSC. It doesn't appear to make a difference anymore, as players connecting to a server for the first time are forced to download the RSC directly anyway.
In response to Silk Games
Can you narrow down where preload_rsc went bad? I'll run some tests at my end, but that info would help. Also, can you check that your server hasn't started doing something like forcing SSL? That would murder preload_rsc right there.
To my knowledge, preload_rsc never worked correctly. I recall talking to people about it as far back as six years ago.
I'll check my server settings and look back through older compiles to see what I can find. I remember that it definitely worked (at least for the standalone client) when we launched on Steam back in April 2014.
and to that end I'd like to start brainstorming ideas on the most recent feature thread as to how preload_rsc or a var like it could be made to work in more helpful and reliable ways.

How about making URL-based preload_rsc accept a JSON file with a list of objects containing {name = ..., size = ..., url = ..., hash = ...}? Then we could host the various relevant versions of our assets in one setup, and BYOND could just download what it needs. The hard part would be the hash function: BYOND uses byondcrc32 internally, but if I remember correctly it didn't generate the same hashes as normal CRC32, so that would either need to be documented, or maybe changed to something like MD5, which is more standard.
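
Something along these lines, purely to illustrate the shape of the file (names, sizes, URLs, and hashes are made up):

    [
        {
            "name": "player.dmi",
            "size": 48213,
            "url": "http://cdn.example.com/assets/player.dmi",
            "md5": "d41d8cd98f00b204e9800998ecf8427e"
        },
        {
            "name": "music/title.ogg",
            "size": 1048576,
            "url": "http://cdn.example.com/assets/music/title.ogg",
            "md5": "0cc175b9c0f1b6a831c399e269772661"
        }
    ]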
In response to MrStonedOne
MrStonedOne wrote:
BYOND uses byondcrc32 internally, but if I remember correctly it didn't generate the same hashes as normal CRC32, so that would either need to be documented, or maybe changed to something like MD5, which is more standard.

byondcrc32 is really just CRC32 with a modified CRC table. Additionally, you don't want to use MD5 for comparing file hashes, since creating collisions is so easy these days.
Can you PM me your current preload_rsc link? The most recent source I have access to has a commented-out preload_rsc line, but the link is too old so I get a 404.
Erik, CRC is no better than MD5 for collisions; in fact it's worse, and that's what BYOND already uses for comparing file hashes.

I suggested MD5 because it's already in BYOND, so we could make BYOND-based tools to generate this .json file, and it's already used internally (to checksum network messages).
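
As a rough sketch of what such a tool could look like (assuming md5() and json_encode() are available, and using a made-up assets/ folder and CDN URL; this isn't tested code):

proc/build_manifest()
    var/list/entries = list()
    for(var/fname in flist("assets/"))
        if(findtext(fname, "/"))    // skip subdirectories
            continue
        var/path = "assets/[fname]"
        entries += list(list(
            "name" = fname,
            "url" = "http://cdn.example.com/[path]",    // hypothetical CDN location
            "md5" = md5(file(path))
        ))
    // text2file() appends, so clear out any previous manifest first
    fdel("manifest.json")
    text2file(json_encode(entries), "manifest.json")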

And for these purposes the security of the hash isn't a concern; the preload URL would already have to be a trusted source. (But this does bring up the concern of HTTPS support.)
Or just include SHA-1 in the language.
Oh, even SHA-1 is showing cracks nowadays. If we're going to include a secure hash algorithm in the language, SHA-256/512 as well as bcrypt would need to come in.
Well, I wasn't thinking about crypto, just hashing, and SHA-1 is pretty decent.
In response to MrStonedOne
MrStonedOne wrote:
Oh, even SHA-1 is showing cracks nowadays. If we're going to include a secure hash algorithm in the language, SHA-256/512 as well as bcrypt would need to come in.

bcrypt isn't meant for hashing files in the slightest
That point is irrelevant because I wasn't talking about hashing files. I was pointing out that if we are going to include more secure hashing algorithms in the language, we shouldn't stop at SHA-1.