ID:2043890
BYOND Version:509
Operating System:Windows 8
Web Browser:Chrome 48.0.2564.116
Applies to:Webclient
Status: Resolved
Descriptive Problem Summary: Logging in to Severed World on the webclient requires the client to download all resources, even if they have been downloaded before. This happens on every login.

Numbered Steps to Reproduce Problem:

1.) Log in to the Severed World server
2.) Reconnect; resources will need to be downloaded again

Does the problem occur:
Every time? Or how often? Every time
In other games? Unsure
In other user accounts? Yes
On other computers? Yes

Did the problem NOT occur in any earlier versions? If so, what was the last version that worked? (Visit http://www.byond.com/download/build to download old versions for testing.)

I'm not aware of any older builds where this wasn't an issue.

Anything we can do to narrow down what's happening here? An easy way to reproduce this is to download the resources, log out for a few minutes, and rejoin the game. You'll have to redownload everything. If you rejoin instantly, the resources will at least download quicker... but I imagine they shouldn't be redownloading at all unless something requires it.
In response to Pixel Realms
I always thought this was intentional in order to show ads D: smh
include your own manifest file
The cache thing is on my list of webclient items to look at. This map chunk issue has monopolized my time a bit but I'm looking into more webclient stuff as of today.
I'm not seeing anything in the headers or responses when I test this that would indicate a problem. Can you get a sample of the request headers, response headers, and other such networking info from your tests?

I'm specifically interested in any of the /cache/xxxxx files (the resources) that have a 200 OK response. Those have max-age set, and the current setup has it at 10 weeks; it was intended to be 1 week, but I added a 0 by mistake. I think 10 weeks is better, though.

Every test I do indicates that the /cache/xxxxx files are correctly being served from the browser's cache and are not being sent to the server. The only requests I'm getting on the server end are new resources (not in the cache yet) or ones that have a temporary redirect (code 307) to another cache file.
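For anyone following along, the max-age value quoted here can be sanity-checked directly. This is a small sketch (not BYOND code) that parses the max-age directive out of a Cache-Control value like the one in the responses quoted later in this thread and expresses it in weeks:

```python
# Sketch: parse the max-age directive (in seconds) out of a
# Cache-Control header value and convert it to weeks, to check
# the "10 weeks" figure above. Header value taken from the
# responses quoted in this thread.

def max_age_seconds(cache_control: str) -> int:
    """Extract the max-age directive from a Cache-Control value, or 0 if absent."""
    for directive in cache_control.split(","):
        directive = directive.strip()
        if directive.startswith("max-age="):
            return int(directive.split("=", 1)[1])
    return 0

SECONDS_PER_WEEK = 7 * 24 * 60 * 60  # 604800

age = max_age_seconds("max-age=6048000")
print(age // SECONDS_PER_WEEK)  # 6048000 / 604800 = 10 weeks
```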
In response to Lummox JR
Lummox JR wrote:
I'm not seeing anything in the headers or responses when I test this that would indicate a problem. Can you get a sample of the request headers, response headers, and other such networking info from your tests?

Sorry, but I'm not really sure what you're asking for here. Think you could explain how to get this information?
In response to Doohl
The network tab of Chrome's development toolbar should have that info.
Okay...

Request headers when redownloading resources after having already downloaded them 2 minutes ago:

GET /cache/248837309 HTTP/1.1
Host: 74.91.127.229:5000
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.87 Safari/537.36
Accept: */*
Referer: http://74.91.127.229:5000/play
Accept-Encoding: gzip, deflate, sdch
Accept-Language: en-US,en;q=0.8



And the response header:
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Content-Disposition: inline
Content-Length: 330
Cache-Control: max-age=6048000
Why is your browser sending Pragma: no-cache and Cache-Control: no-cache as part of the request headers? Seems like that's the core of the problem.

According to this page, you may need to configure Chrome to use the cache when developer tools are open, in order to get accurate results.
In response to Lummox JR
Well, even without the developer tools open, this still happens. I don't think this should affect anything.
In response to Doohl
Doohl wrote:
Well, even without the developer tools open, this still happens. I don't think this should affect anything.

What I mean is, I need the info you got from the developer tools, but without it disabling the cache intentionally, so that it acts normally. I need to see which headers were sent, which were received in return, and what the status code was.

If you change the settings for developer tools so that they do use the cache, what do the headers say then?
Request URL:http://74.91.127.229:5000/cache/4015813841
Request Method:GET
Status Code:200 OK (from cache)
Remote Address:74.91.127.229:5000


This is the header after proper configuration.



This issue seems to be the most prevalent when you are connecting to another server with identical .rsc files. This is not part of the original problem - but is there any planned fix for this?
In response to Doohl
Doohl wrote:
This is the header after proper configuration.

That says it got the file from the cache correctly, which is what I saw in all of my own tests.

Do you at any point see anything different for any of the cache/xxxx files? There should be some 307 temporary redirect cases here and there, but if you're seeing a full re-download without the cache then it should be 200 OK and should not say "(from cache)".

This issue seems to be the most prevalent when you are connecting to another server with identical .rsc files. This is not part of the original problem - but is there any planned fix for this?

That's not a bug; it's a function of how browser caching works. Connecting to a different server (even the same IP but with a different port) with identical files will always have to re-download, because the browser cache is based on the full URL. If you have file 123.png on server A, file 123.png on server B is totally different as far as it knows.
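The URL-keyed behavior described above can be illustrated with a toy sketch (a simplification of how a real browser keys its HTTP cache; the function name is illustrative):

```python
# Sketch of why identical .rsc contents don't share cache entries:
# a browser's HTTP cache is keyed on the full URL, so the same
# /cache/NNN path served from two servers (or even two ports on
# the same IP) is two separate cache entries.

from urllib.parse import urlsplit

def cache_key(url: str) -> str:
    """A simplified browser-style cache key: scheme + host + port + path."""
    parts = urlsplit(url)
    return f"{parts.scheme}://{parts.netloc}{parts.path}"

a = cache_key("http://74.91.127.229:5000/cache/248837309")
b = cache_key("http://74.91.127.229:5001/cache/248837309")
print(a == b)  # False: different port means a different cache entry
```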

I have an idea for eventually getting around this, which is to set up a way for you to specify a mirror server. The client would then attempt to grab any files it can from the mirror first. Assuming the mirror has a consistent URL, that solves the issue for changing or multiple servers. (The tricky part is setting up some good way for the game author to quickly populate the mirror server with the right files.)

If you take changing server URLs out of the equation, does the cache problem still exist? If so, I need headers that show when the browser decided to re-download a file rather than relying on the cache. I have not yet been able to reproduce this issue at all.
In response to Lummox JR
It does seem like after only waiting maybe 10 minutes, the browser forgets about the cache and the problem returns:

Request:
GET /cache/474725744 HTTP/1.1
Host: 74.91.127.229:5000
Connection: keep-alive
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.87 Safari/537.36
Accept: */*
Referer: http://74.91.127.229:5000/play
Accept-Encoding: gzip, deflate, sdch
Accept-Language: en-US,en;q=0.8


Response:
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Content-Disposition: inline
Content-Length: 329
Cache-Control: max-age=6048000

I would suggest some sort of hash/checksum or diff check, if possible, so that a cached resource file can be updated with only what's missing instead of having to download everything. This would be especially useful in a scenario like my project, where multiple worlds will share a common framework code base, but each will have its own art assets, additional code, etc. that differ from the others.
The issue is that there isn't any easy way for the webclient code to know what is in the cache; it doesn't know what the browser has, so it has to trigger fetches for all of them to be sure.
The easiest way to handle that is a checksum. You can checksum the entire resource cache file and have a list of checksums for every file contained within. Two requests/fetches at most. Differential updates are not very difficult to do.
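The manifest idea sketched above could look something like this. This is purely hypothetical (the function and manifest names are illustrative, not any BYOND API): the server publishes per-file checksums, and the client only fetches files that are missing locally or whose checksum differs:

```python
# Hypothetical sketch of the checksum/manifest approach suggested above.
# The server side publishes a manifest mapping file names to checksums;
# the client compares against its local copies and fetches only the
# missing or changed files.

import hashlib

def file_checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def files_to_fetch(server_manifest: dict, local_files: dict) -> list:
    """Return the names of files that are missing locally or have changed."""
    stale = []
    for name, checksum in server_manifest.items():
        local = local_files.get(name)
        if local is None or file_checksum(local) != checksum:
            stale.append(name)
    return stale

local = {"icons.dmi": b"old pixels"}
manifest = {
    "icons.dmi": file_checksum(b"new pixels"),   # changed on the server
    "sounds.ogg": file_checksum(b"boom"),        # not downloaded yet
}
print(files_to_fetch(manifest, local))  # ['icons.dmi', 'sounds.ogg']
```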
In response to Doohl
Doohl wrote:
It does seem like after only waiting maybe 10 minutes, the browser forgets about the cache and the problem returns:

Isn't that a browser issue, then? Maybe your browser cache is getting filled up and it's dumping files prematurely.

I found some info on the Chrome cache here, though I'm not sure if it'll be all that helpful for users in general. It may at least be able to help you get answers as to why the cache isn't doing its job.

This page on StackOverflow says Chrome uses a default cache time of 300 seconds, which would imply that maybe it's ignoring the Cache-Control directive entirely.

Another thing that's telling here is that Chrome is not sending an If-Modified-Since header. Although it might need a Last-Modified header to be sent; that's something I can look into at least. I suppose it's possible that Chrome does not consider max-age=[high value] alone a good enough reason to hold onto a file without revalidating, although that would be stupid. It's hardly the first stupid thing I've seen Chrome do.
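The revalidation flow being discussed works roughly like this. A toy server-side sketch (not BYOND's actual implementation): if the server sends Last-Modified, the browser can revalidate with If-Modified-Since and get a cheap 304 with no body instead of a full 200 re-download:

```python
# Toy sketch of conditional revalidation: the server compares the
# client's If-Modified-Since against the file's modification time
# and answers 304 Not Modified (no body) when the cached copy is
# still current, or 200 (full body) when it isn't.

from email.utils import format_datetime, parsedate_to_datetime
from datetime import datetime, timezone

def respond(file_mtime: datetime, if_modified_since) -> int:
    """Return 304 if the client's cached copy is still current, else 200."""
    if if_modified_since is not None:
        cached = parsedate_to_datetime(if_modified_since)
        if file_mtime <= cached:
            return 304  # Not Modified: cache entry stays valid
    return 200          # full response with the file body

mtime = datetime(2016, 3, 1, tzinfo=timezone.utc)
print(respond(mtime, None))                    # 200: first request, no cached copy
print(respond(mtime, format_datetime(mtime)))  # 304: revalidated, nothing re-sent
```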

Khyberkitsune wrote:
I would suggest some sort of hash/checksum or diff check if possible so that way a cached resource file can just be updated with what's missing instead of having to download everything.

Resource files are downloaded individually on the webclient, not packaged like they are in DS. The browser alone decides what's missing based on what's in its local cache.
In response to Lummox JR
Lummox JR wrote:
Resource files are downloaded individually on the webclient, not packaged like they are in DS. The browser alone decides what's missing based on what's in its local cache.

Would that explain why my code isn't working for DS but does in the webclient when it comes to uploaded custom characters?