ID:2312004
 
BYOND Version:511
Operating System:Linux
Web Browser:Chrome 62.0.3202.75
Applies to:Dream Maker
Status: Open

Our trace shows the following, which doesn't add up to 1.8 GB:
server mem usage:
Prototypes:
obj: 1337632 (8283)
mob: 1341152 (220)
proc: 3639188 (5112)
str: 2790655 (51331)
appearance: 37014775 (158445)
id array: 4226056 (20844)
map: 185475432 (1000,800,21)
objects:
mobs: 991008 (792)
objs: 94003008 (437823)
datums: 4498732 (23842)
images: 1040536 (62911)
lists: 11024136 (130227)


The memory jump was very sudden:



Is there anything we can do to provide more information?
That memory profile only shows about 350 MB of the server's 1.8 GB memory usage.
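For reference, summing the byte counts from the dump above (the first number on each line; the counts in parentheses are object counts, not bytes) gives roughly 347 MB, which matches the ~350 MB figure:

```python
# Byte counts copied from the "server mem usage" dump above.
profile = {
    "obj prototypes": 1337632,
    "mob prototypes": 1341152,
    "proc": 3639188,
    "str": 2790655,
    "appearance": 37014775,
    "id array": 4226056,
    "map": 185475432,
    "mobs": 991008,
    "objs": 94003008,
    "datums": 4498732,
    "images": 1040536,
    "lists": 11024136,
}

total = sum(profile.values())
print(f"{total:,} bytes = {total / 1e6:.0f} MB")  # prints: 347,382,310 bytes = 347 MB
```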

Crazy.
The memory profile is not complete; it just covers the big-ticket items. It would be very difficult to get a full accounting of all memory in use.

Is there anything you saw in the memory profile from before vs. after the spike? That's worth knowing, as it might point to something useful. Also if you're aware of anything in particular that might have happened around the time of the spike, that could help too.
In response to Lummox JR
Lummox JR wrote:
The memory profile is not complete; it just covers the big-ticket items. It would be very difficult to get a full accounting of all memory in use.

Covering <18% of the total memory used by a project doesn't really satisfy the definition of big-ticket, IMO.
In response to Lummox JR
Lummox JR wrote:
The memory profile is not complete; it just covers the big-ticket items. It would be very difficult to get a full accounting of all memory in use.

Why? BYOND games run in a VM...
Before:

server mem usage:
Prototypes:
obj: 1337632 (8283)
mob: 1341152 (220)
proc: 3639188 (5112)
str: 2814645 (54540)
appearance: 28846965 (120520)
id array: 4179240 (19617)
map: 186545420 (1000,800,21)
objects:
mobs: 1201600 (745)
objs: 77026432 (387828)
datums: 4704076 (25333)
images: 2516884 (38731)
lists: 9404216 (90514)



After:
server mem usage:
Prototypes:
obj: 1337632 (8283)
mob: 1341152 (220)
proc: 3639188 (5112)
str: 2790655 (51331)
appearance: 37014775 (158445)
id array: 4226056 (20844)
map: 185475432 (1000,800,21)
objects:
mobs: 991008 (792)
objs: 94003008 (437823)
datums: 4498732 (23842)
images: 1040536 (62911)
lists: 11024136 (130227)
PS: sort of off-topic, but it would be fantastic if projects could use the 3.6 GB that Linux offers single-threaded programs, or if this could be made stable: http://www.byond.com/forum/?post=2307379#comment23440722

And the ability to diagnose memory in detail feels like the kind of thing that's fundamental to any programming engine. There's a memory leak that's corrupting our save files and crashing our server every day, and we can't do anything about it but post numbers that give a very vague, likely inaccurate snapshot of things.
In response to Super Saiyan X
Super Saiyan X wrote:
Lummox JR wrote:
The memory profile is not complete; it just covers the big-ticket items. It would be very difficult to get a full accounting of all memory in use.

Why? BYOND games run in a VM...

There are lots and lots of potential structures in play; accounting for all of them in the report would be a nightmare. The ones covered in the memory report are typically the ones taking up most of the memory. The fact that it isn't true in this case is really odd.

What I find concerning however is how the memory suddenly jumps right up to the limit. That suggests something is happening to cause a runaway acceleration that would probably affect it the same at a higher limit.
For reference, here's /tg/:
server mem usage:
Prototypes:
obj: 2611380 (14752)
mob: 2617300 (370)
proc: 12658344 (27671)
str: 10239806 (174893)
appearance: 16667102 (29756)
id array: 15162724 (60986)
map: 230636656 (255,255,13)
objects:
mobs: 488128 (355)
objs: 58988288 (176340)
datums: 56687104 (387828)
images: 1228216 (17605)
lists: 152294752 (1993102)



actual usage: 918 MB

The issue with the mem usage table is that it only accounts for the meta objects like lists or datums, but not their user-defined vars or contents. Of those 387k datums, a good chunk are lighting-related, where at least 6 numerical values are changed on each datum.

That usage isn't tracked at all.
In response to MrStonedOne
Nope, user vars are in fact tracked.
Well then this figure is inaccurate: datums: 56687104 (387828). The lighting datums alone account for ~300 MB of usage. I know because I both did the math, and made something that used DLL calls to get DD's current memory usage and track it before and after their creation.
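As a rough cross-check of those figures (the ~300 MB number is the claim above; the table reports 56,687,104 bytes across 387,828 datums):

```python
reported_bytes = 56_687_104      # "datums" line from the /tg/ dump above
datum_count = 387_828
claimed_lighting = 300 * 10**6   # ~300 MB claimed for lighting datums alone

avg_reported = reported_bytes / datum_count     # about 146 bytes per datum
needed_if_all = claimed_lighting / datum_count  # about 774 bytes per datum

print(f"reported average: {avg_reported:.0f} B/datum")
print(f"300 MB spread over every datum needs {needed_if_all:.0f} B/datum, "
      f"{claimed_lighting / reported_bytes:.1f}x the reported total")
```

So even if every datum were a lighting datum, the claim implies roughly 5x what the table reports.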
I don't see what could account for that. Fragmentation doesn't seem like it'd be appreciable enough to explain a difference of roughly 5x, and datums are basically nothing more than a structure defining some basic info and a list of changed vars--which since mid-510 is now a sorted array rather than a linked list.

Are you sure there isn't something else involved in the lighting datums that might account for that, like other info the vars are holding? Numerical vars are obviously going to be negligible since they're just the direct Value struct, but an image or a list or something could point to additional stuff.
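The scheme described above (basic info plus a sorted array of changed vars, searched rather than walked) can be sketched roughly like this. This is purely illustrative Python, not BYOND's actual internals; the class and method names are invented for the example:

```python
# Illustrative sketch only: a datum as "basic info plus a sorted
# array of changed vars", looked up by binary search instead of
# walking a linked list.
import bisect

class Datum:
    __slots__ = ("type_id", "_ids", "_values")

    def __init__(self, type_id):
        self.type_id = type_id  # the "basic info"
        self._ids = []          # interned var-name ids, kept sorted
        self._values = []       # parallel array of values

    def set_var(self, var_id, value):
        i = bisect.bisect_left(self._ids, var_id)
        if i < len(self._ids) and self._ids[i] == var_id:
            self._values[i] = value       # overwrite an existing changed var
        else:
            self._ids.insert(i, var_id)   # insert, preserving sort order
            self._values.insert(i, value)

    def get_var(self, var_id, default=None):
        i = bisect.bisect_left(self._ids, var_id)
        if i < len(self._ids) and self._ids[i] == var_id:
            return self._values[i]
        return default  # unchanged var: fall back to the prototype's value

d = Datum(42)
d.set_var(7, 1.5)
d.set_var(3, 2.0)
d.set_var(7, 9.0)   # overwrite, not a new entry
print(len(d._ids))  # prints: 2
```

Numeric values stored this way are indeed tiny; the cost only grows when a changed var points at something heavier like a list or an image.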
In response to Lummox JR
Lummox JR wrote:
What I find concerning however is how the memory suddenly jumps right up to the limit. That suggests something is happening to cause a runaway acceleration that would probably affect it the same at a higher limit.

Is there any way to track/diagnose this?


There are also cases where it climbs without a sudden jump, as shown here, and none of this seems to display in the memory profile.
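One way to at least catch when the jump (or climb) happens, external to DM and similar in spirit to the DLL-call approach mentioned earlier: on Linux, poll DreamDaemon's resident set size from /proc/<pid>/status and log sudden deltas. The function names, interval, and threshold below are all arbitrary choices for the sketch:

```python
# Sketch: poll a process's VmRSS (Linux) and flag sudden growth.
# Correlate the timestamps with periodic world.log memory dumps.
import re
import time

def parse_vmrss_kb(status_text):
    """Extract VmRSS in kB from the contents of /proc/<pid>/status."""
    m = re.search(r"^VmRSS:\s+(\d+)\s+kB", status_text, re.MULTILINE)
    return int(m.group(1)) if m else None

def watch(pid, interval=5.0, threshold_kb=1024):
    last = None
    while True:
        with open(f"/proc/{pid}/status") as f:
            rss = parse_vmrss_kb(f.read())
        if last is not None and rss - last > threshold_kb:
            print(f"{time.strftime('%H:%M:%S')} RSS jumped "
                  f"{rss - last} kB -> {rss} kB")
        last = rss
        time.sleep(interval)
```

Running this against the DreamDaemon pid alongside scheduled in-game profile dumps would at least narrow down what the game was doing at the moment of the spike.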