ID:2591055
 
BYOND Version:513
Operating System:Windows Server 2019 64-bit
Web Browser:Firefox 78.0
Applies to:Dream Daemon
Status: Open

Descriptive Problem Summary:
On CHOMPStation, we're currently experiencing fairly constant Dream Daemon hard-crashes around an hour into the round. It also seems to occur on dev machines, usually while in combat, but we haven't been able to pin down any specific cause. Leading up to the crash, clients on the server will freeze for up to 30 seconds with no apparent reason, then resume, until finally the server freezes and hard-crashes.

Faulting application name: dreamdaemon.exe, version: 5.0.513.1520, time stamp: 0x5ea354dc
Faulting module name: byondcore.dll, version: 5.0.513.1520, time stamp: 0x5ea3544a
Exception code: 0xc0000005
Fault offset: 0x0012fad4
Faulting process id: 0x1c74
Faulting application start time: 0x01d65d6022493f35
Faulting application path: C:\USERS\SS13-DEV\DESKTOP\SS13-SERVER\CHOMPTGS\SS13\BYOND\bin\dreamdaemon.exe
Faulting module path: C:\USERS\SS13-DEV\DESKTOP\SS13-SERVER\CHOMPTGS\SS13\BYOND\bin\byondcore.dll
Report Id: 1ebcb3b0-bc39-4eb6-bff4-17cea2770189
Faulting package full name:
Faulting package-relative application ID:

We're also seeing this error on occasion:
Source: .NET Runtime
Application: dreamdaemon.exe
Framework Version: v4.0.30319
Description: The process was terminated due to an unhandled exception.
Exception Info: exception code c0000005, exception address 6AF2FAD4
Stack:

Application: dreamdaemon.exe
Framework Version: v4.0.30319
Description: The process was terminated due to an unhandled exception.
Exception Info: exception code c0000005, exception address 6AC4FAD4
Stack:

Did the problem NOT occur in any earlier versions? If so, what was the last version that worked? (Visit http://www.byond.com/download/build to download old versions for testing.)
This problem is fairly new (within the last few days); we haven't updated BYOND in that time, and none of the recent code changes look like they would be causing this.

Our current code, for reference: https://github.com/CHOMPStation2/CHOMPStation2

Workarounds:
None known; we can't very well recover from Dream Daemon abruptly exiting with a fault code.
Same issue happens usually around round-restart on this codebase: https://github.com/Yawn-Wider/YWPolarisVore

http://puu.sh/G8AHD/889a035c65.png

It runs on a Windows 10 server machine with 8 GB of RAM and no other processes/servers running.
1520 is out of date. The current stable release is 513.1527. Please update to that, in case the issue has already been fixed.
OK, we updated to that, but it crashed again just now, at 1.8 GB of RAM utilization.
One of my devs suggests that byondcore is trying to use .NET incorrectly, and that 0xc0000005 is a fault where it tries to access stale memory.
Pretty sure BYOND doesn't use .NET at all. But at 1.8 GB of RAM usage, it's probably gonna crash anyway. 32-bit applications can't use more than a certain amount of RAM, and BYOND's limit is a bit lower than that by nature of how it works.

At around 1.7-1.8 GB, people tend to see a lot of issues and things running out of memory.
BYOND does not use .NET, but you're using way too much memory. That's right about at the limit of what BYOND can handle.

A few SS13 servers have had memory issues like yours, but my answer is the same to all of them: Use the memory report feature to see what's using all your memory, and then aggressively work to reduce the memory footprint. No SS13 server should be hitting that limit if it's using memory responsibly.

The usual suspect here is way too many lists or datums being initialized, often because they've been given to turfs or all atoms and declared something like this:

// all of these cause an init proc to run and are bad juju
turf
	var/something[0]
	var/list/mylist = new
atom
	var/datum/badidea = new

A good place to look is to examine all the vars under /atom and /turf in your object tree in Dream Maker, and see if any of them are creating lists or datums in this way.
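
For contrast, here is a minimal sketch of the usual remedy (the var and proc names are just illustrative, not from the CHOMPStation code): leave the var null at compile time and only create the list when an instance actually needs it, so the thousands of turfs that never use it cost nothing.

turf
	var/list/mylist	// stays null; nothing is allocated per turf at init

turf/proc/remember_thing(thing)
	// create the list lazily, only on turfs that actually use it
	if(!mylist)
		mylist = list()
	mylist += thing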
In response to Lummox JR
Lummox JR wrote:
BYOND does not use .NET, but you're using way too much memory. That's right about at the limit of what BYOND can handle.

A few SS13 servers have had memory issues like yours, but my answer is the same to all of them: Use the memory report feature to see what's using all your memory, and then aggressively work to reduce the memory footprint. No SS13 server should be hitting that limit if it's using memory responsibly.

The usual suspect here is way too many lists or datums being initialized, often because they've been given to turfs or all atoms and declared something like this:

> // all of these cause an init proc to run and are bad juju
> turf
> 	var/something[0]
> 	var/list/mylist = new
> atom
> 	var/datum/badidea = new

A good place to look is to examine all the vars under /atom and /turf in your object tree in Dream Maker, and see if any of them are creating lists or datums in this way.

So remind me again why 1.8 is the maximum BYOND can handle? 32-bit programs should be able to handle up to 4GB of memory, and it's 2020 ffs. This feels like an excuse.
What is the reason for the limit of 1.8 gigs?
32-bit non-large-address-aware programs like BYOND can only address up to 2 GiB of memory; you'll likely run into issues a little before you actually hit the limit.
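Roughly, the arithmetic behind those numbers (the practical ceiling is lower because of fragmentation and everything else the process has mapped):

2^31 bytes = 2,147,483,648 bytes = 2 GiB	(default user address space for a 32-bit process)
2^32 bytes = 4 GiB	(only if the executable is built large-address-aware and runs on a 64-bit OS)

That's why failures tend to show up around the 1.7-1.8 GB mark rather than exactly at 2 GiB.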
In response to Rykka_Stormheart
Rykka_Stormheart wrote:
So remind me again why 1.8 is the maximum BYOND can handle? 32-bit programs should be able to handle up to 4GB of memory, and it's 2020 ffs. This feels like an excuse.

Typically 32-bit programs can really only handle up to 2 GB unless they're compiled specially to be able to handle large addresses. However, experiments with this in the past have never worked out well. If BYOND ever leaps past that limit, it will likely be by going 64-bit, but that's not presently on the radar.

Turnabout being fair play however, I have to lob this one back. Your server should not be getting near that memory limit. (MrStonedOne did say there appears to be a small leak if the server has multiple rounds, but that's a different issue. He's looking to find more information so I can investigate.) Servers that have found themselves nearing the limits in the past have basically always found that someone added something ill-conceived to the codebase that caused that bloat, and after tracking that something down the memory problems have gone away.