Because this is a feature request tracker, not a feature request blob of nonsense.
IMO a proc override concept isn't very workable here--I see too many potential issues under the hood. I'm not convinced the OP's concept is all that useful without knowing how much else is scheduled for the same tick, but it does provide some tick-relevant timing info. A single very hi-res timer does seem like a useful idea in its own right, though I agree it probably won't help with tick management without something also tied to the tick.

While I think that some of these side ideas might eventually need their own threads, the brainstorming here doesn't bother me as I think the whole discussion has been useful. I like that this has given a window into what games would find most helpful.
Precision only matters if a proc can be guaranteed to run first in a tick, or at the very least at the same point within a tick. Without that kind of guarantee, high-precision timing doesn't really matter.
I suggested it because I hate the idea of having to make a failsafe controller to ensure the master controller is still running, or otherwise have to implement my own event loop within the gameloop.

Being able to closely tie game code to an iteration of the byond event loop is a plus.

I see too many potential issues under the hood.

Ya, if it runtimes, the internal tick never happens, -FEATURE!

If ..() is never called, the internal tick never happens, -FEATURE! (This could be used if you want a world.fps of, say, 60, but only have clients update at a rate of 30fps for network reasons, or only update them when you feel it's needed based on what's happened.)

If you implement your own version of spawn that works with your GameLoop override, and don't use sleep, you could do the reverse and have clients' fps be higher than the game code's effective fps.
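
To make the half-rate idea concrete, here's a rough sketch of what that could look like if the proposed override existed. world/GameLoop() is the hypothetical proc being discussed in this thread, not an existing BYOND proc, and run_game_logic() is just a placeholder for your own per-tick code.

// Hypothetical sketch: assumes the proposed world/GameLoop() override exists.
var/internal_tick_count = 0

proc/run_game_logic()
        // placeholder for your own per-tick game code
        return

world/GameLoop()
        internal_tick_count++
        run_game_logic()
        // Only let BYOND's internal tick handling (client updates, etc.)
        // run every other iteration, halving the clients' effective update rate.
        if(internal_tick_count % 2 == 0)
                ..()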
I think a good compromise would be to have a specific proc run as the first thing in every tick, like world/begin_tick(), and have another proc run at the end of every tick, like world/end_tick(). Both would do nothing by default, and their return values would be ignored.

That seems easier to implement and would work too, although it's not as clean as having just one proc.
Maybe even easier than all: a proc flag that guarantees the proc gets top priority in the scheduler. Then a var is moot.
Oh yeah, that would be even better actually, I just assumed that would be even harder.
The reason I only need to know how much time has elapsed in a tick is so that I can pause all work when a tick is about to go overtime. My processes all run concurrently and sleep(0) periodically, so the processes are all sharing the available time anyway. When the tick is about to go overtime, or I've reached some allotted time threshold (like 60% of a tick), the processes each spawn(world.tick_lag) so they can pick up where they left off in the next tick.
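
For illustration, that pattern might look roughly like the sketch below. elapsed_tick_fraction() is a hypothetical placeholder for the value this thread is asking the engine to expose (today it has to come from an external timer), and update_item() stands in for whatever per-object work a process does.

#define TICK_BUDGET 0.6                  // use at most ~60% of each tick for background work

proc/elapsed_tick_fraction()
        // Hypothetical: fraction of the current tick that has elapsed (0..1).
        // This is the value being requested; for now it needs an external timer.
        return 0

proc/update_item(item)
        // placeholder for per-object work
        return

proc/process_items(list/items)
        for(var/item in items)
                update_item(item)
                if(elapsed_tick_fraction() >= TICK_BUDGET)
                        sleep(world.tick_lag)    // about to go overtime: resume next tick
                else
                        sleep(0)                 // yield so other processes share the remaining time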
In response to MrStonedOne
MrStonedOne wrote:

As brought up in this thread multiple times, such a variable is useless for the intended purpose of the OP and Volundr if you can't link it to the start or end of a tick.

It also lets you explicitly control where in your periodic loop BYOND's internal stuff happens. Do you want BYOND's internal tick handling to run before or after you do your stuff? (This can make a difference.)

This thread, if you go back and read the OP, is about knowing how far along in a tick you are; you can't do that if you don't know when the tick started.

High-precision timers are ALSO something we need, but can we all please stop derailing the thread over them?

That's the thing: I think there's a misconception going on here about when a tick starts and when one ends, based on the assumption that you can't control how the scheduler queues up functions.

I understand how/why you came to this misconception: it's because there's so much code in SS13 that keeping an accurate accounting of what happens when isn't possible for any one developer. Surely we can both agree that this is a problem for you in reality. However, I'd argue that the reason you think this is a lack of centralized structure in your software, not an inherent problem with the suite itself. Higher-precision timers for world.timeofday would solve this problem because the var is only updated at the start of each tick.

You CAN control the order in which actions are called every tick, because sleep() sorts by order of call and delay delta, so the first proc to sleep for a given delta gets resumed before the second, and so on. The reason you believe you can't control which proc is resumed first is that your project's current structure makes it difficult to control how all of your developers work with it. It's not a primary problem but a secondary one, meaning it's caused by the user and not the suite itself.
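
If that ordering claim holds, a tiny test along these lines (proc names are mine) should always report A before B, because A reaches its sleep() call first with the same delay:

proc/proc_a()
        sleep(10)
        world.log << "A resumed at [world.time]"

proc/proc_b()
        sleep(10)
        world.log << "B resumed at [world.time]"

proc/ordering_test()
        // Both sleep for the same delta; per the claim above, A resumes first
        // because it was the first to call sleep().
        spawn() proc_a()
        spawn() proc_b()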

By creating some kind of high-precision function, rather than a variable or an override structure, you could pull the current high-precision time and compute that value yourself easily:

(getPreciseTime() - world.timeofday).

I really disagree that this is a derail in any shape or form, but rather a suggestion for improving the scope and utility of the feature request.

What you are basically arguing is that any conversation that doesn't conform to your predetermined solution is off topic, and that's just a little too strict. I think it's a way to try to hedge other people out of the discussion who may actually have some decent insight into the problem.

LummoxJR wrote:
Maybe even easier than all: a proc flag that guarantees the proc gets top priority in the scheduler. Then a var is moot.

I think a setting would be useful for this. The scheduler already sorts insertions based on delay; just add a "priority" setting to the sort algorithm.

set priority = 25


The higher the priority, the later it sorts; the lower the priority, the earlier it sorts.
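
If such a setting existed, usage might look something like this. "set priority" is purely hypothetical here, and the helper procs are placeholders.

proc/record_tick_start()
        // placeholder: e.g. stash a high-precision timestamp for "how far into the tick are we?"
        return

proc/run_subsystems()
        // placeholder for the master controller's per-tick work
        return

proc/master_loop()
        set priority = 1              // hypothetical setting: lower value sorts earlier, per the convention above
        while(1)
                record_tick_start()
                run_subsystems()
                sleep(world.tick_lag) // with a low priority value, this should be resumed first next tick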
I'm still set on my idea.

It's the most powerful of all; why specialize when you can genericize? It allows what's asked for AND MORE[tm].

You need to know how far into a tick you are.

Getting a proc called at the very start AND a high-res timer allows Volundr to know how far into the tick they are, lets me set up a system that guarantees our MC never stops firing by tying it into the tick start, and makes it easier for new projects to set up systems for periodic calls or their own event loops.

But that's just me.
Request: A way to know how far into the tick one is

There are 3 solutions:

A var holding the (high-precision) time at which the tick started.

Or! A way to ENSURE a proc is called at the very start of the tick, so it can save its own timestamp.

Or! A proc call that returns the tick completion in the form of a percent.

(Vars aren't the right answer for the last one: either the var gets set too often, creating overhead, or it's calculated on read, and there's a word for that; it's called a proc.)
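
For the last option, the proc might boil down to something like this. This is DM-flavored pseudocode: get_precise_time() and tick_start_time are hypothetical stand-ins for values the engine would have to track internally, both assumed to be in the same units as world.tick_lag (1/10ths of a second).

var/tick_start_time = 0           // the engine would set this at the very start of each tick

proc/get_precise_time()
        // hypothetical high-precision clock; not an existing BYOND proc
        return 0

proc/tick_completion()
        // percentage of the current tick that has already elapsed
        return (get_precise_time() - tick_start_time) / world.tick_lag * 100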

This is in the startup proc of the scheduler... by spawning many ticks in the future, I can pretty much ensure it gets processed early on in the tick.

Not absolutely perfect, but it seems to be good enough.

... scheduler init proc
        updateCurrentTickData()

        for(var/i=world.tick_lag,i<world.tick_lag*50,i+=world.tick_lag)
                spawn(i) updateCurrentTickData()
        while(isRunning)
                // Hopefully spawning this for 50 ticks in the future will make it the first thing in the queue.
                spawn(world.tick_lag*50) updateCurrentTickData()
                checkRunningProcesses()
                queueProcesses()
                runQueuedProcesses()
                sleep(world.tick_lag)

Finally got the dll working and the parameters tuned. Goonstation now runs with a world.cpu around 100 when under heavy load. Client input lag is kind of a thing of the past. Interestingly enough, this has revealed the sources of lag far better than the profiler ever did, as when load goes up, some processes start to lag behind. We discovered some serious misbehavior by atmos and will be fixing that, but it's all thanks to better timing and better knowledge about the elapsed time in the tick.
Glad the DLL is working out. Can you clarify for me exactly what it's doing for you? Seems like if it's doing what you need, then any potential var should be patterned around that.

Of course it'd also be good to share your atmosphere controller info with the other builds, so they can see similar improvements.
In response to Lummox JR
The goon atmosphere controller would be almost completely incompatible with open source variations due to the differences in design and how long they've been developed separately.

His dll and process scheduler are freely available on github though. I linked it earlier in the thread.
Yup - the process scheduler is free and open source, along with the dll code. The main part is a macro that does the extern call.

#define PRECISE_TIMER_AVAILABLE

#ifdef PRECISE_TIMER_AVAILABLE
var/global/__btime__lastTimeOfHour = 0
var/global/__btime__callCount = 0
var/global/__btime__lastTick = 0
#define TimeOfHour __btime__timeofhour()
// Calls the external library (btime.dll on Windows, btime.so otherwise) and
// converts the string it returns into a number.
#define __extern__timeofhour text2num(call("btime.[world.system_type==MS_WINDOWS?"dll":"so"]", "gettime")())
proc/__btime__timeofhour()
        // Only refresh from the (relatively expensive) external call on every
        // 50th read, resetting the counter once a new tick has started.
        if (!(__btime__callCount++ % 50))
                if (world.time > __btime__lastTick)
                        __btime__callCount = 0
                        __btime__lastTick = world.time
                global.__btime__lastTimeOfHour = __extern__timeofhour
        return global.__btime__lastTimeOfHour
#else
#define TimeOfHour world.timeofday % 36000
#endif


And then within each process there is usually a for loop over a list of objects to update or process. Inside the loop, I call a proc on the master controller called scheck(). scheck() checks whether the time allowance for background processing in the tick has passed and, if so, sleeps until the next tick. The master controller uses the precise time provided by btime to calculate the time allowance and also the amount of time elapsed during the tick.
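
Roughly, that pattern looks like the sketch below. This is a simplification of the description above, not the actual goonstation code: the allowance number is made up, tick_start is assumed to be stashed at the start of each tick by the scheduler, and obj/pipe is just an example type. TimeOfHour is the macro from the snippet above.

var/tick_allowance = 70                   // percent of the tick allowed for background work (made-up number)
var/tick_start = 0                        // set at the start of each tick from TimeOfHour (ignoring hour rollover)

proc/scheck()
        // How much of the current tick has been used so far, as a percentage.
        var/used = (TimeOfHour - tick_start) / world.tick_lag * 100
        if(used >= tick_allowance)
                sleep(world.tick_lag)     // past the allowance: yield until the next tick

obj/pipe/proc/process()
        // placeholder for per-object work
        return

proc/process_pipes(list/pipes)
        for(var/obj/pipe/P in pipes)
                P.process()
                scheck()                  // bail to the next tick if we're over budget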
In response to Lummox JR
I've got vgstation using the timing dll now as well, and they're seeing improvements. Unfortunately the time call itself is expensive because the value has to be marshaled as a string. Performance would be an order of magnitude better if this were implemented in the engine. Is there any way you could sneak this into 508? Even just a world.timeofhour var with millisecond precision would be sufficient to start, and a world.elapsedticktime var could come in a later major version...

After some fine tuning, I've got goonstation running quite smoothly now at a tick_lag of 0.5. Occasionally we run across some lag when large parts of the map change, or extremely large explosions occur, but for the most part it's very smooth.
Also, for some reason it wouldn't work unless pomf compiled it themselves on the vgstation host server? I blame weird Windows-revision-specific code generation.
This shouldn't be a var, Volundr, and you know that.

If it were a var, it would have to either be updated 1000 times a second or be calculated on read.

Calculated on read... hmm, there's a word for that. Oh yeah, it's called a proc.

If BYOND adds this, it would have to be a proc.
Having a var calculate on read isn't a problem at all. Most internal vars have to do some kind of lookup or another, and world.fps is effectively calculate-on-read (and on write) as an equivalent to tick_lag.

All we'd really be talking about here is measuring when the tick started with a high-precision timer, and then re-measuring with high precision and taking the difference when the var is read.