It's more of a semantics complaint. Anything that calculates on read should be a proc.

(either that, or expose getters and setters to dm so we can make calculate-on-read/write)

Exposing getters and setters would be much uglier than you think. I've never come up with a way to do that that didn't threaten to be a performance killer.
It can be done at compile time.

The way .NET does it is to compile variables with getters or setters into calls to _get_varname() and _set_varname(value) (or something to that effect).

It's really just proc shorthand, for syntax and code-style reasons.
I'd be behind that provided proc call overhead wasn't currently so monstrous in DM. It's absolutely a dealbreaker at the moment. Well, for widespread use of that pattern anyway.
It would be optional; only vars that used it would convert to get/set procs.

I suppose the only hard part comes in when you talk about overriding vars, but that shouldn't be too hard to figure out.

They would also have to actually hold data, unlike in .NET, because we don't have private variables, so encapsulation can't be enforced any other way. But you would just exclude the getter and setter procs themselves from the var-to-proc mapping.
In response to Lummox JR
That's what I figured. It's also how I wrote my 'hack', as a macro that looks like a global var but calls a proc when compiled.

What is the level of effort we're talking to add this elapsed time var?
In response to Ter13
Your comment about the proc call overhead got me looking into that area to see if there are any optimizations I may have overlooked.

I did find one or two really minor things (like some redundant ifs that could be grouped into a single if), but I suspect the lion's share of the blame goes on the value heap. It's a special struct designed to hold the proc's args, vars, and stack, and when it grows past a certain default it will use malloc(), realloc(), and free(). I suspect that an earlier optimization I did for lists--in which they try to fetch pre-allocated blocks instead of allocating whenever they can--may be of help. The trick is, I don't think I can use the exact same setup I had before because this struct seems to have some odd size requirements. I'm gonna have to study this a lot more.

There's also quite a lot of clearing of vars at proc end, and I'm starting to wonder if the function call behind that should be forcibly inlined (assuming the compiler isn't doing so already), at least in a few cases here, because in theory that could shave time off that would ordinarily be used for the function call overhead. There would still be a function call, but the current method actually uses two.
Instead of guessing, profile it!

In your debugging or testing build, make some function that stores a counter in a global variable and echoes the difference.

It's super easy; you can just slap it between every line and give it a string arg to be echoed out as well.

As for proc calls: (some code (like verbs) omitted, profiled procs intact)

while (1) {
    procspeedtest();
    runcount++;
    if ((runcount % 10000) == 0) {
        sleep(1);
    }
}

proc/procspeedtest() {
    nullproc();
}

proc/nullproc() {
    return;
}


The results:

                       Profile results (total time)
Proc Name                   Self CPU    Total CPU    Real Time        Calls
-----------------------    ---------    ---------    ---------    ---------
/mob/proc/procspeedtest        2.330        2.806        3.258      2830000
/mob/proc/nullproc             0.353        0.443        0.928      2830000


The difference between the two really shows the overhead.

The total time vs. real time also shows the overhead. Background is disabled, and so is world.loop_checks, so in **THEORY** total time and real time shouldn't stray; *technically* only sleeps should be causing that.



This really gets interesting when you think about it in terms of time per call, and compare that to your processor's clock rate.

3.258 / 2830000 = 0.000 001 151 236...
So about 1.151 microseconds per call. I have a 3GHz processor, so this means each procspeedtest() call took roughly 3,500 processor cycles.
I absolutely do intend to profile this when I get to it in earnest. For now it's easiest for me to simply do an overview and get a glimpse of where things stand.
I've been looking into the proc overhead issue to see if there's any hay to be made there. The short answer at this time: not yet. The work being done on proc entry/exit is very necessary, so my main hope there is to be able to shave some time off of minor operations, such as Value_Clear(). (That would have across-the-board impact if I could make it any faster, but my experiments so far haven't borne any fruit.)
bump for 510 (the initial requests)
bumping by Lummox's request from reddit:

don't forget to put the tick completion var on the 510 todo list

(hooray!)

YESSSS
This is implemented and will go in 510. I'll close this request once I nail down the full version number.
Lummox JR resolved issue with message:
The new world.tick_usage var tells what percentage of the current server tick has been used up. Except when this comes from a player command or some other kind of "instant" event, this happens before any maps are sent to the players.