ID:1612577
 
(See the best response by Stephen001.)
Trying to get a near-realtime(ish) conveyor belt going, one that isn't choppy and doesn't look laggy. I'm quite big on performance and efficiency, so I did my best to make it work well with the information I had. But looking at the profiler, I can't really judge whether my current values are good or bad.
So I was hoping someone with some understanding of how the profiler works could explain what to look at to see if it's "good" in terms of performance and efficiency, and if my values are fine as they are.

Profiler: http://puu.sh/9O8K9/b2d58b53a7.png
EDIT: Managed to improve it a little; now: http://puu.sh/9Oj4e/a70b3247f6.png

That's 500 objects being moved across a small bit of the map over about 128 conveyor belts (1 tile per conveyor piece) at a near-constant rate.

Could you post the conveyor code? It doesn't seem that bad but there could be a different approach someone can offer.
Is this average?
How did you derive a test of 500 objects across 128 conveyor belts?

Essentially for the numbers to be assessed as "good/bad" we (and you) need to set expectations of what tests constitute "normal use" and "upper limit use" for this functionality.
I've not seen your code, but if you're worried about lag - and since this is conveyor code - I'm going to go out on a limb and say you have loops of some sort?

If so, then you may be solving the problem the wrong way. You're not programming a conveyor track, and it doesn't need to be looping and processing itself. You don't really need any loops if you put the behaviour on the atoms moving onto the belt. All you need is a sort of entered event and an event for when the belt is turned on.

Anything the belt does can be handled this way - and furthermore, it only does things when they're needed.

Even if you don't use any loops, maybe this could help someone else.

P.S.: I can tell by the process names that this is for SS13. Please, please, please don't tie this into "controllers" - those things are literally why SS13 runs like molasses. Nothing needs to be called every 1/10th of a second.
I don't see why the values are as high as they are.

Here's my version of your goal:

atom
    movable
        var/tmp
            conveyable = 1
            convey_time = 0
            convey_dir = 0
            convey_dist = 0

obj
    conveyor
        conveyable = 0
        var/tmp
            list/conveying = list()
            speed = 2

        proc
            Convey()
                while(speed)
                    for(var/atom/movable/o in conveying)
                        if(!o.conveyable)
                            conveying -= o
                            continue
                        if(o.convey_time<world.time||o.convey_dir!=src.dir)
                            step(o,src.dir,speed)
                            o.convey_time = world.time
                            o.convey_dir = src.dir
                            o.convey_dist = speed
                        else if(o.convey_dist<src.speed)
                            . = speed - o.convey_dist
                            step(o,src.dir,.)
                            o.convey_time = world.time
                            o.convey_dir = src.dir
                            o.convey_dist += .
                    if(conveying.len==0)
                        return
                    else
                        sleep(TICK_LAG)

        Crossed(var/atom/movable/o)
            if(o.conveyable)
                conveying += o
                if(conveying.len==1)
                    Convey()
            ..(o)

        Uncrossed(var/atom/movable/o)
            conveying.Remove(o)
            ..(o)


Simple, short, sweet, and fast because there's no unnecessary always-on behavior like your approach has.
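To restate the pattern outside DM: here's a rough Python analogue (purely illustrative - the class and method names are mine, and it drops DM's cooperative sleep) of "the movement loop only exists while the belt has occupants":

```python
# Illustrative analogue of the Crossed()/Convey() pattern above:
# the belt's movement loop is created by the FIRST occupant and
# dies as soon as the belt is empty, so idle belts cost nothing.

class Item:
    def __init__(self, pos):
        self.pos = pos

class Belt:
    SPEED = 1  # tiles moved per pass

    def __init__(self, start, end):
        self.start, self.end = start, end  # belt covers [start, end)
        self.conveying = []
        self.running = False

    def crossed(self, item):
        """Event fired when an item steps onto the belt."""
        self.conveying.append(item)
        if not self.running:  # first occupant kicks off the loop
            self._convey()

    def _convey(self):
        self.running = True
        while self.conveying:  # loop exists only while occupied
            for item in list(self.conveying):
                item.pos += self.SPEED
                if item.pos >= self.end:  # item left the belt
                    self.conveying.remove(item)
        self.running = False

belt = Belt(0, 3)
box = Item(0)
belt.crossed(box)
print(box.pos, belt.running)  # -> 3 False
```

The point is the lifecycle: no timer, no controller, no polling - an empty belt has no loop at all.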

The realtime controllers in SS13's codebase are idiotic, and whoever wrote them needs to not program.
There's a difference here: my code isn't in any way related to SS13 or their code. I've been using their Master Controller method because it seems to work quite well at maintaining performance while dealing with a boatload of crap at the same time.

I would have probably pasted my (a bit shorter) code by now, but I've hit a wall: my function (the conveyor code) is being called approximately 10,000 times per 2 seconds when it shouldn't even be running at all.
I'll post the code and a new profiler log when I've resolved that, though.
Same goes here: please give an indication of what constitutes "normal use" and "max use" tests for your game, as described above. The act of coming up with that alone does you massive favours in terms of how you think about the problem.
Uh, well, I partially managed to resolve that, luckily.
Here is the new profiler: http://puu.sh/9QAm3/15d6d84eaf.png
It looks a lot better than the previous 2, but there's probably something I can be doing better.

The logs are from 288 conveyor belts on a never-ending on state, altogether moving around 306 objects.

An on/off button will be added later, since it'll fold nicely into the construction systems I have planned for another time.
Anyway, code:
/obj/machinery/conveyor/Process_Realtime()
    set background = 1
    ..()
    affecting = loc.contents - src // moved items will be all in loc
    if(affecting.len >= 1) // Is or is above 1. If not, no items are being processed.
        spawn(1)
            var/items_moved = 0
            for(var/obj/A in affecting)
                if(A.loc == loc) // prevents the object from being affected if it's not currently here.
                    var/turf/NewTile = get_step(A, dir)
                    var/counter = 0
                    for(var/atom/movable/A2 in NewTile)
                        if(O)
                            step(A,dir)
                            continue
                        else
                            counter++
                    if(counter >= MAXOBJECTS)
                        break // Too many objects, STAHP.
                    step(A,dir)
                    items_moved++
                    sleep(5)
                if(items_moved >= MAXOBJECTS)
                    break

(PS: MAXOBJECTS is 10)
(PPS: Edited the code a tad; new profiling logs added.)
In response to Stephen001
Stephen001 wrote:
Same goes, please give an indication of what constitutes "normal use" and "max use" tests for your game, as described above. The act of coming up with that alone does you massive favours, in terms of how you think about the problem.


Well, there's not really an indication, I suppose. As long as I can get my code to run as efficiently as possible, I'm quite alright with it. I suppose normal use would be somewhere around the values you can see in my post right above, where it's hitting 1.000 CPU with 100,000 calls, but I'm sure someone can find a flaw in my plan and help me reduce that even further.

TL;DR version: I have no idea; as long as it's as efficient as it can be, I'll go with it.
That's why I want you to take a decision, on what is reasonable. You need to decide "Okay, what do I reasonably expect to see here, in terms of the number of conveyor belts, and the number of objects? What could my players feasibly produce?". Can they put 7.2 million objects (i.e the whole freaking game) onto conveyors? Can they put down thousands of conveyor belts? Is this sane for them to do, and so, reasonable for you to expect to handle?

Otherwise you (and we) are going to be here all day, and I assume you've got the rest of your game to actually make, unless this feature is it, is the whole game.

On an unrelated note, where is O defined for if(O) to work?
In response to Stephen001
Stephen001 wrote:
That's why I want you to take a decision, on what is reasonable. You need to decide "Okay, what do I reasonably expect to see here, in terms of the number of conveyor belts, and the number of objects? What could my players feasibly produce?". Can they put 7.2 million objects (i.e the whole freaking game) onto conveyors? Can they put down thousands of conveyor belts? Is this sane for them to do, and so, reasonable for you to expect to handle?

Otherwise you (and we) are going to be here all day, and I assume you've got the rest of your game to actually make, unless this feature is it, is the whole game.

On an unrelated note, where is O defined for if(O) to work?

I'm not expecting anything above this amount of conveyor belts, really; I think what I've done is slightly overkill, given the time mining would take. My main concern is that when the entire game is complete, the amount of processing required for all these conveyor belts may become a problem down the line, so I want to see if I can perfect it (with your help, that's why I'm here) so that any issue it becomes is minimal.

And looking at it... I got stuck at first, but I also have (WIP) organizers for the conveyor belts, which are referenced by the variable O. This is to make sure things don't get stuck on the organizer, so it ignores the maximum object count that can be on any turf/conveyor.
Can you show us where Process_Realtime() is called?
That'd be the realtime controller thingy I've been trying to make functional (and fast) for some time:

var/global/datum/controller/realtime_controller/realtime_controller //Set in world.New()
var/global/realtime_controller_iteration = 0

datum/controller/realtime_controller
    var/processing = 0
    var/timer = null

    var/machines_cost = 0

datum/controller/realtime_controller/proc/rt_process()
    processing = 1
    spawn(0)
        while(1) //far more efficient than recursively calling ourself
            if(processing)
                timer = world.timeofday
                process_machines()
                machines_cost = (world.timeofday - timer) / 10
                realtime_controller_iteration++
            sleep(3)

datum/controller/realtime_controller/proc/process_machines()
    for(var/obj/machinery/M in machines_realtime)
        if(M)
            M.Process_Realtime()
            sleep(-1)


It's quite a simple thing; works well enough.
Do note that the sleep(3) at the end of rt_process() can't be taken any higher or you'll start noticing the conveyor belts looking like they're lagging. I've carefully tuned it and found that this is the highest setting that still looks right.
This is some pretty good spaghetti spawning you've got going on here. Also realtime_controller_iteration will undergo type-conversion to a float at some point and basically become inaccurate. I dunno if your world is alive long enough for that to matter though.

You essentially want Ter13's solution, basically. If you need the controller mechanism, then drop the sleep out of Convey() and rename to Process_Realtime(). And probably make the sleep() in your controller sleep(tick_lag). Oh and lose sleep(-1).
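On the float point: BYOND stores every number as a 32-bit float, so an integer counter silently stops incrementing once it reaches 2^24 (16,777,216). The same effect can be shown in stdlib Python by round-tripping through a 32-bit float (the helper name is mine):

```python
import struct

def as_float32(x):
    """Round-trip a number through a 32-bit float,
    the way BYOND's num type stores it."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

counter = float(2 ** 24)           # 16,777,216: last step that increments exactly
bumped = as_float32(counter + 1)   # the +1 is rounded away at 32-bit precision
print(bumped == counter)           # -> True: the iteration count stops advancing
```

At sleep(3) per iteration, that's a long-running world before it matters - but it will eventually.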
In response to Stephen001
Stephen001 wrote:
This is some pretty good spaghetti spawning you've got going on here. Also realtime_controller_iteration will undergo type-conversion to a float at some point and basically become inaccurate. I dunno if your world is alive long enough for that to matter though.

You essentially want Ter13's solution, basically. If you need the controller mechanism, then drop the sleep out of Convey() and rename to Process_Realtime(). And probably make the sleep() in your controller sleep(tick_lag). Oh and lose sleep(-1).

Spaghetti code as in it all runs interleaved? It's partially supposed to work like that, mainly because it then processes quicker, so less "lag" can be seen from the conveyors and all of their moving objects. I've tried to lessen the calls by making it only continue if process_machines() returns a value, but that seemed unsuccessful.
Well no, spaghetti as in "throw it all at the BYOND built-in scheduler and pray".

You essentially have a cascading tasks problem.

You inspect each conveyor every ~4 ticks, each conveyor spawning a task. So on the first run through, you spawn 288 task items for the BYOND scheduler to handle. You backlog check on every conveyor Process_Realtime() call, find nothing.

At tick 1, all 288 tasks become active. They each find an object to move, do so, then sleep for 5 ticks.

At tick 3, we spawn another 288 tasks, again checking backlog on each task creation, and again, finding none as unless you're on a Pentium 2 or hosting on 56k, we can probably move 288 objects a step inside of 100 ms, or at least 200 ms which is the max time we've got until those processes are backlogged.

At tick 4, we move another 288 objects around, and sleep for 5 ticks. We now have 576 tasks in the sleep state.

At tick 6, we wake up those 288 first tasks, move another 288 objects around, sleep for 5.

At tick 7, we wake up the controller again. Now, maybe if we're spilling over from tick 6, sleep(-1) actually starts to have a purpose. But only due to the oddball scheduling strategy we've picked. Schedule another 288 tasks. So we end with 864 tasks in a wait state, we're starting to stress the BYOND scheduler's priority calculations a little now, our useful processing time per tick is reduced. (sys time / user time contention, in the UNIX world)

...

Rinse and repeat, until all objects in list from the first call for each conveyor is finally empty, or we've moved 10 in that task, and so, our task-list finally stops ballooning and the scheduler starts to recover.

Average time between object moves to the user? 200 - 400 ms, depending on how the numbers line up. Tasks needed to perform said moves? Maaaaaany.
I suppose as an addendum, this is why your attempts to increase the sleep() within the controller result in poor performance both visually and otherwise. At sleep(4) your scheduling is such that 288 tasks come live from sleep at the same time a newly spawned set of 288 go live. Over a number of iterations this problem exacerbates as everything conveniently lines up, and performance takes a massive dive as we push out tasks horribly.
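To put rough numbers on that cascade, here is a toy Python model. The figures (288 conveyors, 10 moves per task, sleep(5) between moves, sleep(3) controller period) come from this thread; the model itself is a simplifying assumption, not BYOND's actual scheduler:

```python
# Toy model of the cascading-tasks problem: the controller spawns one
# task per conveyor every SPAWN_PERIOD ticks, and each task lives for
# roughly MOVES_PER_TASK * MOVE_SLEEP ticks before finishing.

CONVEYORS = 288       # conveyors polled per controller pass
MOVES_PER_TASK = 10   # MAXOBJECTS moves before a task ends
MOVE_SLEEP = 5        # sleep(5) between moves inside each task
SPAWN_PERIOD = 3      # sleep(3) in rt_process()

def live_tasks(tick):
    """Count tasks alive at `tick`: one batch of CONVEYORS tasks per
    controller pass, each batch alive for its full lifetime."""
    lifetime = MOVES_PER_TASK * MOVE_SLEEP
    batches = sum(1 for start in range(0, tick + 1, SPAWN_PERIOD)
                  if tick - start < lifetime)
    return batches * CONVEYORS

for t in (0, 6, 24, 48):
    print(t, live_tasks(t))  # -> 0 288 / 6 864 / 24 2592 / 48 4896
```

Under these assumptions the task count plateaus at around 4,900 mostly-sleeping tasks, which is why things degrade even though each individual move is cheap.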
I guess I get that part; worse still, I have tick_lag at 0.33, which compounds it.

Although all of these things happen, world CPU never really gets any higher than 5 at most; with all of these conveyors on, it's around 1-2, sometimes 3.

Took some of your info, and combined it with some of my former ideas, would the following code do anything to help with the situation?

EDIT: It did not. This is actually a lot worse and I think I broke most parts of the controller.

var/global/datum/controller/realtime_controller/realtime_controller //Set in world.New()
var/global/realtime_controller_iteration = 0

datum/controller/realtime_controller
    var/processing = 0
    var/timer = null

    var/machines_cost = 0

datum/controller/realtime_controller/proc/rt_process()
    processing = 1
    spawn(0)
        while(1) //far more efficient than recursively calling ourself
            if(processing)
                timer = world.timeofday
                if(process_machines() == 101) // If returns 101, AKA finished. Backlog check and continue.
                    sleep(-1)
                    continue
                machines_cost = (world.timeofday - timer) / 10
                realtime_controller_iteration++
            sleep(3)

datum/controller/realtime_controller/proc/process_machines()
    for(var/obj/machinery/M in machines_realtime)
        if(M)
            M.Process_Realtime()
    sleep(-1) // Small breather before we return finished state, this may help with finishing unfinished tasks?
    return 101 // 101 as in finished.
The problem isn't the controller on its own (although much of that sleep(-1) is bogus there); it's the controller in combination with the conveyor itself that produces the task flooding, owing to the fact that you spawn a loop within each conveyor. You've effectively got nested spawned loops, which makes the BYOND scheduler sad-face.

I strongly suggest you take up Ter13's route, and adjust appropriately for whatever specific requirements you have, like the use of the controller.