In response to Laser50
Laser50 wrote:
I've been using their Master Controller method because it seems to work quite well to maintain performance and deal with a boatload of crap at the same time.

Not to attack you per se, but I've asked various SS13 coders about it and they all just kind of shrug at the subject, since it's either too complicated (arguably convoluted), too much work to change anything, or the epitome of legacy code.

But what exactly do these controllers do well? As far as I can tell, it's creating a pseudo task controller within an environment that already has one.

Furthermore, I'm not sure if you are doing this, but in its original usage every proc you can imagine gets shoved into a controller and loop-checked even if it doesn't need to be. Every one or two tenths of a second, no less.
In response to Jittai
Jittai wrote:
Laser50 wrote:
I've been using their Master Controller method because it seems to work quite well to maintain performance and deal with a boatload of crap at the same time.

Not to attack you per se, but I've asked various SS13 coders about it and they all just kind of shrug at the subject, since it's either too complicated (arguably convoluted), too much work to change anything, or the epitome of legacy code.

But what exactly do these controllers do well? As far as I can tell, it's creating a pseudo task controller within an environment that already has one.

Furthermore, I'm not sure if you are doing this, but in its original usage every proc you can imagine gets shoved into a controller and loop-checked even if it doesn't need to be. Every one or two tenths of a second, no less.

Oh, no offense taken at all. I think they're using it as a way to process everything without turning everything into its own loop. It's more structured to work with, and because of the delay in between passes, everything has time to finish itself and might even have some spare time left over.
I can't give you a definitive answer, though. It works for me, and I don't really change it besides tweaking some smaller bits of it.
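Roughly the shape I mean, as a sketch rather than the actual code (the type paths, proc names, and timings here are just for illustration):

    datum/controller/master
        var/list/processing = list()    // machines registered with the controller

        proc/Run()
            set background = 1          // let BYOND spread this proc's work across ticks
            while(1)
                for(var/obj/machinery/M in processing)
                    M.process()         // one pass over everything, no per-machine loops
                sleep(10)               // pause between passes so everything can settle

    obj/machinery
        proc/process()
            return                      // per-pass work for one machine; details omitted

One loop drives everything, which is why it feels more structured to me, anyway.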
--------------------------------------

And toward Stephen:
I guess that'll be the right thing to do, although if this were to stack up, would it be visible anywhere in the game? E.g. the world.cpu variable, or anywhere else?
I left it running while picking up my diploma (yay!), and we're about 53,000 loops in, with a total of 14,000,000 conveyor calls; world.cpu is still where it was when I left.
The machinery_cost variable is also still displaying 0 at all times. Not sure if I broke that, but it hasn't changed.

But knowing how to spot that would be great, since it'd make it easier for me to look into the issue.

(Just because, why not, here's the profiler: http://puu.sh/9Rbcb/cf4c6a3a4f.png)
You should test the CPU with set background off. set background is more or less what the controllers are doing - that is, reshuffling tasks to avoid "lag". With it on, it skews profiler readings.
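Something like this is what I mean - where the setting lives and why you'd comment it out while profiling (the conveyor proc and var names are made up for the example):

    obj/machinery/conveyor
        var/operating = 1

        proc/do_convey()
            return                      // move items along; details omitted

        proc/convey_loop()
            // set background = 1       // leave this off while profiling; with it on,
            // the work gets spread across ticks and the per-call cost reads low
            while(operating)
                do_convey()
                sleep(world.tick_lag)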
The only thing I've given set background = 1 is the controller itself, and since it doesn't inherit into procs called by it, it's not in the conveyor code. (Not sure if the code I showed on the forums still shows set background = 1 enabled, but it's not there anymore in the tests/profiler I provided in the post above.)
In response to Laser50
Best response
You'll not really see it in CPU; the number of tasks is largely a product of the number of active conveyors, so in terms of its own CPU use the issue basically ramps up to some factor of the active conveyor count and then stays there.

Where you will see it is in client responsiveness in networked scenarios, with map threads off. The more clients (plus the more other stuff you've got going on), the more network I/O has to contend with this mass of tasks in your backlog checks, and the lower the actual CPU throughput you'll manage. At that point it depends on your OS: on UNIX you'll see very large network I/O CPU times compared to what you'd expect, while on Windows I think the process counters exclude it, so world.cpu wouldn't see it but procmon would.

It's not that the tasks are active, it's that they're there at all, in such numbers.

This is also another good reason to use Ter13's solution, in combination with your controller if you fancy (with suitable loop exit conditions in his code). You'd actually be using a controller properly in that instance.

At the moment, you're violating the design rationale for even having a controller ("I think they're using it as a way to process everything without turning everything into its own loop") by spawning a loop in each conveyor's process_realtime call. Because of that, it's not processing in real time, as the proc name suggests; it's actually deferring the work.
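To make the contrast concrete, here's a rough sketch of the two shapes I'm describing - names like convey_step, operating and active_conveyors are illustrative, not your actual code and not Ter13's:

    // The current shape: each call spawns a fresh loop per conveyor, so every
    // controller pass adds more sleeping tasks to the scheduler's backlog.
    obj/machinery/conveyor
        var/operating = 1

        proc/convey_step()
            return                      // move items one step; details omitted

        proc/process_realtime()
            spawn()
                while(operating)
                    convey_step()
                    sleep(1)

    // The shape a controller is meant to give you: one loop doing the per-tick
    // work for every conveyor directly, so the task count stays at one no
    // matter how many conveyors are active.
    datum/controller/conveyors
        var/list/active_conveyors = list()

        proc/Run()
            set background = 1
            while(1)
                for(var/obj/machinery/conveyor/C in active_conveyors)
                    if(C.operating)
                        C.convey_step()
                sleep(1)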
In response to Stephen001
Stephen001 wrote:
I strongly suggest you take up Ter13's route, and adjust appropriately for whatever specific requirements you have, like the use of the controller.

<3
I guess that would be for the best, although I won't forget what I learned here, and I'll keep seeing whether I can improve my current code based on your last posts about overloading the task scheduler, if I manage, that is.

Other than that, thanks a lot for your input, guys!

The vote's gonna go to Stephen for his continued input, since that seems fair.