 
(See the best response by Clusterfack.)
I was recently browsing this topic (http://www.byond.com/forum/?post=2049761) when I read that calling procs in DM actually causes performance overhead. Now, I have been aware that calling procs with named arguments causes overhead, according to the documentation anyway, but the proc calls themselves?

My approach to DM code has always been modular and logical, with little to no duplicated code: using procs to break large chunks into smaller ones, and so on. Now that approach has been thrown into doubt.

My question is basically whether proc calls make a significant enough impact on performance to force optimization or duplication of lines. As a follow-up, I assume this only applies to user-defined procs and not to built-in procs like image()?

How much is too much when it comes to procs?
As a follow-up, I assume this only applies to user-defined procs and not to built-in procs like image()?

If I'm not mistaken, the overhead applies to all procs -- however, built-in procs are inherently faster than user-defined procs.

I wouldn't say there is such a thing as "too much" when it comes to procs, as long as your design is sound. Performance-wise, I wouldn't worry too much.
Literally, don't worry about it until it becomes a problem.
Everything has overhead: creating objects (datums, lists, images, matrices, etc.), calling procs (user-defined, built-in, and ..()), declaring variables, reading variables (including the implicit src), assigning variables, if-statements. The list goes on, which is exactly why it's not worth worrying about any one of them.

It's not a problem unless you let it be a problem, in which case, you either have some lessons to learn about optimization, or you should use a different engine.
As Super Saiyan X said, don't worry about it.

If your code is actually beginning to run too slowly, there are almost certainly better places to optimize than proc call overhead. Proc call overhead becomes a problem when you're doing millions of proc calls, and when you are, you're usually doing something very advanced like lighting or AI.
Best response
A simple phrase has always helped me out with this:

"The profiler is king"

And yes, anything can be worth optimizing if you're performing that operation enough times in a small time span. But if you're trying to optimize before profiling, based on the assumption that proc call overhead is significant, don't.

Profile which procs are costly by self CPU, track them down to their origin in the code using total CPU, and then decrease the unnecessary looping (i.e. reduce nested loops causing n^2 or higher cost), lower the scope (i.e. loop through a small global list rather than world), or reduce repetition (i.e. stop calculating the same result many times over unnecessarily).
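
To illustrate the "lower the scope" point, here's a minimal sketch (the /mob/player type, its vars, and the players list are all just illustrative names, not anything from the posts above):

    // Costly: scans every mob in the world on every pass.
    proc/regen_tick_world()
        for(var/mob/player/P in world)
            P.health = min(P.health + 1, P.max_health)

    // Cheaper: maintain a small global list and loop over only that.
    var/list/players = list()    // add in Login(), remove in Logout()

    proc/regen_tick_list()
        for(var/mob/player/P in players)
            P.health = min(P.health + 1, P.max_health)

The work per player is identical; the savings come purely from not asking the engine to walk the entire world to find them.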

As to your extremely specific question: while it varies from system to system, the so-called "proc call overhead" is about four orders of magnitude (roughly 0.0001x) lower than a single call to view(), and on the same order of magnitude as simple operations such as +/-. Hopefully that helps put into perspective how wasteful it is to try to optimize based on such a general principle.
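
If you want to put rough numbers on that yourself, here's a timing sketch under stated assumptions (the proc names are made up for illustration, and the built-in profiler will give you far better data than this):

    proc/do_nothing()
        return

    proc/compare_overhead()
        set background = 1    // yield periodically so the long loops aren't flagged as runaway

        // Time 1,000,000 user-defined proc calls.
        var/start = world.timeofday
        for(var/i in 1 to 1000000)
            do_nothing()
        world.log << "proc calls: [world.timeofday - start] deciseconds"

        // Time 1,000,000 simple additions for comparison.
        start = world.timeofday
        var/x = 0
        for(var/i in 1 to 1000000)
            x += 1
        world.log << "additions: [world.timeofday - start] deciseconds"

Note that world.timeofday only has 1/10-second resolution and backgrounding adds its own noise, which is why huge loop counts are needed and why the profiler remains king.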
Thank you, everyone. I think my question is very well answered at this point, specifically with the information provided by Clusterfack. I'll bookmark this thread for future reference, too.
Built-in procs--like findtext() for instance--are compiled as instruction codes and are basically as fast as you can get. Datum procs, like icon.Shift(), are called as actual procs, and they incur proc call overhead. So do user-defined procs.

Proc call overhead is kept as low as possible, but it is not insignificant. If you're calling a proc that does little else but calculate a value, you're better off doing it in a #define so it can be inlined. Inline code will avoid the hassle of setting up a new proc context, beginning the proc, and handling the proc teardown.
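
For instance, a minimal sketch of the difference (the names here are illustrative, not from Lummox JR's post):

    // As a proc, every call pays the context setup/teardown cost:
    proc/sqr(n)
        return n * n

    // As a macro, the preprocessor pastes the expression inline at each call site:
    #define SQR(n) ((n) * (n))

    // Usage looks the same at the call site:
    proc/dist_squared(x1, y1, x2, y2)
        return SQR(x2 - x1) + SQR(y2 - y1)

The parentheses around each macro argument matter: since the expression is pasted textually, an unparenthesized argument like x2 - x1 could otherwise change the math.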
In response to Lummox JR
Lummox JR wrote:
Built-in procs--like findtext() for instance--are compiled as instruction codes and are basically as fast as you can get. Datum procs, like icon.Shift(), are called as actual procs, and they incur proc call overhead. So do user-defined procs.

Proc call overhead is kept as low as possible, but it is not insignificant. If you're calling a proc that does little else but calculate a value, you're better off doing it in a #define so it can be inlined. Inline code will avoid the hassle of setting up a new proc context, beginning the proc, and handling the proc teardown.

Ah, good to know. I usually keep my procs pretty self-contained, so long as they're useful and cut down on clutter. I do need to look into inlines a bit more, since I've never really had a need for them before making this topic.
In response to ForwardslashN
Be careful how you use them. They can make debugging a painful process.
In response to FKI
FKI wrote:
Be careful how you use them. They can make debugging a painful process.

Quite true. In fact, I often run into trouble figuring out which code in BYOND itself was running when I look at a trace, because the compiler inlined something. For instance, one of the major functions called by SendMaps()--which is itself a fairly large function--gets inlined because it's called from only one place.