In response to Marekssj3
When you're only reorganizing your code to make it smaller, like rewriting

proc/Foo(x)
if(x)
return 1

as
proc/Foo(x) if(x) return 1


you're doing nothing at all except shaving a few irrelevant bytes off your .dm file, at the cost of making it less readable. This is something that just is not worth it in the long run.
Agreed, but I do prefer single-line if statements if the block is only a single instruction. Especially when that instruction is a return.

Minimizing scope traversals and using || and && operators properly is really solid, though.

Also, ordering || and && operations is really important. Generally you want the heaviest operation last, so short-circuiting can skip it.

So like:

if(canAct() && !dead)
    //do something


should be:

if(dead || !canAct()) return
//do something


That's because it's preferable to check the cheap dead variable first and bail out early: when dead is true, canAct() is never invoked at all. The short-circuiting of the || and && operators is a great way to skip unneeded instructions and speed up code.
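The same ordering principle can be sketched in Python, whose `and`/`or` short-circuit just like DM's `&&`/`||` (`can_act` here is a hypothetical stand-in for an expensive proc call, not anything from the thread's actual code):

```python
calls = {"can_act": 0}

def can_act():
    # hypothetical stand-in for an expensive proc call
    calls["can_act"] += 1
    return True

def act(dead):
    # cheap variable checked first: can_act() is skipped entirely when dead
    if dead or not can_act():
        return "bailed out"
    return "acted"

print(act(dead=True))    # bails without ever calling can_act()
print(calls["can_act"])  # 0
```

Reversing the order (`not can_act() or dead`) would pay the call cost on every check, even for dead mobs.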
To avoid creating extra code blocks, you can do stuff like this:
health <= 0 && Die()

// instead of
if(health <= 0) Die()

I first saw this done in Lummox JR's Runt source a few years ago. Yeah, the compressed version.

Similarly,
proc/Foo(x)
if(x)
return 1

// can be written
proc/Foo(x) return x && 1

// or
proc/Foo(x) return !!x
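One subtlety with those two rewrites, sketched in Python (where `and` and `not` behave analogously to DM's `&&` and `!`): `return x && 1` hands back the falsy value of x itself when x is false, while `return !!x` always normalizes to 1 or 0. I believe DM's `&&` returns its operand values the same way, but treat the exact DM semantics as an assumption here:

```python
def foo_and(x):
    # analogue of: proc/Foo(x) return x && 1
    return x and 1

def foo_notnot(x):
    # analogue of: proc/Foo(x) return !!x
    return int(not not x)

print(foo_and(5))        # 1
print(foo_and(None))     # None -- the falsy x itself, not 0
print(foo_notnot(None))  # 0
```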
In response to Kaiochao
Kaiochao wrote:
> To avoid creating extra code blocks, you can do stuff like this:
> health <= 0 && Die()
>
> // instead of
> if(health <= 0) Die()
>
> I first saw this done in Lummox JR's Runt source a few years ago. Yeah, the compressed version.

Once again, at the expense of readability. Something like that should not be a bottleneck, so there is no reason to do it there. Even in a bottleneck, an if statement is hardly so process-heavy that it contributes notably to how quickly or slowly the code runs.
Agreed with pop. Understanding short-circuiting is still worthwhile, though. The bottleneck in the line Kaiochao wrote is ironically the proc-call overhead of invoking Die(), so the scope traversal is a non-issue. More important is understanding that && and || short-circuit, potentially skipping a large number of heavy instructions and sometimes pre-empting runtime errors.
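The "pre-empting runtime errors" part deserves a concrete sketch. In Python (names here are hypothetical, but the guard works the same way as a DM null-check like `M && M.name`):

```python
class Mob:
    def __init__(self, name):
        self.name = name

def safe_name(mob):
    # mob.name is never evaluated when mob is None, so the short-circuit
    # pre-empts an AttributeError (the analogue of a null deref runtime in DM)
    return mob and mob.name

print(safe_name(Mob("guard")))  # guard
print(safe_name(None))          # None, no runtime error
```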
Yeah, if(a) b is equivalent to a && b for short-circuit logic purposes, so the only reason to use the && form is if you're compressing code. Except in Perl, where such things are considered a high art for some reason.
Ter13 said:
Setting the return value can in some cases be faster than directly invoking return. In other cases it can be slower, though these cases have nothing to do with the return value. It has to do with scope traversal and internal housekeeping DM has to do.

I have more to learn regarding computer science so I'll take your word for it. (I've been slacking honestly but this encouraged me to get back to reading.)
IMO learning higher level math is just as important as learning how to program
In response to Zagros5000
Zagros5000 wrote:
IMO learning higher level math is just as important as learning how to program

Maybe some calculus, series, and if you're ambitious some differential equations, but those are not terribly advanced to be honest. Once you move past those you're getting into the realm of leaving things to the experts (cryptography, PRNGs, fast computational algorithms like fast matrix multiplication, etc.).

I don't see how even basic graduate level math courses like topology, abstract algebra, and analysis are generally applicable to day-to-day programming (though there are definitely some cases, especially for analysis).
In response to Popisfizzy
Have to agree with you on that. Everything up to and including the calculus level can be quite beneficial to programming, but anything more advanced than that probably isn't.
I have never had to differentiate or integrate anything when it comes to programming.
In response to Doohl
Doohl wrote:
I have never had to differentiate or integrate anything when it comes to programming.

Few programmers will need to, especially if they're just using algorithms implemented by someone else. But for some things—numerical methods, for example—you'll often need not just calculus, but analysis in general, especially for things like working out convergence criteria, rates of convergence, complexity, and so on.
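As a concrete (and entirely hypothetical) instance: Newton's method for a square root is only a few lines of Python, but knowing that its error roughly squares on each step, and under what conditions that's guaranteed, is analysis rather than programming:

```python
import math

def newton_sqrt(a, x0=1.0, steps=6):
    """Newton iteration x -> (x + a/x) / 2, converging to sqrt(a)."""
    x = x0
    errors = []
    for _ in range(steps):
        x = (x + a / x) / 2.0
        errors.append(abs(x - math.sqrt(a)))
    return x, errors

x, errors = newton_sqrt(2.0)
print(errors)  # each error is roughly the square of the previous one
```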
Calculus can be surprisingly useful for a number of things. For instance, one of the things I'd like to do with the BYOND code for map chunks is take rotations into account with animations, and right now I'm not doing that. Calculus is what I'll need to get the min and max positions of the visual corners of the icon during each stage of the interpolated transform.

That said, the math for such a thing is really hairy. The final equation for the x and y of each corner has six terms, combining t, sin(m+nt), cos(m+nt), t*sin(m+nt), t*cos(m+nt), and a constant. Calculating the min and max on that is somewhat ugly, since the first derivative is of this form:

a*t*sin(m+nt) + b*t*cos(m+nt) + c*sin(m+nt) + d*cos(m+nt) + e

Solving that equation for zero along the interval 0 ≤ t ≤ 1 is non-trivial. I could just use interval math to handle this without calculus, but that won't produce as tight a rectangle.
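For what it's worth, a third option is brute numerical sampling. This Python sketch uses made-up placeholder coefficients (not the actual transform equation) for one corner coordinate built from the kinds of terms described:

```python
import math

# placeholder coefficients -- NOT the real animation equation
a, b, c, m, n = 2.0, 1.5, 0.5, 0.3, 4.0

def corner_x(t):
    # one coordinate combining terms of the form t, t*cos(m + n*t), and a constant
    return a * t + b * t * math.cos(m + n * t) + c

# dense sampling over 0 <= t <= 1 instead of solving f'(t) = 0 exactly
samples = [corner_x(i / 1000.0) for i in range(1001)]
lo, hi = min(samples), max(samples)
print(lo, hi)  # approximate bounds on the corner's travel
```

This gives a tighter rectangle than interval math, at the cost of being approximate rather than exact.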
It's more about the problem-solving skills you develop. Either way, my profs always tell me that lin alg and discrete math are a must for comp sci students who wanna do anything with their degrees. They didn't say much about calc, but I'm pretty sure we're just forced to take it to develop problem-solving skills.
I've always wanted to go and learn more than + * / and -.

I've been interested lately in trying to make map generators. I've been craving board games quq
In response to Zagros5000
Zagros5000 wrote:
either way my profs always tell me that Lin alg and discrete math is a must for comp sci students who wanna do anything with their degrees

Well, linear algebra is so ridiculously useful that I have a hard time thinking of a significant math field in which it isn't used all the time. Maybe formal language theory? At the same time, linear algebra is so very basic that I wouldn't call it advanced math whatsoever. It's actually one of the things I'd like to write a tutorial on (specifically, a tutorial series on matrices and vector spaces) if there were better math markup on these forums.

As for discrete math, that term is so broad that I can't comment.
In response to Popisfizzy
It's considered one of the harder first-year math courses at my school. As for discrete math, I take it second year, so I don't know much about it either except the description they give: elementary number theory (gcd, lcm, the Euclidean algorithm, congruences, the Chinese remainder theorem) and graph theory (connectedness; complete, regular, and bipartite graphs; trees and spanning trees; Eulerian and Hamiltonian graphs; planar graphs; vertex, face, and edge colouring; chromatic polynomials).

I still have 3 more years before I can judge what I'm gunna end up using or not tho
In response to Kozuma3
Kozuma3 wrote:
I've always wanted to go and learn more than + * / and -.

Well, there's some really cool algebras that use those as their only operations. For example, the algebra of polynomials over the real numbers is a graded, infinite-dimensional algebra that is also a unique factorization domain, meaning it has prime elements (kind of like the prime numbers among the natural numbers). As such, the main operations are addition (and consequently subtraction) and multiplication. You can also do division on them (specifically, Euclidean division).

Honestly, polynomials are ridiculously cool in ways they simply do not really explain until you start learning more about abstract algebra.
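For instance, the Euclidean division mentioned above (divide with remainder, just like for integers) can be sketched in Python, representing a polynomial as a list of coefficients from lowest to highest power:

```python
def poly_divmod(p, d):
    """Euclidean division: returns (q, r) with p = q*d + r and deg(r) < deg(d).

    Coefficient lists run from lowest to highest power of x.
    """
    r = list(map(float, p))
    q = [0.0] * max(len(p) - len(d) + 1, 1)
    while len(r) >= len(d):
        shift = len(r) - len(d)
        coef = r[-1] / d[-1]       # eliminate the current leading term
        q[shift] = coef
        for i, b in enumerate(d):
            r[shift + i] -= coef * b
        r.pop()                    # leading coefficient is now zero
    return q, r

# (x^2 + 3x + 2) / (x + 1) = x + 2, remainder 0
q, r = poly_divmod([2, 3, 1], [1, 1])
print(q, r)  # [2.0, 1.0] [0.0]
```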
I lost you at algebra.
In response to Kozuma3
Kozuma3 wrote:
I lost you at algebra.

So, most people know what a polynomial is, usually in terms of polynomial functions. E.g., something like f(x) = 5x^3 + x^2 - 8 or something like that. Here, x is taken to be a real number that can freely vary, so you end up with a function.

But something else you can also do is treat x as an indeterminate, that is, as essentially a label. And in that case, x^0, x^1, x^2, etc. are all distinct indeterminates, i.e. distinct labels. Thus, you can just consider arbitrary elements like A = a_0x^0 + a_1x^1 + a_2x^2 + a_3x^3 + ... + a_nx^n + ... for real numbers a_0, a_1, a_2, a_3, ..., a_n, ....

This leads to a natural generalization of addition. Just like when you have f(x) = x^3 + 2 and g(x) = 3x^3 + 4x - 1 it makes sense to have (f+g)(x) = 4x^3 + 4x + 1 (that is, adding terms with matching powers of x), you can do likewise with these "infinite polynomials": (a_0 + a_1x^1 + a_2x^2 + a_3x^3 + ... + a_nx^n + ...) + (b_0 + b_1x^1 + b_2x^2 + b_3x^3 + ... + b_nx^n + ...) = (a_0 + b_0) + (a_1 + b_1)x^1 + (a_2 + b_2)x^2 + (a_3 + b_3)x^3 + ... + (a_n + b_n)x^n + ..., and it works very similarly to addition of, say, real numbers or integers (though, because of it being infinite-dimensional, you have to be careful about certain things).

Likewise, just like for things like f(x) = 2x^2 and g(x) = x + 1 you can define (fg)(x) = 2x^3 + 2x^2 by multiplying the terms pairwise and adding the powers of x, you can likewise do for these polynomials: (a_0x^0 + a_1x^1 + ... + a_nx^n + ...)(b_0x^0 + b_1x^1 + ... + b_nx^n + ...) = a_0b_0 + (a_0b_1 + a_1b_0)x^1 + (a_0b_2 + a_1b_1 + a_2b_0)x^2 + ... + (a_0b_n + a_1b_{n-1} + ... + a_{n-1}b_1 + a_nb_0)x^n + .... Once again, it works in a relatively familiar manner (though you still need to be careful about some things when doing it).

There are lots of other cool things about polynomials, but this is just a presentation of the algebra of polynomials (more specifically, formal power series) over the real numbers.
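Those two definitions translate almost line-for-line into code. A minimal Python sketch over truncated coefficient lists (index = power of x, so [2, 0, 1] means 2 + x^2):

```python
def series_add(a, b):
    # termwise: coefficient of x^n is a_n + b_n
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    return [x + y for x, y in zip(a, b)]

def series_mul(a, b):
    # Cauchy product: coefficient of x^n is a_0*b_n + a_1*b_(n-1) + ... + a_n*b_0
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

f = [2, 0, 0, 1]   # f(x) = x^3 + 2
g = [-1, 4, 0, 3]  # g(x) = 3x^3 + 4x - 1
print(series_add(f, g))  # [1, 4, 0, 4], i.e. 4x^3 + 4x + 1

p = [0, 0, 2]      # 2x^2
q = [1, 1]         # x + 1
print(series_mul(p, q))  # [0, 0, 2, 2], i.e. 2x^3 + 2x^2
```

True formal power series have infinitely many coefficients, of course; truncated lists are only a finite stand-in for illustration.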