I recently introduced myself to render_source/target while trying to implement basic particle systems, to figure out what a nice native way of doing them might look like. The most basic particle system I came up with was this:
https://imgur.com/xgYB1Z2

This is a basic emitter with:
Direction: The direction that particles are emitted in.
Emit frequency: Number of particles emitted per second of real time.
Lifetime: How long the particles exist.

For each particle, this is the data that differs between them:
Distance: How far to go.
Spread: How far off the emit direction to go.
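
In rough outline, the emitter and per-particle data boil down to something like this (var names are illustrative, not my exact code):

/obj/emitter
    var/direction = 0          // degrees; the direction particles are emitted in
    var/emit_frequency = 1000  // particles emitted per second of real time
    var/lifetime               // how long each emitted particle exists

/obj/particle
    var/distance               // how far this particle travels (generated per particle)
    var/spread                 // how far off the emit direction it goes (generated per particle)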

That data is generated per-particle using datums I called generators. They generate values.

In the video, the distance and spread are provided by a specific type of generator: a random number generator (as opposed to a constant or time-dependent one), with these properties:
Interval: Lower and upper bounds limiting the possible generated numbers.
Interpolator: Controls the distribution of the random numbers generated.

Internally, random number generators are fed with rand(), which returns a random number between 0.0 and 1.0. The interpolator re-maps that value onto the interval, so that 0.0 becomes the lower bound and 1.0 the upper bound; how the values in between are mapped depends on the desired probability distribution. A linear interpolator gives uniform probability. A different interpolator might approximate a Gaussian distribution, clustering values around a chosen point.

The random number generators in the video use a linear interpolator, so values are uniform within their intervals: the distance ranges from 200 to 300 pixels, and the spread from 0 to 30 degrees.

It's effectively just this: lerp(lower, upper, rand()).
But, to make it flexible and each part substitutable, each part (generator, interval, interpolator) is its own datum that can be swapped out for any other.
Swappability like this is what object-oriented programming is all about, so I'd hope the native particle system is at least this flexible.
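
In sketch form, those pieces look something like this (heavily simplified, with the interval folded into a pair of vars, and the names differ from my actual code):

/datum/interpolator
    proc/map(lower, upper, t)            // t is the 0..1 value fed in from rand()

/datum/interpolator/linear
    map(lower, upper, t)
        return lower + (upper - lower) * t   // plain lerp: uniform distribution

/datum/generator
    proc/generate()                      // subtypes decide how a value gets made

/datum/generator/constant
    var/value
    generate()
        return value

/datum/generator/random
    var/lower = 0
    var/upper = 1
    var/datum/interpolator/interpolator = new /datum/interpolator/linear
    generate()
        return interpolator.map(lower, upper, rand())

Swapping the interpolator (or the whole generator) for another subtype changes the distribution without touching anything else.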

So, it emits particles that move their generated distance away, in their generated direction, during their lifetime. But also:
Their transparency is faded in and out in thirds, with easing on both ends.
The movement is eased with QUAD_EASING to give it the appearance of drag.

And it's all done using animate(), so there's no physical motion.
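
For one particle, the whole thing is roughly a chain like this (simplified, and the easing used on the fades here is just illustrative):

/obj/particle/proc/play(angle, dist, lifetime)
    // final offsets from the generated direction (degrees) and distance (pixels)
    var/dx = sin(angle) * dist
    var/dy = cos(angle) * dist

    // movement over the whole lifetime, eased to fake drag
    animate(src, pixel_x = dx, pixel_y = dy, time = lifetime, easing = QUAD_EASING)

    // transparency fades in and out in thirds, running in parallel with the movement
    alpha = 0
    animate(src, alpha = 255, time = lifetime / 3, easing = SINE_EASING, flags = ANIMATION_PARALLEL)
    animate(time = lifetime / 3)                                   // hold fully opaque for the middle third
    animate(alpha = 0, time = lifetime / 3, easing = SINE_EASING)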
Every particle is created (with new) and deleted (with del), so there's no object pooling (yet?).
It's running at world.fps = 60, and the emitter is emitting 1,000 particles per second.
On my PC, world.cpu stays below 3%.

Most of that CPU usage seems to be just from the client-server communication.
According to task manager, Dream Daemon uses 0% CPU until I join it, when it jumps up to a little under 1% (quad core, so world.cpu multiplies that by 4).
Also according to task manager, Dream Seeker comes in at 11% with 25% GPU usage.
I'm all for offloading work from the server to the client. This just shows that's working.

But, it would be better if there were only one object the client needed to know about: the emitter. That's where the somewhat-recently-announced particle system feature will hopefully come in; we'll see.

But wait, there's more. I mentioned this led me to render_source/target.
Here's what 10 million particles per second looks like:
https://imgur.com/6BumvKm

This came from wanting to add more emitters to the scene. How?

The first obvious thing to try was to just create more emitters with random positions and random directions. This not only meant that the CPU usage of each emitter would stack up, but also that there would be a lot more distinct particles onscreen. That's bad, but it would probably be the only way to have multiple different emitters in the scene. What about multiples of the same emitter?

The next obvious thing to do was to add invisible anchor objs around the scene with random positions and random directions, and add them all to the vis_locs of a single emitter in the void. That worked: the single emitter was duplicated across all of them. However, it still performed really badly, and I'm not sure why. I was hoping to get a "GPU instancing" effect just from that.
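
Roughly, that attempt amounted to this (a simplified sketch, not the exact code, and it skips the random directions):

/proc/scatter_emitter(obj/emitter/E, count)
    // e.g. scatter_emitter(new /obj/emitter(null), some_count), with the emitter itself in nullspace
    for(var/i in 1 to count)
        var/turf/T = locate(rand(1, world.maxx), rand(1, world.maxy), 1)
        var/obj/anchor/A = new(T)    // an anchor with no icon of its own
        E.vis_locs += A              // the emitter is drawn at every atom in its vis_locs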

And finally I tried out render sources. In the video, the "one true emitter" render source is at the bottom-left. Midway through, I exploit a bug (todo: report it) where click-and-holding Dream Daemon's title bar causes animations to stop in Dream Seeker, to show all the anchor objs. All those blobs are some of the 10,000 copies of the true emitter, on random tiles, in random directions.
And it ran at ~60fps with the same world CPU as you'd expect from just having 10,000 objs in view of a client (only about 20% actually, or 5% in task manager), except that each of those 10,000 emitters appeared to emit 1,000 particles every second.
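
The wiring for that is tiny. Roughly (the label is arbitrary, and the anchors get scattered the same way as before):

/obj/emitter/true_emitter
    render_target = "emitter"    // the one copy the renderer actually draws

/obj/anchor
    render_source = "emitter"    // each anchor just re-displays the true emitter's output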

Also, it looks pretty much identical at lower world.fps, as long as client.fps is 60. The only difference is that the emitters can only emit as fast as the tick rate (though they may emit more than once per tick). The animations play out the same because they're done by the client in the first place.




So, given that the source atom must actually be visible for its duplicates to be drawn, it seems like render_source/target would only be appropriate in static scenes where the camera doesn't move, or HUDs where you have full control over what's visible and when.

It would be nice if they could be used in the world for things like tiled area effects or just duplication of particle effects. The only issue is that you'd probably get clipping easily, since as you approach a duplicate it will be closer to the camera than the source, which may not be on screen at all. You can already duplicate things perfectly fine with vis_contents/locs (the camera doesn't need to see the source), but that doesn't come with the same performance gains as render_source/target does.

I had an idea for big patches of dense interactive tall grass like in Stardew Valley:
https://www.youtube.com/watch?v=vxW_ZykCx0k

Remember, the renderer has trouble rendering a ton of appearances onscreen at the same time. Luckily, it doesn't hurt the renderer at all for most of the grass to be invisible, only there to detect motion. By having a single visible grass obj as the render source, the renderer only has to render one, while all the otherwise invisible grass objs can set their render_source to it, duplicating its appearance at very little cost. (Wouldn't it be nice if that happened automatically? I'm pretty sure turfs already do something similar.)

Once something moves into a piece of grass, it just has to play its waving animation, which is just a one-off animate() chain of its transform.
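
I haven't written it yet, but the shape of it would be something like this (names illustrative, and assuming the copy's own transform still animates on top of the copied appearance):

/obj/grass
    Crossed(atom/movable/crosser)
        . = ..()
        // one-off waving animation: a short animate() chain of the transform
        animate(src, transform = matrix().Turn(10), time = 2, easing = SINE_EASING)
        animate(transform = matrix().Turn(-6), time = 3, easing = SINE_EASING)
        animate(transform = matrix(), time = 4, easing = SINE_EASING)

/obj/grass/source
    render_target = "grass"      // the single grass the renderer actually draws

/obj/grass/copy
    render_source = "grass"      // has no icon of its own, just re-displays the source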

The issue is that all the duplicate grass in a patch will be invisible until the one main grass comes into view, at which point all the duplicates in range will pop in.

I haven't gotten around to this yet, but I thought I'd make this post in the meantime, because the "10 million particles per second" thing seemed interesting enough already.



Questions for the community:

What other uses of render_source and render_target are there?

What would be a better, more user-friendly approach to reusing things that are already being rendered in a mostly similar way, for massive performance gains?

How far would you like BYOND to go towards something like:
Godot's 2D particle system?
Unity's VFX graph?

How much do you imagine that would cost to develop, and would something like that even be worth it?

That's awesome. I haven't tinkered with render_target/source at all; this really makes me want to.
Recently, I noticed that if you prefix the render_source/target text with an asterisk, the original object isn't rendered. It helps to read the documentation more closely.

That makes it behave like plane masters, and you can add it to client.screen to enable it without needing to be physically nearby.
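
So presumably something like this keeps a hidden source available from anywhere (just a sketch):

/client/New()
    . = ..()
    var/obj/emitter/E = new
    E.render_target = "*emitter"   // the asterisk means the source atom itself is never drawn
    screen += E                    // on the HUD, so it's always "in view" without being physically nearby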