ID:1541519
 
Resolved
Advanced lighting and other special effects are now possible via PLANE_MASTER, a new value that can be used in appearance_flags. Create an atom with this flag and make it visible to a player (e.g., on the HUD or as an image), and then all other icons on the same plane will be drawn to a temporary surface. That temporary surface will be shown on the current scene and subject to the color, alpha, blend_mode, and transform of this master atom. (The master atom's icon itself is not drawn. This atom is just a placeholder for grouping all the others together.)
Applies to: DM Language
Status: Resolved (510.1320)

This issue has been resolved.
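For reference, here is a minimal sketch of how the resolved feature can be put together for the effect requested below. This assumes 510+, that BLEND_MULTIPLY is available, and that overlays keep their explicitly set plane; the plane number and the icon names ('dark.dmi', 'light.dmi') are placeholders.

#define LIGHTING_PLANE 10

obj/lighting/master
    plane = LIGHTING_PLANE
    appearance_flags = PLANE_MASTER   // everything on this plane is drawn to one surface
    blend_mode = BLEND_MULTIPLY       // the composited surface then darkens the map under it
    screen_loc = "1,1"                // needed so the screen object exists; its own icon is never drawn

obj/lighting/darkness
    icon = 'dark.dmi'                 // placeholder: a plain black 32x32 tile
    plane = LIGHTING_PLANE
    screen_loc = "1,1 to 15,15"       // tile the darkness across the viewport

obj/lighting/light
    icon = 'light.dmi'                // placeholder: white radial gradient fading to transparent
    plane = LIGHTING_PLANE
    blend_mode = BLEND_ADD            // lights brighten the darkness before the plane is multiplied

mob/Login()
    ..()
    client.screen += new /obj/lighting/master
    client.screen += new /obj/lighting/darkness
    overlays += new /obj/lighting/light   // a light that follows this player

The end result is exactly the cutout effect discussed in this thread: the darkness plus additive lights are composited on their own plane, and the composite is multiplied over the scene.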
The current blend modes are awesome, but they're missing ONE thing: the ability to do dynamic lighting.

One way to approach this would be to provide a blend mode that designates an appearance as an alpha target for a specific alpha mask. The alpha target and alpha mask could be rendered against each other to produce cutouts of graphics, similar to how many RPGMaker games have pulled off dynamic lighting:

http://img600.imageshack.us/img600/6783/lighting.png

Essentially, RPGMaker games use static lighting via pre-rendered black-to-alpha maps, which they can then apply ambient color to.

This, we can already do.

They also use partially opaque additive cookies to handle bright light halos. We can already do this as well.

What we cannot do, however, is blend the lightmaps against an alpha mask to "cut out" parts of the static lighting, which would let us have players carrying torches or give them a visibility range. Global ambience changes are also extremely difficult in BYOND, owing to the lack of per-client control over how things are rendered.
I've been bugging Lummox for this exact thing since blend modes were introduced. Even told him to play some Link to the Past to see what I was talking about. (He had never played it! What kind of nerd is he?!)

Never heard much else about it.
yea++
If A Link to the Past basically did it and BYOND can't, this cannot be added fast enough. I mean, it would be very useful, and I always figured BYOND could do pretty much anything SNES games could do, given how old the SNES is.

Regardless though, lighting is essential in so many games. Anything that helps improve lighting is good.

This has my vote for addition.
Just wanted to make sure we were all on the same page.
This is an example of what we're talking about, right?

Like in the Lost Woods?

Actually, I thought we were dealing with something like the tunnel leading to Sanctuary, like in this part around 17:20:

http://youtu.be/7Fog1Y9McwY
To be honest, while I'd love to do this, I have no idea how I'd pull it off with our drawing code. In hardware mode I'm not sure what it'd take, and in software mode I'm completely at a loss.
@Lummox: Currently, how are sprites stored in memory? Are we using raw texture data? Is there any way to sub out the alpha channel of one texture for another?

Basically, my thought on the matter is that an alpha mask should just direct an appearance to render using the alpha channel of one appearance and the color channels of another.

When an object with blend_mode = BLEND_MASK is added to another object's overlays list, the engine should find all graphics that would render under that object, then generate alternate appearances that reference the mask object's appearance. Multiple masks could be handled by masking an already-masked appearance, generating a third unique appearance.

Since masks themselves should be tied to specific objects (and not act globally the way blend modes do), my thinking is that creating a masked appearance would happen when the appearances are modified and added to the reftable, rather than when things are actually rendered.
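Just to make the proposal concrete, here's a rough sketch of what the usage might look like. BLEND_MASK is the hypothetical value being proposed; it does not exist today, and the icon name is a placeholder:

proc/punch_hole(obj/darkness)
    var/obj/hole = new
    hole.icon = 'torch_mask.dmi'    // placeholder: radial alpha-gradient icon
    hole.blend_mode = BLEND_MASK    // proposed value, not an existing blend_mode
    darkness.overlays += hole       // graphics under the mask would get a masked alternate appearance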
In response to Ter13
Ter13 wrote:
@Lummox: Currently, how are sprites stored in memory? Are we using raw texture data? Is there any way to sub out the alpha channel of one texture for another?

A new texture could be created that had the alpha of one and the color of another, but I don't see how that would be helpful to lighting concerns. As I understand it, what people want for lighting is to be able to render the map, possibly except for screen objects, and render a multiplicative lighting layer on top--a separate image built by blending light sources in additive mode and then blending their result with the main image.
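Roughly, per pixel (treating channels as 0..1 values), something like:

    final = map_color * min(1, ambient + light_1 + light_2 + ...)

The lights are clamped as they're added together, and the result multiplies the scene, so overlapping lights saturate toward full brightness instead of blowing out the colors underneath.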

I'm sure shaders can do this; I just don't know enough about them to know how. And I don't see how it'd be possible to do anything like this in software mode.
@Lummox: The way many games handle lighting is to take a black texture that completely covers an area.

Then they cut holes in that texture by blending an alpha mask against it. Once that's done, they blend the composite over the map.

While this wouldn't necessarily give us full dynamic lighting, it would make it possible to create effects like this:



Now imagine that screenshot, but with an 8-bit alpha channel instead of a 1-bit one. Then imagine setting the blend mode of the entire composite to multiplicative blending.

Plus, there are other effects it would allow that we can't currently achieve. Pokemon employs many of them. For instance, whenever a Pokemon's stats are raised or lowered, a square graphic is overlaid on its icon and blended against its alpha channel only. It's a fairly nice effect, and being able to do it client-side would be a big boost to what 500 can do.

Shader support would be nice, to be honest, but it seems a bit overkill. I bet there are three users here who know HLSL/GLSL/Cg. It took me nearly three months to learn how to author shaders, so really, my guess is that even if you do add shader support to the language, nobody will use it.
This composite blending is what I'm talking about, though. I have no idea how I'd approach that in either hardware or software mode.
Masking in GDI, if I recall, is usually achieved by blitting the sprite from memory with XOR (SRCINVERT), ANDing the mask bitmap (1 = transparent, 0 = masked) against the screen with SRCAND, then blitting the sprite from memory again with XOR (SRCINVERT).

As for hardware mode, typically you render to a transparent backbuffer, then just blit that.
In response to Ter13
GDI wouldn't support anything but a barebones light/dark blend unless you did an intensive AlphaBlend() operation, but either way there's the problem that that's going to use a crapload of memory and be extremely slow; AlphaBlend would just be that much slower.

In DirectX, we're rendering to a backbuffer and calling Present() to do the final scaling and such. I have no idea how to work a second layer into that. Questions like these are notoriously Google-proof; the only good answers I found on the subject all involved shaders.
Isn't the stencil buffer pass/fail? Seems like that wouldn't be as useful as being able to use colored lights.
...I think we've gotten a little off topic here. I'm not asking about colored lighting at all. I'm asking about rendering holes in appearances using alpha masking.
In addition, looking at the PlgBlt() function, I see an obvious hook for implementing just this.

BOOL PlgBlt(
_In_ HDC hdcDest,
_In_ const POINT *lpPoint,
_In_ HDC hdcSrc,
_In_ int nXSrc,
_In_ int nYSrc,
_In_ int nWidth,
_In_ int nHeight,
_In_ HBITMAP hbmMask,
_In_ int xMask,
_In_ int yMask
);


"The argument hmbMask: A handle to an optional monochrome bitmap that is used to mask the colors of the source rectangle."

While it's not a full-alpha solution, that at least makes single-mask blitting possible by using a monochrome bitmap derived from the alpha channel of the appearance in question.

As for alpha masking in Direct3D, the trick is doing it on the GPU.

Basically, you set your current texture, then modulate it with a second texture that you load in. Select the color channels of one texture and the alpha channel of the other, and you have an alpha-masked renderable; render it on a primitive afterward and you have a masked texture.

// The vertex struct to use has two sets of texture coordinates, one for the main texture and one for the mask.
struct tDXVertexTextureMask
{
FLOAT x, y, z;
D3DCOLOR color;
FLOAT u1, v1;
FLOAT u2, v2;
};

// Define the FVF to use.
#define D3DFVF_TEXTURE_MASK (D3DFVF_XYZ | D3DFVF_DIFFUSE | D3DFVF_TEX2)

// Create your vertex buffer like usual using the above struct and FVF. When applying the texture coordinates,
// make sure to set u1 = u2 and v1 = v2, so the textures will line up exactly.

// Time to render! Setup the FVF and texture states.
device->SetFVF(D3DFVF_TEXTURE_MASK);

device->SetTexture(0, fTexture);
device->SetTexture(1, fMaskTexture);

device->SetTextureStageState(0, D3DTSS_COLOROP, D3DTOP_MODULATE);
device->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
device->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_DIFFUSE);
device->SetTextureStageState(0, D3DTSS_ALPHAOP, D3DTOP_MODULATE);
device->SetTextureStageState(0, D3DTSS_ALPHAARG1, D3DTA_TEXTURE);
device->SetTextureStageState(0, D3DTSS_ALPHAARG2, D3DTA_DIFFUSE);

// Use the color from the previous texture, and blend the alpha from the mask.
device->SetTextureStageState(1, D3DTSS_COLOROP, D3DTOP_SELECTARG1);
device->SetTextureStageState(1, D3DTSS_COLORARG1, D3DTA_CURRENT);
device->SetTextureStageState(1, D3DTSS_ALPHAOP, D3DTOP_MODULATE);
device->SetTextureStageState(1, D3DTSS_ALPHAARG1, D3DTA_TEXTURE);
device->SetTextureStageState(1, D3DTSS_ALPHAARG2, D3DTA_CURRENT);

// Render your primitive


This approach hands the texturing stages to the GPU, so all of the operations are performed there rather than in software.
In response to Ter13
I don't really follow this at all, but that could be because we're not using textures directly. We're using sprites.

I don't see how PlgBlt() could reasonably be used for mask-based lighting either. For one thing, PlgBlt() isn't all that fast, and it definitely wouldn't perform well with a very large image.
I was under the impression that you were using PlgBlt() to manage transforms.

http://www.byond.com/forum/?post=1376932#comment6583796
I don't understand why we're so concerned with software mode.

http://store.steampowered.com/hwsurvey/videocard/

Given a reasonable estimate, software rendering mode only matters to the <2% of the global market who lack a DX9-or-higher GPU.

Hardware limited to DirectX 9 or lower makes up <4% of the global market share.

DirectX 10 and DirectX 11 are available on over 95% of all gaming PCs, according to the Steam hardware survey.

Furthermore, OpenGL shader model 2.0 or higher is available on roughly 70% of all mobile devices. Roll those in with *nix users, Macs, PCs, and various laptops, and essentially 100% of the people who tend to play games have access to a device that supports OpenGL shader model 2.0 or greater. I don't think they even make motherboards anymore without some variant of onboard hardware-accelerated graphics.

Does anybody actually rely on software mode? As far as I can tell, software mode breaks everything, turns the screen into an anti-aliased mess in no time flat, and basically doesn't serve any purpose other than annoying people by getting turned on from time to time.

The only thing it allows us to do is make one or two interface elements transparent and sit them on top of the map, which, in all honesty, everybody I know avoids because it turns on software rendering mode.

It seems like, for the overwhelming majority of our users, the choice between software and hardware mode comes down to:

Work badly and look terrible, or work right and look fine. That's not really a choice so much as the option to shoot yourself in the foot. Even worse, giving people the option to shoot themselves in the foot hinders those who aren't making a poor choice, by tying our options to theirs.

EDIT: wording