ID:1330682
 
Applies to: DM Language
Status: Open

Right now, sound x, y, z is treated as relative to the camera. Since this value is relative rather than absolute, you have to change it for each player every tick to get the right effect (e.g. 0, 0, 0 is always the location of the client).

I'm suggesting that you be able to make this an absolute world position by setting some flag, with the position of the player's "ears" being either the client's eye or some special variable on the client.

Thus x, y, and z would be world positions, the "ears" of the client would be at the location you set, and you wouldn't have to do as much processing each tick to get nice sound.
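Under today's relative behaviour, the per-tick bookkeeping this request wants to avoid looks roughly like the sketch below. The sound file, channel number, and proc name are my own, but SOUND_UPDATE and the /sound x/y/z vars are real:

```dm
// Re-send the playing sound each tick with recomputed relative offsets.
// SOUND_UPDATE tells the client to update the channel, not restart it.
mob/proc/UpdateAmbientSound(turf/source)
    var/sound/S = sound('ambience.ogg', repeat = 1, channel = 5)
    S.status = SOUND_UPDATE
    S.x = source.x - x   // east/west offset from the listener
    S.z = source.y - y   // map north/south onto the depth axis
    S.y = 0              // no verticality in a flat top-down mapping
    src << S
```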
Some of the problems involved in this:

1) The verticality of the sound wouldn't usually have anything to do with the map z level; it would more likely be tied to a different, user-defined var.

2) How would the axis mapping work? Linking x to x and y to y is a terrible idea for top-down games. I've been a proponent of an angled soundscape that treats north as a mix of up and forward (as if you're looking down on the game field from an angle), which means y maps to <0,1/sqrt(2),1/sqrt(2)> and verticality maps to either <0,1/sqrt(2),-1/sqrt(2)> or <0,1,0>. The needs of each game are likely a little different, and of course in isometric the mapping would be hugely different.
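As a softcode sketch, the angled mapping above (map-east straight to sound x, map-north split evenly between up and forward, 1/sqrt(2) ≈ 0.707) could read something like this; the file and proc names are hypothetical:

```dm
// Apply the <0, 1/sqrt(2), 1/sqrt(2)> mapping for the map y axis.
proc/AngledSound(turf/source, mob/listener)
    var/dx = source.x - listener.x
    var/dy = source.y - listener.y
    var/sound/S = sound('loop.ogg', repeat = 1)
    S.x = dx           // east/west passes straight through
    S.y = dy * 0.707   // north contributes to "up"...
    S.z = dy * 0.707   // ...and equally to "forward"
    return S
```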
Slightly different problem and a little off topic, but I was messing around and noticed that having 5 sound x,y updates per tick (at a pretty far relative distance) was taking longer than tick_lag 0.25 to complete each request. Not really sure why; I gave up after I saw the lag and went back to volume adjustments instead.
Coordinates could be individually prefixed with w or i for world or isometric, or mapped through an optional 3x6 matrix (sound x/y/z plus player x/y/z). The matrix could be used to represent just about anything else, but it would be fairly technical, so many people wouldn't be able to use it.
In response to Uristqwerty
I'd think only 3x3 would be necessary. I doubt translation as part of the matrix would be needed (that would be a 3x4), so all that would matter is mapping the delta values to the x, y, z sound axes. However, the z axis would be a question mark, because it'd have to be based on a var other than z, likely a user-defined one the client doesn't have access to. One possible solution is to adjust z manually but x and y automatically; in that case you'd still want a 3x3 matrix, where the third row would represent the translation that is substituting for z.

Handling x and y but leaving the user to deal with z is probably reasonable, but it does seem kinda hacky to me.
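For what a 3x3 mapping would mean in practice, here is a hand-rolled sketch: each sound axis is the dot product of one matrix row with the deltas, with dz standing in for whatever user-defined var substitutes for z. The file and proc names are made up:

```dm
// Row-by-row application of a 3x3 axis-mapping matrix.
proc/MapAxes(dx, dy, dz)
    var/sound/S = sound('loop.ogg', repeat = 1)
    S.x = 1*dx + 0*dy     + 0*dz   // row 1: east/west unchanged
    S.y = 0*dx + 0.707*dy + 0*dz   // row 2: angled "up" share of north
    S.z = 0*dx + 0.707*dy + 1*dz   // row 3: "forward" share plus dz
    return S
```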
I would prefer it if sounds could actually be attached to atoms themselves. In fact, I think that would be really awesome! Right now, atoms are just silent, visual objects. Sure, you could give them a proc to make a sound, but wouldn't it be nice to have something more built-in, that could serve as a kind of primary sound for an object? I can imagine this might be a pretty big change, but it makes a lot of sense in my opinion. Atoms could be both visual and audible objects.

How it could work is that every atom could have a built-in sound var that can be set to a /sound object and even changed at runtime, just like an icon. The initial sound begins playing as soon as the atom is first created. In most cases you would probably want your /sound object to play in a loop, but that should still be left up to the developer. Then, if the atom's sound var is set to another sound at runtime, that sound will instantly begin playing. If the previous sound has not finished playing by then, it will be stopped and "replaced" by the new sound. Perhaps there could also be a way to override this behaviour. I think this would be consistent with how the icon var works, and should also be very easy to use.

So what this means is that if a mob has a sound var set to a /sound object, and that mob comes close enough to a client as determined by the /sound's falloff var, then a client will quite literally hear that mob coming!

Anyway, this is not something that you would want to have to softcode, since it would require looping through lots of objects, checking distances, adjusting /sound coordinates, and playing the sounds, which I don't doubt would result in potentially massive lag. I'm sure if "atomic sounds" could be implemented from the inside, it would end up being much more efficient, and I think it would be a great feature to complement the icons.
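For reference, the softcoded approach this paragraph warns against would look something like the loop below, run every tick. The emitter type and its vars are invented for the sketch:

```dm
// A hypothetical emitter object with a looping sound attached.
obj/emitter
    var/loop_file = 'hum.ogg'
    var/chan = 10

// Every tick, every player re-checks every nearby emitter: distance
// checks and sound updates for clients * emitters, all in softcode.
proc/UpdateAllSounds()
    for(var/mob/M in world)
        if(!M.client) continue
        for(var/obj/emitter/E in range(10, M))
            var/sound/S = sound(E.loop_file, repeat = 1, channel = E.chan)
            S.status = SOUND_UPDATE
            S.x = E.x - M.x
            S.z = E.y - M.y
            M << S
```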


I think this would encompass the feature requested in this topic as well. Instead of setting a sound's location directly, you would locate() a turf at a given set of coordinates, then set that turf's sound var accordingly. I just realized that all of this requires automatically adjusting a /sound's coordinates anyway, so I guess it could work from either perspective.
Now that I think about it, it doesn't really make sense for atoms to simply have a sound var, because if the sound is not set to loop, it will become unusable once it has finished playing. Instead, atoms would need to have a sounds list var to take any incoming sounds. Unless a sound has repeat set, it will only remain in the list for as long as it plays; after the sound plays to the end, it is removed from the sounds list. This also has the obvious benefit of easily giving atoms more than one sound. So instead of a sound var complementing an icon var, there would be a sounds list var that would be more comparable to something like the overlays/underlays list.

So "attaching" a sound to an atom would look something like this:
atom.sounds += sound('mysound.ogg', repeat=1)
//Since repeat is set, this sound will remain in the list
//and play indefinitely, unless it gets removed manually.

This may be completely different from how sounds are currently handled. Normally they are just sent as output, a lot like text strings are. In this case, however, a sound would be "instanced" within the atom.sounds list. To complete this idea, each individual instance of a sound would need to have a loc var, which would be set to the atom's location, where the sound will be played. I suppose you could also change that directly at runtime, even though that seems like a strange thing to do. This is a bit more complex than I had thought. I still think it would be worth it though, if this is even possible.
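Putting the pieces together, the instanced form might read as below. Nothing here exists in DM today (the sounds list and a loc var on /sound are exactly what is being proposed), and the generator type is made up:

```dm
// Hypothetical: a looping hum "instanced" on an atom, playing from
// wherever that atom is located.
obj/machine/generator
    New()
        ..()
        var/sound/hum = sound('hum.ogg', repeat = 1)
        sounds += hum     // proposed atom.sounds list
        hum.loc = loc     // proposed per-instance location
```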

Edit:
Also, if sounds could be extended this far, it might be nice if there was a client.sounds list to match the format for atoms. This would give you a list of sounds currently being sent to that client. The world wouldn't really need this, since in reality the sound is sent to each client connected to the world.
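A client.sounds list could then be inspected or pruned like any other list; for instance (hypothetical API, with channel 2 chosen arbitrarily):

```dm
// Hypothetical: stop everything this player is hearing on channel 2.
mob/verb/StopMusic()
    for(var/sound/S in client.sounds.Copy())
        if(S.channel == 2)
            client.sounds -= S
```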