Your /DBQuery/proc/Execute() isn't actually that bad when you think about it.

4.405 seconds across 65 calls is roughly 68 ms average per query. Round-trip latency to the remote host is 65 ms, so the actual BYOND + MySQL execution time per query is on average about 3 ms. Of that, it perhaps spends 0.5 - 1 ms within the MySQL server actually parsing and executing the query, and the rest in various encoding/decoding on MySQL's and BYOND's ends to bring the result back to you in DM.

It would perhaps be nice if the MySQL functions slept the calling proc while they do their work, to allow BYOND to carry on. But you don't really want that implemented until Lummox has managed to add an enhancement for eager return of control to the sleeping proc; otherwise execution of the BYOND world would be smoothed out on tick completion, but you'd impose a world.tick_lag minimum execution time on procs using MySQL functions (e.g. at world.tick_lag = 0.5, roughly 50 ms, even a 3 ms query would hold its proc for at least a full tick).
The latency side is fairly outside of BYOND's control, or MySQL's, or perhaps indeed yours if these are remote BYOND worlds that you yourself are not running.

If that's the case, maybe create a dev-help topic with the relevant code and a description of what you are looking to do, and we can explore code-based solutions that reduce the frequency of actual MySQL calls, batching, etc.
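For what it's worth, batching is mostly about paying the 65 ms round trip once per batch instead of once per row. Purely as an illustration, here is a minimal C++ sketch against the MySQL C API (not DM; the player_scores table and the queue_row()/flush_batch() names are made up for the example):

```cpp
// Minimal sketch: batch many INSERTs into one round trip.
// Table and column names here are hypothetical.
#include <mysql/mysql.h>   // or <mysql.h>, depending on the install
#include <cstdio>
#include <string>
#include <vector>

static std::vector<std::string> pending;   // escaped "(name, score)" value tuples

void queue_row(MYSQL *db, const std::string &name, int score)
{
    // Escape into a buffer big enough for the worst case (2n + 1).
    std::string esc(name.size() * 2 + 1, '\0');
    unsigned long n = mysql_real_escape_string(db, &esc[0], name.c_str(), name.size());
    esc.resize(n);
    pending.push_back("('" + esc + "', " + std::to_string(score) + ")");
}

// One network round trip for the whole batch instead of one per row.
void flush_batch(MYSQL *db)
{
    if (pending.empty())
        return;
    std::string sql = "INSERT INTO player_scores (name, score) VALUES ";
    for (size_t i = 0; i < pending.size(); ++i)
        sql += (i ? ", " : "") + pending[i];
    if (mysql_real_query(db, sql.c_str(), sql.size()) != 0)
        std::fprintf(stderr, "batch insert failed: %s\n", mysql_error(db));
    pending.clear();
}
```

Call queue_row() wherever you currently run a single INSERT and flush_batch() from a periodic tick; for 65 rows that turns ~4.4 seconds of round trips into roughly a single one.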
Was there an outcome to this, handled off the forums?

Quite curious to see the results.
In response to A.T.H.K
No, unfortunately, though I've still been tinkering around with things on my own. Unless there was something else Stephen wanted me to do?
In response to Doohl
Might be good to run that dd command I posted; it may explain some things, or at least rule out the disk being a bottleneck.
/usr/local/byond/bin$ dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 9.28342 s, 116 MB/s
Not especially, in terms of the bug report itself. It seems pretty much like the slowness is due to latency, and a few misunderstandings about measuring actual CPU use vs measuring tick utilisation.

Whether you'd like some help to review the logic you've actually got to reduce the amount of querying is another matter, but isn't really something for this bug report.
In response to Stephen001
I think you're right about the latency being an issue.

A simple solution could be to have a MySQL server on each machine; this would obviously reduce latency, as it's all local.

From there you'd want each server to be in a master-to-master cluster; this allows each MySQL server to push or pull data from the other boxes in the cluster, updating the relevant fields.

http://www.rackspace.com/knowledge_center/article/mysql-master-master-replication

This is really the smartest way to do it. At the moment it looks like you're relying on one remote server? If that goes... you're pretty much screwed.
This is still an issue that I'd like to get resolved. If either the MySQL server or the game server is having network issues, the game will completely freeze and seize up until a reply arrives (or, I guess, until it times out). This means an enormous spike in CPU whenever there is an SQL query.

This is a huge issue that will make maintaining Severed World a nightmare.
I'll put something on my 510 list to at least look into this, but I can't make any promises. A core problem is that to fix this issue, all MySQL procs will effectively have to sleep, moving the proc into the background until a callback is called.

Testing any changes to this will be problematic, as I'm not running a MySQL server of my own to test with.
I can confirm that MySQL operations block, and do not sleep, preventing all other operations.

That's a fact we generally depend on in SS13, but it would be worth having to account for sleeps and refs disappearing if it prevented that annoying issue.

Doohl and anybody else, you are free to PM me and I can help you work out a system to mitigate this issue. Generally speaking, it's ALWAYS good to mitigate database-related latency issues by moving the database closer to the servers, or by abusing replication and MySQL proxies/caches/relays.

I took a brief look into this to see what my options might be, and I'm sad to report they don't look too good for a quick implementation. The MySQL API is blocking by nature. That means the only way to use it in a non-blocking fashion is to create a thread, send it the query data, have it send the results back, and handle those in some kind of message.

None of that is impossible, and I don't think it'd have the same issues as our previous threading experiments, so this is totally doable. The bad news is it's not easy. I was rather hoping that the blocking was a result of our implementation or that the API had some options for non-blocking calls, but that isn't the case.
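For anyone following along, the pattern being described looks roughly like this. It is only a sketch of the general technique, using C++11 threads against the MySQL C API; QueryJob, QueryReply, submit_query() and so on are invented names for the example, not anything from BYOND's code. One worker thread owns the connection, the blocking calls happen there, and results come back as messages for the main thread to pick up.

```cpp
// Sketch of the "worker thread + message passing" pattern described above.
// The MYSQL* handle is used by the worker thread only, since a connection
// must not be shared between threads concurrently.
#include <mysql/mysql.h>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

struct QueryJob   { int id; std::string sql; };
struct QueryReply { int id; bool ok; std::vector<std::vector<std::string>> rows; };

static std::mutex mtx;
static std::condition_variable cv;
static std::queue<QueryJob> jobs;
static std::queue<QueryReply> replies;   // drained by the main/game thread
static bool shutting_down = false;

void worker(MYSQL *db)   // started once: std::thread(worker, db).detach();
{
    for (;;) {
        QueryJob job;
        {
            std::unique_lock<std::mutex> lk(mtx);
            cv.wait(lk, [] { return shutting_down || !jobs.empty(); });
            if (shutting_down && jobs.empty()) return;
            job = jobs.front(); jobs.pop();
        }
        QueryReply reply{job.id, false, {}};
        if (mysql_real_query(db, job.sql.c_str(), job.sql.size()) == 0) {
            reply.ok = true;
            if (MYSQL_RES *res = mysql_store_result(db)) {   // blocking is fine here
                unsigned int nfields = mysql_num_fields(res);
                while (MYSQL_ROW row = mysql_fetch_row(res)) {
                    std::vector<std::string> r;
                    for (unsigned int i = 0; i < nfields; ++i)
                        r.push_back(row[i] ? row[i] : "");
                    reply.rows.push_back(std::move(r));
                }
                mysql_free_result(res);
            }
        }
        std::lock_guard<std::mutex> lk(mtx);
        replies.push(std::move(reply));   // main thread picks this up later
    }
}

// Called from the main thread: queue the query and return immediately.
int submit_query(const std::string &sql)
{
    static int next_id = 0;
    std::lock_guard<std::mutex> lk(mtx);
    jobs.push({++next_id, sql});
    cv.notify_one();
    return next_id;
}
```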
Just a quick update: this is still a massive problem.
Unfortunately I believe the solution to this problem will be very difficult, because as I mentioned the API is blocking. The only way around it would be to alter all (or at least most) calls to the API so that they used a separate thread.
Yes, we have similar issues with this that are very frustrating, since selecting from a database is nearly instantaneous (thousandths of a second) compared to inserting into it (tenths of a second).

We've mostly been looking into ways that involve potentially using a .dll or another external method to execute INSERT queries on another thread - along with code to delay all queries for the next second or two to account for the database being locked up during that time.

Of course, a lack of experience in that area slows the process down a lot, though the idea is relatively simple.
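The "delay all queries for a second or two" part is the simpler half, and could be as small as a timestamp gate. A rough C++ sketch, where mark_db_busy() and db_available() are made-up names and the two-second window is just the figure suggested above:

```cpp
// Sketch of the "hold queries off for a moment after a slow write" idea.
#include <chrono>

using Clock = std::chrono::steady_clock;
static Clock::time_point db_busy_until = Clock::now();

// Call this right after dispatching an INSERT that is known to be slow.
void mark_db_busy()
{
    db_busy_until = Clock::now() + std::chrono::seconds(2);
}

// Returns true if it is safe to issue a query right now;
// callers re-queue the query for a later tick otherwise.
bool db_available()
{
    return Clock::now() >= db_busy_until;
}
```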
A MySQL proxy could potentially be helpful for fire-and-forget queries; I think they have a way to make it so it doesn't wait on the reply.
In response to MrStonedOne
MrStonedOne wrote:
A MySQL proxy could potentially be helpful for fire-and-forget queries; I think they have a way to make it so it doesn't wait on the reply.

They don't in the regular API, though. MariaDB has that, but MySQL doesn't.
The way a MySQL proxy works is that it acts like a server: it takes the query from the client, then fires it off to the actual server based on conditions. The number one use is to let you have read-only, locally replicated databases and a master database you write to. That way SELECT queries go to the local database, and INSERT/UPDATE queries go to the remote database that is set as a replication master to all of the local database slaves.

I'm thinking there is a way to do fire-and-forget: have it send off the INSERT/UPDATE query and just return 0 rows affected back through the API, for when you don't care about the response. I'm going to see if I can find one that will do this.

Another thing that could help is a way to do this in BYOND itself, where it sends the query on another thread and doesn't bother waiting for the reply or doing anything with it.

It becomes a lot easier to make MySQL async when you just want to selectively ignore the response.

Yeah, you have to do all the hard work of making it async, but if you can skip having to track which context to return control to, and just make it so that new (non-fire-and-forget) requests on the same connection while the other one is still processing fall back to the old behaviour and hang until that fire-and-forget query finishes, it throws a lot of the required work out the window.
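As a concrete illustration of fire-and-forget on the client side (again a C++ sketch, not something BYOND or a proxy actually provides; the connection details are placeholders): run the write on a detached thread with its own connection and throw the result away.

```cpp
// Sketch of fire-and-forget: run the write on a detached thread and
// ignore the result. It opens its own connection because a MYSQL*
// handle should not be used from two threads at once.
#include <mysql/mysql.h>
#include <string>
#include <thread>

void fire_and_forget(std::string sql)
{
    std::thread([sql = std::move(sql)]() {
        mysql_thread_init();                 // per-thread init for libmysqlclient
        MYSQL *db = mysql_init(nullptr);
        if (db && mysql_real_connect(db, "127.0.0.1", "user", "pass",
                                     "gamedb", 3306, nullptr, 0)) {
            mysql_real_query(db, sql.c_str(), sql.size());  // result intentionally ignored
        }
        if (db) mysql_close(db);
        mysql_thread_end();
    }).detach();
}
```

Opening a connection per query is heavy, so this shape only really suits occasional writes; a dedicated writer thread is the better fit for any volume.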
You need to thread MySQL and poll:
A friend wrote https://github.com/blackawps/gm_tmysql4 as a Garry's Mod module, but you should be able to take inspiration from, or reuse, the implementation.
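For the "thread and poll" shape specifically, the smallest possible sketch is a future you poll from the main loop each tick. This is illustrative only; run_query() and poll_query() are invented names, and a real module such as gm_tmysql4 keeps a persistent worker thread and a result queue rather than a thread per query, closer to the worker sketch earlier in the thread.

```cpp
// Sketch of "thread MySQL and poll" using std::async for brevity.
#include <mysql/mysql.h>
#include <chrono>
#include <future>
#include <string>

// Launch the blocking query on another thread immediately.
std::future<bool> run_query(MYSQL *db, std::string sql)
{
    return std::async(std::launch::async, [db, sql]() {
        // NOTE: this db handle must not be touched elsewhere until this finishes.
        return mysql_real_query(db, sql.c_str(), sql.size()) == 0;
    });
}

// Poll from the main loop each tick; returns true once the result is in.
bool poll_query(std::future<bool> &f, bool &ok)
{
    if (f.wait_for(std::chrono::seconds(0)) != std::future_status::ready)
        return false;
    ok = f.get();
    return true;
}
```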
It would be fantastic to have this working if there is any chance. I'd like to use MySQL in our project over a filesystem.