I recently answered a question on the gamedev.net multiplayer and networking forum that I feel warrants wider distribution. The question dealt with trying to use MySQL eventing and MySQL data as the "server" for a large-scale multiplayer web-based game.
Web programmers coming to games generally have a hard time writing scalable solutions, because games are very different. Similarly, games programmers trying to build web sites often find themselves with horribly complex systems that don't work -- again, because games are very different.
MySQL is not a good general message bus, and is an even worse general user-to-user communications channel. Trying to do what you want to do that way is likely to lead to meltdown as soon as you get any real number of users. If you want a reliable message queue, I recommend something like RabbitMQ or ZeroMQ or whatever. However, most games don't use that kind of messaging at all. Instead, they use a connected process model.
The server side of a particular game instance is a single, long-lived process. Clients connect to this process, generally using long-lived connections, and issue requests. The server processes those requests, enforcing timing, rules, and so on, and when things happen, it notifies connected clients. Clients that are not connected won't see the event; instead, they get a dump of the entire world state when they join (as "late joiners").
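A minimal sketch of that connected-process model, in Node-style JavaScript. The `GameProcess` class, the `move` request, and the message shapes are all illustrative assumptions, not part of the original post; a "client" here is anything with a `send(message)` method, whether a real socket wrapper or a test stub:

```javascript
// Sketch of a connected-process game server (all names illustrative).
// One long-lived process owns the authoritative world state; connected
// clients get events as they happen, and a late joiner gets a full
// state dump instead of the events it missed.
class GameProcess {
  constructor() {
    this.world = { entities: {} };  // authoritative game state
    this.clients = new Set();       // currently connected clients
  }

  // A client is anything with a send(message) method.
  join(client) {
    this.clients.add(client);
    // Late joiners get the entire world state, not replayed events.
    client.send({ type: "stateDump", world: this.world });
  }

  leave(client) {
    this.clients.delete(client);
  }

  // Validate and apply a request, then notify all connected clients.
  handleRequest(clientId, req) {
    if (req.type === "move") {
      this.world.entities[clientId] = { x: req.x, y: req.y };
      this.broadcast({ type: "moved", id: clientId, x: req.x, y: req.y });
    }
    // Real servers would enforce timing and game rules here.
  }

  broadcast(event) {
    for (const c of this.clients) c.send(event);
  }
}
```

In a real server the `send` method would write to a long-lived socket; the point is that the process itself, not a database, is the source of truth and the fan-out point.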
If you have to use HTTP for these connections, then your life will be full of pain, because HTTP is not a persistent connection. The best you can do is to double-buffer, with a lightweight user message queue process -- you can probably write something in node.js, say. Here's how it would work:
On the client side, fire off TWO XMLHttpRequest objects to the same server, both carrying your credentials. In each readystatechange handler, process the returned messages, and then re-fire a new request to the game server.
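Here is one way the client loop could look. The class and parameter names are my own; the transport is injected as a `sendRequest(credentials, callback)` function so the same loop works with a real XMLHttpRequest in a browser or a stub elsewhere:

```javascript
// Sketch of the double-request client loop (names illustrative).
// sendRequest(credentials, callback) must eventually invoke
// callback(messages) with whatever the gateway flushed back.
class DoubleBufferedClient {
  constructor(sendRequest, onMessage) {
    this.sendRequest = sendRequest;
    this.onMessage = onMessage;
  }

  start(credentials) {
    // Keep TWO requests in flight at all times.
    this.fire(credentials);
    this.fire(credentials);
  }

  fire(credentials) {
    this.sendRequest(credentials, (messages) => {
      for (const m of messages) this.onMessage(m);
      this.fire(credentials); // immediately re-fire to refill the pipe
    });
  }
}

// In a browser, sendRequest would wrap an XMLHttpRequest, e.g.:
//   function xhrTransport(creds, cb) {
//     const xhr = new XMLHttpRequest();
//     xhr.open("POST", "/poll");           // URL is an assumption
//     xhr.onreadystatechange = () => {
//       if (xhr.readyState === 4) cb(JSON.parse(xhr.responseText));
//     };
//     xhr.send(JSON.stringify(creds));
//   }
```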
On the server, build a per-user data structure on the first request from each user. This structure forwards/logs in to the active game process and starts receiving events/messages from the game (as well as the initial state dump). Each incoming HTTP request gets routed to this handler without immediately returning anything. When the *second* request comes in, the *first* request is given all the currently queued data, the queue is emptied, and the second request is deferred. If there are no events, defer both requests until there is some data, then send that data back through the first request.
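The per-user structure can be sketched as a small queue-and-park class. This is a simplified version of the scheme above (names are mine): it parks incoming requests and always answers the oldest parked request as soon as any data is queued, which leaves the newer request parked as the next buffer:

```javascript
// Sketch of the per-user gateway structure (names illustrative).
// Events queue up until a parked HTTP request is available to carry
// them back to the client.
class UserChannel {
  constructor() {
    this.queue = [];  // events waiting for a request to carry them
    this.parked = []; // deferred respond(events) callbacks (up to two)
  }

  // One call per incoming long-poll request for this user;
  // respond(events) completes that HTTP request.
  onRequest(respond) {
    this.parked.push(respond);
    this.maybeFlush();
  }

  // Called by the game process when an event arrives for this user.
  push(event) {
    this.queue.push(event);
    this.maybeFlush();
  }

  maybeFlush() {
    // Answer the OLDEST parked request as soon as there is data,
    // emptying the queue; the newer request stays parked.
    if (this.queue.length > 0 && this.parked.length > 0) {
      const respond = this.parked.shift();
      respond(this.queue.splice(0));
    }
  }
}
```

A real gateway would also need timeouts (so proxies don't kill idle requests) and cleanup when a user disconnects, which this sketch omits.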
Overall, this will make the XMLHttpRequests double-buffer, and the latency in forwarding events will be determined by how quickly the client can process the returned data and come back with a new request. That new request can also carry user control data for the game server.
Of course, the web being what it is, you will find that requests sometimes stall, possibly for seconds. That's just life -- users will see lag spikes. HTTP goes over TCP, which will not allow later, fresher data to "overtake" earlier, staler data on the wire. (This is one of the main reasons games use UDP.)
So, your system will probably look something like:
Client <--> Double-buffered COMET gateway <--> Game process server <--> Database server
You'll want to make sure that the client connection to the gateway is "sticky," so that the same client always reaches the same gateway. As long as load is low (say, under 10,000 connected users), you can probably do this by simply running a single gateway; if you want truly massive scale, though, you'll need to start playing with smart load balancers and such. Note that some mobile broadband providers use multiple proxies, so two successive requests from the same user may come in from different IP addresses; your load balancer will therefore have to "know" what the user session is, and balance based on that. I think some traditional Java-style "application load balancers" will let you do that.
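One simple way to get that stickiness is to route on a session token the client sends (in a cookie or the request body) rather than on the source IP. A sketch, with an illustrative `pickGateway` function and FNV-1a as an arbitrary choice of stable hash:

```javascript
// Illustrative session-sticky routing: hash the session id, not the
// source IP, since mobile proxies can rotate IPs mid-session.
function pickGateway(sessionId, gateways) {
  // FNV-1a hash of the session id (any stable hash would do).
  let h = 0x811c9dc5;
  for (let i = 0; i < sessionId.length; i++) {
    h ^= sessionId.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return gateways[h % gateways.length];
}
```

Note that plain modulo reshuffles most sessions whenever the gateway list changes; at larger scale you'd want consistent hashing, or a balancer that tracks sessions explicitly, as the application load balancers mentioned above do.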