Hopefully the buy/sell requests are single-threaded/single-queue, so I doubt there would be any deadlocks (transaction failures). Even on a multi-threaded setup, though, bulk buying could still be implemented just by doing "1 at a time" server side until complete (for-each loops are not that costly). It also stops users from click spamming / building up network traffic, and I wouldn't mind a "please wait" while it processes 700+ purchases (because I'm saying "buy 700 @ <200k", and the first failure bails out completely and returns the 412 items bought so far, or whatever). Either I click 700 times and the server processes 700 requests, or I click once and the server processes 700 requests (if they still chose that route).
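A minimal sketch of that fail-fast bulk path, assuming a hypothetical `try_buy_one` callback that the server would supply (none of these names come from the actual game):

```python
def bulk_buy(requests, try_buy_one):
    """Process a bulk buy one request at a time, server side.

    try_buy_one(req) returns the item bought, or None on failure
    (e.g. the price moved above the cap). The first failure bails
    out completely and returns whatever was bought so far.
    """
    bought = []
    for req in requests:
        item = try_buy_one(req)
        if item is None:
            break  # bail out on first failure, keep earlier purchases
        bought.append(item)
    return bought

# "buy 700 @ <200k": hypothetical listing prices, 412 good ones
# followed by one over the cap
listings = [199_000] * 412 + [205_000] + [199_000] * 287
result = bulk_buy(listings, lambda price: price if price < 200_000 else None)
```

One click, one request, and the loop stops at the 413th listing, returning the 412 items already bought.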
The commit could be thrown on a timer (once a second/minute/whatever) to reduce disk load. Committing every transaction is never recommended anyway (and yet so many programmers do it, and it's always the first thing I clean up...). Even in a multi-threaded setup this is doable: one server queue handles item IDs < 1000 while the other handles IDs >= 1000. No deadlocks/transaction failures, but once again, I just don't see the need for a multi-server setup; peak player count on Steam isn't that high at 1,700-4,200: http://steamcharts.com/app/506140
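Here's roughly what I mean by the timed commit, sketched with SQLite and a single worker draining a queue (the table layout and names are made up for illustration):

```python
import queue
import sqlite3
import time

def commit_worker(db_path, work_q, commit_interval=1.0):
    """Single-threaded write queue: apply writes as they arrive,
    but only COMMIT once per interval instead of once per write."""
    db = sqlite3.connect(db_path)
    db.execute("CREATE TABLE IF NOT EXISTS trades (item_id INTEGER, price INTEGER)")
    db.commit()
    last_commit = time.monotonic()
    while True:
        try:
            job = work_q.get(timeout=commit_interval)
        except queue.Empty:
            job = None
        if job == "stop":
            break
        if job is not None:
            db.execute("INSERT INTO trades VALUES (?, ?)", job)
        if time.monotonic() - last_commit >= commit_interval:
            db.commit()  # one disk sync per interval, not per write
            last_commit = time.monotonic()
    db.commit()  # flush whatever is left on shutdown
    db.close()
```

Each write still lands in order (single queue, so no lock contention), but the expensive disk sync happens at most once per interval.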
But chances are, they have each geo's server submitting its own commit requests to a central DB, which leads right back to the "committing every transaction" problem. At that point, why bother with a server in every geo if everything still funnels to a central DB and just adds another hop to the packet?
Either way, having all the servers connect to a central DB that "holds ALL the rows" means trouble eventually. You want to "shard" the data: have a login server, a trade house server for half the items, another for the second half, a "craft" server, etc. Multiple connections are nice when you aren't contending for the same piece of data; otherwise you put that data access in a single-threaded queue and commit once in a while. Commits are super expensive, and I wish I could find a highly detailed article on why... the best I found was here: http://stackoverflow.com/questions/33042679/is-committing-empty-transactions-expensive
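The sharding idea amounts to a routing function in front of per-shard single-threaded queues, something like this (the two-shard ID-range split is just an example layout):

```python
import queue

# One single-threaded queue per shard, so two workers never
# contend for the same rows. Example split: IDs < 1000 vs >= 1000.
SHARD_QUEUES = [queue.Queue(), queue.Queue()]

def route_write(item_id, payload):
    """Pick the shard queue for this item and enqueue the write."""
    shard = 0 if item_id < 1000 else 1
    SHARD_QUEUES[shard].put((item_id, payload))
    return shard
```

Each shard's worker then drains its own queue and commits on its own timer, and nothing ever deadlocks because no row is reachable from two queues.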
I know it can ruin absolute atomicity, but atomic transactions are why things are slow (because everything needs to sync, always), so delaying/batching them by a few seconds can give surprising results if disk load was a huge factor. We aren't dealing with people's lives or flying space shuttles here.
Sorry if this is long/rambling/incoherent; I attempted to address everything. It's always interesting for me to think about/discuss/learn these things, and yeah, I feel your pain: high-complexity queries are never any fun for a DB, and full table scans always pop up.