Peter Thawley: Creating a RAM disk for Sybase’s ASE DBMS

International Sybase User Group

Over on ISUG’s SIG-ASE mailing list, Peter Thawley wrote up the following reply, which I think everyone using Sybase ASE who is thinking of using a RAM disk should be aware of. When I asked Peter if I could repost his message on my blog, he agreed 🙂

Creating a RAM Disk

Peter Thawley

Joe and Shane are spot-on about a task context switching off the engine on an I/O to a RAM-disk-based device … and yes Joe, there is nothing you can do about this right now. Normally, one would think of this as a good thing … and it is for that specific user, since they get to consume more CPU/engine time, thereby getting better response time for their request.

Now, to throw a wrench into this! In these cases where some or all of the database is cached, one does have to be aware of the potential for other user tasks to experience some amount of starvation. Imagine a bunch of tasks, each consuming a full time slice (100 ms by default) before yielding. For systems doing pure OLTP (short) transactions, with users getting on an engine and getting off reasonably quickly … little risk of a problem. For mixed-workload applications with some OLTP and some DSS/reporting, the potential for starvation is quite real, and nearly guaranteed for environments with fully cached DBs. I’ve seen some trading systems in tier 1 investment banks brought to their knees by an innocent IT person deciding to buy a lot of memory to cache the entire DB, only to wonder why performance started going to hell. [Of course, it was Sybase’s fault … ;-)]
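To put numbers on the knobs involved, the relevant scheduler settings can be inspected with sp_configure before touching anything. A minimal sketch, assuming a reasonably recent ASE release (these are standard sp_configure parameters, but defaults and limits vary by version):

  -- Show the current scheduler-related settings (read-only).
  -- "sql server clock tick length" is in microseconds; 100000 = the 100 ms default.
  sp_configure "sql server clock tick length"
  go
  -- "time slice" governs how long a task may stay on an engine before it is asked to yield.
  sp_configure "time slice"
  go
  -- "i/o polling process count" sets how many tasks an engine runs before it
  -- checks for completed disk and network I/O.
  sp_configure "i/o polling process count"
  go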

In these cases, think about using execution classes/engine groups to segregate OLTP and DSS users onto their own disjoint sets of engines, using dynamic listeners to keep execution engines and network engines aligned within the same engine groups. You may also want to consider reducing “clock tick length” to keep the timeslice period lower than 100 ms … I’ve seen some sites successfully using 50 ms and even less … there seems to be little downside, since most systems do the async disk I/O and net I/O checks a lot more frequently than every 100 ms anyway, due to the “i/o polling process count” parameter.
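To make the engine-group idea concrete, here is a hedged sketch of the sequence; the engine numbers, group name, class name, and login below are all placeholders to adapt to your own system, and “sql server clock tick length” is a static parameter, so the restart caveat applies:

  -- Put two engines into a separate group for DSS/reporting work
  -- (engine numbers are illustrative only).
  sp_addengine 3, "DSS_GRP"
  go
  sp_addengine 4, "DSS_GRP"
  go

  -- Create an execution class that runs at low priority on that group
  -- (the third argument, timeslice, is commonly passed as 0).
  sp_addexeclass "DSS_CLASS", "LOW", 0, "DSS_GRP"
  go

  -- Bind the reporting login to the class ("report_user" is a placeholder).
  sp_bindexeclass "report_user", "LG", NULL, "DSS_CLASS"
  go

  -- Optionally shorten the clock tick from the 100 ms default to 50 ms.
  -- The value is in microseconds; this is a static parameter (restart required).
  sp_configure "sql server clock tick length", 50000
  go

  -- Dynamic listeners (sp_listener) can then be started on ports served only by
  -- the engines in each group; check your version's documentation for the syntax.

With this kind of split, a long-running report bound to DSS_CLASS can only monopolize the engines in DSS_GRP, so OLTP tasks on the remaining engines keep getting scheduled even when the whole database is cached.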

Just trying to present a balanced view here … This is going to be important for more people to consider as in-memory database techniques and/or features/products become more prevalent.

Peter
_____________________________________

Peter Thawley
Senior Director / Architect
CTO Group, WMO
Sybase, Inc.

