Wanted: SAP manuals in ePub format

Every time SAP comes out with a new set of Sybase SAP PDF manuals, the metadata has to be corrected. Often the stored titles, descriptions, etc. are wildly wrong. Very sloppy and unprofessional for a mega corp the size of SAP.

The ePub book format has been out for many years and has many features that make it tablet, phone, PC, whatever friendly. Reading a SAP manual at night? No problem, change the font color to white on black so you don’t wake your spouse. The font is too small? No problem, choose a larger or different font. You can’t do any of that with a PDF. Try reading the ASE Admin guides on a 4″ iPhone. I dare you. You might as well pour salt in your eye sockets.


SAP IQ: dbisql is unable to load the SybaseIQ SQLAnywhere plugins. SOLVED!

I recently patched an SAP IQ server to 16.0 SP8 PL30 and ran into an interesting error message when trying to start dbisql:

$ dbisql
Interactive SQL could not load the "SQLAnywhere" plug-in.
Its "ngdbc.jar" file has moved or has been deleted. You will not be able to connect to the databases handled by that plug-in.
Interactive SQL could not load the "SybaseIQ" plug-in.
Its "ngdbc.jar" file has moved or has been deleted. You will not be able to connect to the databases handled by that plug-in.
Interactive SQL could not load the "HANA" plug-in.
Its "ngdbc.jar" file has moved or has been deleted. You will not be able to connect to the databases handled by that plug-in.
Interactive SQL could not load the "GenericODBC" plug-in.
Its "ngdbc.jar" file has moved or has been deleted. You will not be able to connect to the databases handled by that plug-in.
Interactive SQL cannot start because it is not installed correctly. No database plug-ins has been registered.
To fix this problem, you should reinstall the program.

If you scan your IQ directory, you will notice there isn’t an “ngdbc.jar” file. Take a look at another IQ box that is working and it doesn’t have the ngdbc.jar file either. The error message is simply wrong. It should report that it isn’t able to access the saip16.jar (or saip11.jar if you’re on IQ 15.x) and/or the jodbc4.jar file in the $SYBASE/IQ-(IQ RELEASE)/java directory (e.g. $SYBASE/IQ-16_0/java).
Verify that the two files exist and the permissions are correct:

316 -rwxr-xr-x.  1 sybase sybase  320071 Mar 20 14:51 jodbc4.jar
112 -rwxr-xr-x.  1 sybase sybase  112325 Mar 20 14:56 saip16.jar
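
If either jar is missing or isn’t readable by the account running dbisql (for example, after a patch was applied as root), restoring the permissions is usually enough. A minimal sketch, assuming a standard IQ 16 layout (adjust the IQ-16_0 directory to match your release):

$ # hypothetical example -- adjust paths/ownership to your environment
$ chmod 755 $SYBASE/IQ-16_0/java/saip16.jar $SYBASE/IQ-16_0/java/jodbc4.jar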

If everything looks okay and dbisql still gives the error, you will need to re-register the plug-ins. The first step is to move the dbisql ‘registry’ out of the way:

$ mv $SYBASE/IQ-(IQ RELEASE)/bin64/dbisql_64.rep $SYBASE/IQ-(IQ RELEASE)/bin64/dbisql_64.rep.old

Next, re-register (for IQ 16):

$ cd $SYBASE/IQ-(IQ RELEASE)/java
$ dbisql -Xregister sa16 SybaseIQ com.sybase.saisqlplugin.IQISQLPlugin "$(pwd)/saip16.jar:$(pwd)/jodbc4.jar"

for IQ 15:

$ cd $SYBASE/IQ-(IQ RELEASE)/java
$ dbisql -Xregister sa11 SybaseIQ com.sybase.saisqlplugin.IQISQLPlugin "$(pwd)/saip11.jar:$(pwd)/jodbc4.jar"

Newer IQ 15 patches use SQL Anywhere 12, so if you have saip12.jar instead of saip11.jar in your java dir, use that:

$ cd $SYBASE/IQ-(IQ RELEASE)/java
$ dbisql -Xregister sa12 SybaseIQ com.sybase.saisqlplugin.IQISQLPlugin "$(pwd)/saip12.jar:$(pwd)/jodbc4.jar"

dbisql should now work 🙂
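
As a quick smoke test, you can push a trivial query through dbisql in batch mode. The connection parameters below are placeholders (a locally running IQ server with the default DBA login); substitute your own:

$ dbisql -nogui -c "uid=DBA;pwd=sql;eng=myiq_server;dbn=mydb" "select 'plug-ins registered'"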

The dbisql_64.rep file is simply a glorified ini file containing the following. In theory it should be transferable between installations, but I’ve found that dbisql is very finicky about this file.

[SybaseIQ]
classLoaderName=sa16
mainclass=com.sybase.saisqlplugin.IQISQLPlugin
classpath=/opt/client/SAP-IQ/IQ-16_0/java/saip16.jar:/opt/client/SAP-IQ/IQ-16_0/java/jodbc4.jar

You may need to update the permissions if other users rely on this particular dbisql installation.
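
For example, if several OS accounts share the installation, something along these lines should do it (mode and path are just an illustration):

$ chmod 644 $SYBASE/IQ-16_0/bin64/dbisql_64.rep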


SAP Sybase ASE 16.0 major features

Sybase .. er… SAP will be releasing Adaptive Server Enterprise 16 within the next few months (currently expected in Q2 2014). SAP has made the ASE 16.0 manuals available.

Kevin Sherlock sums up the major new features quite well:

  • create or replace functionality (see the sketch after this list)
  • multiple triggers
  • monitoring threshold based events
  • configuration tracking history
  • partition level locking
  • log space usage tracking
  • CIS to HANA
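
To give a feel for the first item, here is a rough sketch of create or replace through isql; the server name, login, and procedure are made up for illustration. Running the batch a second time with a changed body simply replaces the procedure instead of failing with an “object already exists” error:

$ isql -Usa -SASE160
1> create or replace procedure dbo.p_hello
2> as
3>     select getdate(), 'hello from ASE 16'
4> go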

While the number of major features may seem a bit lacking at first glance to justify a major release, Jeff Tallman of SAP provides a bit of reasoning on what was really changed:

Hidden under the covers of ASE 16 is a lot of rewrites to spinlock areas of code – so, while you are seeing what looks to be a scattering of features, the main work was done in scaling and eliminating contention – both on high core counts as well as lower core counts – the latter especially with proc cache and ELC configuration – as well as log semaphore contention and eliminating the problem of buffer unpinning. Some of these changes required linking in machine code – which means only supporting current releases of certain platforms/OS’s – which by itself often dictates a new platform number. However, there are a number of new features – if you read the NFG, you can see a laundry list – one of which may or may not be mentioned there is HADR mode – which more tightly integrates ASE & SRS – not only is there a synchronous RepAgent (requires a new SRS SP to be launched later), standby ASE is read-only for normal users (ASE actually detects it is standby – and unless you are a privileged user such as RS maint or sa_role, writes are blocked), but ASE also now supports client failover from primary to standby ASE without OpenSwitch – in short term, available for Business Suite – later this year (perhaps) custom apps.

However, with regard to Full Database Encryption…..from a data security standpoint, you can think of it as filling a gap between column level encryption and standard MAC/DAC controls – especially with predicated permissions in the mix. Remember, in column level encryption, we decrypted data at the materialization stage (and encrypted it in normalization) which meant that the data was encrypted both in memory as well as on disk. This was important, because, when you have database users with different access requirements – and especially if you want to keep DBAs from seeing the data, you need to encrypt the data in memory as well as on disk – and with different users/different requirements, you also need to be able to encrypt different columns with different keys. As a result of encryption, some common database performance techniques – such as leaf/range scans on encrypted cols – were penalized as the index was sorted by the encrypted value (otherwise, it would be a security hole) – and no real secure encryption techniques exist that would preserve the lexicographical sequence. As a result, oftentimes a different index was used for the query or if that index was selected, it was a full leaf scan followed by decryption & sorting – quite a bit of overhead compared to the unencrypted leaf scan. Of course, Encrypted Columns took a bit of effort to set up as someone had to go through and identify every column of sensitive data, determine which Column Encryption Key to use and who should have access – some planning.

Encrypted Columns = data at rest and in memory fully encrypted – and only select designated users could see the data – others saw a default literal value.

Full Database Encryption is intended to solve the problem of ensuring the data at rest is encrypted, but sort of assumes that all legitimate users of the database have the same access rights to the data. Since all users have the same access rights, there is no need to encrypt in memory, use different keys for different columns, etc. As a result, the encryption happens just prior to being written to disk – or just after being read from disk – and on a page basis vs. individual column basis. As a result, index key values, etc. are in their normal sorted order – meaning there is no penalty for leaf scans/range scans any more. Yes, the PIOs may take a slight bit longer but I would be willing to wager we could encrypt the data far faster than traditional disk-based storage can either write it to disk or read it from disk. The time aspect may be very, very slightly noticeable on large physical read driven queries. Of course, encryption does use CPU – that might be more noticeable – depending on how much physical IO you are doing. However, since most apps operate on 95%+ cache hit rates, it might not be that noticeable. Remember as well, for write intensive apps, it is often not your SPID doing the writes – it is the HK Wash, checkpoint, someone else pushing your page through wash marker, etc. Keep in mind that one of the drivers for this was SAP ERP applications – where performance is extremely critical due to the way the applications tend to operate (a lot of client side joins to avoid temp tables due to vendor incompatibilities with respect to tempdb). As a result, performance was a key consideration. Level of effort for implementation is minimal – set up your keys and encrypt the database. Voila!

Full Database Encryption = data at rest fully encrypted – all legitimate users have access.

Hopefully, this not only addresses the speed question, but also the differences. — Jeff Tallman in response to ASE 16: When and what major features?
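
For a rough idea of the “set up your keys and encrypt the database” step Tallman mentions, the sequence looks roughly like the following. The syntax is recalled from the ASE 16 documentation and the database and key names are invented, so verify against the manuals before trying it:

$ isql -Usa -SASE160
1> create encryption key master with passwd 'MasterKeyPassw0rd'
2> go
3> create encryption key demo_dbkey for database encryption
4> go
5> alter database demodb encrypt with demo_dbkey
6> go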

SAP has overhauled ASE, bringing it up to modern standards of performance and scalability. It’s far too early to determine whether the rebuilt engine will live up to our expectations.


Preliminary REVIEW: SAP’s HANA – In memory database with local/remote data stores

Recently I attended a quite well done presentation regarding SAP’s HANA data storage product and scoured the various websites mentioning HANA. I’m not going to go over that here, as all of that information is available on SAP’s website.

I was inspired by a friend to get access to it last night, so this preliminary review comes from only a few hours of hands-on playing with it. Perhaps, if time allows, I’ll perform an in-depth analysis of it.

HANA is a memory resident hybrid OLTP/OLAP/DSS system with aspects of being able to quickly load data from a myriad of sources into memory.

You’re currently limited to 2TB of memory for the data AND any memory required to run HANA, including working memory for operations such as sorting tables. If you’re using SAP business applications, this limit is 4TB. The systems have to be certified by SAP, and the software will be installed by the hardware vendor unless you’re certified. There are ways to determine exactly how much memory you’re using, but like Sybase ASE’s MDA tables, getting actual utilization requires a lot of patience and hair pulling.
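
For example, the M_SERVICE_MEMORY monitoring view exposes per-service memory figures. A starting point, run here through hdbsql with the host, port, and credentials as placeholders, might be:

$ hdbsql -n hanahost:30015 -u SYSTEM -p '********' "select host, service_name, round(total_memory_used_size/1024/1024/1024, 2) as used_gb from m_service_memory"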

The platform is RedHat Linux on Intel hardware from HANA certified hardware vendors (currently nine vendors).

What happens if the box goes down? There is local storage (or SAN-type storage if you prefer) that the memory-resident database loads from initially and writes to periodically. When I killed HANA it took a few minutes to recover, but most of this was reading from the local storage directly into memory. At this point, I’m not certain to what extent, if any, HANA checks the data on load. It seems to perform a checksum on the log records during recovery, but I wasn’t able to verify that in the time I had available.

SAP’s marketing literature says that CPU caches are specifically targeted for high performance of certain types of operations, whereas other DBMS systems don’t do this. Again, I didn’t have time to determine how HANA implements it. There are a couple of methods that can be used to accomplish this:

  1. Run part of the process in kernel space (think kernel module), but this can be quite risky, not only from a stability standpoint but also security-wise. In the early days of Linux this wasn’t uncommon; nowadays it is quite rare.
  2. Write parts of your application, the really time-sensitive parts, in assembly language, using its ability to control to some extent what goes where on the CPU. You’ll find this often with multimedia or CPU-intensive applications.

SAP HANA isn’t unique in any of these aspects, as all of these techniques were around for a number of years before SAP developed HANA, but it is unique in that it implements all of them. Is it worth the cost of the hardware and software licensing? It depends.

If you care about performance and are willing to pay for it, HANA may be a very good fit for your company.
