SAP Sybase IQ: Do not use LF indexes in IQ v16

Really, there isn’t a good reason to use LF indexes in version 16 and higher of IQ. Use HG indexes instead. From Mark Mumy of SAP fame:

Mumy, Mark mark.mumy at sap.com
Mon Aug 17 11:31:37 MST 2015

There is no need with IQ 16 SP08.20 and beyond to use an LF. The HG index has been augmented over the past few years to take on the same characteristics for low cardinality data while adding parallelism and other join/sort features that just don’t exist in the LF.

Remove all LF indexes and replace them with HGs, if they don’t have one. There is no need for an HNG as the LF (don’t use) and HG have many of those features and can cost less.

Mark

Mark Mumy
Director, Enterprise Architecture, Global HANA CoE | SAP
M +1 347-820-2136 | E mark.mumy at sap.com
My Blogs: http://scn.sap.com/people/markmumy/blog
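
Taking Mark’s advice means finding the remaining LF indexes and swapping them for HGs. A minimal sketch, assuming a hypothetical table my_table with an LF index named status_lf on its status column (the sp_iqindex column names below are from memory, so verify them against your version):

-- find any LF indexes that are still hanging around
SELECT table_name, column_name, index_name, index_type
FROM sp_iqindex()
WHERE index_type = 'LF';

-- replace the LF index with an HG index on the same column
DROP INDEX my_table.status_lf;
CREATE HG INDEX status_hg ON my_table (status);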


SAP IQ and SQL Anywhere: Locking and blocking only for a maximum specified time

In IQ you will usually lock a table for a short time (DML operations) or for longer when performing a data load, table/index maintenance, or a holdlock within your transaction. These locks can cause other processes to block or even abort with the error “SQL Anywhere Error – 210: User ‘another user’ has the row in (some table)”. This becomes problematic if you have an application that doesn’t close its transaction in a reasonable timeframe.

Consider the following scenario: a process that retrieves dozens of data files every few minutes and loads them into a single table.
Issue: Occasionally the “SQL Anywhere Error – 210: User ‘another user’ has the row in (some table)” error will occur, causing the load to fail for those files.

What can be done? Several things really:

  1. Trap/ignore the error
  2. Retry loading the file. How many times do we attempt this before we collide with the next set of files to load?
  3. Issue holdlock (not sure if it works with load table – I haven’t tried it) causing write access to block. If the load table is stalled, it will hold the lock indefinitely
  4. Combine all the files and load that monolithic file. Potentially faster as setting up a load table connection is very expensive, but if the file fails, the entire set fails. Do we retry?
  5. Use temporary options so the load blocks other transactions for at most X, and waits on other locks for at most Y, before rolling back with an error.

Using the temporary locking/blocking options seems to be the best fit for this particular scenario. It lets the load tables run in succession without me having to set up a queue, and if a load does error out, I can decide in the outside code whether or not to retry it. Basically, it gives me more control over the locking/blocking.

set temporary option date_order = 'YMD';
set temporary option timestamp_format = 'yyyy/mm/dd hh:nn:ss.sss';

-- "If the blocking option is set to “On”, any transaction attempting to obtain a lock that conflicts with an existing lock held by another transaction waits until every conflicting lock is released or until the blocking_timeout is reached. If the lock is not released within blocking_timeout milliseconds, then an error is returned for the waiting transaction. If the blocking option is set to “Off”, the transaction that attempts to obtain a conflicting lock receives an error." - https://wiki.scn.sap.com/wiki/display/SQLANY/How+to+Debug+Blocking+in+SQL+Anywhere
SET TEMPORARY OPTION "blocking" = 'ON';

-- we will wait up to 10 seconds to obtain a lock before rolling back
SET TEMPORARY OPTION "blocking_timeout" = '10000';

-- allow this transaction to block others for a maximum of 5 seconds before it is rolled back
SET TEMPORARY OPTION blocking_others_timeout = '5000';

LOAD TABLE ${REPOSITORY_SCHEMA}.activity (
    ConnHandle,
    LSServer,
    Name,
    Userid
)
USING CLIENT FILE '${OUT_CSV_FILE}'
NOTIFY 1000
QUOTES ON ESCAPES OFF
FORMAT bcp
DELIMITED BY '|'
ROW DELIMITED BY '\n';

COMMIT;
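
The linked wiki article covers debugging blocking in more depth; as a quick check of who is holding things up, sa_conn_info() reports a BlockedOn column holding the connection number of the blocker (a sketch; verify the column names against your version):

-- which connections are currently blocked, and by whom?
SELECT Number, Userid, BlockedOn
FROM sa_conn_info()
WHERE BlockedOn <> 0;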

HOWTO: Connect to SAP Sybase IQ Multiplex using JDBC and the SQL Anywhere driver: “Connection error: TCPIP requires a server name”

SAP’s documentation, like that of other vendors, often sucks. In today’s episode of documentation sucky-ness, SAP’s examples tell you to connect to IQ with a JDBC connection string like so:

“jdbc:sqlanywhere:UID=***;ENG=MyIQ;PWD=***;links=tcpip(Host=MyIQ;PORT=40000)”

This is fine except when you connect to an IQ multiplex with logical servers set up. You won’t know which IQ node you will land on, and since each IQ node has a unique name (eng=), this won’t work. Alright, let’s drop the eng=MyIQ.

“jdbc:sqlanywhere:UID=***;PWD=***;links=tcpip(Host=MyIQ;PORT=40000)”

This results in an immediate error of “Connection error: TCPIP requires a server name”. Don’t panic, there is a fix: drop the ENG=MyIQ and links=tcpip(Host=MyIQ;PORT=40000), replacing them with HOST=MyIQ:40000.

“jdbc:sqlanywhere:UID=***;PWD=***;HOST=MyIQ:40000”

That works, and it is so much simpler than the demented links=tcpip(Host=MyIQ;PORT=40000) nonsense.

Now for the fun bit of working code. The following “IQConnect” program accepts four parameters (host, port, username, password) and spits out the names of the tables on the IQ server:

import java.io.*;
import java.sql.*;

public class IQConnect {
    public static void main(String[] args) {
        if (args.length == 4) {
            String HostName;
            int Port = 0;
            String UserName;
            String Password;

            try {
                Port = Integer.parseInt(args[1]);
            } catch (NumberFormatException e) {
                System.err.println("Port # must be an integer");
                System.exit(1);
            }

            HostName = args[0];
            UserName = args[2];
            Password = args[3];

            Connection con;

            try {
                String ConnectionParams = String.format("jdbc:sqlanywhere:uid=%s;pwd=%s;host=%s:%d", UserName, Password, HostName, Port);
                con = DriverManager.getConnection(ConnectionParams);

                Statement stmt = con.createStatement();
                ResultSet rs = stmt.executeQuery("select table_name from sys.systable;");

                while (rs.next()) {
                    String TableName = rs.getString(1);
                    System.out.println(TableName);
                }

                stmt.close();
                con.close();
            }
            catch (SQLException sqe)
            {
                System.out.println("Unexpected exception : " +
                  sqe.toString() + ", sqlstate = " +
                  sqe.getSQLState());
            }
            catch (Exception e)
            {
                e.printStackTrace();
            }
        } else {
            System.out.println("Requires Host Port# User Password");
        }
    }
}

Compile it with:

javac -cp $IQ16/java/sajdbc4.jar:. IQConnect.java

Execute it with:

java -cp $IQ16/java/sajdbc4.jar:. IQConnect MyIQ 40000 dba sql

Of course, if you have $IQ16/java/sajdbc4.jar in your $CLASSPATH, you don’t need to include “-cp $IQ16/java/sajdbc4.jar:.” when running the program.


SAP IQ: Uptime

At some point you will be asked how long your IQ server has been up.

To determine when the IQ server was started issue:

SELECT PROPERTY('StartTime')

Output:

2015-08-14 18:04:53.918

For the number of days, hours, and minutes we can slice and dice like so:

-- Declarations
DECLARE @UPTIME_DAYS INTEGER
DECLARE @UPTIME_HOURS INTEGER
DECLARE @UPTIME_MINUTES INTEGER

-- Populate variables
SELECT DATEDIFF(day, PROPERTY('StartTime'), CURRENT TIMESTAMP) into @UPTIME_DAYS
SELECT DATEDIFF(hour, PROPERTY('StartTime'), CURRENT TIMESTAMP) % 24 into @UPTIME_HOURS
SELECT DATEDIFF(minute, PROPERTY('StartTime'), CURRENT TIMESTAMP) % 60 into @UPTIME_MINUTES

-- OUTPUT
SELECT @UPTIME_DAYS || ' days ' || @UPTIME_HOURS || ' hours ' || @UPTIME_MINUTES || ' minutes '

Output:

132 days 15 hours 40 minutes
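
If you just want the uptime in a single statement, the same DATEDIFF arithmetic collapses into one SELECT (identical logic to the variable version above, with explicit casts for the concatenation):

SELECT CAST(DATEDIFF(day,    PROPERTY('StartTime'), CURRENT TIMESTAMP)      AS VARCHAR(10)) || ' days '
    || CAST(DATEDIFF(hour,   PROPERTY('StartTime'), CURRENT TIMESTAMP) % 24 AS VARCHAR(10)) || ' hours '
    || CAST(DATEDIFF(minute, PROPERTY('StartTime'), CURRENT TIMESTAMP) % 60 AS VARCHAR(10)) || ' minutes' AS uptime;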

HOWTO: SAP IQ Drop Device File from DBSpace

Sometimes when we build an IQ data warehouse, our initial space/growth estimate is off by enough to warrant reducing the size of the dbspace(s). The process is quite easy, but there are a few steps to perform:

Determine how much space is used by the data/indexes. The Usage column is the percentage of TotalSize currently in use. For example, the output below shows 38% of 4.99 TB in use, i.e. (4.99 TB * 1024 GB/TB) * 0.38 = 1941.70 GB:

select convert(varchar(20), DBSpaceName), Usage, TotalSize from sp_iqdbspace() where DBSpaceName = 'IQ_USER_MAIN';
DBSpaceName          Usage TotalSize
------------------------------------
IQ_USER_MAIN         38    4.99T
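
If you’d rather have IQ do the arithmetic, the calculation above is just a SELECT (plugging in the numbers from this output):

-- 4.99 TB total, 38% in use
SELECT 4.99 * 1024 * 0.38 AS approx_used_gb;   -- roughly 1941.7 GB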

Determine which device files are part of a dbspace

select convert(varchar(20), DBSpaceName), convert(varchar(20), DBFileName), convert(varchar(70), Path), DBFileSize, Usage from sp_iqfile() where DBSpaceName = 'IQ_USER_MAIN';
DBSpaceName          DBFileName           Path                                                                   DBFileSize Usage
---------------------------------------------------------------------------------------------------------------------------------
IQ_USER_MAIN         IQ_USER_MAIN_FILE_01 /sap/iq/devices/test_usermain001.iq                                    1024G      38
IQ_USER_MAIN         IQ_USER_MAIN_FILE_02 /sap/iq/devices/test_usermain002.iq                                    1024G      38
IQ_USER_MAIN         IQ_USER_MAIN_FILE_03 /sap/iq/devices/test_usermain003.iq                                    1024G      38
IQ_USER_MAIN         IQ_USER_MAIN_FILE_04 /sap/iq/devices/test_usermain004.iq                                    1024G      38
IQ_USER_MAIN         IQ_USER_MAIN_FILE_05 /sap/iq/devices/test_usermain005.iq                                    1024G      34

Verify that the used data will fit in the new allocation

sp_iqdbspace IQ_USER_MAIN

BACKUP DATABASE

backup database FULL to '/backups/test_FULL.dmp';

Alter the file to be dropped to be read only

alter dbspace IQ_USER_MAIN ALTER FILE IQ_USER_MAIN_FILE_05 readonly

Empty the device file (raw device) to be dropped (if dropping more than one file, drop the device file *LEAST* used first)

sp_iqemptyfile 'IQ_USER_MAIN_FILE_05'

Drop the device file (you may need to restart before IQ will let you do this)

ALTER DBSPACE IQ_USER_MAIN DROP FILE IQ_USER_MAIN_FILE_05

Verify the device was dropped

select convert(varchar(20), DBSpaceName), convert(varchar(20), DBFileName), convert(varchar(70), Path), DBFileSize, Usage from sp_iqfile() where DBSpaceName = 'IQ_USER_MAIN';
DBSpaceName          DBFileName           Path                                                                   DBFileSize Usage
---------------------------------------------------------------------------------------------------------------------------------
IQ_USER_MAIN         IQ_USER_MAIN_FILE_01 /sap/iq/devices/test_usermain001.iq                                    1024G      48
IQ_USER_MAIN         IQ_USER_MAIN_FILE_02 /sap/iq/devices/test_usermain002.iq                                    1024G      48
IQ_USER_MAIN         IQ_USER_MAIN_FILE_03 /sap/iq/devices/test_usermain003.iq                                    1024G      45
IQ_USER_MAIN         IQ_USER_MAIN_FILE_04 /sap/iq/devices/test_usermain004.iq                                    1024G      47

HOWTO: SAP Sybase IQ Loading Tables with AutoIncrement/Identity with Zero as a value SOLVED

If the source table has an identity or autoincrement field AND the values start at zero (“0”), then LOAD TABLE, whether server -> server or from a file, will not work. IQ is hard coded to reject rows with a zero value in the identity/autoincrement column when using LOAD TABLE. Creating a staging table with a numeric field instead of identity/autoincrement will not work with the export file, because the export file has the field flagged as an identity/autoincrement field.

This isn’t documented anywhere.
Workaround/resolution:
1. Extract out the source table
2. Create a staging table using the same DDL as the destination table, swapping the identity/autoincrement field for numeric(10,0) (see the sketch at the end of this post)
3. Load the extract into the staging table
4. Turn on identity_insert for the destination table

Set temporary option identity_insert = 'destination_table';

5. Insert into destination table from staging table

insert into destination_table select * from staging_table;

6. Turn off identity_insert for the destination table

Set temporary option identity_insert = '';

7. Drop staging table
8. Rebuild any views that are invalid
FW Howto: List Invalid Views and fix the views in SAP Sybase IQ
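
A minimal sketch of steps 2 and 3, using a hypothetical destination table whose first column is the identity/autoincrement field and a made-up extract file path; substitute your real DDL, column list and file name:

-- step 2: staging table, identical to the destination except the
-- identity/autoincrement column becomes a plain NUMERIC(10,0)
CREATE TABLE staging_table (
    id       NUMERIC(10,0) NOT NULL,   -- identity/autoincrement on the destination table
    payload  VARCHAR(100)  NULL
);

-- step 3: load the extract into the staging table
LOAD TABLE staging_table (
    id,
    payload
)
FROM '/tmp/destination_table_extract.dat'
QUOTES OFF ESCAPES OFF
DELIMITED BY '|'
ROW DELIMITED BY '\n';

COMMIT;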


SAP IQ: Error: server ‘iq_mpx_1’ was started on an incorrect host ‘myhost1’: this server was created with this connection string ‘host=myhost11:5535’ SOLVED

Recently I built a SAP IQ Multiplex cluster and ran into a self-inflicted issue. After I configured the secondary nodes, I updated the coordinator node (primary node) with the private (interconnect) and public (what applications connect to) connection information. Problem was, I made a small typo and didn’t catch it until after I tried starting the coordinator node.

I configured the coordinator node as such:

alter multiplex server iq_mpx_1 database '/sybase_iq/iq_mpx.db' PRIVATE HOST 'node1-clu' PORT 5535 HOST 'myhost11' port 5535;

Upon attempting to start the coordinator node it failed to start with the following message:

MPX: startup failure message: server 'iq_mpx_1' was started on an incorrect host 'myhost1': this server was created with this connection string 'host=myhost11:5535
-- (stcxtlib/st_database.cxx 9455)
Database server shutdown due to startup error

As soon as I saw the message I swore, but the fix is quite simple. First, shut down any secondary nodes. Then update your IQ configuration file (or startup command-line options) so the coordinator starts in single-node mode and overrides the multiplex configuration:

# single node mode
-iqmpx_sn 1

#For use starting multiplex databases only. Starts the server with override to acknowledge that the write server is starting (1) on a different host, (2) with a different server name, or (3) using a different path to its catalog (.db) file. Do not start two write servers against the same database.
-iqmpx_ov 1

Start the IQ coordinator and reissue the alter multiplex command:

alter multiplex server iq_mpx_1 database '/sybase_iq/iq_mpx.db' PRIVATE HOST 'node1-clu' PORT 5535 HOST 'myhost1' port 5535;

Update your IQ configuration file to either remove or comment out the lines we added earlier.

Start up your coordinator. It should now start fine. Please note you will need to resync your secondary nodes before starting them.
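
Before bringing the secondaries back, it doesn’t hurt to confirm the corrected connection string is what the coordinator now has stored. A quick sanity check (sp_iqmpxinfo lists each multiplex node along with its connection information; the exact columns vary a bit by version):

SELECT * FROM sp_iqmpxinfo();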


UPDATED: HOWTO: Extract Permissions from SAP Sybase IQ using the LIST() function

Back in August I provided an updated method to extract the permissions for tables in a SAP IQ server. This produced quite a few GRANT statements to execute. If you have thousands of tables and/or thousands of users, it creates a large file that can take a while to execute. We can reduce the number of GRANT statements by combining the SELECT, DELETE, UPDATE, ALTER, REFERENCE and INSERT grants into a single GRANT statement with the LIST() function.

The LIST() function concatenates the values from a set of rows into a single string with a specified delimiter. Think of it as the equivalent of Perl’s join() function.

Take for example the following strings:

VALUE1
VALUE2 
VALUE3

In Perl if you wanted a single string with a comma as the delimiter you would write the following:

my $STRING = join(', ', 'VALUE1', 'VALUE2', 'VALUE3');
print "$STRING\n";
VALUE1, VALUE2, VALUE3

In IQ we can do the equivalent with the LIST() function:

CREATE TABLE #test (column_a CHAR(6));
INSERT INTO #test VALUES ('VALUE1');
INSERT INTO #test VALUES ('VALUE2');
INSERT INTO #test VALUES ('VALUE3');

SELECT LIST(column_a, ', ') FROM #test;
VALUE1, VALUE2, VALUE3

The extract-permissions code for IQ tables, updated to use the LIST() function:

SELECT 'GRANT ' 
    + LIST ( stp.auth, ', ' )
    + ' ON '
    + USER_NAME(st.creator)
    + '.'
    + st.table_name
    + ' TO '
    + su.user_name + ';'
FROM
    ( SELECT grantee, stable_id, 'SELECT' as auth FROM sys.systableperm WHERE selectauth = 'Y'
      UNION ALL
      SELECT grantee, stable_id, 'DELETE' as auth FROM sys.systableperm WHERE deleteauth = 'Y'
      UNION ALL
      SELECT grantee, stable_id, 'UPDATE' as auth FROM sys.systableperm WHERE updateauth = 'Y'
      UNION ALL
      SELECT grantee, stable_id, 'ALTER' as auth FROM sys.systableperm WHERE alterauth = 'Y'
      UNION ALL
      SELECT grantee, stable_id, 'REFERENCE' as auth FROM sys.systableperm WHERE referenceauth = 'Y'
      UNION ALL
      SELECT grantee, stable_id, 'INSERT' as auth FROM sys.systableperm WHERE insertauth = 'Y'
    ) stp,
    sys.systable st,
    sys.sysuser su
WHERE stp.stable_id = st.table_id
    and su.user_id = stp.grantee
    and lower(su.user_name) not in ('dba', 'public', 'sys', 'dbo', 'diagnostics', 'rs_systabgroup', 'sa_debug', 'extenv_main', 'extenv_worker', 'sys_spatial_admin_role')
GROUP BY su.user_name, st.table_name, st.creator

Produces the following SQL output:

GRANT SELECT ON MYUSER.my_table TO otheruser_i;
GRANT DELETE, UPDATE, INSERT, SELECT ON MYUSER.my_table TO otheruser_u;
GRANT DELETE, UPDATE, INSERT, SELECT ON MYUSER.my_table TO otheruser_e;

SAP Sybase IQ and ODBC (Linux/UNIX)

Connecting to SAP Sybase’s IQ server shouldn’t be scary, but the documentation leaves a lot to be desired. Log on to your favorite Linux or UNIX box and make sure you source the IQ.sh file from wherever you installed the software.

Let’s use the following hypothetical server:

IQ Server:  MyIQboxOfDoom
Port:  5000
Hostname: myiqboxofdoom

If you want a single odbc.ini file for all users, you’ll need to create the file and store it somewhere. Make sure you set the ODBCINI environment variable to point at that file (NOT in the IQ.sh file, as it will be overwritten any time you patch IQ). For example: export ODBCINI=/opt/databases/odbc.ini

If you want each user to have their own odbc.ini file, they will need to create it as ${HOME}/.odbc.ini. The format is the same as odbc.ini.
odbc.ini / .odbc.ini contents:

[MyIQboxOfDoom]
CommLinks=tcpip(ip=myiqboxofdoom;port=5000;DOBROADCAST=NONE;VERIFY=NO)
AutoStop=no

Connecting is a simple matter of:

dbisql -c "dsn=MyIQboxOfDoom;uid=myuser;pwd=mypass" -nogui

Of course you could embed the user in the odbc.ini file like so:

[MyIQboxOfDoom]
CommLinks=tcpip(ip=myiqboxofdoom;port=5000;DOBROADCAST=NONE;VERIFY=NO)
AutoStop=no
uid=myuser
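
Once connected, a quick sanity check shows which server and database you actually landed on:

SELECT @@servername, db_name();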

UPDATED: HOWTO: Extract Permissions from SAP Sybase IQ

Back in 2013 I posted a method of extracting the table/view permissions from SAP IQ. With later 15.4 patches and v16, the USER_NAME() function would fail on the grantee column of sys.systableperm. An easy way to extract the users is to simply join the grantee column with sys.sysuser. I broke up the case statement so we don’t end up with a leading comma in the grant statement (“grant , insert on ”). Since we’re dealing with tiny tables, there shouldn’t be any undue harm in running each grant-generation statement separately and unioning them (to make dbisql happy for those of you who don’t have multiple result sets enabled).

    SELECT 'GRANT SELECT ON ' + USER_NAME(st.creator) + '.' + st.table_name + ' TO ' + su.user_name + ';'
    FROM
        sys.systable st,
        sys.systableperm stp,
        sys.sysuser su
    WHERE stp.stable_id = st.table_id
        and su.user_id = stp.grantee
        and stp.selectauth = 'Y'
        and lower(su.user_name) not in ('dba', 'public', 'sys', 'dbo', 'diagnostics', 'rs_systabgroup', 'sa_debug', 'extenv_main', 'extenv_worker', 'sys_spatial_admin_role')
    
    UNION ALL 
    
    SELECT 'GRANT DELETE ON ' + USER_NAME(st.creator) + '.' + st.table_name + ' TO ' + su.user_name + ';'
    FROM
        sys.systable st,
        sys.systableperm stp,
        sys.sysuser su
    WHERE stp.stable_id = st.table_id
        and su.user_id = stp.grantee
        and stp.deleteauth = 'Y'
        and lower(su.user_name) not in ('dba', 'public', 'sys', 'dbo', 'diagnostics', 'rs_systabgroup', 'sa_debug', 'extenv_main', 'extenv_worker', 'sys_spatial_admin_role')
    
    UNION ALL 
    
    SELECT 'GRANT UPDATE ON ' + USER_NAME(st.creator) + '.' + st.table_name + ' TO ' + su.user_name + ';'
    FROM
        sys.systable st,
        sys.systableperm stp,
        sys.sysuser su
    WHERE stp.stable_id = st.table_id
        and su.user_id = stp.grantee
        and stp.updateauth = 'Y'
        and lower(su.user_name) not in ('dba', 'public', 'sys', 'dbo', 'diagnostics', 'rs_systabgroup', 'sa_debug', 'extenv_main', 'extenv_worker', 'sys_spatial_admin_role')
    
    UNION ALL
    
    SELECT 'GRANT ALTER ON ' + USER_NAME(st.creator) + '.' + st.table_name + ' TO ' + su.user_name + ';'
    FROM
        sys.systable st,
        sys.systableperm stp,
        sys.sysuser su
    WHERE stp.stable_id = st.table_id
        and su.user_id = stp.grantee
        and stp.alterauth = 'Y'
        and lower(su.user_name) not in ('dba', 'public', 'sys', 'dbo', 'diagnostics', 'rs_systabgroup', 'sa_debug', 'extenv_main', 'extenv_worker', 'sys_spatial_admin_role')
    
    UNION ALL 
    
    SELECT 'GRANT REFERENCE ON ' + USER_NAME(st.creator) + '.' + st.table_name + ' TO ' + su.user_name + ';'
    FROM
        sys.systable st,
        sys.systableperm stp,
        sys.sysuser su
    WHERE stp.stable_id = st.table_id
        and su.user_id = stp.grantee
        and stp.referenceauth = 'Y'
        and lower(su.user_name) not in ('dba', 'public', 'sys', 'dbo', 'diagnostics', 'rs_systabgroup', 'sa_debug', 'extenv_main', 'extenv_worker', 'sys_spatial_admin_role')
    
    UNION ALL
    
    SELECT 'GRANT INSERT ON ' + USER_NAME(st.creator) + '.' + st.table_name + ' TO ' + su.user_name + ';'
    FROM
        sys.systable st,
        sys.systableperm stp,
        sys.sysuser su
    WHERE stp.stable_id = st.table_id
        and su.user_id = stp.grantee
        and stp.insertauth = 'Y'
        and lower(su.user_name) not in ('dba', 'public', 'sys', 'dbo', 'diagnostics', 'rs_systabgroup', 'sa_debug', 'extenv_main', 'extenv_worker', 'sys_spatial_admin_role')