HOWTO: SAP Sybase IQ, SQLAnywhere, and Powershell

Connecting to IQ can be a real pain in the tuckus.

In the v16 IQ client for Microsoft Windows, the .net assembly changed to “Sap.Data.SQLAnywhere” from “iAnywhere.Data.SQLAnywhere”.

Another annoyance is loading the assembly itself. SAP recommends loading it via:

Add-Type -AssemblyName ("Sap.Data.SQLAnywhere.v4.5, Version=, Culture=neutral, PublicKeyToken=f222fc4333e0d400")

This forces you to remember the exact version of the assembly as well as the public key token. If you’re worried about a nefarious individual gaining access to your machine and infecting your assemblies, continue to use the recommended method.

A simpler mechanism is “LoadWithPartialName”, where PowerShell will load the v4.5 driver for SQLAnywhere if it is in the Global Assembly Cache (GAC):
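A minimal sketch of that approach (assuming the client's assembly is registered in the GAC under the "Sap.Data.SQLAnywhere.v4.5" name):

```powershell
# Load the SQL Anywhere .NET assembly from the GAC by partial name.
# LoadWithPartialName is deprecated, but it spares you from pinning
# the exact version number and public key token.
[void][System.Reflection.Assembly]::LoadWithPartialName("Sap.Data.SQLAnywhere.v4.5")
```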


Working example of connecting to IQ from PowerShell with a v16.1 IQ client (v17 SQLAnywhere client under the covers):

$HostName = ""
$Port = 20001
$User = "DBA"
$Password = "password"


# Connection string
$connectionString = "Host=$HostName`:$Port;UserID=$User;Password=$Password"

# Create SAConnection object
$conn = New-Object Sap.Data.SQLAnywhere.SAConnection($connectionString)

# Connect to the remote IQ server
$conn.Open()

# simple query
$Query = 'select 1 as col1 from dummy'

# Create a SACommand object and feed it the simple $Query
$command = New-Object Sap.Data.SQLAnywhere.SACommand($Query, $conn)

# Execute the query on the remote IQ server
$reader = $command.ExecuteReader()

# Create a DataTable object
$DataTable = New-Object System.Data.DataTable

# Load the results into the $DataTable object
$DataTable.Load($reader)

# Send the results to the screen in a formatted table
$DataTable | Format-Table -Auto

# Tidy up
$reader.Close()
$conn.Close()



SAP IQ and SQL Anywhere UnixODBC UserDSN and SystemDSN issues: WORKAROUND

SAP’s IQ / SQLAnywhere ODBC driver on Linux/*nix doesn’t work with a system that has both UserDSNs (~/.odbc.ini) and SystemDSNs (/etc/odbc.ini).

Currently, when the driver looks for a DSN (database instance), the resolution order is:

  1. $ODBCINI – must contain the exact full path name of the .odbc.ini file.
  2. $HOME
  3. Current directory
  4. $PATH

Notice that /etc/odbc.ini is not processed. A workaround to use SystemDSNs is to set the ODBCINI variable to /etc/odbc.ini, but that doesn’t help if you have a mix of SystemDSNs and UserDSNs.
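For example (the DSN name "MyIQ" and the dba/sql credentials here are hypothetical placeholders):

```shell
# Point the driver at the system-wide DSN file explicitly
export ODBCINI=/etc/odbc.ini

# unixODBC's isql should now resolve the SystemDSN
isql -v MyIQ dba sql
```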

I’ve requested SAP to fix this. They’ve opened KBA2571931 CR812127. Unfortunately, SAP requires a support contract to view the KBA 🙁

With CR812127, the order of resolution becomes:

  1. $ODBCINI – must contain the exact full path name of the .odbc.ini file.
  2. $HOME
  3. Current directory
  4. $PATH
  5. /etc/odbc.ini

Connect SAP IQ to SAP IQ using ODBC the easy way

The documentation on setting up a linked server is overly complex and doesn’t actually tell you what you need to do. This is rather typical of SAP’s IQ documentation, unfortunately. Setting up the linked server is actually easy and only takes about 20 seconds.

First, add an entry to the $SYBASE/interfaces file. Seriously. There shouldn’t be any reason for this because we specify the ip/port in the server definition. Chalk it up to SAP not cleaning up ancient code.

    query tcp ether siq02 50000
    master tcp ether siq02 50000

Next we add the server entry to the local IQ server. Yes, you really do want to use the ASA (SQL Anywhere) driver class here. You don’t have to do anything special for multiplex, unlike the IQ driver, which really doesn’t give you anything more than a raging headache and the urge to excessively partake in your least favorite poison.
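A sketch of the server definition, assuming the remote server is named "iq_02" (the name used in the externlogin below) and reachable at siq02:50000; the class name here is an assumption and may be 'asaodbc' on older releases:

```sql
-- hypothetical sketch: remote server "iq_02" via the SQL Anywhere ODBC class
CREATE SERVER "iq_02" CLASS 'saodbc'
USING 'host=siq02:50000';
```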


Finally we add the remote login mapping. In this case, I’m mapping the remote user “k2k23” to the local dba account:

CREATE EXTERNLOGIN "dba" to "iq_02" REMOTE LOGIN "k2k23" IDENTIFIED BY klj3haq345as56t

You are now free to set up your proxy tables and load from (remote) location without issue.


SAP Sybase IQ: Do not use LF indexes in IQ v16

Really, there isn’t a good reason to use LF indexes in version 16 and higher of IQ. Use HG indexes instead. From Mark Mumy of SAP fame:

Mumy, Mark mark.mumy at
Mon Aug 17 11:31:37 MST 2015

There is no need with IQ 16 SP08.20 and beyond to use an LF. The HG index has been augmented over the past few years to take on the same characteristics for low cardinality data while adding parallelism and other join/sort features that just don’t exist in the LF.

Remove all LF indexes and replace them with HGs, if they don’t have one. There is no need for an HNG as the LF (don’t use) and HG have many of those features and can cost less.


Mark Mumy
Director, Enterprise Architecture, Global HANA CoE | SAP
M +1 347-820-2136 | E mark.mumy at
My Blogs:
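The advice above boils down to something like the following, with hypothetical table and index names:

```sql
-- drop the LF index and replace it with an HG on the same column
DROP INDEX sales_fact.region_LF;
CREATE HG INDEX region_HG ON sales_fact (region_id);
```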


SAP IQ and SQL Anywhere: Locking and blocking only for a maximum specified time

Usually in IQ, you will lock a table for a short time (DML operations), or longer when performing a data load, table/index maintenance, or issuing a holdlock within your transaction. These locks can cause other processes to block or even abort with the error “SQL Anywhere Error – 210: User ‘another user’ has the row in (some table)”. It can become problematic if you have an application that doesn’t close its transaction in a reasonable timeframe.

Consider the following scenario: A process that retrieves and loads dozens of data files to load into a single table every few minutes.
Issue: Occasionally the “SQL Anywhere Error – 210: User ‘another user’ has the row in (some table)” error will occur causing the load to fail for that file(s).

What can be done? Several things really:

  1. Trap/ignore the error
  2. Retry loading the file. How many times do we attempt this before we collide with the next set of files to load?
  3. Issue holdlock (not sure if it works with load table – I haven’t tried it) causing write access to block. If the load table is stalled, it will hold the lock indefinitely
  4. Combine all the files and load that monolithic file. Potentially faster as setting up a load table connection is very expensive, but if the file fails, the entire set fails. Do we retry?
  5. Issue temporary lock options so a transaction blocks others for at most X time and itself waits at most Y time before rolling back with an error.

Using the temporary locks/blocking options seems to be the best choice for this particular scenario. It allows the load tables to run in succession without me having to set up a queue, and if a load does error out, I can decide in the outside code whether or not to retry it. Basically, it gives me more control over the locking/blocking.

set temporary option date_order = 'YMD';
set temporary option timestamp_format = 'yyyy/mm/dd hh:nn:ss.sss';

-- "If the blocking option is set to “On”, any transaction attempting to obtain a lock that conflicts with an existing lock held by another transaction waits until every conflicting lock is released or until the blocking_timeout is reached. If the lock is not released within blocking_timeout milliseconds, then an error is returned for the waiting transaction. If the blocking option is set to “Off”, the transaction that attempts to obtain a conflicting lock receives an error." -

-- we will wait up to 10 seconds to obtain a lock before rolling back
SET TEMPORARY OPTION "blocking_timeout" = '10000';

-- allow blocking of other transactions to maximum 5 sec before rolling back
SET TEMPORARY OPTION blocking_others_timeout = '5000';
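Putting it together, a load run under those limits might look like this (the table, columns, file path, and load options are hypothetical):

```sql
SET TEMPORARY OPTION "blocking_timeout" = '10000';     -- wait at most 10s for locks
SET TEMPORARY OPTION blocking_others_timeout = '5000'; -- block others at most 5s

-- if either limit trips, the statement rolls back with an error
-- that the calling code can trap and retry
LOAD TABLE my_table (col1, col2)
FROM '/data/feed/my_file.dat'
ESCAPES OFF QUOTES OFF;
```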



HOWTO: Connect to SAP Sybase IQ Multiplex using JDBC and the SQL Anywhere driver: “Connection error: TCPIP requires a server name”

SAP’s documentation, like that of other vendors, often sucks. In today’s installment of documentation sucky-ness, SAP’s examples have you connect to IQ using a JDBC connection string like so:
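The example was presumably something like this (reconstructed from the discussion that follows; dba/sql are the sample credentials used later in this post):

```
jdbc:sqlanywhere:uid=dba;pwd=sql;eng=MyIQ;links=tcpip(Host=MyIQ;PORT=40000)
```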


This is fine except when you connect to an IQ multiplex and have logical servers set up. You won’t know which IQ node you will land on, and since each IQ node has a unique name (eng=), this won’t work. Alright, let’s drop the eng=MyIQ.
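That presumably leaves something like:

```
jdbc:sqlanywhere:uid=dba;pwd=sql;links=tcpip(Host=MyIQ;PORT=40000)
```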


This will result in an immediate error of “Connection error: TCPIP requires a server name”. Don’t panic, there is a fix: drop the ENG=MyIQ and links=tcpip(Host=MyIQ;PORT=40000), replacing them with HOST=MyIQ:40000.
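Giving something like:

```
jdbc:sqlanywhere:uid=dba;pwd=sql;host=MyIQ:40000
```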


That works, and it is so much simpler than the demented links=tcpip(Host=MyIQ;PORT=40000) nonsense.

Now the fun bit of working code. The following “IQConnect” program accepts four parameters host port username password and spits out the names of the tables on the IQ server:

import java.sql.*;

public class IQConnect {
    public static void main(String[] args) {
        if (args.length == 4) {
            String HostName;
            int Port = 0;
            String UserName;
            String Password;

            try {
                Port = Integer.parseInt(args[1]);
            } catch (NumberFormatException e) {
                System.err.println("Port # must be an integer");
                System.exit(1);
            }

            HostName = args[0];
            UserName = args[2];
            Password = args[3];

            Connection con;

            try {
                String ConnectionParams = String.format(
                    "jdbc:sqlanywhere:uid=%s;pwd=%s;host=%s:%d",
                    UserName, Password, HostName, Port);
                con = DriverManager.getConnection(ConnectionParams);

                Statement stmt = con.createStatement();
                ResultSet rs = stmt.executeQuery("select table_name from sys.systable;");

                // Print each table name on the IQ server
                while (rs.next()) {
                    String TableName = rs.getString(1);
                    System.out.println(TableName);
                }

                rs.close();
                stmt.close();
                con.close();
            } catch (SQLException sqe) {
                System.out.println("Unexpected exception : " +
                    sqe.toString() + ", sqlstate = " +
                    sqe.getSQLState());
            } catch (Exception e) {
                e.printStackTrace();
            }
        } else {
            System.out.println("Requires Host Port# User Password");
        }
    }
}

Compile it with:

javac -cp $IQ16/java/sajdbc4.jar:. IQConnect.java

Execute it with:

java -cp $IQ16/java/sajdbc4.jar:. IQConnect MyIQ 40000 dba sql

Of course, if you have $IQ16/java as part of your $CLASSPATH you don’t need to include “-cp $IQ16/java/sajdbc4.jar:.” to execute the program.


SAP IQ: Uptime

At some point you will be asked how long your IQ server has been up.

To determine when the IQ server was started issue:
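A sketch, assuming you pull the catalog server’s StartTime property:

```sql
-- server start time via the SQL Anywhere server property
SELECT PROPERTY('StartTime');
```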



2015-08-14 18:04:53.918

For the number of days, hours, and minutes we can slice and dice like so:

-- Declarations
DECLARE @TOTAL_MINUTES int;
DECLARE @UPTIME_DAYS int;
DECLARE @UPTIME_HOURS int;
DECLARE @UPTIME_MINUTES int;

-- Populate variables (minutes since the server's StartTime property)
SELECT @TOTAL_MINUTES = datediff(minute, convert(datetime, PROPERTY('StartTime')), now());
SELECT @UPTIME_DAYS = floor(@TOTAL_MINUTES / 1440.0),
       @UPTIME_HOURS = floor(mod(@TOTAL_MINUTES, 1440) / 60.0),
       @UPTIME_MINUTES = mod(@TOTAL_MINUTES, 60);

SELECT @UPTIME_DAYS || ' days ' || @UPTIME_HOURS || ' hours ' || @UPTIME_MINUTES || ' minutes '


132 days 15 hours 40 minutes

HOWTO: SAP IQ Drop Device File from DBSpace

Sometimes when we build an IQ Data Warehouse, our initial space/growth estimate is off significantly enough to warrant reducing the size of the dbspace(s). The process is quite easy, but there are a few steps to perform:

Determine how much space is used by the data/indexes. The Usage column is the percentage of TotalSize in use. For example, the output below shows 38% of 4.99 TB in use; (4.99 TB * 1024 GB/TB) * 0.38 = 1941.70 GB used:

select convert(varchar(20), DBSpaceName), Usage, TotalSize from sp_iqdbspace() where DBSpaceName = 'IQ_USER_MAIN';
DBSpaceName          Usage TotalSize
IQ_USER_MAIN         38    4.99T

Determine which device files are part of a dbspace

select convert(varchar(20), DBSpaceName), convert(varchar(20), DBFileName), convert(varchar(70), Path), DBFileSize, Usage from sp_iqfile() where DBSpaceName = 'IQ_USER_MAIN';
DBSpaceName          DBFileName           Path                                                                   DBFileSize Usage
IQ_USER_MAIN         IQ_USER_MAIN_FILE_01 /sap/iq/devices/                                    1024G      38
IQ_USER_MAIN         IQ_USER_MAIN_FILE_02 /sap/iq/devices/                                    1024G      38
IQ_USER_MAIN         IQ_USER_MAIN_FILE_03 /sap/iq/devices/                                    1024G      38
IQ_USER_MAIN         IQ_USER_MAIN_FILE_04 /sap/iq/devices/                                    1024G      38
IQ_USER_MAIN         IQ_USER_MAIN_FILE_05 /sap/iq/devices/                                    1024G      34

Verify that the used data will fit in the new allocation

sp_iqdbspace IQ_USER_MAIN

Back up the database

backup database FULL to '/backups/test_FULL.dmp';

Alter the file to be dropped to be read only

alter dbspace IQ_USER_MAIN ALTER FILE IQ_USER_MAIN_FILE_05 readonly

Empty the device file (raw device) to be dropped (if dropping more than one file, drop the device file *LEAST* used first)

sp_iqemptyfile 'IQ_USER_MAIN_FILE_05'

Drop the device file (you may need to restart before IQ will let you do this)
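The drop itself is presumably along these lines, given the dbspace and file names above:

```sql
ALTER DBSPACE IQ_USER_MAIN DROP FILE IQ_USER_MAIN_FILE_05;
```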


Verify the device was dropped

select convert(varchar(20), DBSpaceName), convert(varchar(20), DBFileName), convert(varchar(70), Path), DBFileSize, Usage from sp_iqfile() where DBSpaceName = 'IQ_USER_MAIN';
DBSpaceName          DBFileName           Path                                                                   DBFileSize Usage
IQ_USER_MAIN         IQ_USER_MAIN_FILE_01 /sap/iq/devices/                                    1024G      48
IQ_USER_MAIN         IQ_USER_MAIN_FILE_02 /sap/iq/devices/                                    1024G      48
IQ_USER_MAIN         IQ_USER_MAIN_FILE_03 /sap/iq/devices/                                    1024G      45
IQ_USER_MAIN         IQ_USER_MAIN_FILE_04 /sap/iq/devices/                                    1024G      47

HOWTO: SAP Sybase IQ Loading Tables with AutoIncrement/Identity with Zero as a value SOLVED

If the source table has an identity or autoincrement column AND the values start at zero (“0”), then load table, whether server -> server or from a file, will not work. IQ is hard-coded to reject rows with a zero value in the identity/autoincrement column when using load table. Creating a staging table with a numeric column instead of identity/autoincrement will not work with the export file, because the export file has the column flagged as identity/autoincrement.
This isn’t documented anywhere.
1. Extract out the source table
2. Create a staging table using the same DDL as the destination table, swapping the identity/autoincrement field with numeric(10,0)
3. Load the table into the staging table
4. Turn on identity_insert for the destination table

Set temporary option identity_insert = 'destination_table';

5. Insert into destination table from staging table

insert into destination_table select * from staging_table;

6. Turn off identity_insert for the destination table

Set temporary option identity_insert = '';

7. Drop staging table
8. Rebuild any views that are invalid
FW Howto: List Invalid Views and fix the views in SAP Sybase IQ
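A sketch of steps 2 through 7, with hypothetical table, column, and file names:

```sql
-- 2. staging table mirrors the destination DDL, but the
--    identity/autoincrement column is swapped for numeric(10,0)
CREATE TABLE staging_table (
    id  numeric(10,0) NOT NULL,
    val varchar(50)
);

-- 3. load the extract into the staging table
LOAD TABLE staging_table (id, val) FROM '/data/extract.dat';

-- 4-6. insert into the destination under identity_insert, then turn it off
SET TEMPORARY OPTION identity_insert = 'destination_table';
INSERT INTO destination_table SELECT * FROM staging_table;
SET TEMPORARY OPTION identity_insert = '';

-- 7. drop the staging table
DROP TABLE staging_table;
```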


SAP IQ: Error: server ‘iq_mpx_1’ was started on an incorrect host ‘myhost1’: this server was created with this connection string ‘host=myhost11:5535’ SOLVED

Recently I built an SAP IQ Multiplex cluster and ran into a self-inflicted issue. After I configured the secondary nodes, I updated the coordinator node (primary node) with the private (interconnect) and public (what you connect to with an application) connection information. Problem was, I made a small typo and didn’t catch it until after I tried starting the coordinator node.

I configured the coordinator node as such:

alter multiplex server iq_mpx_1 database '/sybase_iq/iq_mpx.db' PRIVATE HOST 'node1-clu' PORT 5535 HOST 'myhost11' port 5535;

Upon attempting to start the coordinator node it failed to start with the following message:

MPX: startup failure message: server 'iq_mpx_1' was started on an incorrect host 'myhost1': this server was created with this connection string 'host=myhost11:5535'
-- (stcxtlib/st_database.cxx 9455)
Database server shutdown due to startup error

As soon as I saw the message I swore, but the fix is quite simple. First, shut down any secondary nodes. Then update your IQ configuration file (or startup command-line options) so the coordinator starts in single-node mode and overrides the multiplex configuration:

# single node mode
-iqmpx_sn 1

#For use starting multiplex databases only. Starts the server with override to acknowledge that the write server is starting (1) on a different host, (2) with a different server name, or (3) using a different path to its catalog (.db) file. Do not start two write servers against the same database.
-iqmpx_ov 1

Start the IQ coordinator and reissue the alter multiplex command:

alter multiplex server iq_mpx_1 database '/sybase_iq/iq_mpx.db' PRIVATE HOST 'node1-clu' PORT 5535 HOST 'myhost1' port 5535;

Update your IQ configuration file to either remove or comment out the lines we added earlier.

Start up your coordinator. It should now start fine. Please note you will need to resync your secondary nodes before starting them.
