HOWTO: IBM DB2, ODBC, and Linux (*nix) configuration

The DB2 ODBC driver works like everything else from IBM: nonstandard compared to the rest of the universe. Here is how to set up ODBC:

The Unix admin needs to:
Retrieve the config file locations:

$ odbcinst -j
unixODBC 2.2.14
DRIVERS............: /etc/odbcinst.ini
SYSTEM DATA SOURCES: /etc/odbc.ini
FILE DATA SOURCES..: /etc/ODBCDataSources
USER DATA SOURCES..: /home/jason/.odbc.ini
SQLULEN Size.......: 8
SQLLEN Size........: 8
SQLSETPOSIROW Size.: 8

Add/update the following lines in /etc/odbcinst.ini (or wherever odbcinst -j reports the file to be):

[DB2]
Description = DB2 Driver
Driver = /opt/ibm/db2/db2_v9.7/lib64/libdb2o.so
FileUsage = 1
DontDLClose = 1

The Driver line should point to the full path of libdb2o.so (any *nix except AIX) or libdb2o.a (AIX).
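
If you are not sure where the driver library landed, a quick find against the DB2 install tree will locate it (adjust /opt/ibm to wherever your client is actually installed):

$ find /opt/ibm -name 'libdb2o.*' 2>/dev/null
/opt/ibm/db2/db2_v9.7/lib64/libdb2o.so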

DBAs need to:
Add/update the following lines in the ${IBM_DB_DIR}/sqllib/cfg/db2cli.ini file for each database that uses ODBC:

[MYDB2DB]
Database=MYDB2DB
Protocol=TCPIP
Hostname=mydbhost
ServiceName=67033

Add/update the following lines in the ${HOME}/.odbc.ini file:

[MYDB2DB]
Driver=DB2
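
Before moving on to isql, you can ask unixODBC what it thinks is registered. Given the entries above, the driver and data source queries should list the DB2 driver and the MYDB2DB DSN:

$ odbcinst -q -d
[DB2]

$ odbcinst -q -s
[MYDB2DB]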

Make sure we’re using the unixODBC isql:

$ /usr/bin/isql --version
unixODBC 2.2.14

Validate using the unixODBC “isql”:

$ /usr/bin/isql -v MYDB2DB testuser test_password
+---------------------------------------+
| Connected!                            |
|                                       |
| sql-statement                         |
| help [tablename]                      |
| quit                                  |
|                                       |
+---------------------------------------+
SQL>

The DSN for the entry is simply “dbi:ODBC:MYDB2DB” if you’re using Perl. Later versions of the DB2 client allow you to specify the connection options in the DSN like so:

my $odbc_connect_str = "dbi:ODBC:"
               . "DRIVER=/opt/ibm/db2_v10.5/lib64/libdb2o.so;"
               . "Protocol=tcpip;"
               . "Hostname=mydbhost;"
               . "Port=67033;"
               . "DATABASE=MYDB2DB";

HowTo: Set up iSCSI with Multipath (MPIO) in Windows 10 SOLVED

I searched high and low and wasn’t able to find a definitive answer on whether Microsoft Windows 10 supports multipath (MPIO) over iSCSI. I found many many blog posts, articles, and press releases for Windows servers but nothing for Windows 10.

The good news is that Windows 10 supports it out of the box. You don’t have to install anything extra. Of course, to see the benefits of multipath, you will need two or more network cards. You can either bond the network cards together or set up individual routes to the iSCSI network addresses. I’ll assume that you did that already. 🙂

This is only for new connections to your iSCSI targets. You cannot retrofit multipath onto existing connections. You will need to offline the volumes, remove the targets from the favorites, and then reboot.

As an administrator on your Windows 10 box:

  • Launch the iSCSI initiator (Windows key and type iscsi initiator)

  • Add both IP/ports for the iSCSI host in the Discovery -> Portal Groups tab

  • Refresh the iSCSI targets in the Targets tab. If they aren’t shown, add them manually

  • Select the iSCSI target, click Connect, and select “Add this connection to the list of favorites…” (so it will automatically connect the next time you start Windows) and “Enable MPIO”. Click OK

  • Select the iSCSI target and click Properties. In the Portal Groups tab you should see both IP/port paths

  • If you needed to offline the volumes, you will now need to bring them back online. They should retain any drive letter assignments

SAP IQ and SQL Anywhere: Locking and blocking only for a maximum specified time

Usually in IQ you will lock a table for a short time (DML operations) or longer when performing a data load, table/index maintenance, or issuing a holdlock within your transaction. These locks can cause other processes to block or even abort with the error “SQL Anywhere Error – 210: User ‘another user’ has the row in (some table)”. It can become problematic if you have an application that doesn’t close its transaction in a reasonable timeframe.

Consider the following scenario: a process retrieves dozens of data files and loads them into a single table every few minutes.
Issue: occasionally the “SQL Anywhere Error – 210: User ‘another user’ has the row in (some table)” error occurs, causing the load to fail for those files.

What can be done? Several things really:

  1. Trap/ignore the error
  2. Retry loading the file. How many times do we attempt this before we collide with the next set of files to load?
  3. Issue holdlock (not sure if it works with load table – I haven’t tried it) causing write access to block. If the load table is stalled, it will hold the lock indefinitely
  4. Combine all the files and load that monolithic file. Potentially faster as setting up a load table connection is very expensive, but if the file fails, the entire set fails. Do we retry?
  5. Use temporary options so the load blocks other transactions for at most X time and waits on conflicting locks for at most Y time before rolling back with an error.

Using the temporary lock/blocking options seems to be the best fit for this particular scenario. It allows the load tables to run in succession without me having to set up a queue, and if a load does error out, I can decide in the outside code whether or not to retry it (see the retry sketch after the example below). Basically, it gives me more control over the locking/blocking.

set temporary option date_order = 'YMD';
set temporary option timestamp_format = 'yyyy/mm/dd hh:nn:ss.sss';

-- "If the blocking option is set to “On”, any transaction attempting to obtain a lock that conflicts with an existing lock held by another transaction waits until every conflicting lock is released or until the blocking_timeout is reached. If the lock is not released within blocking_timeout milliseconds, then an error is returned for the waiting transaction. If the blocking option is set to “Off”, the transaction that attempts to obtain a conflicting lock receives an error." - https://wiki.scn.sap.com/wiki/display/SQLANY/How+to+Debug+Blocking+in+SQL+Anywhere
SET TEMPORARY OPTION "blocking" = 'ON';

-- we will wait up to 10 seconds to obtain a lock before rolling back
SET TEMPORARY OPTION "blocking_timeout" = '10000';

-- allow blocking of other transactions to maximum 5 sec before rolling back
SET TEMPORARY OPTION blocking_others_timeout = '5000';

LOAD TABLE ${REPOSITORY_SCHEMA}.activity (
    ConnHandle,
    LSServer,
    Name,
    Userid
)
USING CLIENT FILE '${OUT_CSV_FILE}'
NOTIFY 1000
QUOTES ON ESCAPES OFF
FORMAT bcp
DELIMITED BY '|'
ROW DELIMITED BY '\n';

COMMIT;
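
If a load still gets rolled back because blocking_timeout expired, the retry decision can live in the outside code. Below is a minimal ksh93 sketch, not a tested production script: the dbisql connection string, the load_activity.sql file name (assumed to contain the SET TEMPORARY OPTION / LOAD TABLE / COMMIT statements above), and the retry counts are all placeholders, and it assumes dbisql exits non-zero when the load fails (adjust its on_error behavior if needed).

#!/bin/ksh93

# Hypothetical retry wrapper: re-run the load a few times if it was blocked and rolled back.
typeset -i MAX_TRIES=3
typeset -i TRY=1

while (( TRY <= MAX_TRIES )); do
    if dbisql -nogui -c "uid=load_user;pwd=secret;host=iqhost:2638" load_activity.sql; then
        print "Load succeeded on attempt ${TRY}"
        exit 0
    fi
    print "Load failed on attempt ${TRY} (possibly blocked); retrying in 30 seconds..."
    (( TRY += 1 ))
    sleep 30
done

print -u2 "Load failed after ${MAX_TRIES} attempts"
exit 1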

Save lives using your computer and phone with the World Community Grid

World Community Grid enables anyone with a computer, smartphone or tablet to donate their unused computing power to advance cutting-edge scientific research on topics related to health, poverty and sustainability. Through the contributions of over 650,000 individuals and 460 organizations, World Community Grid has supported 28 research projects to date, including searches for more effective treatments for cancer, HIV/AIDS and neglected tropical diseases. Other projects are looking for low-cost water filtration systems and new materials for capturing solar energy efficiently.

Help save lives today and join my team Apraxia Kids

Video: Korn Shell A Little About Arrays/Lists

Video by bjamesm70

HOWTO: Korn Shell / BASH: How to determine if a string is numeric or not

Occasionally we run across something that should be simple: checking whether a string is a number, for example. Korn shell's extended patterns make this straightforward. The general form is:

special_CHAR(pattern)

The special_CHAR prefix determines how many occurrences of the pattern are expected:

'*' for zero or more matches
'+' for at least one match
'@' for exactly one match
'?' for zero or one match
'!' for negation (anything except the pattern)

The following will match one or more digits:

+([[:digit:]])

See POSIX Character Classes regarding the use of [:digit:]
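
You can try these patterns straight from an interactive ksh93 session before wiring them into a script (the test strings here are arbitrary):

$ [[ "abc123" == +([[:alpha:]])+([[:digit:]]) ]] && echo "letters then digits"
letters then digits
$ [[ "abc" == !(*[[:digit:]]*) ]] && echo "no digits here"
no digits here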

Below is an example function, written in Korn Shell 93, to determine whether a string is numeric; it will also work in bash.

The criteria we're using are based on the US number format:
1
12.345
and so on

Obviously we could expand this out to handle commas or other separators without too much difficulty.

#!/bin/ksh93

# Returns 0 (shell success) if the string looks numeric, 1 otherwise.
function is_numeric {
    typeset TMP_STR="$1"
    typeset -i TMP_IS_NUMERIC

    # one or more digits, optionally followed by a decimal point and zero or more digits
    if [[ "$TMP_STR" == +([[:digit:]])?(.*([[:digit:]])) ]]; then
        echo "'$TMP_STR' is numeric"
        TMP_IS_NUMERIC=1
    else
        echo "'$TMP_STR' is not numeric"
        TMP_IS_NUMERIC=0
    fi

    # shell convention: exit status 0 means success (numeric)
    (( TMP_IS_NUMERIC )) && return 0
    return 1
}

for TMP_STRING in "TEST_VAR" "22" "TRUE1" "TRUE3TRUE" "12.345" "7.8.9.0"; do
    is_numeric "$TMP_STRING"
done

OUTPUT:

$ ./test_regex.ksh
'TEST_VAR' is not numeric
'22' is numeric
'TRUE1' is not numeric
'TRUE3TRUE' is not numeric
'12.345' is numeric
'7.8.9.0' is not numeric

See Finnbarr P. Murphy’s blog for more examples of using regular expressions in Korn Shell 93

IQ Error: The multiplex server ‘iq_node_3’ is not included in the multiplex – SOLVED

When you run SAP's IQ Multiplex cluster for a while, you start finding little gotchas that will drive you to drink. If you don't drink, you will wish you did. (Just like with any other cluster system.)

In my latest foray into the murky waters of IQ Multiplex (v16), if one of the nodes is offline for a while, the coordinator node will mark it as excluded so the cluster can carry on. Not really a big deal until you try to bring up the problem node:

I. 02/09 10:31:45. Database server stopped at Tue Feb 09 2016 10:31
DBSPAWN ERROR:  -82
Unable to start specified database: autostarting database failed
Exception Thrown from stcxtlib/st_database.cxx:10050, Err# 21, tid 2 origtid 2
   O/S Err#: 0, ErrID: 5120 (st_databaseException); SQLCode: -1013113, SQLState: 'QNA49', Severity: 14
[22016]: The multiplex server 'iq_node_3' is not included in the multiplex.
-- (stcxtlib/st_database.cxx 10050)

Error: The multiplex server 'iq_node_3' is not included in the multiplex. The multiplex server 'iq_node_3' is not included in the multiplex.
Server failed to start

2016-02-09-10:31:46 Start of IQ instance iq_mpx_cluster1 failed

Log into the coordinator node, in my case iq_node_1, and run:

select server_name, status, mpx_mode, inc_state from sp_iqmpxinfo();
server_name        status   mpx_mode         inc_state
---------------------------------------------------------------
iq_node_1          included coordinator      N/A
iq_node_2          included writer           active
iq_node_3          excluded unknown           timed_out

As you can see, iq_node_3 is excluded because the connection to it from the coordinator timed out. What to do? Simple: first we re-include the node (on the coordinator):

alter multiplex server iq_node_3 status included;

Next we need to resync iq_node_3:

Resync IQ secondary node
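
For reference, the resync from the problem node's host usually boils down to starting the secondary once in synchronization mode and then starting it normally. This is only a sketch: the config file name, server name, and database path are placeholders, and you should confirm the -iqmpx_sn synchronization switch against the IQ documentation for your version.

# run on the secondary node's host
$ start_iq @iq_node_3.cfg -n iq_node_3 -iqmpx_sn 1 /iqdata/mydb.db
$ start_iq @iq_node_3.cfg -n iq_node_3 /iqdata/mydb.db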

The problem node should start up just fine now.

Microsoft OneNote for Pros – Smart Note-Taking Tips

Learn how you can use hotkeys, insert and use audio/video, integrate Excel, and review changes/versions to a document through the History tab to be more productive in OneNote. Work Smart brings tips and tricks from Microsoft IT that improve digital productivity and help create an all-digital culture within Microsoft.

Getting into OneNote now…..

I was using OneNote like a neanderthal. Now I know better :)

David Smith shows how to use Microsoft’s OneNote so that it is actually useful

HOWTO: So you want to work on production systems? Lessons I’ve learned over the years. Please learn them

1) never implement a change control without ALL the approvals (in writing documented in the change control)
2) always have a minimum of two database thresholds (if your DBMS supports threshold actions)
3) always communicate with the team what is being changed and why (document the sh*t out of it – CYA)
4) always verify the change was implemented correctly (verification should be another person if possible) and exhibits the correct behavior afterwards. Be ready to back out the change
5) production changes should occur no less than one week after non-prod. Let the change bake in and monitor it carefully
6) the change control should have all the steps:

a) explicitly state which systems are affected
b) provide snapshot of the state of the database (database or table backup, options, etc)
c) include any dml/ddl code that will take place and expected results (when performed store the output within the change control if possible)
d) implementor verification (did it work? any issues?)
e) *ANOTHER PERSON* performs another verification
f) include backout steps. How long will the backout take?

7) if the change control goes bad, there should be enough time to back out the change within the maintenance window. *always* get the manager on duty to sign off that the change had to be backed out.

We all have to learn the lessons the hard way it seems.
