SAP Sybase Replication Server ERROR -99999 Severity 5 Values exceed buffer length – SOLVED

Running SAP Sybase Replication Server has always been interesting, and rather frustrating in its fragility. Today’s lesson is not exactly clear-cut. Take the following error message:

ERROR #1027 DSI EXEC(104(1) repdb_svr.rep_db) - /dsiutil.c(390)
    Open Client Client-Library error: Error: -99999, Severity 5 -- 'Values exceed buffer length'.
ERROR #5215 DSI EXEC(104(1) repdb_svr.rep_db) - /dsiutil.c(393)
    The interface function 'SQLPrepare' returns FAIL for database 'repdb_svr.rep_db'. The errors are retryable. The DSI thread will restart automatically. See messages from the interface function for more information.

RANT: While Replication Server says the error is retryable, it never actually retries.
The error is for the DSI connection, but which buffer? Replication Server doesn’t explicitly list any “buffers” for the DSI; there are a myriad of caches for the DSI connection. The error message offers two hints to narrow it down: “Values exceed” and “SQLPrepare”. The most likely candidates, to me, are the command batch size (dsi_cmd_batch_size) and the dynamic SQL cache (dynamic_sql_cache_size).
A simple check would be to disable dynamic SQL and see if we get the same error message:

suspend connection to repdb_svr.rep_db
go
alter connection to repdb_svr.rep_db set dynamic_sql to 'off'
go
resume connection to repdb_svr.rep_db
go

Within a few seconds, I received the same message, so that wasn’t the culprit. Before we do anything else, let’s re-enable dynamic SQL:

suspend connection to repdb_svr.rep_db
go
alter connection to repdb_svr.rep_db set dynamic_sql to 'on'
go

That leaves the batch size as the most likely culprit. So let’s increase that and see what happens:

suspend connection to repdb_svr.rep_db
go
admin who, dsi, repdb_svr, rep_db
go
--  Record Cmd_batch_size (the default is 8192 bytes)
--  Increase dsi_cmd_batch_size
alter connection to repdb_svr.rep_db set dsi_cmd_batch_size to '32768'
go
resume connection to repdb_svr.rep_db
go

The error message did not reoccur. I confirmed replication was moving by monitoring admin who, sqm in the Replication Server and the rs_lastcommit table in the replicate database.
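As a sanity check on the replicate side, a minimal sketch of watching the rs_lastcommit watermark advance (run via isql in the replicate database; rerun it a couple of times and dest_commit_time should keep moving forward):

```sql
-- Sketch: watch the commit watermark advance in the replicate database.
select origin, origin_time, dest_commit_time
  from rs_lastcommit
go
```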

You may ask what changed that would require increasing the batch size. A very large transaction was trying to insert data that already existed; since not all of the data already existed, we needed to convert each INSERT into a DELETE followed by an INSERT:

alter connection to repdb_svr.rep_db set dsi_command_convert to 'i2di'

Why would that cause it to go boom? Because dsi_command_convert is applied AFTER Replication Server slices and dices the transaction into batches: a batch that was sized to fit dsi_cmd_batch_size as INSERTs can roughly double in size once each INSERT becomes a DELETE plus an INSERT, overflowing the buffer.
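To make the doubling concrete, here is an illustrative view of what happens to a batch; the table and values are made up, not captured from the wire:

```sql
-- Batch as originally built by the DSI, sized against dsi_cmd_batch_size:
insert into mytab (k, v) values (1, 'a')
insert into mytab (k, v) values (2, 'b')

-- The same batch after dsi_command_convert = 'i2di' is applied: each INSERT
-- becomes a DELETE + INSERT, roughly doubling the batch's byte size after
-- the batching has already been done:
delete mytab where k = 1
insert into mytab (k, v) values (1, 'a')
delete mytab where k = 2
insert into mytab (k, v) values (2, 'b')
```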


SAP ASE: Using unix domain sockets for turbo bulk copying (BCP)

Back in March 2013 on SCN, Cory Sane wrote about using unix domain sockets for bulk copying data in/out of SAP Sybase ASE 15.7.

If you don’t know what unix domain sockets are, Thomas Stover over at Tech Deviancy wrote up an excellent Demystifying Unix Domain Sockets post. Highly recommended!
As usual, Wikipedia also has an article.

What is so great about unix domain sockets? You bypass the networking layer completely. This translates into faster communication between bcp (or similar) and ASE. The downside? The bcp file(s) must be on the same host as the ASE instance and your user must have read/write access to the ‘file’.

Using unix domain sockets with a 16k packet size appears to be ideal for the bcp out (1 million rows in the test). Notice the difference in time: roughly 5K rows / second over a normal (tcp) connection compared to 104K rows / second over a unix domain socket.

Unix Domain Sockets

tcp - packet size 65024
test #1  : Clock Time (ms.): total = 184821  Avg = 0 (5410.64 rows per sec.)
test #2  : Clock Time (ms.): total = 178612  Avg = 0 (5598.73 rows per sec.)

uds - packet size 2048
test #3  : Clock Time (ms.): total = 10843  Avg = 0 (92225.40 rows per sec.)
test #4  : Clock Time (ms.): total = 11012  Avg = 0 (90810.03 rows per sec.)

uds - packet size 8192
test #5  : Clock Time (ms.): total = 9823  Avg = 0 (101801.89 rows per sec.)
test #6  : Clock Time (ms.): total = 9965  Avg = 0 (100351.23 rows per sec.)

uds - packet size 12288
test #7  : Clock Time (ms.): total = 9735  Avg = 0 (102722.14 rows per sec.)
test #8  : Clock Time (ms.): total = 9745  Avg = 0 (102616.73 rows per sec.)

uds - packet size 16384
test #9  : Clock Time (ms.): total = 9587  Avg = 0 (104307.92 rows per sec.)
test #10 : Clock Time (ms.): total = 9558  Avg = 0 (104624.40 rows per sec.)

uds - packet size 32768
test #11 : Clock Time (ms.): total = 13205  Avg = 0 (75728.89 rows per sec.)
test #12 : Clock Time (ms.): total = 12961  Avg = 0 (77154.54 rows per sec.)

uds - packet size 65024
test #13 : Clock Time (ms.): total = 13254  Avg = 0 (75448.92 rows per sec.)
test #14 : Clock Time (ms.): total = 13179  Avg = 0 (75878.29 rows per sec.)

With unix domain sockets, we come close to DTU speeds without the penalty of exclusive table locks and a DTU process you can’t kill. It really is like pushing the turbo button!

Obviously, if you are bulk copying out of a view that uses a monster index, you may not see a huge difference, if any, because the majority of the time will be spent traversing that index.

You will see a larger boost bulk copying data out than copying data in. Why? The network layer is usually not the bottleneck when inserting data; the bottleneck tends to be allocating/populating pages in the database, or index population if you haven’t dropped the indexes. YMMV.
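If you are loading, the usual trick still applies; a sketch with placeholder table/index names:

```sql
-- Hypothetical names: drop nonclustered indexes before a big bcp in,
-- then rebuild afterwards so index population doesn't throttle the load.
drop index mytab.mytab_idx1
go
-- ... run bcp in here ...
create index mytab_idx1 on mytab (col1)
go
```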

Setting up unix domain sockets is very simple and does NOT require a reboot. Let me show you how:

The format of the query and master lines you would add to the interfaces file is simple:

query afunix unused //hostname/directory_you_can_write_to/file_name
master afunix unused //hostname/directory_you_can_write_to/file_name

for example:

query afunix unused //myase/dbms/sybase/ASE.socket
master afunix unused //myase/dbms/sybase/ASE.socket

If you want to start the listener without restarting, use sp_listener:

sp_listener 'start', 'afunix://hostname:/directory_you_can_write_to/file_name'

for example:

sp_listener 'start', 'afunix://myase:/dbms/sybase/ASE.socket'
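To confirm the listener actually came up, sp_listener also accepts a status command (same address string as above; this invocation is a sketch, not captured output):

```sql
sp_listener 'status', 'afunix://myase:/dbms/sybase/ASE.socket'
go
```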

Drew Montgomery’s sp__dbsubspace for SAP Sybase ASE space usage

A few years back we were both working at a company that needed a method to obtain Sybase ASE database space usage every few minutes. The output of Ed Barlow’s sp__dbspace was fine but it was an expensive call. Drew Montgomery devised a set of stored procedures that would partially cache the results. Kudos to Drew!

With large databases, several of the calculations would overflow. I made only minor changes, mainly adding convert(numeric(19,5), … ) inside the sum()s.

sp__dbsubspace output:

1> sp__dbsubspace
2> go
 Name         Data MB       Used MB        Percent Log MB    Log Used  Log Pct Total MB
 ------------ ------------- -------------- ------- --------- --------- ------- ---------------
 mydb               4836323      4134753.0   85.49     87079  48793.15   56.03         4923402
(return status = 0)

sp__dbsubspace code:

/************************************************************************************************************************************************
 *  This process will use a persistent table called sysdbspaceinfo.  It is not a previously defined Sybase system table, but  
 *  is used by the sp__dbsubspace procedure.  The database that this table resides in is defined by the "sybsystemdb" 
 *  marker.  Please replace the "sybsystemdb" marker with an actual database prior to installing this procedure.  In some    
 *  cases you may want to use something like sybsystemprocs or sybsystemdb.  You may also use tempdb, but if you do
 *  you may want to define the table in the model database as well, so it will exist when the server is restarted.  Good luck
 *  and enjoy.                                                                        Drew Montgomery
 ************************************************************************************************************************************************/

use master
go

/*  This is to set the Truncate Log on Checkpoint option for the database with the space check table on, so it doesn't fill up.
 *     This was done in case the database being used for this doesn't have its logs regularly cleaned (some sybase system
 *     databases are like that).  If the database you are going to use has a regular log cleanup procedure, you may comment 
 *     out this section.                         DM                                                                                                                       */
sp_dboption sybsystemdb, "trunc. log", true
go

use sybsystemdb
go

checkpoint
go

/* Drops the table if it already exists, as a precaution for the create */
if object_id('sysdbspaceinfo') is not null  
 drop table sysdbspaceinfo
go

create table sysdbspaceinfo 
   (dbid int, 
    DataPages numeric(19,5), 
    DataPagesUsed numeric(19,5), 
    LogPages numeric(19,5),
    LogPagesUsed numeric(19,5), 
    LogFirstPage int,
    NumberOfObjects int    )
go

/* Unique index is required for the isolation level, and makes the access faster! */
create unique index UCsysdbspaceinfo on sysdbspaceinfo(dbid)
go

/* Go to the sybsystemprocs to install the procedure */
use sybsystemprocs
go

/* Drop it first, as a precaution */
if object_id("sp__dbsubspace") is not null
  drop procedure sp__dbsubspace
go

create procedure sp__dbsubspace 
          @dont_format char(1) = NULL,          /* Flag for indicating the need to have more or less formatted information */
          @LogThreshold float = 0.01,              /* Change threshold for the amount of log "movement" before a recalculation of the data space is performed.
                                                                   The default value of 0.01 represents a 1.0% change in the amount of log space or the first page pointer of the log.
                                                                   Please note: it is theoretically possible that the log pointer could move just less than this amount and have 
                                                                   an amount of log space used that is just less than this amount, therefore it is POSSIBLE that change could be 
                                                                   two times this amount before a recalculation of the data space is performed.  Possible but not very likely.        */
           @Force varchar(10) = NULL
                                                                    
as
/* ------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 *  This procedure conceived and written by Drew Montgomery - please forward any and all substantial updates to 
 *        drew_montgomery@ameritech.net (so I can make this procedure better).  Thank you.                               
 * ------------------------------------------------------------------------------------------------------------------------------------------------------------------------*/

set nocount on
set lock nowait
set transaction isolation level 0

/* Declaration Section */
declare     @MixedFlag char,                                 /* Flag to signify if log and data are on the same device */
                @DataPages numeric(19,5),                  /* Number of data pages defined from sysusages */
                @LogPages numeric(19,5),                    /* Number of log pages defined from sysusages */
                @TotalPages numeric(19,5),                    /* Number of pages defined from sysusages */
                @DataPagesUsed numeric(19,5),           /* Number of data pages currently in use (reserved) */
                @LogPagesUsed numeric(19,5),             /* Number of log pages currently in use */
                @LogFirstPage bigint,                                /* First defined page of the transaction log */
                @NumberOfObjects int,                       /* for determining if a table has been dropped or added */
                @OldDataPages numeric(19,5),              /* Previous number of data pages defined from sysusages */
                @OldLogPages numeric(19,5),                /* Previous number of log pages defined from sysusages */
                @OldLogPagesUsed numeric(19,5),        /* Previous number of data pages being used */
                @OldDataPagesUsed numeric(19,5),       /* Previous number of log pages being used */
                @OldLogFirstPage int,                            /* Previous transaction log first page pointer */
                @OldNumberOfObjects int,                  /* Previous number of user tables in this database */
--                @LogDelta numeric(19,5),                       /* Change in the number of log pages being used */
--                @LFPDelta int,                                       /* Change in the location of the first log page */
                @scale numeric(19,5),                            /*  Multiplier used for displaying amount of space from number of pages -
                                                                                 this value is derived from the @@maxpagesize */
                @ReSize char,                                       /* Flag indicating if we need to recalculate the amount of data space being used */
                @pct_used numeric(10,2),                       /* Calculated percentage of data space used */
                @log_pct_used numeric(10,2)                  /* Calculated percentage of log space used */

set @ReSize = 'F'            /* Predefined as "No, thanks, we don't need to get a new reading of the data space being used -
                                          the old value will do just fine.   */

/* Determine the current data and log space allocated to this database as defined from sysusages */
select @DataPages = sum(case when segmap=4 then 0 else convert(numeric(19,5), size) end),
          @LogPages = sum(case when segmap & 4=4 then convert(numeric(19,5), size) else 0 end),
          @TotalPages = sum(convert(numeric(19,5), size))
  from master..sysusages where dbid = db_id()

/* If there are no log pages (or the value is null), then the system is defined as having Mixed data and log space -
 *   which also means that the number of potential log pages is the same as the number of data pages.              */
if isnull(@LogPages,0) = 0
begin
  set @MixedFlag = 'M'  -- Mode is Mixed
  set @LogPages=@DataPages
end
else if exists (select 1 from master..sysusages where dbid = db_id() and segmap &5 =5)
  set @MixedFlag = "C"  -- Stands for Confused
else
  set @MixedFlag = 'P'  -- Indicates Pristine

/* We are getting the number of pages being used by the syslogs table (table id = 8).
 * The first select line is "Prior to system 15 version", the second is the "System 15 or above" version-
 *   Please set the appropriate comment as to which kind of system is being used */
/*  select @LogPagesUsed = reserved_pgs(id, doampg), */
select @LogPagesUsed = reserved_pages(db_id(), id), 
          @LogFirstPage = first
  from sysindexes
 where id = 8

/* Extract the information from previous executions of sp__dbsubspace */
select @OldDataPages = DataPages, 
          @OldDataPagesUsed = DataPagesUsed,
          @OldLogPages = LogPages,
          @OldLogPagesUsed = LogPagesUsed,
          @OldLogFirstPage = LogFirstPage,
          @OldNumberOfObjects = NumberOfObjects
  from sybsystemdb..sysdbspaceinfo
where dbid = db_id()

/* Get an object count of the USER TABLES */
select @NumberOfObjects = count(*) from sysobjects where type = 'U'

/* If there are no records retrieved (first run) we need to Recalculate the size */
if @@rowcount = 0
begin
  set @ReSize = 'T'
end
else        
  if @OldDataPages != @DataPages           /* If the number of data pages changed from sysusages - Recalculate size */
  begin
    set @ReSize = 'T'
  end
  else
  if @OldLogPages != @LogPages            /* If the number of log pages changed from sysusages - Recalculate size */
    begin
      set @ReSize = 'T'
    end
    else
    begin                                                   /* if the number of log pages used is greater than a percentage of the total number of log pages available - Recalc Size */
      if (@LogPagesUsed - @OldLogPagesUsed) > @LogThreshold * @LogPages
        set @ReSize = 'T'
      else
        begin                                               /* if the log's first page moved more than the threshold of the number of log pages available - Recalc Size */
        if (abs(@LogFirstPage - @OldLogFirstPage)) > @LogThreshold * @LogPages
          set @ReSize = 'T'
        else
          begin
            if (@OldNumberOfObjects != @NumberOfObjects)
              set @ReSize = 'T'
          end
        end
    end

if @Force is not null  -- This is a force option to make it update the value
  set @ReSize = 'T'

if @ReSize = 'T'                                        /* We are recalculating size, and getting back the new value of the Data Pages Used */
  exec sp__dbsubspace;2 @DataPages, @DataPagesUsed out, @LogPages, @LogPagesUsed, @LogFirstPage, @NumberOfObjects
else
begin                                                       /* or if we don't need to do that, we just use the previous value of the Data Pages Used */
  set @DataPagesUsed = @OldDataPagesUsed
end

/* Calculated scale from the maximum page size (size of the data pages, usually 2k, 4k, 8k, or 16k) */
set @scale = @@maxpagesize / 1024

/* "Borrowed" this calculation from sp__dbspace of the percentages used*/
if @MixedFlag = 'M' 
begin  /* Please note, if the mode is mixed, we will apparently have no Log Percentage Used as it is part of the data space */
  set @pct_used = convert(numeric(10,2), ((@DataPagesUsed + @LogPagesUsed) * 100) / @DataPages)
  set @log_pct_used = convert(numeric(10,2), 0)
end
else
begin
  set @pct_used = convert(numeric(10,2), (@DataPagesUsed * 100) / @DataPages) 
  set @log_pct_used = convert(numeric(10,2), (@LogPagesUsed * 100 )/(@LogPages) ) 
end

/* The @dont_format is from the sp__dbspace command - and provides the option for an abbreviated display of information */
if @dont_format is not null
begin            /* Provide the results based on the local variables - formatted first */
select  Name             = db_name(),
        "Data MB"  = str((@DataPages*@scale)/1024, 16, 0),
        "Used MB"  = str(((@DataPagesUsed + case @MixedFlag when 'M' then @LogPagesUsed else 0 end)*@scale)/1024, 16, 1),
        Percent    = str(@pct_used, 7, 2),
        "Log MB" = str((@LogPages*@scale)/1024, 12, 0),
        "Log Used"   = str(((case @MixedFlag when 'M' then 0 else @LogPagesUsed end)*@scale)/1024, 12, 2),
        "Log Pct"  = str(@log_pct_used, 7, 2),
        "Total MB" = str((@TotalPages*@scale)/1024, 18, 0)
end
else
begin          /* And unformatted */
select  Name             = convert(char(12),db_name()),
        "Data MB"  = str((@DataPages*@scale)/1024, 13, 0),
        "Used MB"  = str(((@DataPagesUsed + case when @MixedFlag = 'M' then @LogPagesUsed else 0 end)*@scale)/1024, 14, 1),
        Percent    = str(@pct_used, 7, 2),
        "Log MB"   = str((@LogPages*@scale)/1024, 9, 0),
       "Log Used" = str(((case when @MixedFlag='M' then 0 else @LogPagesUsed end)*@scale)/1024, 9, 2),
        "Log Pct"  = str(@log_pct_used, 7, 2),
        "Total MB" = str((@TotalPages*@scale)/1024, 15, 0)
end

/*  And voilà, we are done!  */
go

/* This is a subordinate procedure that gets the information from the sysindexes (or sysobjects for system 15+) and stores the information into the sysdbspaceinfo table */
create procedure sp__dbsubspace;2
  @DataPages numeric(19,5),                           /* See comments above about variables */
  @DataPagesUsed numeric(19,5) output,         /* NOTE: this value is returned to the calling procedure */
  @LogPages numeric(19,5),
  @LogPagesUsed numeric(19,5),
  @LogFirstPage bigint,
  @NumberOfObjects int

with recompile
as

  /* Prior to system 15 version */
  /* select @DataPagesUsed = sum(reserved_pgs(id, doampg) + reserved_pgs(id, ioampg))
     from sysindexes
    where id != 8
  */
/* System 15 and above version */    
   select @DataPagesUsed = sum(convert(numeric(19,5), reserved_pages(db_id(), id)))
     from sysobjects
    where id != 8
    
    
 /* Update the information in the table */
      update sybsystemdb..sysdbspaceinfo 
           set  DataPages = @DataPages,
                 DataPagesUsed = @DataPagesUsed,
                 LogPages = @LogPages,
                 LogPagesUsed = @LogPagesUsed,
                 LogFirstPage = @LogFirstPage,
                 NumberOfObjects = @NumberOfObjects
         where dbid = db_id()                 

/* if the update is not for any rows, then add a row */
        if @@rowcount = 0
          insert into sybsystemdb..sysdbspaceinfo values (db_id(), @DataPages, @DataPagesUsed, @LogPages, @LogPagesUsed, @LogFirstPage, @NumberOfObjects)
          
return  /* and we are done with this one */
go

/* You may want to grant some permissions (execute, perhaps) if the users are not sa's */

SAP Sybase ASE 16.0 major features

Sybase .. er… SAP will be releasing Adaptive Server Enterprise 16 within the next few months (currently expected in Q2 2014). SAP has made the ASE 16.0 manuals available.

Kevin Sherlock sums up the major new features quite well:

  • create or replace functionality
  • multiple triggers
  • monitoring threshold based events
  • configuration tracking history
  • partition level locking
  • log space usage tracking
  • CIS to HANA

While, at first glance, the number of major features may seem a bit lacking to justify a major release, Jeff Tallman of SAP provides some reasoning on what really changed:

Hidden under the covers of ASE 16 is a lot of rewrites to spinlock areas of code – so, while you are seeing what looks to be a scattering of features, the main work was done in scaling and eliminating contention – both on high core counts as well as lower core counts – the latter especially with proc cache and ELC configuration – as well as log semaphore contention and eliminating the problem of buffer unpinning. Some of these changes required linking in machine code – which means only supporting current releases of certain platforms/OSs – which by itself often dictates a new platform number. However, there are a number of new features – if you read the NFG, you can see a laundry list – one of which may or may not be mentioned there is HADR mode – which more tightly integrates ASE & SRS – not only is there a synchronous RepAgent (requires a new SRS SP to be launched later), standby ASE is read-only for normal users (ASE actually detects it is standby – and unless you are a privileged user such as RS maint or sa_role, writes are blocked), but ASE also now supports client failover from primary to standby ASE without OpenSwitch – in short term, available for Business Suite – later this year (perhaps) custom apps.

However, with regard to Full Database Encryption…..from a data security standpoint, you can think of it as filling a gap between column level encryption and standard MAC/DAC controls – especially with predicated permissions in the mix. Remember, in column level encryption, we decrypted data at the materialization stage (and encrypted it in normalization), which meant that the data was encrypted both in memory as well as on disk. This was important because, when you have database users with different access requirements – and especially if you want to keep DBAs from seeing the data – you need to encrypt the data in memory as well as on disk – and with different users/different requirements, you also need to be able to encrypt different columns with different keys. As a result of encryption, some common database performance techniques – such as leaf/range scans on encrypted cols – were penalized, as the index was sorted by the encrypted value (otherwise, it would be a security hole) – and no real secure encryption techniques exist that would preserve the lexicographical sequence. As a result, oftentimes a different index was used for the query, or if that index was selected, it was a full leaf scan followed by decryption & sorting – quite a bit of overhead compared to the unencrypted leaf scan. Of course, Encrypted Columns took a bit of effort to set up, as someone had to go through and identify every column of sensitive data, determine which Column Encryption Key to use and who should have access – some planning.

Encrypted Columns = data at rest and in memory fully encrypted – and only select designated users could see the data – others saw a default literal value.

Full Database Encryption is intended to solve the problem of ensuring the data at rest is encrypted, but sort of assumes that all legitimate users of the database have the same access rights to the data. Since all users have the same access rights, there is no need to encrypt in memory, use different keys for different columns, etc. As a result, the encryption happens just prior to being written to disk – or just after being read from disk – and on a page basis vs. individual column basis. As a result, index key values, etc. are in their normal sorted order – meaning there is no penalty for leaf scans/range scans any more. Yes, the PIOs may take a slight bit longer, but I would be willing to wager we could encrypt the data far faster than traditional disk-based storage can either write it to disk or read it from disk. The time aspect may be very, very slightly noticeable on large physical read driven queries. Of course, encryption does use CPU – that might be more noticeable – depending on how much physical IO you are doing. However, since most apps operate on 95%+ cache hit rates, it might not be that noticeable. Remember as well, for write intensive apps, it is often not your SPID doing the writes – it is the HK Wash, checkpoint, someone else pushing your page through the wash marker, etc. Keep in mind that one of the drivers for this was SAP ERP applications – where performance is extremely critical due to the way the applications tend to operate (a lot of client side joins to avoid temp tables due to vendor incompatibilities with respect to tempdb). As a result, performance was a key consideration. Level of effort for implementation is minimal – set up your keys and encrypt the database. Voila!

Full Database Encryption = data at rest fully encrypted – all legitimate users have access.

Hopefully, this not only addresses the speed question, but also the differences. — Jeff Tallman in response to ASE 16: When and what major features?

SAP has overhauled ASE, bringing it up to modern performance and scalability standards. It’s far too early to determine whether the rebuilt engine will live up to our expectations.


FW: ASE 15.7: Create a Remote Server to SAP IQ 16.0

ASE 15.7: Create a Remote Server to SAP IQ 16.0
March 12, 2014

Create the remote server entry in the interfaces file; sql.ini (Windows) or interfaces (UNIX).

[myIQServer]
master=TCP,myIQHost,4091
query=TCP,myIQHost,4091

Read more at ARBIME
syLogo


SAP Sybase ASE: Unable to start ASE when getting “No unlocked logins with permission ‘…’ were found”? SOLVED

Have you received the error “No unlocked logins with permission ‘…’ were found” when attempting to start ASE?

00:0002:00000:00001:2014/03/14 08:51:04.87 server  Database 'master' is now online.
server  The transaction log in the database 'master' will use I/O size of 2 Kb.
server  No unlocked logins with permission '200' were found.
kernel  ueshutdown: exiting
kernel  Main thread performing final shutdown.
kernel  Network and device connection limit is 65519.
kernel  Encryption provider initialization succeeded on engine 1.
kernel  Thread 3 (LWP 41877545) of Threadpool syb_default_pool online as engine 1
kernel  Blocking call queue shutdown.
08:51:05.03 kernel  SySAM: Checked in license for 2 ASE_CORE

This message is the result of specifying “-n” in the RUN_server file for ASE.

-n system_privilege, –permission-logins – specifies the system privilege name so that a list of login accounts with this system privilege is printed into log file. – From Logging in to a locked-out Adaptive Server

When “-n” is specified, it acts very similarly to “-p” in the sense that ASE will make the change and shut down. In this case, it prints the logins that have the system privilege “200” and then shuts down.


SAP Sybase IQ – dbcc for IQ? Yes! Just use sp_iqcheckdb

If you’re coming from Microsoft SQL Server or Sybase ASE, SAP’s IQ doesn’t have the slew of dbcc commands you might expect. It rolls all the pertinent dbccs up into a single stored procedure. sp_iqcheckdb has four operational modes, with granularity at the database, table, or index level:

  1. In check mode, sp_iqcheckdb performs an internal consistency check on all IQ indexes and checks that each database block has been allocated correctly. All available database statistics are reported. This mode reads all data pages and can detect all types of allocation problems and most types of index inconsistencies. Check mode should run considerably faster than verify mode for most databases.
  2. In verify mode, sp_iqcheckdb performs an intra-index consistency check, in addition to internal index consistency and allocation checking. All available database statistics are reported. The contents of each non-FP index is verified against its corresponding FP index(es). Verify mode reads all data pages and can detect all types of allocation problems and all types of index inconsistencies.
  3. In allocation mode, sp_iqcheckdb checks that each database block is allocated correctly according to the internal physical page mapping structures (blockmaps). Database statistics pertaining to allocation are also reported. This mode executes very quickly. Allocation mode, however, does not check index consistency and cannot detect all types of allocation problems.
  4. When the Sybase IQ server runs in single-node mode, you can use dropleaks mode with either a database or dbspace target to reset the allocation map for the entire database or specified dbspace targets. If the target is a dbspace, then the dropleaks operation must also prevent read-write operations on the named dbspace. All dbspaces in the database or dbspace list must be online.
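Each mode takes a target; a few illustrative invocations (the object names are placeholders):

```sql
sp_iqcheckdb 'check database';
sp_iqcheckdb 'verify table mytable';
sp_iqcheckdb 'allocation database';
-- dropleaks requires single-node / single-user mode:
sp_iqcheckdb 'dropleaks database';
```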

No changes are actually performed, with the exception of dropleaks mode. If check or verify detects an allocation issue, you will need to run sp_iqcheckdb in dropleaks mode while the database is in single-user mode. For any issues with indexes, use sp_iqrebuildindex on the index and possibly the affected column of the table.

By default, the output of sp_iqcheckdb goes to the (instance_name).iqmsg log file, but you can redirect it to its own file for easier reading:

set temporary option dbcc_log_progress='on';
sp_iqcheckdb 'verify database' ># filename

Please note that the file name may not contain spaces or special symbols, and the path is local to the IQ box. So, unless you want to hunt for your checkdb output file, specify the full path. No, you cannot put the filename in any type of quotes, and you can’t escape interesting characters.

Unless you’re under a serious time constraint, always run sp_iqcheckdb in verify mode to determine structural issues with your IQ databases.


SAP Sybase ASE 15.7: Unquiesce / quiesce not working

Typically there is a set of systems that are flashed from one system to another. The basic operation is: quiesce the databases in the source Sybase ASE instance, make a copy of the disk volume groups at the SAN level, move that volume group copy to the destination system, vary on the devices, start Sybase ASE, and unquiesce the databases.

A set of devices were added to the source system, but the volume group definition on the destination system wasn’t updated, so a number of (raw) devices weren’t available to the destination Sybase ASE instance. Sybase ASE did complain that the devices weren’t available, but the automated unquiesce job called quiesce database .. release anyway. So it should have reported an error and done nothing further… right? Nope, the quiesce command removed the quiesce tag that is used to release the quiesce.

The destination volume group was refreshed and the devices were made available.

sp_helpdb showed:

1> sp_helpdb
2> go
 name           db_size       owner dbid  created      durability  lobcomplvl inrowlen status                                                               
 -------------- ------------- ----- ----- ------------ ----------- ---------- -------- -------
....
 goober_db          456.0 MB sa       13 Apr 30, 2012 full              NULL     NULL offline, quiesce database
....

Can we online it? Nope:

1>online database goober_db
2> go
Msg 921, Level 14, State 1:
Server 'super_duper_db_of_doom', Line 1:
Database 'goober_db' has not been recovered yet - please wait and try again.

Let’s try to find out what the quiesce tag is. We can check in monOpenDatabases or in dbcc resource. monOpenDatabases is a whole lot easier to read:

1> select DBID, QuiesceTag from master..monOpenDatabases where DBID = 13
2> go
 DBID        QuiesceTag
 ----------- ------------------------------
          13 NULL

No luck there, so let’s see what dbcc resource says the status is. Lots and lots of output, but it shows the database is recovering under spid 18:

1>dbcc traceon(3604)
2>go
1>dbcc resource
2>go
....
dbid = 13
Parallel thread spid: 18
Status: 0x30 ((0x00000020 (REC_ITEM_ONL_IMMEDIATELY), 0x00000010 (REC_ITEM_RECOVERING)))
....

I didn’t see spid 18 in sysprocesses, but that isn’t a guarantee that it isn’t in some state of release. Let’s kick out a stacktrace for this spid just in case; I would have been surprised if there was one:

1> dbcc stacktrace(18)
2> go
Msg 3659, Level 16, State 3:
Server 'super_duper_db_of_doom', Line 1:
The spid 18 does not exist.

At this point, I changed the status and status2 columns of master..sysdatabases to 0 and restarted the instance. At worst, it should have put the database into suspect mode. Well, the database came back still not recovered / recovering with quiesce mode.
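The status reset mentioned above looked roughly like this (a sketch of the manual sysdatabases edit; dbid 13 is goober_db):

```sql
exec sp_configure 'allow updates', 1
go
update master..sysdatabases set status = 0, status2 = 0 where name = 'goober_db'
go
exec sp_configure 'allow updates', 0
go
-- then restart the instance
```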

Thinking I might have to reflash the set of devices, I rebooted the database, hoping it would rescan the dbtable page (or is it dbinfo? After nearly twenty years I still get the two names mixed up):

1>dbcc dbreboot(reboot, goober_db)
2>go

---------- Shutting Down Database 'goober_db' ----------
---------- Re-starting Database 'goober_db' With Recovery ----------
Recovering database 'goober_db'.
Started estimating recovery log boundaries for database 'goober_db'.
Database 'goober_db', checkpoint=(2154102095, 15), first=(2154102095, 15), last=(2154102095, 15).
Completed estimating recovery log boundaries for database 'goober_db'.
Started ANALYSIS pass for database 'goober_db'.
Completed ANALYSIS pass for database 'goober_db'.
Started REDO pass for database 'goober_db'. The total number of log records to process is 1.
Completed REDO pass for database 'goober_db'.
Recovery of database 'goober_db' will undo incomplete nested top actions.
Started filling free space info for database 'goober_db'.
Completed filling free space info for database 'goober_db'.
Started cleaning up the default data cache for database 'goober_db'.
Completed cleaning up the default data cache for database 'goober_db'.
Recovery complete.
Database 'goober_db' is now online.
---------- Operation on Database 'goober_db' Completed Successfully --------- 

What??!? It’s online?

1> sp_helpdb
2> go
 name           db_size       owner dbid  created      durability  lobcomplvl inrowlen status                                                               
 -------------- ------------- ----- ----- ------------ ----------- ---------- -------- -------------------------------------------------------------------------------------------
....
 goober_db          456.0 MB sa       13 Apr 30, 2012 full              NULL     NULL no options set                    
....

Obviously dbcc dbreboot is doing something different in preparation for recovery than restarting the instance does. My guess is that the dbtable is examined and updated/refreshed by dbcc dbreboot where it is not for an instance reboot. I’ve opened a ticket with SAP about this and will update this post when some sort of answer is available.


Sybase ASE: Adding log to a completely log full database – errors 1105 and 3475 “There is no space available in SYSLOGS” – SOLVED

When a SAP Sybase ASE database’s log is completely full, you won’t be able to add any log space to it. Attempting to add to the log produces a 3475 error:
 

00:0006:00000:00001:2014/01/08 09:03:09.09 server  ERROR: Can't get a new log page in db 4. num_left=17 num_in_plc=17.
00:0006:00000:00001:2014/01/08 09:03:09.09 server  Error: 1105, Severity: 17, State: 7
00:0006:00000:00001:2014/01/08 09:03:09.09 server  Can't allocate space for object 'syslogs' in database 'mydb' because 'logsegment' segment is full/has no free extents. If you ran out of space in syslogs, dump the transaction log. Otherwise, use ALTER DATABASE to increase the size of the segment.
00:0006:00000:00001:2014/01/08 09:03:09.09 server  Error: 3475, Severity: 21, State: 7
00:0006:00000:00001:2014/01/08 09:03:09.09 server  There is no space available in SYSLOGS to log a record for which space has been reserved in database 'mydb' (id 4). This process will retry at intervals of one minute.

 
So what to do? If you separate your data and log segments, you will need to temporarily add the log segment to a data device so the database can recover. Once it recovers, we can add space to the log and remove the log segment from the data device. For good measure, we run dbccs to correct any allocation issues that may be contributing to the out-of-log-space condition.
 
Add the log segment to a data device (use sp_helpdb dbname to determine which data device has space):

exec sp_configure "allow updates", 1
go
update sysusages set segmap = 7 where dbid = 4 and lstart = 1492992
go
checkpoint
go
shutdown with nowait
go
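For context on the segmap = 7 above: segmap is a bitmask of the segments mapped to that device fragment (1 = system, 2 = default, 4 = logsegment), so 7 maps all three, temporarily putting the log segment on the data device. You can check the current mapping first:

```sql
select dbid, lstart, segmap from master..sysusages where dbid = db_id('mydb')
go
```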

Add space to the log:

alter database mydb log on mydevicel001 = 500
go

 
Before we do anything else, let’s run dbccs. Of course, you will want to run them without the fix options first to identify any other issues before running with the fixes:

exec kill_user_connections mydb
exec kill_user_connections mydb
exec kill_user_connections mydb
exec kill_user_connections mydb
exec kill_user_connections mydb
exec sp_dboption mydb, 'dbo use', true
exec sp_dboption mydb, 'single user', true
dbcc traceon(3604)
dbcc checkdb(mydb, fix_spacebits)
dbcc checkalloc(mydb, fix)
exec sp_dboption mydb, 'dbo use', false
exec sp_dboption mydb, 'single user', false
go
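The preliminary no-fix pass mentioned above could be as simple as the following (run with the database in single-user mode, as in the block above; checkalloc defaults to nofix):

```sql
dbcc traceon(3604)
dbcc checkdb(mydb)
dbcc checkalloc(mydb)
go
```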

If there are no lingering issues, we can remove the log segment from the data device:

exec sp_dboption mydb, 'single user', true
go
use mydb
go
exec sp_dropsegment logsegment, mydb, mydeviced005
go
use master
go
exec sp_dboption mydb, 'single user', false
go

SAP is fixing Bug CR 756957 in ASE 15.7 SP110 that may be the root cause of the 3475 error:

In certain circumstances, databases, including system databases, can incorrectly get into LOG SUSPEND mode, issuing message: “Space available in the log segment has fallen critically low in database ‘ < dbname > ‘. All future modifications to this database will be suspended until the log is successfully dumped and space becomes available.” This may happen even though there is much unreserved space in the database. The problem may also manifest in 3475 errors: “There is no space available in SYSLOGS to log a record for which space has been reserved in database < dbname > .”


SAP Sybase ASE MDA Tables: Why is the OwnerName and OwnerID omitted in most of the MDA tables?

If you look at the MDA poster, you will find the object names, the database names and the like, but not the owner_id or owner_name. Let’s pick on monOpenObjectActivity, one of the most heavily used MDA tables.

Having monOpenObjectActivity is great, but what if I have two tables with the same name in the same database but different owners? Such a scenario is not uncommon in a development system when having a separate database for each developer is not practical.

A row in monOpenObjectActivity could be mydb.john.table_a or mydb.jane.table_a. While I do have the object id, there is no owner_name(DBID, OBJID) function, so I need to go into each individual database and query sysobjects. What a hassle! ASE already knows, or should know, which owner id the object belongs to, so why do I have to build a dynamic query for each and every row in monOpenObjectActivity? Sure, I could create a lookup table, but it wouldn’t handle temporary objects (e.g. #table) very well.
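Lacking such a function, the per-database lookup today looks something like this — the database name and object id are purely illustrative placeholders:

```sql
-- hypothetical example: resolve the owner of object id 987654321 in mydb
select o.name as object_name, u.name as owner_name
  from mydb..sysobjects o join mydb..sysusers u on o.uid = u.uid
 where o.id = 987654321
go
```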

The following tables have Owner something in the column names:

1> select convert(varchar(30), object_name(id)) as "Object Name", convert(varchar(30), name) as "Column Name" from syscolumns where lower(name) like "%owner%"
2> go
Object Name                    Column Name
------------------------------ ------------------------------
monProcessObject               OwnerUserID
monCachedProcedures            OwnerUID
monCachedProcedures            OwnerName
monProcessProcedures           OwnerUID
monSpinlockActivity            OwnerPID
monSpinlockActivity            LastOwnerPID
monMemoryUsage                 PoolOwnerKPID
monProcessProcedures           OwnerName
monCachedObject                OwnerUserID
monCachedObject                OwnerName
sp_namecrack                   @owner

I think we need two functions: owner_name(DBID, OBJID) and owner_id(DBID, OBJID)

We also need the OwnerID, and preferably the OwnerName, added to the appropriate MDA tables.

What do you think?
