SAP Sybase ASE 16.0 major features

Sybase… er, SAP will be releasing Adaptive Server Enterprise 16 within the next few months (currently expected in Q2 2014). SAP has made the ASE 16.0 manuals available.

Kevin Sherlock sums up the major new features quite well:

  • create or replace functionality (see the sketch just after this list)
  • multiple triggers
  • monitoring threshold based events
  • configuration tracking history
  • partition level locking
  • log space usage tracking
  • CIS to HANA
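
To make the first item concrete: create or replace lets you redefine a stored object in place without dropping it first, so existing permissions on the object survive. A minimal sketch, with a made-up procedure and table (see the ASE 16 reference manual for the full syntax):

create or replace procedure get_order_count
    @cust_id int
as
    select count(*) from orders where customer_id = @cust_id
go

-- running create or replace again with a new body swaps in the new definition
-- while preserving the grants already made on get_order_count
create or replace procedure get_order_count
    @cust_id int
as
    select count(*) from orders
    where customer_id = @cust_id and status = 'open'
go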

While at first glance the list of major features may seem a bit thin to justify a major release, Jeff Tallman of SAP explains what really changed:

Hidden under the covers of ASE 16 is a lot of rewrites to spinlock areas of code – so, while you are seeing what looks to be a scattering of features, the main work was done in scaling and eliminating contention – both on high core counts as well as lower core counts – the latter especially with proc cache and ELC configuration – as well as log semaphore contention and eliminating the problem of buffer unpinning. Some of these changes required linking in machine code – which means only supporting current releases of certain platforms/OS’s – which by itself often dictates a new platform number. However, there are a number of new features – if you read the NFG, you can see a laundry list – one of which may or may not be mentioned there is HADR mode – which more tightly integrates ASE & SRS. Not only is there a synchronous RepAgent (requires a new SRS SP to be launched later), and the standby ASE is read-only for normal users (ASE actually detects that it is the standby – and unless you are a privileged user such as the RS maint user or sa_role, writes are blocked), but ASE also now supports client failover from primary to standby ASE without OpenSwitch. In the short term this is available for Business Suite; custom apps perhaps later this year.

However, with regard to Full Database Encryption… from a data security standpoint, you can think of it as filling a gap between column level encryption and standard MAC/DAC controls – especially with predicated permissions in the mix. Remember, in column level encryption, we decrypted data at the materialization stage (and encrypted it at normalization), which meant that the data was encrypted both in memory as well as on disk. This was important because, when you have database users with different access requirements – and especially if you want to keep DBAs from seeing the data – you need to encrypt the data in memory as well as on disk – and with different users/different requirements, you also need to be able to encrypt different columns with different keys. As a result of encryption, some common database performance techniques – such as leaf/range scans on encrypted cols – were penalized, as the index was sorted by the encrypted value (otherwise, it would be a security hole) – and no real secure encryption techniques exist that would preserve the lexicographical sequence. As a result, often a different index was used for the query, or if that index was selected, it was a full leaf scan followed by decryption & sorting – quite a bit of overhead compared to the unencrypted leaf scan. Of course, Encrypted Columns took a bit of effort to set up, as someone had to go through and identify every column of sensitive data, determine which Column Encryption Key to use and who should have access – some planning.

Encrypted Columns = data at rest and in memory fully encrypted – and only select designated users could see the data – others saw a default literal value.

Full Database Encryption is intended to solve the problem of ensuring the data at rest is encrypted, but sort of assumes that all legitimate users of the database have the same access rights to the data. Since all users have the same access rights, there is no need to encrypt in memory, use different keys for different columns, etc. As a result, the encryption happens just prior to being written to disk – or just after being read from disk – and on a page basis vs. an individual column basis. As a result, index key values, etc. are in their normal sorted order – meaning there is no penalty for leaf scans/range scans any more. Yes, the PIOs may take slightly longer, but I would be willing to wager we could encrypt the data far faster than traditional disk-based storage can either write it to disk or read it from disk. The time aspect may be very, very slightly noticeable on large physical-read-driven queries. Of course, encryption does use CPU – that might be more noticeable – depending on how much physical IO you are doing. However, since most apps operate on 95%+ cache hit rates, it might not be that noticeable. Remember as well, for write-intensive apps, it is often not your SPID doing the writes – it is the HK wash, checkpoint, someone else pushing your page through the wash marker, etc. Keep in mind that one of the drivers for this was SAP ERP applications – where performance is extremely critical due to the way the applications tend to operate (a lot of client-side joins to avoid temp tables due to vendor incompatibilities with respect to tempdb). As a result, performance was a key consideration. Level of effort for implementation is minimal – set up your keys and encrypt the database. Voila!

Full Database Encryption = data at rest fully encrypted – all legitimate users have access.

Hopefully, this not only addresses the speed question, but also the differences. — Jeff Tallman in response to ASE 16: When and what major features?
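
To make the two models concrete, here is a rough sketch of the DDL on each side. The key, table, column and database names are invented, and the exact clauses should be checked against the ASE 16 encryption guides before use:

-- Encrypted Columns: per-column keys, data encrypted in memory and on disk
create encryption key ssn_key for AES
go
alter table customer modify ssn encrypt with ssn_key
go
grant decrypt on customer(ssn) to hr_role
go

-- Full Database Encryption (new in ASE 16): one key for the whole database,
-- pages are encrypted only on their way to and from disk
create encryption key db_key for database encryption
go
alter database salesdb encrypt with db_key
go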

SAP has overhauled ASE, bringing it up to modern standards of performance and scalability. It’s far too early to tell whether the rebuilt engine will live up to our expectations.


IBM Tivoli Storage Manager (TSM) – How to determine management class and retention using dsmc

If your company uses IBM’s Tivoli Storage Manager, you may one day be asked to determine what the management class or retention is for the backups you’re performing. Unless you are using the Windows GUI front end, you may be at a loss as to how to obtain this information. Never fear: as long as you have access to the dsmc command line, this is easy to get.

tsm> query mgmtclass -detail

In the output, we can see that the default management class being used is “DFLT_MC”. Unless your backup job explicitly sets a management class, you will use “DFLT_MC”. The retention is 45 days.

Domain Name               : EXAMPLE
Activated Policy Set Name : EXAMPLE_POL
Activation date/time      : 01/19/2008 15:22:23
Default Mgmt Class Name   : DFLT_MC
Grace Period Backup Retn. : 45 day(s)
Grace Period Archive Retn.: 45 day(s)

MgmtClass Name                  : 90DAY_STOR
Description                     :
Space Management Technique      : None
Auto Migrate on Non-Usage       : 0
Backup Required Before Migration: YES
Destination for Migrated Files  : TAPEPOOL
Copy Group
        Copy Group Name........: STANDARD
        Copy Type..............: Archive
        Copy Frequency.........: Cmd
        Retain Version.........: 90 day(s)
        Copy Serialization.....: Shared Static
        Copy Mode..............: Absolute
        Retain Initiation......: Create
        Retain Minimum.........: 65534 day(s)
        Copy Destination.......: TAPEPOOL
        Lan Free Destination...: NO
        Deduplicate Data.......: NO

MgmtClass Name                  : DFLT_MC
Description                     :
Space Management Technique      : None
Auto Migrate on Non-Usage       : 0
Backup Required Before Migration: YES
Destination for Migrated Files  : BACKUPPOOL
Copy Group
        Copy Group Name........: STANDARD
        Copy Type..............: Backup
        Copy Frequency.........: 0 day(s)
        Versions Data Exists...: 6 version(s)
        Versions Data Deleted..: 45 version(s)
        Retain Extra Versions..: 45 day(s)
        Retain Only Version....: 70 day(s)
        Copy Serialization.....: Shared Static
        Copy Mode..............: Modified
        Copy Destination.......: BACKUPPOOL
        Lan Free Destination...: NO
        Deduplicate Data.......: NO

        Copy Group Name........: STANDARD
        Copy Type..............: Archive
        Copy Frequency.........: Cmd
        Retain Version.........: 45 day(s)
        Copy Serialization.....: Shared Static
        Copy Mode..............: Absolute
        Retain Initiation......: Create
        Retain Minimum.........: 65534 day(s)
        Copy Destination.......: TAPEPOOL
        Lan Free Destination...: NO
        Deduplicate Data.......: NO
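
One more tip: if your backup jobs rely on include/exclude rules to bind files to a management class other than the default, dsmc can show you the rules the client is actually using (and where they come from), which is where any non-default binding would originate:

tsm> query inclexcl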

FW: A Faster Perl Runtime in Tiny Steps by Steffen Müller

Booking.com uses the Perl programming language heavily across its entire technical infrastructure. At the size of our infrastructure, even small performance improvements can translate into hefty savings, not to mention the exciting new features that we will implement with those spare CPU cycles. For this reason, we are very happy to announce that we are now funding Dave Mitchell, prominently known for his many years of high-quality contributions to the Perl language implementation, to improve the Perl runtime’s performance one small step at a time.

Read more at A Faster Perl Runtime in Tiny Steps by Steffen Müller


FW: ASE 15.7: Create a Remote Server to SAP IQ 16.0

ASE 15.7: Create a Remote Server to SAP IQ 16.0
March 12, 2014

Create the remote server entry in the interfaces file: sql.ini (Windows) or interfaces (UNIX).

[myIQServer]
master=TCP,myIQHost,4091
query=TCP,myIQHost,4091
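
With the interfaces entry in place, the ASE-side definition (covered in the linked article) generally boils down to something like the following. The server name, logins and password here are placeholders:

-- server class ASIQ tells CIS the remote server is a Sybase/SAP IQ instance
sp_addserver myIQServer, ASIQ, myIQServer
go
-- map the local ASE login to a login on the IQ server
sp_addexternlogin myIQServer, ase_login, iq_login, iq_password
go
-- quick sanity check of the new entry
sp_helpserver myIQServer
go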

Read more at ARBIME


SAP Sybase ASE: Unable to start ASE when getting “No unlocked logins with permission ‘…’ were found”? SOLVED

Have you received the error “No unlocked logins with permission ‘…’ were found” when attempting to start ASE?

00:0002:00000:00001:2014/03/14 08:51:04.87 server  Database 'master' is now online.
server  The transaction log in the database 'master' will use I/O size of 2 Kb.
server  No unlocked logins with permission '200' were found.
kernel  ueshutdown: exiting
kernel  Main thread performing final shutdown.
kernel  Network and device connection limit is 65519.
kernel  Encryption provider initialization succeeded on engine 1.
kernel  Thread 3 (LWP 41877545) of Threadpool syb_default_pool online as engine 1
kernel  Blocking call queue shutdown.
08:51:05.03 kernel  SySAM: Checked in license for 2 ASE_CORE

This message is the result of specifying the “-n” option in the RUN server file for ASE.

-n system_privilege, –permission-logins – specifies the system privilege name so that a list of login accounts with this system privilege is printed into the log file. – From Logging in to a locked-out Adaptive Server

When “-n” is specified, it acts much like “-p” in the sense that ASE performs the requested action and then shuts down. In this case, it prints the logins that have the system privilege ‘200’ and then shuts down.
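
In practical terms: once you have the list of logins you needed, edit the RUN_<servername> file, remove the “-n” argument and restart ASE so it boots normally. A hypothetical RUN file entry (the paths, server name and privilege/role argument are all placeholders):

/opt/sybase/ASE-15_0/bin/dataserver \
    -d/opt/sybase/data/master.dat \
    -e/opt/sybase/ASE-15_0/install/MYASE.log \
    -sMYASE \
    -n sa_role

Delete the final “-n” line (and the trailing backslash on the line above it) to return to a normal boot.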


Preliminary REVIEW: SAP’s HANA – In memory database with local/remote data stores

Recently I attended a quite well done presentation on SAP’s HANA data storage product and scoured the various websites mentioning HANA. I’m not going to go over that here, as all of that information is available on SAP’s website.

A friend got me access to it last night, so this preliminary review is based on only a few hours of hands-on playing with it. Perhaps if time allows I’ll perform an in-depth analysis of it.

HANA is a memory-resident hybrid OLTP/OLAP/DSS system designed to quickly load data into memory from a myriad of sources.

You’re currently limited to 2TB of memory for the data AND any memory required to run HANA, including sorting tables. If you’re using SAP business applications, this limit is 4TB. The systems have to be certified by SAP, and the software will be installed by the hardware vendor unless you’re certified. There are ways to determine exactly how much memory you’re using, but like Sybase ASE’s MDA tables, getting actual utilization requires a lot of patience and hair pulling.
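
If you do want a rough number, the monitoring views are the place to start. A minimal sketch, assuming the standard M_SERVICE_MEMORY view (the rounding and column aliases are my own):

-- approximate memory used per HANA service, in GB
select host,
       service_name,
       round(total_memory_used_size / 1024 / 1024 / 1024, 2) as used_gb,
       round(effective_allocation_limit / 1024 / 1024 / 1024, 2) as limit_gb
from m_service_memory
order by used_gb desc;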

The platform is RedHat Linux on Intel hardware from HANA certified hardware vendors (currently nine vendors).

What happens if the box goes down? There is local storage (or SAN-type storage if you prefer) that the memory-resident database loads from initially and writes to periodically. When I killed HANA, it took a few minutes to recover, but most of this was reading from the local storage directly into memory. At this point, I’m not certain to what extent, if any, HANA checks the data on load. It seems to perform a checksum on the log records during recovery, but I wasn’t able to verify that in the time I had available.

SAP’s marketing literature says that HANA specifically targets CPU caches for high performance on certain types of operations, whereas other DBMS systems do not. Again, I didn’t have time to determine how HANA implements this. There are a couple of methods that could be used to accomplish it:

  1. Run part of a process in kernel space (think kernel module), but this can be quite risky, not only from a stability standpoint but also security-wise. In the early days of Linux this wasn’t uncommon; nowadays it is quite rare.
  2. Write parts of your application (the really time-sensitive parts) in assembly language, utilizing the capability to control, to some extent, what goes where on the CPU. You’ll often find this with multimedia or CPU-intensive applications.

SAP HANA isn’t unique in any of these aspects, as all of them had been around for a number of years before SAP developed HANA, but it is unique in implementing all of them. Is it worth the cost of the hardware and software licensing? It depends.

If you care about performance and are willing to pay for it, HANA may be a very good fit for your company.


SAP Sybase IQ – dbcc for IQ? Yes! Just use sp_iqcheckdb

If you’re coming from Microsoft SQL Server or Sybase ASE, SAP’s IQ doesn’t have the slew of dbcc commands you might expect. It rolls all the pertinent dbcc checks into a single stored procedure: sp_iqcheckdb, which has four operational modes with granularity at the database, table or index level:

  1. In check mode, sp_iqcheckdb performs an internal consistency check on all IQ indexes and checks that each database block has been allocated correctly. All available database statistics are reported. This mode reads all data pages and can detect all types of allocation problems and most types of index inconsistencies. Check mode should run considerably faster than verify mode for most databases.
  2. In verify mode, sp_iqcheckdb performs an intra-index consistency check, in addition to internal index consistency and allocation checking. All available database statistics are reported. The contents of each non-FP index is verified against its corresponding FP index(es). Verify mode reads all data pages and can detect all types of allocation problems and all types of index inconsistencies.
  3. In allocation mode, sp_iqcheckdb checks that each database block is allocated correctly according to the internal physical page mapping structures (blockmaps). Database statistics pertaining to allocation are also reported. This mode executes very quickly. Allocation mode, however, does not check index consistency and cannot detect all types of allocation problems.
  4. When the Sybase IQ server runs in single-node mode, you can use dropleaks mode with either a database or dbspace target to reset the allocation map for the entire database or specified dbspace targets. If the target is a dbspace, then the dropleaks operation must also prevent read-write operations on the named dbspace. All dbspaces in the database or dbspace list must be online.

No changes are actually performed, with the exception of the dropleaks mode. If check or verify detects an issue with allocation, you will need to run sp_iqcheckdb in dropleaks mode while the database is in single-user mode. For any issues with indexes, use sp_iqrebuildindex on the index and possibly the affected column of the table.
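
Every mode uses the same calling pattern: a single string of the form ‘mode target’. A few examples, with a placeholder table name:

sp_iqcheckdb 'check database';
sp_iqcheckdb 'verify table sales_fact';
sp_iqcheckdb 'allocation database';
-- single-node, single-user mode only:
sp_iqcheckdb 'dropleaks database';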

By default, the output of sp_iqcheckdb goes to the (instance_name).iqmsg log file, but you can redirect it to its own file for easier reading:

set temporary option dbcc_log_progress='on';
sp_iqcheckdb 'verify database' ># filename

Please note that the file name may not contain spaces or special symbols, and the path will be local to the IQ box, so unless you want to hunt for your checkdb output file, specify the full path. No, you cannot put the filename in any type of quotes, and you can’t escape interesting characters.

Unless you’re under a serious time constraint, always run sp_iqcheckdb in verify mode to determine structural issues with your IQ databases.


Linux, BTRFS and kernel crashing / hanging when balancing.. SOLVED

If you’re balancing your BTRFS file system and receive a stacktrace in your OS error log (typically /var/log/syslog on Linux) like the one below, then don’t fear! It’s been fixed in the 3.14 Linux kernel! You can either wait until it is included in your distribution or get it from kernel.org.

In the case of Ubuntu and derivatives, you can always download the latest kernel from kernel.ubuntu.com. I’m currently using 3.14.0-031400rc5-generic.
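
Before retrying the balance on the new kernel, a couple of quick commands to confirm where you stand (the mount point is an example):

$ uname -r                              # confirm you are running 3.14 or later
$ sudo btrfs balance status /mnt/data   # check whether a balance is paused or still queued
$ sudo btrfs balance start /mnt/data    # kick the balance off again once the kernel is updated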

kernel: [  451.005400] btrfs: disk space caching is enabled
kernel: [  463.141547] BTRFS debug (device sdl): unlinked 1 orphans
kernel: [  463.170345] btrfs: continuing balance
kernel: [  489.303180] btrfs: relocating block group 3402792697856 flags 129
kernel: [  520.839804] btrfs: found 75 extents
kernel: [  530.404740] ------------[ cut here ]------------
kernel: [  530.404767] Kernel BUG at ffffffffa02e2167 [verbose debug info unavailable]
kernel: [  530.404775] invalid opcode: 0000 [#1] SMP
....
kernel: [  530.404960] task: ffff88032894aee0 ti: ffff88030a1c0000 task.ti: ffff88030a1c0000
kernel: [  530.404966] RIP: 0010:[<ffffffffa02e2167>]  [<ffffffffa02e2167>] build_backref_tree+0x277/0x11d0 [btrfs]
kernel: [  530.404996] RSP: 0018:ffff88030a1c19f8  EFLAGS: 00010246
kernel: [  530.405002] RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffff8802cdb7b8d0
kernel: [  530.405007] RDX: ffff88030a1c1a68 RSI: ffff88030a1c1a68 RDI: ffff8802cdb7b8c0
kernel: [  530.405013] RBP: ffff88030a1c1ad8 R08: ffff88030970cf00 R09: ffff88030970cc80
kernel: [  530.405019] R10: 0000000000000001 R11: 0000000000000000 R12: ffff8802cdb7b8d0
kernel: [  530.405025] R13: ffff88030ec1ccf0 R14: ffff88032a5bf000 R15: ffff88030970cb80
kernel: [  530.405031] FS:  0000000000000000(0000) GS:ffff88033fc60000(0000) knlGS:0000000000000000
kernel: [  530.405038] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
kernel: [  530.405043] CR2: 00007f4244bd9000 CR3: 0000000001c0e000 CR4: 00000000000007e0
kernel: [  530.405049] Stack:
kernel: [  530.405052]  ffff88032a5bf588 ffff88030970cc80 ffff8802cdb7bc60 ffff88030970cf00
kernel: [  530.405064]  ffff88030970cf00 ffff88032a5be000 ffff88030ec1ccf0 ffff88030970cf40
kernel: [  530.405075]  ffff88030ec1cd80 ffff88032a5bf120 ffff88030970cc80 ffff88032a5bf020
kernel: [  530.405086] Call Trace:
kernel: [  530.405107]  [<ffffffffa02e3478>] relocate_tree_blocks+0x1d8/0x630 [btrfs]
kernel: [  530.405128]  [<ffffffffa02e4108>] ? add_data_references+0x248/0x280 [btrfs]
kernel: [  530.405148]  [<ffffffffa02e4d00>] relocate_block_group+0x280/0x690 [btrfs]
kernel: [  530.405169]  [<ffffffffa02e52af>] btrfs_relocate_block_group+0x19f/0x2e0 [btrfs]
kernel: [  530.405190]  [<ffffffffa02bc77f>] btrfs_relocate_chunk.isra.30+0x6f/0x730 [btrfs]
kernel: [  530.405204]  [<ffffffffa0272b89>] ? btrfs_set_path_blocking+0x39/0x80 [btrfs]
kernel: [  530.405219]  [<ffffffffa0277940>] ? btrfs_search_slot+0x380/0x940 [btrfs]
kernel: [  530.407979]  [<ffffffffa02b3289>] ? release_extent_buffer+0x69/0xd0 [btrfs]
kernel: [  530.408001]  [<ffffffffa02b8c8f>] ? free_extent_buffer+0x4f/0xa0 [btrfs]
kernel: [  530.408021]  [<ffffffffa02c0859>] btrfs_shrink_device+0x1e9/0x420 [btrfs]
kernel: [  530.408042]  [<ffffffffa02c0b5c>] __btrfs_balance+0xcc/0x800 [btrfs]
kernel: [  530.408052]  [<ffffffff816e9932>] ? down_read+0x12/0x30
kernel: [  530.408071]  [<ffffffffa0294563>] ? btrfs_calc_num_tolerated_disk_barrier_failures+0x133/0x180 [btrfs]
kernel: [  530.408093]  [<ffffffffa02c164b>] btrfs_balance+0x3bb/0x670 [btrfs]
kernel: [  530.408113]  [<ffffffffa02c1970>] balance_kthread+0x70/0x80 [btrfs]
kernel: [  530.408133]  [<ffffffffa02c1900>] ? btrfs_balance+0x670/0x670 [btrfs]
kernel: [  530.408143]  [<ffffffff810847b0>] kthread+0xc0/0xd0
kernel: [  530.408150]  [<ffffffff810846f0>] ? kthread_create_on_node+0x120/0x120
kernel: [  530.408158]  [<ffffffff816f516c>] ret_from_fork+0x7c/0xb0
kernel: [  530.408165]  [<ffffffff810846f0>] ? kthread_create_on_node+0x120/0x120
kernel: [  530.408170] Code: f6 43 71 10 0f 85 18 0f 00 00 48 8d 43 40 48 89 c6 48 89 85 58 ff ff ff 48 8b 43 40 48 39 c6 74 0c 48 3b 43 48 0f 84 f5 08 00 00 <0f> 0b 48 c7 85 78 ff ff ff 00 00 00 00 0f 1f 40 00 e8 93 87 40
kernel: [  530.409046] RIP  [<ffffffffa02e2167>] build_backref_tree+0x277/0x11d0 [btrfs]
kernel: [  530.409068]  RSP <ffff88030a1c19f8>
kernel: [  530.419453] ---[ end trace 5cd3ef9b0a742c83 ]---
kernel: [  648.985201] perf samples too long (3067 > 2500), lowering kernel.perf_event_max_sample_rate to 50000

IBM DB2 LUW: What are my database configuration parameters?

IBM’s DB2 (Linux, Unix, Windows) is an odd beast. Well, if you’re used to any other DBMS on the planet, DB2 is odd and weird. But then, if you’re used to DB2 LUW, then the rest of the planet is smoking stuff. 😉

DB2’s configuration largely comes in two chunks: the database manager configuration and the database configuration. It might be easiest to think of the database manager as the parent entity that handles connections and the like (e.g. the Oracle listener) and the database configuration as the instance configuration. There is a bit more to it than that, but let’s focus, shall we? Damn it all to hell! I left the Girl Scout cookies on the fridge at home where the kids couldn’t see them!

To pull the database manager configuration, connect to your DB2:

$ db2
db2 => connect to GOOBER

   Database Connection Information

 Database server        = DB2/AIX64 9.7.6
 SQL authorization ID   = DB2
 Local database alias   = GOOBER

db2 => GET DATABASE MANAGER CONFIGURATION

          Database Manager Configuration

     Node type = Database Server with local and remote clients

 Database manager configuration release level            = 0x0d00

 CPU speed (millisec/instruction)             (CPUSPEED) = 2.834065e-07

 Max number of concurrently active databases     (NUMDB) = 5
 Federated Database System Support           (FEDERATED) = YES
....
 No. of int. communication buffers(4KB)(FCM_NUM_BUFFERS) = AUTOMATIC(1024)
 No. of int. communication channels   (FCM_NUM_CHANNELS) = AUTOMATIC(512)
 db2start/db2stop timeout (min)        (START_STOP_TIME) = 10

Pulling the database configuration is almost identical; connect to your DB2:

$ db2
db2 => connect to GOOBER

   Database Connection Information

 Database server        = DB2/AIX64 9.7.6
 SQL authorization ID   = DB2
 Local database alias   = GOOBER

db2 => GET DATABASE CONFIGURATION

       Database Configuration for Database

 Database configuration release level                    = 0x0d00
 Database release level                                  = 0x0d00
....
 SMTP Server                               (SMTP_SERVER) =
 SQL conditional compilation flags         (SQL_CCFLAGS) =
 Section actuals setting               (SECTION_ACTUALS) = NONE
 Connect procedure                        (CONNECT_PROC) =
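
If you only need a parameter or two, you can also query the SYSIBMADM administrative views instead of scrolling through the full listing (these views are available in 9.7; the parameter names below are just examples):

db2 => SELECT NAME, VALUE FROM SYSIBMADM.DBCFG WHERE NAME = 'logfilsiz'
db2 => SELECT NAME, VALUE FROM SYSIBMADM.DBMCFG WHERE NAME = 'numdb'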

SAP Sybase ASE 15.7: Unquiesce / quiesce not working

Typically there is a set of systems that are flashed from one system to another. The basic operation is: quiesce the databases in the source Sybase ASE instance, make a copy of the disk volume groups at the SAN level, move that volume group copy to the destination system, vary on the devices, start Sybase ASE and unquiesce the databases.

A set of devices was added to the source system, but the volume group definition on the destination system wasn’t updated, so a number of (raw) devices weren’t available to the destination Sybase ASE instance. Sybase ASE did complain that the devices weren’t available, but the automated unquiesce job called quiesce database .. release anyway. So it should have reported an error and done nothing further… right? Nope: the quiesce command removed the quiesce tag that is used to release the quiesce.
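
For context, the normal flashing workflow brackets the SAN copy with a hold/release pair along these lines (the tag and database names are made up):

-- on the source ASE, before the SAN-level copy
quiesce database flash_tag hold goober_db for external dump
go
-- ... copy the volume group and present it to the destination system ...
-- the automated job then releases using the same tag
quiesce database flash_tag release
go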

The destination volume group was refreshed and the devices were made available.

sp_helpdb showed:

1> sp_helpdb
2> go
 name           db_size       owner dbid  created      durability  lobcomplvl inrowlen status                                                               
 -------------- ------------- ----- ----- ------------ ----------- ---------- -------- -------
....
 goober_db          456.0 MB sa       13 Apr 30, 2012 full              NULL     NULL offline, quiesce database
....

Can we online it? Nope:

1>online database goober_db
2> go
Msg 921, Level 14, State 1:
Server 'super_duper_db_of_doom', Line 1:
Database 'goober_db' has not been recovered yet - please wait and try again.

Let’s try to find out what the quiesce tag is. We can check in monOpenDatabases or in dbcc resource. monOpenDatabases is a whole lot easier to read:

1> select DBID, QuiesceTag from master..monOpenDatabases where DBID = 13
2> go
 DBID        QuiesceTag
 ----------- ------------------------------
          13 NULL

No luck there, so let’s see what dbcc resource says the status is. There is lots and lots of output, but it says that the database is recovering under spid 18:

1>dbcc traceon(3604)
2>go
1>dbcc resource
2>go
....
dbid = 13
Parallel thread spid: 18
Status: 0x30 ((0x00000020 (REC_ITEM_ONL_IMMEDIATELY), 0x00000010 (REC_ITEM_RECOVERING)))
....

I didn’t see spid 18 in sysprocesses, but that isn’t a guarantee that it isn’t in some state of release. Let’s kick out a stacktrace for this spid just in case. I would have been surprised if there was a stacktrace:

1> dbcc stacktrace(18)
2> go
Msg 3659, Level 16, State 3:
Server 'super_duper_db_of_doom', Line 1:
The spid 18 does not exist.

At this point, I changed the status and status2 columns of master..sysdatabases to 0 and restarted the instance. At worst, it should have put the database into suspect mode. Well, the database came back up in the same not-recovered / recovering-with-quiesce state.

Thinking I might have to reflash the set of devices, I rebooted the database in the hope that it would rescan the dbtable page (or is it dbinfo? After nearly twenty years I still get the two names mixed up):

1>dbcc dbreboot(reboot, goober_db)
2>go

---------- Shutting Down Database 'goober_db' ----------
---------- Re-starting Database 'goober_db' With Recovery ----------
Recovering database 'goober_db'.
Started estimating recovery log boundaries for database 'goober_db'.
Database 'goober_db', checkpoint=(2154102095, 15), first=(2154102095, 15), last=(2154102095, 15).
Completed estimating recovery log boundaries for database 'goober_db'.
Started ANALYSIS pass for database 'goober_db'.
Completed ANALYSIS pass for database 'goober_db'.
Started REDO pass for database 'goober_db'. The total number of log records to process is 1.
Completed REDO pass for database 'goober_db'.
Recovery of database 'goober_db' will undo incomplete nested top actions.
Started filling free space info for database 'goober_db'.
Completed filling free space info for database 'goober_db'.
Started cleaning up the default data cache for database 'goober_db'.
Completed cleaning up the default data cache for database 'goober_db'.
Recovery complete.
Database 'goober_db' is now online.
---------- Operation on Database 'goober_db' Completed Successfully --------- 

What??!? It’s online?

1> sp_helpdb
2> go
 name           db_size       owner dbid  created      durability  lobcomplvl inrowlen status                                                               
 -------------- ------------- ----- ----- ------------ ----------- ---------- -------- -------------------------------------------------------------------------------------------
....
 goober_db          456.0 MB sa       13 Apr 30, 2012 full              NULL     NULL no options set                    
....

Obviously dbcc dbreboot does something different in preparation for recovery than restarting the instance does. My guess is that the dbtable is examined and updated/refreshed by the dbreboot dbcc where it is not on an instance restart. I’ve opened a ticket with SAP about this and will update this post when some sort of answer is available.
