SOLVED: SSH and Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password)

I ran across the error “Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).” while ssh’ing to another server today:

$ ssh myhost
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).

Usually this means that the permissions of ~/.ssh, ~/.ssh/authorized_keys, or your home directory on the other box aren’t set up correctly. The permissions should look like this:

drwx------. /home/jason
drwx------. /home/jason/.ssh
-rw-------. /home/jason/.ssh/authorized_keys

You can fix this with:

$ chmod 0700 ~
$ chmod 0700 ~/.ssh
$ chmod 0600 ~/.ssh/authorized_keys
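
To double-check all three in one shot (sshd’s StrictModes option, which defaults to yes, is what enforces these permissions on the server side):

$ ls -ld ~ ~/.ssh ~/.ssh/authorized_keys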

In my case, the permissions were correct, so I ran the ssh command with extra verbosity (-v -v):

$ ssh -v -v myhost
OpenSSH_5.3p1, OpenSSL 1.0.1e-fips 11 Feb 2013
debug1: Reading configuration data /home/jason/.ssh/config
debug1: Applying options for *
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug1: /etc/ssh/ssh_config line 62: Deprecated option "RhostsAuthentication"
debug2: ssh_connect: needpriv 0
debug1: Connecting to myhost [192.168.12.6] port 22.
debug1: Connection established.
debug2: key_type_from_name: unknown key type '-----BEGIN'
debug2: key_type_from_name: unknown key type '-----END'
debug1: identity file /home/jason/.ssh/id_rsa type 1
debug1: identity file /home/jason/.ssh/id_rsa-cert type -1
debug2: key_type_from_name: unknown key type '-----BEGIN'
debug2: key_type_from_name: unknown key type '-----END'
debug1: identity file /home/jason/.ssh/id_dsa type 2
debug1: identity file /home/jason/.ssh/id_dsa-cert type -1
debug1: identity file /home/jason/.ssh/id_ecdsa type -1
debug1: identity file /home/jason/.ssh/id_ecdsa-cert type -1
debug1: Remote protocol version 2.0, remote software version OpenSSH_6.0
debug1: match: OpenSSH_6.0 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_5.3
debug2: fd 3 setting O_NONBLOCK
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
debug2: kex_parse_kexinit: ssh-rsa-cert-v01@openssh.com,ssh-dss-cert-v01@openssh.com,ssh-rsa-cert-v00@openssh.com,ssh-dss-cert-v00@openssh.com,ssh-rsa,ssh-dss
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se
debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: none,zlib@openssh.com,zlib
debug2: kex_parse_kexinit: none,zlib@openssh.com,zlib
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit: first_kex_follows 0
debug2: kex_parse_kexinit: reserved 0
debug2: kex_parse_kexinit: ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
debug2: kex_parse_kexinit: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se
debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: none,zlib@openssh.com
debug2: kex_parse_kexinit: none,zlib@openssh.com
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit: first_kex_follows 0
debug2: kex_parse_kexinit: reserved 0
debug2: mac_setup: found hmac-md5
debug1: kex: server->client aes128-ctr hmac-md5 none
debug2: mac_setup: found hmac-md5
debug1: kex: client->server aes128-ctr hmac-md5 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug2: dh_gen_key: priv key bits set: 154/256
debug2: bits set: 520/1024
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
Warning: Permanently added 'myhost,192.168.1.66' (RSA) to the list of known hosts.
debug2: bits set: 525/1024
debug1: ssh_rsa_verify: signature correct
debug2: kex_derive_keys
debug2: set_newkeys: mode 1
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug2: set_newkeys: mode 0
debug1: SSH2_MSG_NEWKEYS received
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug2: service_accept: ssh-userauth
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug2: key: /home/jason/.ssh/id_rsa (0x7ff594d8ecb0)
debug2: key: /home/jason/.ssh/id_dsa (0x7ff594d90550)
debug2: key: /home/jason/.ssh/id_ecdsa ((nil))
debug1: Authentications that can continue: publickey,password,keyboard-interactive
debug1: No more authentication methods to try.
Permission denied (publickey,password,keyboard-interactive).

I didn’t see an obvious reason why I wasn’t getting a password prompt, but I did see it reading my ssh_config file. A quick override of the ssh_config showed me that my ~/.ssh/config was the culprit:

$ ssh -F /dev/null myhost
jason@myhost's password:

So what is in my ~/.ssh/config file?

ServerAliveInterval 240
BatchMode yes
TCPKeepAlive = yes

Neither ServerAliveInterval nor TCPKeepAlive has anything to do with authentication, but BatchMode does. From the ssh_config man page:

BatchMode

The argument must be yes or no. If set to yes, passphrase/password querying will be disabled. This option is useful in scripts and other batch jobs where you have no user to supply the password.
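
With BatchMode yes in effect, ssh never prompts for a password, so a failed public-key authentication turns straight into “Permission denied.” If you want to keep BatchMode in your config for scripts, you can also override it for a single session rather than editing the file:

$ ssh -o BatchMode=no myhost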

So, if my public ssh key is not in the remote ~/.ssh/authorized_keys, the connection fails with a permission denied instead of falling back to a password prompt. Let’s verify by removing BatchMode from the ~/.ssh/config file:

ServerAliveInterval 240
TCPKeepAlive = yes
$ ssh myhost
jason@myhost's password:

Success 🙂


OpenVPN & NetworkManager: selecting a random VPN target each time you start the VPN (UNIX/Linux) SOLVED

I sometimes perform some IT work for a nonprofit organization. They use OpenVPN for their network, but since they reside in different locations, they have multiple OpenVPN servers set up rather than a single point of entry. The problem I’ve noticed is that at any given time one or another will be slower. While I don’t have a mechanism to identify which is fastest, I can roll the dice and have my VPN start script pick a random server instead of me having to pick one myself.

#!/bin/bash

# If the network card is unavailable, we're not going to bring up the vpn
REQUIRED_CONNECTION_NAME="enp0s8"

# VPN_LIST is just a simple array
declare -a VPN_LIST

# BASH arrays start with index 0
i=0

# read the vpn list into an array
while read TMP_VPN; do
    VPN_LIST[$i]="$TMP_VPN"
    ((i++))
done < vpns.txt 

# default exit status in case we never attempt a connection
RC=0

# if vpns.txt is NOT empty
if (( i > 0 )); then
    # Choose a random index into VPN_LIST; RANDOM % i yields 0 through i-1
    RANDOM_VPN=$(( RANDOM % i ))

    # We set the VPN_CONNECTION_NAME to the VPN we chose
    VPN_CONNECTION_NAME=${VPN_LIST[$RANDOM_VPN]}

    DEFAULT_CONNECTION=$( nmcli con show --active | grep "${REQUIRED_CONNECTION_NAME}" )
    VPN_CONNECTION=$( nmcli con show --active | grep "${VPN_CONNECTION_NAME}" )

    # Only proceed if the required network is up and the vpn isn't already up
    if [[ -n "${DEFAULT_CONNECTION}" && -z "${VPN_CONNECTION}" ]]; then
        echo -n "Connecting to ${VPN_CONNECTION_NAME} ... "

        # The credentials are stored in my Gnome keyring so I run the nmcli command as jason
        su - jason -c "nmcli con up id \"${VPN_CONNECTION_NAME}\""

        RC=$?

        if (( RC == 0 )); then
            echo "SUCCESS"
        else
            echo "FAILED"
        fi
    else
        echo "network down or VPN already active"
        RC=1
    fi
fi

exit $RC

The file vpns.txt is simply a text file with the names of the VPN connections as they are defined in NetworkManager (see /etc/NetworkManager/system-connections for the list of defined connections), one VPN per line.

vpn-east.example.org
vpn-west.example.org
vpn-europe.example.org
vpn-tokyo.example.org
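
If you’d rather not maintain vpns.txt by hand, you can generate it from NetworkManager itself; a minimal sketch, assuming every connection of type vpn belongs in the rotation:

$ nmcli -t -f NAME,TYPE con show | awk -F: '$2 == "vpn" {print $1}' > vpns.txt

(On systems with GNU coreutils, shuf -n 1 vpns.txt is also a one-line way to pick the random entry.)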

IQ Error: The multiplex server ‘iq_node_3’ is not included in the multiplex – SOLVED

When you run SAP’s IQ Multiplex cluster for a while, you start finding little gotchas that will just drive you to drink. If you don’t drink, you will wish you did. (Just like with any other cluster system.)

In my latest foray into the murky waters of IQ Multiplex (v16), if one of the nodes is offline for a while, the coordinator node will mark the node as excluded so the cluster can carry on. Not really a big deal until you try to bring up the problem node:

I. 02/09 10:31:45. Database server stopped at Tue Feb 09 2016 10:31
DBSPAWN ERROR:  -82
Unable to start specified database: autostarting database failed
Exception Thrown from stcxtlib/st_database.cxx:10050, Err# 21, tid 2 origtid 2
   O/S Err#: 0, ErrID: 5120 (st_databaseException); SQLCode: -1013113, SQLState: 'QNA49', Severity: 14
[22016]: The multiplex server 'iq_node_3' is not included in the multiplex.
-- (stcxtlib/st_database.cxx 10050)

Error: The multiplex server 'iq_node_3' is not included in the multiplex. The multiplex server 'iq_node_3' is not included in the multiplex.
Server failed to start

2016-02-09-10:31:46 Start of IQ instance iq_mpx_cluster1 failed

Log into the coordinator node, in my case iq_node_1, and run:

select server_name, status, mpx_mode, inc_state from sp_iqmpxinfo();
server_name  status    mpx_mode     inc_state
---------------------------------------------
iq_node_1    included  coordinator  N/A
iq_node_2    included  writer       active
iq_node_3    excluded  unknown      timed_out

As you can see, iq_node_3 is excluded because the connection to it from the coordinator timed out. What to do? Simple: first we re-include the node (on the coordinator):

alter multiplex server iq_node_3 status included;

Next we need to resync iq_node_3 before starting it (see the separate post, “Resync IQ secondary node”, for the full procedure).
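
In short, resyncing means pulling a fresh copy of the catalog store from the coordinator before restarting the secondary. A rough sketch using dbbackup; the engine name, database name, host, port, credentials, and path below are all placeholders, so check the IQ 16 documentation for the exact procedure for your version:

dbbackup -y -d -c "uid=dba;pwd=***;eng=iq_node_1;dbn=iqdb;links=tcpip(host=coord-host;port=2638)" /path/to/secondary/catalog/dir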

The problem node should start up just fine now.


SAP IQ: Error: server ‘iq_mpx_1’ was started on an incorrect host ‘myhost1’: this server was created with this connection string ‘host=myhost11:5535’ SOLVED

Recently I built a SAP IQ Multiplex cluster and ran into a self-inflicted issue. After I configured the secondary nodes, I updated the coordinator node (primary node) with the private (interconnect) and public (what you connect to with an application) connection information. The problem was, I made a small typo and didn’t catch it until after I tried starting the coordinator node.

I configured the coordinator node as such:

alter multiplex server iq_mpx_1 database '/sybase_iq/iq_mpx.db' PRIVATE HOST 'node1-clu' PORT 5535 HOST 'myhost11' PORT 5535;

Upon attempting to start the coordinator node, it failed with the following message:

MPX: startup failure message: server 'iq_mpx_1' was started on an incorrect host 'myhost1': this server was created with this connection string 'host=myhost11:5535
-- (stcxtlib/st_database.cxx 9455)
Database server shutdown due to startup error

As soon as I saw the message I swore, but the fix is quite simple. First, shut down any secondary nodes. Then update your IQ configuration file (or your start command-line options) so the coordinator starts in single-node mode and overrides the multiplex configuration:

# single node mode
-iqmpx_sn 1

# For use starting multiplex databases only. Starts the server with an
#  override to acknowledge that the write server is starting (1) on a
#  different host, (2) with a different server name, or (3) using a
#  different path to its catalog (.db) file. Do not start two write
#  servers against the same database.
-iqmpx_ov 1
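
Alternatively, the same switches can be passed on the start_iq command line instead of editing the configuration file; a sketch, assuming a parameter file named iq_mpx.cfg (hypothetical name) and the database path from above:

start_iq @iq_mpx.cfg -iqmpx_sn 1 -iqmpx_ov 1 /sybase_iq/iq_mpx.db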

Start the IQ coordinator and reissue the alter multiplex command:

alter multiplex server iq_mpx_1 database '/sybase_iq/iq_mpx.db' PRIVATE HOST 'node1-clu' PORT 5535 HOST 'myhost1' PORT 5535;

Update your IQ configuration file to either remove or comment out the lines we added earlier.

Start up your coordinator. It should now start fine. Please note you will need to resync your secondary nodes before starting them.


SAP Sybase Replication Server ERROR -99999 Severity 5 Values exceed buffer length – SOLVED

Running SAP Sybase Replication Server has always been interesting, and rather frustrating in its fragility. Today’s lesson is not exactly clear-cut. Take the following error message:

ERROR #1027 DSI EXEC(104(1) repdb_svr.rep_db) - /dsiutil.c(390)
    Open Client Client-Library error: Error: -99999, Severity 5 -- 'Values exceed buffer length'.
ERROR #5215 DSI EXEC(104(1) repdb_svr.rep_db) - /dsiutil.c(393)
    The interface function 'SQLPrepare' returns FAIL for database 'repdb_svr.rep_db'. The errors are retryable. The DSI thread will restart automatically. See messages from the interface function for more information.

RANT: While Replication Server says the error is retryable, it never actually retries.

The error is for the DSI connection, but which buffer? Replication Server doesn’t explicitly list any “buffers” for the DSI; there is a myriad of caches for the DSI connection. The error message gives two hints to narrow it down: “Values exceed” and “SQLPrepare”. The most likely candidates, to me, are the command batch size (dsi_cmd_batch_size) and the dynamic SQL cache (dynamic_sql_cache_size). A simple check is to disable dynamic SQL and see if we get the same error message:

suspend connection to repdb_svr.rep_db
go
alter connection to repdb_svr.rep_db set dynamic_sql to 'off'
go
resume connection to repdb_svr.rep_db
go

Within a few seconds, I received the same message, so that wasn’t the culprit. Before we do anything else, let’s re-enable dynamic SQL (we can leave the connection suspended, since we’re about to change another setting anyway):

suspend connection to repdb_svr.rep_db
go
alter connection to repdb_svr.rep_db set dynamic_sql to 'on'
go

That leaves the batch size as the most likely culprit. So let’s increase that and see what happens:

suspend connection to repdb_svr.rep_db
go
admin who, dsi, repdb_svr, rep_db
go
--  record the current Cmd_batch_size (default is 8192)
--  Increase dsi_cmd_batch_size
alter connection to repdb_svr.rep_db set dsi_cmd_batch_size to '32768'
go
resume connection to repdb_svr.rep_db
go

The error message did not reoccur, and I verified replication was moving by monitoring admin who, sqm in Replication Server and the rs_lastcommit table in the replicate database.
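
For reference, these are the two checks; the first runs in isql connected to Replication Server, the second in the replicate database:

admin who, sqm
go
-- and in the replicate database:
select * from rs_lastcommit
go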

You may ask what changed that would require increasing the batch size. Well, a very large transaction was trying to insert data that partially existed in the replicate database already, so we needed to convert each INSERT into a DELETE followed by an INSERT:

alter connection to repdb_svr.rep_db set dsi_command_convert to 'i2di'

Why would that cause it to go boom? The dsi_command_convert setting is applied AFTER Replication Server slices and dices the transactions into batches, so each INSERT in an already-built batch becomes a DELETE plus an INSERT, and a batch that fit within the old 8192-byte dsi_cmd_batch_size can suddenly double in size.


Perl: Sourcing a profile or bashrc or other shell script SOLVED

Everyone has worked at a place where they do things slightly differently than what you’re used to. In this case we need to source a shell script file that houses the environment variables we need to import. Unfortunately, the shell script may call other scripts/programs, or it may use string manipulation to populate the environment variables. This means you can’t just read the file in Perl as simple key/value pairs.

In the Unix/Linux shell scripting world, if you export an environment variable it will be available in any child process.

# Here we export the variable so it will show up in Perl's %ENV hash:
export MYVAR="woohoo"

If we don’t explicitly export the environment variable, it will not be available to a child process.

# We don't export the variable so it will not show up in Perl's %ENV hash:
NOTEXPORTED_VAR="too bad"

So how do we get at the non-exported environment variables so Perl can use them? Every shell that is POSIX compliant in one way or another has the set builtin command, which prints the shell’s variables regardless of whether they’ve been exported. Fortunately for us, the output is key/value pairs with an equals sign (“=”) as the delimiter. Be warned: you will get everything.

In the example code below we’re going to use the BASH shell to source the /somedir/.env file. You can replace it with the shell of your choice. Setting an environment variable via Perl’s %ENV hash automatically exports it, making it available to any child processes of the Perl process.

BEGIN {
    # you will need to include the "&& set" *IF* you have a shell file
    #  that doesn't export the variables.
    if ( -f '/somedir/.env' && -x '/somedir/.env' ) {
        open(my $PS, 'bash -c ". /somedir/.env && set" |')
            or die "Cannot execute bash built-in set: $!";

        while (<$PS>) {
            # skip lines without an "=" and lines containing
            #  extended ASCII characters
            if (/=/ && !/[^\x20-\x7F]/) {
                chomp;
                # limit the split to two fields; values may contain "="
                my ($key, $value) = split /=/, $_, 2;
                # bash's set output single-quotes values that contain
                #  spaces; strip the surrounding quotes
                $value =~ s/^'(.*)'$/$1/;
                $ENV{$key} = $value;
            }
        }

        close $PS;
    }
}
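
Once the BEGIN block has run, the sourced variables behave like any other environment variable. For example, picking up the non-exported variable from the earlier snippet:

print "$ENV{NOTEXPORTED_VAR}\n";    # prints "too bad"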

Linux & selinux: xauth timeout in locking authority file .Xauthority SOLVED

Error:

Last login: Tue Jan 20 14:17:19 2015 
/usr/bin/xauth:  timeout in locking authority file /home/jason/.Xauthority

Attempting to manually generate a new .Xauthority file results in the same error:

$ xauth generate :0 .trusted
xauth:  timeout in locking authority file /home/jason/.Xauthority

If the SELinux configuration is set to enforcing, then we need to make sure the home directories have the correct context:

[root@localhost selinux]# egrep -e '^SELINUX=' /etc/selinux/config
SELINUX=enforcing

Taking a look at the SELinux contexts for the home directories (add the -Z option to the ls command):

[root@localhost ~]# ls -aslZ /home/
total 36
drwxr-xr-x. root  root  system_u:object_r:home_root_t:s0          .
drwxr-xr-x. root  root  system_u:object_r:root_t:s0               ..
drwx------. jason users unconfined_u:object_r:home_root_t:s0      jason

The context for my home directory (jason) should be unconfined_u:object_r:user_home_dir_t:s0, not unconfined_u:object_r:home_root_t:s0, as it is a home directory and not part of the root file system per se.
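
If you’re not sure what the context should be, matchpathcon prints what the loaded policy expects for a given path (the type field should come back as user_home_dir_t):

[root@localhost ~]# matchpathcon /home/jason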

The easiest thing to do is just reset (restore) the context using restorecon as root (add -R to relabel recursively if files inside the directory are also mislabeled):

[root@localhost ~]# restorecon /home/jason

Verify that the context was changed:

[root@localhost ~]# ls -aslZ /home/
total 36
drwxr-xr-x. root  root  system_u:object_r:home_root_t:s0          .
drwxr-xr-x. root  root  system_u:object_r:root_t:s0               ..
drwx------. jason users unconfined_u:object_r:user_home_dir_t:s0  jason

Verify the fix with a new ssh connection (with X11 forwarding enabled):

Last login: Tue Jan 20 14:19:15 2015 
/usr/bin/xauth:  creating new authority file /home/jason/.Xauthority
$ xeyes



SAP Sybase IQ: How many connections are in use? SOLVED

Very simple question. Very simple answer.

select 
    @@max_connections as 'max_connections', 
    count(*) as 'active_connections', 
    (1 - (@@max_connections - count(*)) / convert(numeric, @@max_connections)) * 100 as 'percent_active' 
  from sp_iqconnection();

Output:

 max_connections active_connections percent_active
 --------------- ------------------ ---------------------
             350                 68               19.4286

@@max_connections: For the network server, the maximum number of active clients (not database connections, as each client can support multiple connections). For Adaptive Server Enterprise, the maximum number of connections to the server. — Sybase IQ > Reference: Building Blocks, Tables, and Procedures > SQL Language Elements > Variables
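
If you need this number from a shell, say in a monitoring script, dbisql can run the query non-interactively; a minimal sketch with placeholder credentials:

$ dbisql -nogui -c "uid=dba;pwd=***" "select @@max_connections, count(*) from sp_iqconnection()"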


HOWTO: stty: tcgetattr: Not a typewriter (shell scripting) SOLVED

If you connect to a remote system non-interactively or run a script through a cron-like scheduler, you may encounter an error message from stty or a similar program:

stty: tcgetattr: Not a typewriter

The error is raised because your script is being run in non-interactive mode and the stty program expects to have access to a terminal (pty/tty). If your script isn’t explicitly calling stty, check any scripts that you’re sourcing and you will find code similar to the following:

set -o vi
stty erase ^H

So, how do you work around this? Easily: simply check whether the script is running in interactive mode.

if [[ $- = *i* ]]; then
    set -o vi
    stty erase ^H
fi

The shell special variable $- will list the shell modes that are active.

echo $-
ism
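
An alternative, POSIX-portable check is to test whether stdin is attached to a terminal before touching it:

if [ -t 0 ]; then
    set -o vi
    stty erase ^H
fi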

Sybase ASE: Adding log to a completely log full database – errors 1105 and 3475 “There is no space available in SYSLOGS” – SOLVED

When a SAP Sybase ASE database’s log is completely full, you won’t be able to add any log space to it. Attempting to add to the log produces a 3475 error:

00:0006:00000:00001:2014/01/08 09:03:09.09 server  ERROR: Can't get a new log page in db 4. num_left=17 num_in_plc=17.
00:0006:00000:00001:2014/01/08 09:03:09.09 server  Error: 1105, Severity: 17, State: 7
00:0006:00000:00001:2014/01/08 09:03:09.09 server  Can't allocate space for object 'syslogs' in database 'mydb' because 'logsegment' segment is full/has no free extents. If you ran out of space in syslogs, dump the transaction log. Otherwise, use ALTER DATABASE to increase the size of the segment.
00:0006:00000:00001:2014/01/08 09:03:09.09 server  Error: 3475, Severity: 21, State: 7
00:0006:00000:00001:2014/01/08 09:03:09.09 server  There is no space available in SYSLOGS to log a record for which space has been reserved in database 'mydb' (id 4). This process will retry at intervals of one minute.

 
So what to do? If you separate your data and log segments, you will need to temporarily add the log segment to a data device so the database can recover. Once it recovers, we can add space to the log and remove the log segment from the data device. For good measure, we run dbcc checks to correct any allocation issues that may be contributing to the out-of-log-space condition.
 
Add the log segment to a data device (use sp_helpdb dbname to determine which data device has space):

exec sp_configure "allow updates", 1
go
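-- segmap 7 = system (1) + default data (2) + logsegment (4);
--  lstart identifies which device fragment to update (see sp_helpdb output)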
update sysusages set segmap = 7 where dbid = 4 and lstart = 1492992
go
checkpoint
go
shutdown with nowait
go

Add space to the log:

alter database mydb log on mydevicel001 = 500
go

 
Before we do anything else, let’s run the dbcc checks. Of course, you will want to run them without the fix options first to identify whether there are other issues prior to running with the fix:

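-- kill_user_connections is a site-local helper procedure, not a system one;
--  presumably run several times to catch sessions that reconnect between passes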
exec kill_user_connections mydb
exec kill_user_connections mydb
exec kill_user_connections mydb
exec kill_user_connections mydb
exec kill_user_connections mydb
exec sp_dboption mydb, 'dbo use', true
exec sp_dboption mydb, 'single user', true
dbcc traceon(3604)
dbcc checkdb(mydb, fix_spacebits)
dbcc checkalloc(mydb, fix)
exec sp_dboption mydb, 'dbo use', false
exec sp_dboption mydb, 'single user', false
go

If no lingering issues, we can remove the log segment from the data device:

exec sp_dboption mydb, 'single user', true
go
use mydb
go
exec sp_dropsegment logsegment, mydb, mydeviced005
go
use master
go
exec sp_dboption mydb, 'single user', false
go

SAP is fixing bug CR 756957 in ASE 15.7 SP110, which may be the root cause of the 3475 error:

In certain circumstances, databases, including system databases, can incorrectly get into LOG SUSPEND mode, issuing message: “Space available in the log segment has fallen critically low in database ‘<dbname>’. All future modifications to this database will be suspended until the log is successfully dumped and space becomes available.” This may happen even though there is much unreserved space in the database. The problem may also manifest in 3475 errors: “There is no space available in SYSLOGS to log a record for which space has been reserved in database <dbname>.”
