Splitting a GNU tar archive across multiple files

Create tar archive files no larger than 31 GBytes each:

tar -cv -M -L 32505856 -f backup.tar ~jfroebe
  • -c create tar archive
  • -v verbose output
  • -M enable multi-volume handling (multiple tapes or files)
  • -L maximum size of each volume, in units of 1024 bytes (32505856 KiB = 31 GBytes)
  • -f name of first tar archive file
  • ~jfroebe directory that I wish to back up
  • when 31 GBytes is reached, tar will pause and ask you to insert the next volume
  • at the prompt, type: n backup.tar2 <ENTER> then y <ENTER>
  • n: the next ‘word’ is the name of the new file (or tape drive)
  • y <ENTER>: tar switches to the file ‘backup.tar2’ and asks you to confirm before proceeding. Since it is a new file, go right ahead and tell it to proceed
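The volume-change prompt can also be scripted so the whole backup runs unattended, using GNU tar's -F (--info-script) option. A minimal sketch; the file names and the tiny 20 KiB volume size are made up for illustration:

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"

# create ~64 KiB of sample data to split across volumes
mkdir data
dd if=/dev/zero of=data/big bs=1024 count=64 2>/dev/null

# volume-change script: tar runs it when a volume fills and reads the
# next archive name from the file descriptor it passes in $TAR_FD
cat > nextvol.sh <<'EOF'
#!/bin/sh
echo "backup.tar.$TAR_VOLUME" >&$TAR_FD
EOF
chmod +x nextvol.sh

# 20 KiB volumes -> backup.tar, backup.tar.2, backup.tar.3, ...
tar -c -M -L 20 -F ./nextvol.sh -f backup.tar data

# restore: the same script supplies the volume names, in order
mkdir restore
tar -x -M -F ./nextvol.sh -f backup.tar -C restore
cmp data/big restore/data/big && echo "restore OK"
```

The key detail is that tar exports TAR_VOLUME, TAR_ARCHIVE and TAR_FD to the script, and the script answers by writing the next archive name to the TAR_FD descriptor, replacing the interactive n/y dialogue above.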

Restore the contents of a multi-volume tar file:

tar -xv -M -f backup.tar -f backup.tar2
  • tar recognizes multiple files in the restore; the only requirement is that they are listed in order. Meaning, tar won’t be able to restore all of the data if you do:
tar -xv -M -f backup.tar2 -f backup.tar

You can also compress the archive. Note, however, that GNU tar cannot combine compression with multi-volume mode (it fails with “tar: Cannot use multi-volume compressed archives”), so a compressed archive is limited to a single file:

  • Example:
  • tar -cv -z -f backup.tar.gz ~jfroebe
    tar -xv -z -f backup.tar.gz
  • There are three compression methods in GNU tar; each requires that the corresponding program is installed:
    • bzip2
      • -j parameter
      • bzip2 software package (bzip2/bunzip2)
    • gzip (zlib)
      • -z parameter
      • gzip software package (gzip/gunzip)
    • compress
      • -Z parameter
      • ncompress software package (compress/uncompress)
  • While bzip2 typically provides the best compression, gzip is far more common in corporate environments.
  • ‘compress’ provides the worst compression and is the slowest, but it is virtually guaranteed to be on all commercial Unix boxes.
  • GNU tar for Windows can be obtained from the GNU project and works just the same as the Unix versions. Using tar is a great way to transfer a large file, or a whole bunch of files, to or from Windows without having to worry about the file name changes that can sometimes happen with Unicode and/or extended characters.
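The three compressors can be compared side by side on the same input. A quick sketch, assuming GNU tar; bzip2 and compress are only invoked if installed, and all file names are made up:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# sample input: repetitive text that compresses well
mkdir docs
seq 1 5000 > docs/numbers

tar -czf docs.tar.gz docs                                     # gzip  (-z)
command -v bzip2    >/dev/null && tar -cjf docs.tar.bz2 docs  # bzip2 (-j)
command -v compress >/dev/null && tar -cZf docs.tar.Z   docs  # compress (-Z)

# compare the resulting archive sizes
ls -l docs.tar.*
```

On typical text data the size ordering matches the note above: bzip2 smallest, compress largest.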


    Stacktrace can be produced when the log of a database is completely full

    It is understood that the transaction log filled up, but that shouldn’t cause a stacktrace to be produced. Whenever a stacktrace is produced, people can get a bit excited about it (not good).

    sequence of events:

    1. repserver goes down
    2. database tran log fills up
    3. thousands of “uppause: No free alarms available.” messages
    4. thousands of stacktraces

    I’ve opened a case with Sybase asking that ASE not produce the stacktrace, as it can mislead people into thinking that the stacktrace is the problem rather than the fact that the log is full.
    So, ignore this particular stacktrace when the transaction log is full.

    00:00000:01527:2006/09/26 10:53:11.36 kernel uasetalarm: no more alarms available
    01:00000:01527:2006/09/26 10:53:11.36 kernel uppause: No free alarms available.
    01:00000:01527:2006/09/26 10:53:11.36 kernel ************************************
    01:00000:01527:2006/09/26 10:53:11.36 kernel SQL causing error : insert into table_x (err_date,
    01:00000:01527:2006/09/26 10:53:11.36 kernel ************************************
    01:00000:01527:2006/09/26 10:53:11.36 server SQL Text:
    01:00000:01527:2006/09/26 10:53:11.36 kernel curdb = 4 tempdb = 2 pstat = 0x10100
    01:00000:01527:2006/09/26 10:53:11.36 kernel lasterror = 3475 preverror = 0 transtate = 1
    01:00000:01527:2006/09/26 10:53:11.36 kernel curcmd = 195 program =
    01:00000:01527:2006/09/26 10:53:11.40 kernel pc 0x88027ed ucbacktrace+0x89(0x0,0x1,0x93674d0,0x56e869d4,0x0)
    01:00000:01527:2006/09/26 10:53:11.40 kernel pc 0x81b3f99 terminate_process+0xbd1(0x0,0xffffffff,0x93674d0,0x40974e34,0x835ea5f)
    01:00000:01527:2006/09/26 10:53:11.40 kernel pc 0x820c0a5 close_network+0x19(0x93674d0,0x1cc,0x40974ee0,0x83aceed,0xfffffffc)
    01:00000:01527:2006/09/26 10:53:11.42 kernel pc 0x835ea5f do_pause_on_fatal_xls_error+0x9b(0xfffffffc,0x1,0x7,0x10f,0x93674d0)
    01:00000:01527:2006/09/26 10:53:11.42 kernel pc 0x83aceed plc__flush+0x775(0x4f6a7608,0x130000,0x0,0x93674d0,0x38b2eb20)
    01:00000:01527:2006/09/26 10:53:11.42 kernel pc 0x83ac29e xls_preflush+0xda(0x5e52f034,0x130000,0x0,0x93674d0,0x5e52f034)
    01:00000:01527:2006/09/26 10:53:11.42 kernel pc 0x8373695 finishlog+0x241(0x5e52f034,0x2,0x93674d0,0x1,0x56e869d4)
    01:00000:01527:2006/09/26 10:53:11.42 kernel pc 0x832162e xact__endxact+0x1a2(0x5e52f034,0x2b,0x93674d0,0x5e52f034,0x56e869d4)
    01:00000:01527:2006/09/26 10:53:11.42 kernel pc 0x8321270 xact__commitxact+0x3c4(0x5e52f034,0x93674d0,0x8327dd4,0x1,0x84ea52a)
    01:00000:01527:2006/09/26 10:53:11.42 kernel pc 0x8320e36 xact__commit_local+0xea(0x93674d0,0x1,0x4097568c,0x84dde11,0x40975380)
    01:00000:01527:2006/09/26 10:53:11.42 kernel pc 0x8327dd9 xact_commit+0x41(0x40975380,0x93674d0,0x56e869d4,0x3,0x20202020)
    01:00000:01527:2006/09/26 10:53:11.44 kernel pc 0x84dde11 s_execute+0x539d(0x93674d0,0x0,0x56e86901,0x0,0x61645f72)
    01:00000:01527:2006/09/26 10:53:11.44 kernel pc 0x84f2e85 sequencer+0xf79(0x6c3c0000,0x93674d0,0x0,0x56e86901,0x81c749e)
    01:00000:01527:2006/09/26 10:53:11.44 kernel pc 0x81e33e5 tdsrecv_language+0x2ed(0x0,0x0,0x0,0x0,0x0)
    01:00000:01527:2006/09/26 10:53:11.44 kernel pc 0x81f2ec5 conn_hdlr+0x2809(0x3a6,0x40975b88,0x895eed31,0x0,0x0)
    01:00000:01527:2006/09/26 10:53:11.44 kernel pc 0x8244c47 ex_cleanup(0x0,0x0,0x0,0x3c770900,0x420)
    01:00000:01527:2006/09/26 10:53:11.45 kernel pc 0x895eed31 init_dummy+0x809584b1(0x0,0x3c770900,0x420,0x1,0x5374616b)
    01:00000:01527:2006/09/26 10:53:11.45 kernel end of stack trace, spid 1527, kpid 4719647, suid 6578
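If you need to triage this in an errorlog, the alarm-exhaustion message is the thing to search for before worrying about the stacktraces. A small shell sketch; the sample lines are taken from the excerpt above, and a real errorlog path would replace the mock file:

```shell
# mock errorlog built from the excerpt above; on a real server this
# would be the ASE errorlog file (path is site-specific)
log=$(mktemp)
cat > "$log" <<'EOF'
00:00000:01527:2006/09/26 10:53:11.36 kernel uasetalarm: no more alarms available
01:00000:01527:2006/09/26 10:53:11.36 kernel uppause: No free alarms available.
01:00000:01527:2006/09/26 10:53:11.36 kernel uppause: No free alarms available.
EOF

# count the alarm-exhaustion messages; if present, the stacktraces that
# follow are likely side effects of the full transaction log
hits=$(grep -c 'uppause: No free alarms available' "$log")
if [ "$hits" -gt 0 ]; then
    echo "alarm exhaustion seen $hits times: check for a full transaction log before chasing the stacktraces"
fi
```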

    How to Resync a replicated database using Sybase ASE and Replication Server

    This is just one way to resync a replicated database in Sybase.
    Continue reading “How to Resync a replicated database using Sybase ASE and Replication Server”


    Perl DBD::Sybase and signal handling

    There appears to be a bug with DBD::Sybase, or perhaps the Sybase OpenClient ctlib (threaded), that causes custom signal handlers to segfault.  This tripped up a monitoring script that I wrote.

    I’ve asked the perl module maintainer, Michael Peppler, whether this is a DBD::Sybase bug or an Openclient bug.


    It looks like this is a DBD::Sybase bug, not an OpenClient ctlib bug, as the example $SYBASE/$SYBASE_OCS/sample/ctlibrary/multthrd.c works fine with an added signal handler:
    $ diff multthrd.c $SYBASE/$SYBASE_OCS/sample/ctlibrary/multthrd.c

    < #include <signal.h>
    217,223d214
    < void leave(int sig);
    < void leave(int sig) {
    <     printf("caught SIGINT\n");
    <     exit(-1);
    < }
    <       (void) signal(SIGINT,leave);
    <        for(;;) {
    <         printf("Ready...\n");
    <         (void)getchar();
    <     }
    Thread_2:All done processing rows - total 116.
    caught SIGINT
    Bug text:
    use strict;
    use warnings;
    use DBI;
    use DBD::Sybase;

    $SIG{'INT'} = sub { print "hi there\n"; exit(); };
    print "go\n";
    while (1) { sleep 1; }    # idle loop; press ^C to trigger the handler

    If I run this and then type ^C I get a segmentation fault. If I comment out the ‘use DBD::Sybase;’ line, it works fine.


    The workaround: build DBD::Sybase against the nonthreaded (libct.so, not libct_r.so) OpenClient libraries.


    Finding suspect indexes in Sybase ASE

    Problem: ASE does not use an index on a table because it is marked suspect

    Index id 2 on table id 864003078 cannot be used in the optimization of a query as it is SUSPECT. Please have the SA run DBCC REINDEX on the specified table.
    Index id 2 on table id 864003078 cannot be used in the optimization of a query as it is SUSPECT. Please have the SA run DBCC REINDEX on the specified table.
    Index id 2 cannot be used to access table id 864003078 as it is SUSPECT. Please have the SA run the DBCC REINDEX command on the specified table.

    Solution:  Use sp_indsuspect to identify the suspect indexes in a database and run dbcc reindex(<tablename>) on the affected tables.

    dbcc reindex can only be executed on a single table at a time, so if time is short, you may be better off dropping and re-creating the indexes in parallel.
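One way to script that step: turn a saved list of suspect tables into a dbcc reindex batch. The table names and file names below are hypothetical; the real list would come from sp_indsuspect output:

```shell
# hypothetical list of tables reported by sp_indsuspect, one per line
cat > suspect_tables.txt <<'EOF'
orders
order_items
EOF

# emit one dbcc reindex statement per table
# (dbcc reindex works on only one table at a time)
while read -r t; do
    printf 'dbcc reindex(%s)\ngo\n' "$t"
done < suspect_tables.txt > reindex.sql

cat reindex.sql
```

The generated reindex.sql can then be fed to isql, or split up and run in several sessions if you want the tables handled in parallel.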


    HOWTO: Fixing a raw device misconfiguration

    If you were observant in the Mapping Linux LVM and Raw partitions blog post, you probably noticed that there are two raw devices pointing to the same logical volume:

    raw -qa
    /dev/raw/raw1:  bound to major 253, minor 7
    /dev/raw/raw2:  bound to major 253, minor 8
    /dev/raw/raw3:  bound to major 253, minor 9
    /dev/raw/raw4:  bound to major 253, minor 10
    /dev/raw/raw5:  bound to major 253, minor 12
    /dev/raw/raw6:  bound to major 253, minor 13
    /dev/raw/raw7:  bound to major 253, minor 15
    /dev/raw/raw8:  bound to major 253, minor 15
    /dev/raw/raw10: bound to major 253, minor 16
    /dev/raw/raw11: bound to major 253, minor 17

    Correcting this misconfiguration is easy, but it can be painful if the devices have been put to use (perhaps as database devices).  Since we’ve already done the mapping (see Mapping Linux LVM and Raw partitions), we know the devices that they should be mapped to.
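The duplicate binding can also be spotted mechanically rather than by eyeballing the listing. A sketch that parses raw -qa style output, mocked here with the two offending lines from the listing above:

```shell
# mock of the relevant 'raw -qa' lines; on a real box: raw -qa > bindings.txt
cat > bindings.txt <<'EOF'
/dev/raw/raw7:  bound to major 253, minor 15
/dev/raw/raw8:  bound to major 253, minor 15
EOF

# group devices by their major/minor pair ($5 and $7 in the raw -qa
# output) and report any pair that is bound more than once
dups=$(awk '{ key = $5 " " $7; dev[key] = dev[key] " " $1; n[key]++ }
            END { for (k in n) if (n[k] > 1) print "duplicate " k " ->" dev[k] }' bindings.txt)
echo "$dups"
```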

    Let’s assume that no one has started using either raw device and fix it the easy way (as root):

    raw /dev/raw/raw7 /dev/dbvg/rawdatavol07
    /dev/raw/raw7:  bound to major 253, minor 14
    raw /dev/raw/raw8 /dev/dbvg/rawdatavol08
    /dev/raw/raw8:  bound to major 253, minor 15

    We have one more step: we need to update whatever script runs at startup to configure the raw devices, to make sure the mapping is retained after we reboot.

    On RedHat and derived distributions, we modify the /etc/sysconfig/rawdevices:

    /dev/raw/raw1 /dev/dbvg/rawdatavol01
    /dev/raw/raw2 /dev/dbvg/rawdatavol02
    /dev/raw/raw3 /dev/dbvg/rawdatavol03
    /dev/raw/raw4 /dev/dbvg/rawdatavol04
    /dev/raw/raw5 /dev/dbvg/rawdatavol05
    /dev/raw/raw6 /dev/dbvg/rawdatavol06
    /dev/raw/raw7 /dev/dbvg/rawdatavol07
    /dev/raw/raw8 /dev/dbvg/rawdatavol08
    /dev/raw/raw10 /dev/dbvg/rawdatavol10
    /dev/raw/raw11 /dev/dbvg/rawdatavol11

    RedHat provides the script /etc/init.d/rawdevices, which reads /etc/sysconfig/rawdevices. While we could use it to correct the raw device mappings, it is my understanding that remapping raw devices that are in use may lose data at the instant the remapping takes place.  So, we avoid the whole situation and run the raw command only on the devices that are mismapped.


    Mapping Linux LVM and Raw partitions


    We need to determine what device a raw partition resides on and the size of the partition.  We know that this box uses the Linux Volume Manager (LVM).


    Getting this information is easy if you are root or have the /usr/bin/raw binary set with the SUID bit.

    Continue reading “Mapping Linux LVM and Raw partitions”


    HOWTO: Running multiple processes in a single Perl script using POE

    The perl module POE is best known for simulating multiple user threads in a perl application.

    POE is also very useful for forking and managing child processes.  By letting POE handle this, the programmer can stop worrying about correctly handling the forking.

    Set the number of child processes with MAX_CONCURRENT_TASKS.  For example, to allow for three child processes that do the work:

    sub MAX_CONCURRENT_TASKS () { 3 }

    The child process executes the do_stuff() function, so put whatever you need to run in parallel there.

    Continue reading “HOWTO: Running multiple processes in a single Perl script using POE”


    stderr, local block and redirection

    I’m running into an annoyance that I find rather perplexing. Maybe it is the fact that the closer I get to my wedding, the more my brain shuts down, and that’s why I can’t seem to see the problem. :-p

    I open the STDERR descriptor in a local block and redirect it to a variable. The first time I call it, it works fine. The second time, I get an uninitialized value error:

    Use of uninitialized value in open at ./test_stderr line 10.

    use strict;
    use warnings;

    sub test_stderr {
        my $output;
        open local(*STDERR), '>', \$output or die $!;
        print $output if $output;
    }

    The solution, apparently, is to initialize the $output variable with an empty (non-undef) value:

    use strict;
    use warnings;

    sub test_stderr {
        my $output = "";
        open local(*STDERR), '>', \$output or die $!;
        print $output if $output;
    }
    At this time, I’m not sure why this is necessary, but I’ve asked on Perlmonks.org for an explanation.


    Fletch from Perlmonks.org explained why:

    Although it’s not explicitly explained that I can find, my guess would be that under the hood it needs an actual not-undef SvPV* into which the data is written. Otherwise it’s going to be trying to append things to the single global undefined value &PL_sv_undef which is what throws the error.


    the syb_flush_finish parameter in DBD::Sybase

    When I was looking up the syntax for a parameter on Michael Peppler’s DBD::Sybase perl module, I ran across his explanation of the syb_flush_finish parameter.  I just had to explain what was being reported to him 🙂

    syb_flush_finish (bool)

    If $dbh->{syb_flush_finish} is set then $dbh->finish will drain any results remaining for the current command by actually fetching them. The default behaviour is to issue a ct_cancel(CS_CANCEL_ALL), but this appears to cause connections to hang or to fail in certain cases (although I’ve never witnessed this myself.)

    What was being reported to Michael Peppler was most likely a result of how cancels were performed in TDS version 4.x. In TDS 4, the cancel was performed using the TCP expedited flag (similar to the OUT OF BAND flag), which caused ASE to cancel whatever operation it was currently doing; in some cases this would leave the connection in an unknown state.

    Continue reading “the syb_flush_finish parameter in DBD::Sybase”
