I’m not trying to raise anyone’s ire, so please bear with me….
The issue of using isql versus some other tool (such as Perl or Java) was brought up on the sybase-l mailing list.
A benefit of handling each batch as a separate entity, instead of feeding all the batches as a whole to isql, is that we can isolate each batch. That means we can place logic around each batch, run batches in parallel, and so on. This doesn’t actually require Perl; you could certainly do it in Java. Either way, you would probably want to shy away from launching isql for each batch: with hundreds or thousands of batches, starting a new isql process per batch would be very time consuming.
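To make the idea concrete, here is a minimal sketch in Python (the same approach works in Perl or Java) that splits a SQL script into individual batches on “go” separator lines, isql’s batch terminator. The function name and the sample script are illustrative, not from any particular library.

```python
import re

def split_batches(script: str) -> list[str]:
    """Split a SQL script into batches on lines containing only 'go'
    (isql's batch terminator, matched case-insensitively)."""
    batches, current = [], []
    for line in script.splitlines():
        if re.fullmatch(r"\s*go\s*", line, flags=re.IGNORECASE):
            if current:
                batches.append("\n".join(current))
                current = []
        else:
            current.append(line)
    if current:  # final batch may lack a trailing 'go'
        batches.append("\n".join(current))
    return batches

script = """create table t1 (a int)
go
create table t2 (b int)
go"""
print(split_batches(script))  # two separate batches
```

Once the batches are in a list, you can wrap each one with logging, retry logic, or hand them out to parallel workers.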
I have no problem with using awk/sed/whatever for maintenance scripts. I do have a problem with relying too heavily on the isql program from Sybase or Microsoft. Why? isql does not handle errors very well and can fail due to bugs in isql/Open Client or due to network issues. sqsh is a bit better, but I wouldn’t trust either in a production environment.
Yeah, I know lots of people run isql/sqsh in production environments, but when isql fails, are you able to identify the problem and work around it without having your script fail?
For example, if you were using isql to import a SQL file of, say, 10,000 batches (table DDLs, etc.), would you really want to restart the whole job because the 998th batch failed on a deadlock? Wouldn’t you rather just retry the 998th batch?
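Retrying just the failed batch is simple once each batch runs through your own code. Here is a hedged Python sketch: `DeadlockError` stands in for whatever exception your DB driver raises for a deadlock (error 1205 on Sybase/MS SQL), and `execute` is a hypothetical callable wrapping the driver’s execute method; the fake executor below exists only to demonstrate the retry path.

```python
import time

class DeadlockError(Exception):
    """Stand-in for the driver's deadlock exception (Sybase/MS SQL error 1205)."""

def run_batch(execute, batch, retries=3, delay=0.1):
    """Run one batch, retrying only on deadlock instead of failing the job."""
    for attempt in range(1, retries + 1):
        try:
            return execute(batch)
        except DeadlockError:
            if attempt == retries:
                raise  # give up after the final attempt
            time.sleep(delay)  # brief back-off before retrying this batch

# Demo: a fake executor that deadlocks once, then succeeds.
calls = {"n": 0}
def fake_execute(batch):
    calls["n"] += 1
    if calls["n"] == 1:
        raise DeadlockError(batch)
    return "ok"

print(run_batch(fake_execute, "create table t1 (a int)"))
```

With isql alone there is no equivalent hook: the tool either runs the whole file or stops, and recovery is manual.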
In many cases, you might not be able to restart the whole job, so you would have to create a new SQL file by copying and pasting the remaining batches from the original.
Sure, you could pipe the output of isql/sqsh and parse it, but they don’t report all the errors, and a generic disconnect error message doesn’t really help.
I’m lazy by nature, so replacing a manual process with a script that has better error handling is pretty much always a good thing.