I don't know what the permissions mean, but if you're running this on a PC: I used ActivePerl and it works fine. The Unix side of me says your umask setting is wrong and the .pl script can't write the files to those directories. It's a shame we're cluttering up the original post; next time you shou...
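On the Unix side, a quick sketch of how umask determines the permissions of newly created files (the file names below are throwaway examples, not from the original post):

```shell
#!/bin/sh
# Demonstrate how umask controls the permissions of newly created files.
umask 022                       # clears group/other write bits: files get 644
touch /tmp/umask_demo_022
umask 077                       # clears all group/other bits: files get 600
touch /tmp/umask_demo_077
ls -l /tmp/umask_demo_022 /tmp/umask_demo_077
```

If the target directory was created under a restrictive umask, the user running the .pl script may simply lack write permission on it; `ls -ld` on the directory will show that.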
If you're going to copy data across a network using DBLINKs, I hope your volume is low. INSERT INTO TABLE (SELECT * FROM TABLE@remotedatabase) should be avoided on large tables. Your network traffic, restartability, parallelism, and performance will suffer greatly. Your solution is easy, it's just c...
ODBC or OCI? If a command-line client for the database is available, consider a system command call that runs a silly SQL statement and just checks whether it succeeds. SELECT SYSDATE FROM DUAL is sufficient for Oracle using sqlplus. isql/osql are options for SQL Server and Sybase, dbaccess for Informix, and the DB2...
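A sketch of that system-command probe, assuming the client exits non-zero on failure (sqlplus needs `-L`, or a `WHENEVER SQLERROR EXIT`, to behave that way); the connect string is a placeholder:

```shell
#!/bin/sh
# probe CLIENT [ARGS...]: pipe a trivial statement to the database's
# command-line client and report success purely from the exit status.
probe() {
  echo "SELECT SYSDATE FROM DUAL;" | "$@" > /dev/null 2>&1 \
    && echo "database reachable" \
    || echo "database unreachable"
}

# Oracle example (connect string is hypothetical):
#   probe sqlplus -S -L scott/tiger@mydb
# Informix would pipe to dbaccess; SQL Server/Sybase to isql or osql.
```

The same wrapper works for any client; only the statement and the command change per database.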
Never. But consider that your file is empty at the beginning of processing, so the backup is empty and the APPEND occurs against an empty file. I suspect there's a job somewhere upstream of this one that is clearing the file.
Write a multi-instance job that selects your data and spools to a text file. Use a parameter as part of the output text file name, and use a partitioning WHERE clause that will return a subset of your data. Run as many copies of the job as allowed to extract as much data in parallel as possible. Con...
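A minimal shell sketch of the partitioning idea, with a hypothetical table and key column; a real setup would pass the partition number to each job instance as the parameter used in the output file name:

```shell
#!/bin/sh
# Sketch: run one extract per partition in parallel. big_table and key_col
# are placeholders; each instance would spool to its own data_part_$P.txt.
PARTS=4
P=0
while [ "$P" -lt "$PARTS" ]; do
  (
    # Partitioning WHERE clause: each instance gets a disjoint slice of rows.
    echo "SELECT * FROM big_table WHERE MOD(key_col, $PARTS) = $P;" \
      > "extract_part_$P.sql"
    # e.g.: sqlplus -S user/pass@db @extract_part_$P.sql > data_part_$P.txt
  ) &
  P=$((P + 1))
done
wait
```

MOD on a numeric key gives evenly sized, non-overlapping slices without needing to know the key range in advance.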
Lots of ways. There's the reliable CALL DSLogFatal("I'm dying", "Help") API, which logs your message to the job log and then aborts the job; it can be called from within a routine. There's also an SDK version of this (UtilityAbortToLog), but you can write your own as well.
The two underlying files (DATA.30 and OVER.30) can each grow to 2.2GB, giving the illusion that there's more than 2.2GB of total capacity, but that's unlikely in practice because a dynamic file grows and shifts data from OVER.30 to DATA.30 as it resizes. The OS limitation is an old situation, probably doesn't apply to most ...
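A quick way to watch the boundary is to check the component sizes directly; this is a hypothetical helper, not a DataStage utility:

```shell
#!/bin/sh
# check_hashed_file DIR: print the size of each component of a dynamic
# hashed file and warn if one crosses the 32-bit (2GB) file-size boundary.
check_hashed_file() {
  DIR=$1
  LIMIT=2147483648   # 2GB, the classic 32-bit offset limit
  for F in "$DIR/DATA.30" "$DIR/OVER.30"; do
    [ -f "$F" ] || continue
    SIZE=$(wc -c < "$F" | tr -d ' ')
    echo "$F: $SIZE bytes"
    if [ "$SIZE" -gt "$LIMIT" ]; then
      echo "WARNING: $F exceeds the 32-bit limit"
    fi
  done
}

# Example use against a hashed-file directory:
#   check_hashed_file /path/to/MyHashedFile
```

Watching DATA.30 and OVER.30 separately shows whether growth is going into overflow or being reshuffled back into the primary file.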
Validate the entire path, make sure all directories and mount points are still named correctly. Try doing an "ls -l /DSIU_02/Dstage/data/VCAMS/hash/dm_pgm" on the hashed file directory.
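Validating component by component can be sketched as a small helper (hypothetical, not part of DataStage) that walks the path and reports the first piece that's missing:

```shell
#!/bin/sh
# check_path /a/b/c: verify each component of the path exists in turn,
# stopping at the first missing directory or mount point.
check_path() {
  P=""
  for COMP in $(echo "$1" | tr '/' ' '); do
    P="$P/$COMP"
    if [ ! -e "$P" ]; then
      echo "missing: $P"
      return 1
    fi
    echo "ok: $P"
  done
}

# From the post:
#   check_path /DSIU_02/Dstage/data/VCAMS/hash/dm_pgm
```

If a mount point dropped out, this pinpoints exactly where the path breaks instead of just failing on the full `ls -l`.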