
Shell script execution

Posted: Wed Dec 21, 2011 1:51 pm
by wahi80
Hi,
I have a job where a shell script is executed at the end of the job. The job works perfectly on one server but fails on the other server.
But when I run the shell script by itself on the other server, it works perfectly.

Can anyone let me know what could be wrong with the job?

Posted: Wed Dec 21, 2011 3:12 pm
by ray.wurlod
Check how arguments are passed to the script on each system.
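For example, a quick way to see what the script actually receives on each system (a minimal sketch; the log file location is just an example) is to add a line near the top of the script:

# log the argument count and each argument exactly as the script sees them
echo "got $# args: $*" >> /tmp/script_args.log

Then compare that file between the two servers.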

Posted: Wed Dec 21, 2011 9:04 pm
by pandeesh
What exactly is the error you are getting, so that we can advise on the root cause?
My wild guess is that the directory path exists on one server but not on the other.
Make sure everything matches between the two servers.
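For example, run something like this on both servers and compare (the path is just a placeholder; substitute whatever directory your script reads from or writes to):

ls -ld /path/used/by/the/script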

Posted: Thu Dec 22, 2011 9:17 am
by wahi80
When the script fails in DataStage, I just go into Director, copy the command from the log, paste it on the command line, and everything works.

But the same command fails from within DataStage on one server and passes on the other.

Posted: Thu Dec 22, 2011 9:21 am
by chulett
You've given us absolutely no information to work with, just "it works here" and "doesn't work there". Help us help you, post the actual error that you are getting, please. Unedited.

Posted: Thu Dec 22, 2011 9:35 am
by wahi80
Here you go:


DataStage Log from server where command executes correctly

/apps/Ascential/DataStage/Projects/Scripts/ftp_to_nt.sh uaxxx21 /apps/Ascential/DataStage/Projects/Target/ p.txt "10.xx.xx.xxx" H_Files/Directory p.txt ascii
Reply=0
Output from command ====>


DataStage Log from server where command execution fails

/apps/Ascential/DataStage/Projects/Scripts/ftp_to_nt.sh uaxxx31 /apps/Ascential/DataStage/Projects/Target/ p.txt "10.xx.xx.xxx" H_Files/Directory p.txt ascii
Reply=1
Output from command ====>

If the above command is copied into the command line of server uaxxx31, it executes perfectly.

Posted: Thu Dec 22, 2011 10:06 am
by qt_ky
Telnet to your server as the same ID that executes the DataStage job, and then test the ftp script. Just a wild guess: if you're using default credential mapping, you could be testing with a personal ID rather than the mapped ID, such as dsadm. Depending on what your script does, it may be looking for hidden files in the dsadm home directory that don't exist there, but do exist in your personal home directory. If that doesn't help, then please show what's inside the script.
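For example (a minimal sketch), telnet to the failing server, log in as dsadm, and run:

id     # confirm the effective user and groups
/apps/Ascential/DataStage/Projects/Scripts/ftp_to_nt.sh uaxxx31 /apps/Ascential/DataStage/Projects/Target/ p.txt "10.xx.xx.xxx" H_Files/Directory p.txt ascii
echo "Reply=$?"     # exit status, presumably what the job log shows as Reply=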

Posted: Thu Dec 22, 2011 10:58 am
by wahi80
The FTP script executes perfectly when I log in as dsadm and run the script in Telnet.

But when I run the job as dsadm it fails. I added some more logging, and this is what I see in the log:

Name (10.xx.xx.xxx:dsadm): User ascii cannot log in.
Login failed.

It is taking ascii as the user login ID in the DataStage job, but the same command works perfectly in Telnet.

I'm thinking that when I run it from DataStage the profile is not being read correctly. Any other ideas?

Posted: Thu Dec 22, 2011 11:06 am
by qt_ky
I don't think DataStage will ever read your profile. Any required environment variables belong in the dsenv file. If it's dependent on that, then compare your servers' dsenv files. Changes there require a restart of the DataStage server process.

Why would it be taking ascii as the user ID? I would double-check your job and script across servers, but it could be related to your profile too. Are you able to run the script from telnet under a different user account, perhaps one with a default profile?
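A quick way to compare the dsenv files (a sketch; it assumes DSHOME is the same path on both servers and that scp is available):

scp uaxxx21:$DSHOME/dsenv /tmp/dsenv.uaxxx21     # pull the working server's copy over
diff /tmp/dsenv.uaxxx21 $DSHOME/dsenv            # any difference is a suspect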

Posted: Thu Dec 22, 2011 11:42 am
by PaulVL
Could it be that you are in the wrong default directory?

When you run a command from a sequencer, your default directory is your project directory.

Hop to that path on the command line and try the same command.

The environment settings in a command sequencer are also a factor. They come from your dsenv settings, and even from your COLUMNS setting, based upon the window size when you last bounced your engine.

Add some debug statements to your ftp script, for example:
env > /tmp/my_ftp_env_settings.txt    # dump the full environment the job runs with

Stuff like that.
Also turn on verbose mode in your ftp so that you get more debug info.
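Along the same lines (a sketch; the file names are just examples), you could also capture the user and starting directory the job really uses:

id   > /tmp/my_ftp_id.txt      # which user the job actually runs as
pwd >> /tmp/my_ftp_id.txt      # which directory it starts in
# and add -v to the existing ftp invocation (e.g. ftp -v -n ...) for verbose output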

Posted: Thu Dec 22, 2011 3:14 pm
by wahi80
Checked the dsenv files across servers; they are identical.
Checked the scripts; they too are identical.

Currently only dsadm is set up on the box (it is a new box).

So the script runs fine under dsadm in telnet but still fails in DataStage.

Any other ideas to try out?

Posted: Thu Dec 22, 2011 3:40 pm
by ray.wurlod
What user ID do your DataStage jobs run under?

Posted: Thu Dec 22, 2011 3:59 pm
by wahi80
Logged in to Director as dsadm and executed the job.

Posted: Thu Dec 22, 2011 4:35 pm
by ray.wurlod
That doesn't mean your job runs under that ID. Use ExecSH as a before-job subroutine and execute the id command to determine the actual user.
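In the job properties that would look something like this (a sketch of the idea; the command's output should show up in the job log):

Before-job subroutine:  ExecSH
Input value:            id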

Posted: Fri Dec 23, 2011 7:35 pm
by qt_ky
Another thing to check: I ran into a similar problem, but with FTPS, when using the FTP Enterprise stage to call the ftp -s command on AIX. It turned out that the HOME environment variable, as shown in the detailed job log, pointed to the home directory of the ID we had used to sudo to root for the server install, many months before. In that case, I had installed certificates under /home/dsadm, because jobs were executed under the dsadm user. The FTP Enterprise stage only worked after I put the certificates into the other user's home directory (the one that $HOME pointed to by default). I could also have changed HOME to point to /home/dsadm to make it work.
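If you want to see which home directory your jobs actually pick up (a sketch; the log file name is just an example), you could add something like this to the script:

echo "HOME=$HOME" >> /tmp/ftp_home_check.log
ls -a "$HOME"     >> /tmp/ftp_home_check.log     # look for hidden files such as .netrc or any certificates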