Please describe how you use 'built-in' Job Parms...

A forum for discussing DataStage® basics. If you're not sure where your question goes, start here.

Moderators: chulett, rschirm, roy

eostic
Premium Member
Posts: 3838
Joined: Mon Oct 17, 2005 9:34 am

Please describe how you use 'built-in' Job Parms...

Post by eostic »

Hi Everyone...

I'm in the midst of a research project and could use your help.

How often do you use built-in Job Parameters such as #DSHostName#, #DSProjectName#, and #DSJobName#, and how? Typically I see them used for defining things like flat file and dataset names, which is particularly important for this line of research... but let me know what other creative ways you are using these and others, such as #DSJobStartTimestamp#.
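
For example, something like this as the file path on a Sequential File or Data Set stage (the directory layout here is made up, just to show the pattern):

    /data/#DSProjectName#/#DSJobName#/output_#DSJobStartDate#.txt

(#DSJobStartDate# rather than #DSJobStartTimestamp# in a file name, since the timestamp form contains spaces and colons that are awkward in paths.)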

(any platform, any job type, any release)...

Thanks in advance!

Ernie
Ernie Ostic

blogit!
Open IGC is Here! (https://dsrealtime.wordpress.com/2015/0 ... ere/)
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

Not particularly creative, but these (or their DSMacro equivalents) are very useful in error processing streams to enrich the error data with "location" information - job name, start date, project, etc. I tend to encapsulate that processing in a shared container for re-use.
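
For instance, the Transformer inside that shared container can map the standard macros straight onto output columns, along these lines (the column names are my own, just for illustration):

    ErrorHost     = DSHostName
    ErrorProject  = DSProjectName
    ErrorJobName  = DSJobName
    ErrorStartTS  = DSJobStartTimestamp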
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
eostic
Premium Member
Posts: 3838
Joined: Mon Oct 17, 2005 9:34 am

Post by eostic »

Thanks Ray...anyone else?

Ernie
Ernie Ostic

blogit!
Open IGC is Here! (https://dsrealtime.wordpress.com/2015/0 ... ere/)
PaulVL
Premium Member
Posts: 1315
Joined: Fri Dec 17, 2010 4:36 pm

Post by PaulVL »

I could write a book on that...

Some examples of when and where to use those parms:

1) Multi-instance jobs that have a Teradata MLoad process using a named pipe. The process that creates the name of the pipe is flawed, IMHO: it is based on the link name but does not factor in the multi-instance aspect. So you need to modify your code to name the pipe in a unique fashion, or use a different subdirectory (per invocation) to contain your logs / named pipes. (See the sketch after this list.)

2) In a multi-project environment, you have different teams that don't talk to each other. A common work space can result in filename headaches. If every project has its own workspace based upon the project name, you avoid those headaches.

3) The Grid Enablement Toolkit now breaks up your dynamic APT file creation into a subdirectory within grid_job_dir by job name. This was to overcome an issue in Linux. (Ernie: call me, I can explain it better over the phone.)

4) IMHO, the operational metadata (XML files) that gets generated should be categorized under a project and job name subdirectory structure, not just lumped all together in one path. (Hint: improvement request coming down the pipe.)

5) Project and job name help when extracting job log information (dsjob -logdetail) and externalizing it to an archive file server. The target system will benefit from a categorized directory structure based on those values. (Again, see the sketch below.)
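
To make (1) and (5) concrete, here is a rough sketch; the paths and archive layout are my own invention, just to show where the built-ins slot in:

    # 1) One work directory (and pipe) per invocation of a multi-instance job:
    /work/#DSProjectName#/#DSJobName#.#DSJobInvocationId#/mload.pipe

    # 5) Job log extracted and archived into a host/project/job hierarchy
    #    ($HOST, $PROJECT and $JOB fed from the corresponding built-ins):
    dsjob -logdetail $PROJECT $JOB > /archive/$HOST/$PROJECT/$JOB/run_$(date +%Y%m%d_%H%M%S).log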

Those are just off the top of my head.
fmou
Participant
Posts: 124
Joined: Sat May 28, 2011 9:48 pm

Post by fmou »

I can't help answer any of your questions, but I have one more.

"How often do you use Job Parameters such as #DSHostName#, #DSProjectName#, #DSJobName#..."

Can these built-in Job Parameters be used to initialize other user-defined Job Parameters? If yes, how?

Thanks
kduke
Charter Member
Posts: 5227
Joined: Thu May 29, 2003 9:47 am
Location: Dallas, TX

Post by kduke »

EtlStats uses these heavily, at least in my current version; not sure if the old one does. Host name is critical to combining DEV, TEST and PROD metadata, because a lot of the time the project names are the same. The original EtlStats had only the project name as part of the key. If all of these are in one database then it is easy to compare runtimes across environments.
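
In other words, the key on the run-history table ends up looking roughly like this (column names are mine, not necessarily what EtlStats actually uses):

    HOST_NAME, PROJECT_NAME, JOB_NAME, START_TIMESTAMP   <- the key
    END_TIMESTAMP, ROW_COUNT, ...                        <- the measures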

I have seen start time used a few times, but it is not much good unless you also have end time and row counts. And if you are monitoring jobs, then you need the start time of the job you are monitoring, not of the monitoring job itself.
Mamu Kim
eostic
Premium Member
Posts: 3838
Joined: Mon Oct 17, 2005 9:34 am

Post by eostic »

Thanks everyone! Great feedback....

Ernie
Ernie Ostic

blogit!
Open IGC is Here! (https://dsrealtime.wordpress.com/2015/0 ... ere/)
PaulVL
Premium Member
Posts: 1315
Joined: Fri Dec 17, 2010 4:36 pm

Post by PaulVL »

I would also use the built-in project name within the Dynamicgrid.sh script (Grid Enablement Toolkit), passing it in to my Grid Resource Management job submission to properly tag jobs to their associated projects. That way we could do research on how many jobs Project X submitted to the grid in a given timeframe. A sketch follows.
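
Assuming an LSF-style scheduler underneath (the real Dynamicgrid.sh may do this differently; this is just the shape of it):

    # Tag the grid submission with the owning DataStage project,
    # so per-project usage can be reported later; $DSPROJECT is
    # carried in from #DSProjectName#:
    bsub -P "$DSPROJECT" <rest of the submission>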