Hi Everyone...
I'm in the midst of a research project and could use your help.
How often do you use built-in Job Parameters such as #DSHostName#, #DSProjectName#, and #DSJobName#, and how? Typically I see them used to define things like flat files and datasets, which is particularly important for this line of research, but let me know what other creative ways you are using these and others, such as #DSJobStartTimestamp#, etc.
(any platform, any job type, any release)...
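The flat-file/dataset pattern mentioned above can be sketched as simple placeholder substitution. This is only an illustration of how #Name# parameters resolve into a path at run time; the helper function and the sample values are assumptions, not anything DataStage-defined.

```python
# Hypothetical sketch: built-in job parameters like #DSProjectName# and
# #DSJobName# resolving inside a flat-file path. All names are illustrative.

def resolve_macros(template: str, values: dict) -> str:
    """Replace #Name# placeholders with their runtime values."""
    for name, value in values.items():
        template = template.replace(f"#{name}#", value)
    return template

runtime = {
    "DSHostName": "etlserver01",      # assumed host name
    "DSProjectName": "SalesDW_Dev",   # assumed project name
    "DSJobName": "LoadCustomerDim",   # assumed job name
}

path = resolve_macros("/data/#DSProjectName#/#DSJobName#_output.ds", runtime)
print(path)  # /data/SalesDW_Dev/LoadCustomerDim_output.ds
```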
Thanks in advance!
Ernie
Please describe how you use 'built-in' Job Parms...
Ernie Ostic
blogit!
<a href="https://dsrealtime.wordpress.com/2015/0 ... ere/">Open IGC is Here!</a>
Not particularly creative, but these (or their DSMacro equivalents) are very useful in error processing streams to enrich the error data with "location" information - job name, start date, project, etc. I tend to encapsulate that processing in a shared container for re-use.
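The error-enrichment idea above can be sketched as attaching "location" columns to each error record before it lands in the error table. The field names and sample values here are assumptions for illustration, not DataStage-defined names.

```python
# Minimal sketch: enrich an error record with job name, project, and start
# timestamp (the values the built-in parameters / DSMacros would supply),
# as a shared container might. Column names are illustrative assumptions.

def enrich_error(record: dict, job_name: str, project: str,
                 start_ts: str) -> dict:
    """Attach 'location' columns to an error record for downstream analysis."""
    enriched = dict(record)
    enriched.update({
        "ErrorJobName": job_name,    # from DSJobName
        "ErrorProject": project,     # from DSProjectName
        "ErrorJobStart": start_ts,   # from DSJobStartTimestamp
    })
    return enriched

row = {"key": 42, "msg": "lookup failed"}
out = enrich_error(row, "LoadCustomerDim", "SalesDW_Dev",
                   "2010-06-01 08:15:00")
```

Keeping this logic in one shared container, as described above, means every job's error stream gets the same location columns without re-coding.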
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Thanks Ray...anyone else?
Ernie
Ernie Ostic
blogit!
<a href="https://dsrealtime.wordpress.com/2015/0 ... ere/">Open IGC is Here!</a>
I could write a book on that...
Some examples of when and where to use those parms:
1) Multi-instance jobs that have a Teradata MLoad process using a named pipe. The process that creates the name of the pipe is flawed, IMHO: it is based on the link name but does not factor in the multi-instance aspect. So you need to modify your code to name the pipe uniquely, or use a different subdirectory (per invocation) to contain your logs / named pipes.
2) In a multi-project environment, you have different teams that don't talk to each other. A common work space can result in filename headaches. If every project has its own workspace based on the project name, you avoid those headaches.
3) The Grid Enablement Toolkit now breaks up your dynamic APT file creation into a subdirectory within grid_job_dir by job name. This was to overcome an issue on Linux. (Ernie: call me, I can explain it better over the phone)
4) IMHO the operational metadata (XML files) that gets generated should be categorized under a project and job name subdirectory structure, not just lumped all together in one path. (hint: improvement request coming down the pipe)
5) Project and job name help when extracting job log information (dsjob -logdetail) and externalizing it to an archive file server. The target system will benefit from having a categorized directory structure based on those values.
Those are just off the top of my head.
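The unique-naming idea behind items 1 and 2 can be sketched as building each work path (for named pipes, logs, etc.) from project, job, and invocation id, so multi-instance jobs and multiple projects never collide. The base directory and sample names are assumptions for illustration.

```python
# Sketch: one work subdirectory per project/job/invocation keeps named
# pipes and logs separate across instances and projects. Paths are
# illustrative assumptions, not toolkit-defined.

import os

def work_path(base: str, project: str, job: str, invocation: str) -> str:
    """Build a collision-free work directory for one job invocation."""
    return os.path.join(base, project, f"{job}.{invocation}")

pipe_dir = work_path("/var/ds_work", "SalesDW_Dev", "MloadCustomer", "INV01")
pipe_name = os.path.join(pipe_dir, "mload.pipe")
print(pipe_name)  # /var/ds_work/SalesDW_Dev/MloadCustomer.INV01/mload.pipe
```

A second invocation ("INV02") of the same job gets its own directory, so its pipe never clashes with INV01's.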
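Item 5's categorized archive layout can be sketched the same way: host, project, and job name become directory levels, with one log file per run. The root path, tree layout, and file naming are assumptions, not anything the dsjob tooling prescribes.

```python
# Sketch: archive extracted job logs under a host/project/job tree,
# one file per run timestamp. Layout is an illustrative assumption.

import os

def archive_path(root: str, host: str, project: str, job: str,
                 run_ts: str) -> str:
    """host/project/job tree with one log file per run."""
    return os.path.join(root, host, project, job, f"{run_ts}.log")

p = archive_path("/archive/dslogs", "etlserver01", "SalesDW_Prod",
                 "LoadCustomerDim", "20100601_081500")
```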
Some examples on when / where to use those parms:
1) Multi instance jobs that have a Teradata Mload process using a named pipe. The process to create the name of the pipe is flawed IMHO. It is based on the lnk name but does nor factor in your multi instance aspect. So you need to modify your code to specifically name the pipe in a unique fashion, or use a different subdirectory (per invocation) to contain your logs / named pipes.
2) In a multi project environment, you have different teams that don't talk to eachother. The common work space could result in filename headaches. If every project has it's own workspace based upon project name, you avoid headaches.
3) The Grid Enablement Toolkit now breaks up your dynamic apt file creation into a subdirectory within grid_job_dir by it's jobname. This was to overcome an issue in Linux. (Ernie: call me, I can explain it better over the phone)
4) IMHO the operational Metadata (XML Files) that get generated should be categorized under a project and jobname subdirectory structure. Not just lumped all together in one path. (hint, improvement request coming down the pipe)
5) Project and job name help when extracting job log information (dsjob -logdetails) and externalizing them to an archive file server. The target system will benefit from having a categorized directory structure based on those values.
Those are just off the top of my head.
EtlStats uses these heavily, at least in my current version; I'm not sure whether the old one does. Hostname is critical to combining DEV, TEST, and PROD metadata, because a lot of the time the project names are the same. The original EtlStats had only project name as part of the key. If all of these are in one database, then it is easy to compare runtimes across environments.
I have seen start time used a few times, but it is not much good unless you also have end time and row counts. If you are monitoring jobs, then you need the start time of the job you are monitoring, not of the job doing the monitoring.
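The keying point above can be sketched as follows: including the host name in the stats key keeps the same project/job distinct across DEV, TEST, and PROD, so runtimes can be compared side by side. The record layout and sample values are assumptions for illustration, not the actual EtlStats schema.

```python
# Sketch: key run statistics on (host, project, job) so identical project
# names in different environments never collide. Data is illustrative.

run_stats = [
    {"host": "dev01",  "project": "SalesDW", "job": "LoadCustomerDim", "secs": 420},
    {"host": "prod01", "project": "SalesDW", "job": "LoadCustomerDim", "secs": 310},
]

# Keyed on (host, project, job): both rows survive despite the same
# project and job names.
by_key = {(r["host"], r["project"], r["job"]): r["secs"] for r in run_stats}
print(by_key[("prod01", "SalesDW", "LoadCustomerDim")])  # 310
```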
Mamu Kim
Thanks everyone! Great feedback....
Ernie
Ernie Ostic
blogit!
<a href="https://dsrealtime.wordpress.com/2015/0 ... ere/">Open IGC is Here!</a>
I would also use the built-in project name within the Dynamicgrid.sh script (Grid Enablement Toolkit), passing it into my Grid Resource Management job submission to properly tag jobs with their associated projects. That way we could research how many jobs Project X submitted to the grid in a given timeframe.