Haha. Actually, our client has made a rule against using Sequences.
Search found 353 matches
- Wed Dec 13, 2017 5:18 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Using Parameters in the File in User defined SQL
- Replies: 11
- Views: 7133
- Tue Dec 12, 2017 10:15 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: TRIM() vs LTRIM(RTRIM())
- Replies: 7
- Views: 9179
I couldn't agree with you more, Chulett. I know that it is simpler and more efficient to use TRIM instead of nested RTRIM and LTRIM functions if you need to remove the spaces from a column. I only asked because my client says the nested one would work better. I just wanted to know who's right, I or my cl...
- Tue Dec 12, 2017 8:00 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: TRIM() vs LTRIM(RTRIM())
- Replies: 7
- Views: 9179
- Tue Dec 12, 2017 5:12 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: TRIM() vs LTRIM(RTRIM())
- Replies: 7
- Views: 9179
TRIM() vs LTRIM(RTRIM())
Hi, I have been asked to remove the spaces from all the columns of a table. So I simply added the TRIM function in the SELECT SQL, which worked fine. But my client asked me to change it and use the LTRIM(RTRIM()) function. I said that both will work fine as long as we are removing the spaces. But he'...
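The equivalence being debated above can be checked directly. This is a minimal sketch using SQLite rather than Teradata (semantics for plain-space trimming are the same in both): `TRIM(col)` strips both leading and trailing spaces, so it returns the same value as `LTRIM(RTRIM(col))`; the table and column names are invented for illustration.

```python
import sqlite3

# Demonstrate that TRIM(col) and LTRIM(RTRIM(col)) agree for space removal.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col TEXT)")
conn.execute("INSERT INTO t VALUES ('   padded value   ')")

row = conn.execute(
    "SELECT TRIM(col), LTRIM(RTRIM(col)) FROM t"
).fetchone()

print(row)               # ('padded value', 'padded value')
print(row[0] == row[1])  # True: same result, TRIM is just the simpler form
```

The single TRIM call is the more readable choice; the nested form only matters when trimming characters other than spaces or only one side of the string.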
- Tue Dec 12, 2017 4:08 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Using Parameters in the File in User defined SQL
- Replies: 11
- Views: 7133
Are you sure you really need to use a SQL file for this, that it is an appropriate solution? This was our first approach for loading this particular table: since the INSERT SQLs were not finalized, we decided to keep all the SQLs in a file. In case those needed to be changed, we'll simpl...
- Thu Dec 07, 2017 12:07 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Using Parameters in the File in User defined SQL
- Replies: 11
- Views: 7133
Using Parameters in the File in User defined SQL
Hi, I am trying to load data into my Teradata table using a Teradata Connector stage. I have used the User-Defined SQL functionality available there, with a file containing all the INSERT SQLs. Now I have used parameters in that SQL and defined those parameters in the job properties. ...
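For readers unfamiliar with the mechanism in question: DataStage resolves job parameters written as `#ParamName#` inside user-defined SQL. A rough sketch of that substitution, with made-up parameter names and SQL, looks like this (this is an illustration of the token-expansion idea, not the connector's actual implementation):

```python
import re

# Hypothetical SQL text using DataStage-style #Param# tokens.
sql_text = "INSERT INTO sales (region, load_dt) VALUES ('#pRegion#', '#pLoadDate#')"

# Values as they might be defined in the job properties (invented here).
params = {"pRegion": "EAST", "pLoadDate": "2017-12-01"}

def expand(sql: str, params: dict) -> str:
    """Replace every #name# token with its value from params."""
    return re.sub(r"#(\w+)#", lambda m: params[m.group(1)], sql)

print(expand(sql_text, params))
# INSERT INTO sales (region, load_dt) VALUES ('EAST', '2017-12-01')
```

Whether the connector performs this expansion on SQL read from an external file, as opposed to SQL typed into the stage, is exactly the question the thread is asking.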
- Tue Oct 31, 2017 3:02 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: XML Stage - Reject Link Error
- Replies: 3
- Views: 2810
- Mon Oct 30, 2017 7:29 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: XML Stage - Reject Link Error
- Replies: 3
- Views: 2810
XML Stage - Reject Link Error
Hi, I am trying to load XML messages into a Teradata database using the XML stage. The XML messages are first loaded into Teradata tables with an MQ Connector stage and then parsed using the XML stage. The job looks like: TD Conn ---> XML Stage ---> Tfr ---> TD Conn, with a reject link from the Transformer down to a File. Now there's a requiremen...
- Thu Jul 21, 2016 4:56 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Vertical Pivot
- Replies: 6
- Views: 4701
Vertical Pivot
Dear Team, We are facing an issue while doing vertical pivoting in DataStage. More description follows. Job: DB2 ---> Pivot (vertical pivot) ---> fixed-width file. Job functionality: we require the output below (after the vertical pivot): COL1|COL2|COL3 A|1|1|1 B|2|2|2 C|3|3|3 But if the DB2 quer...
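The transformation the poster wants can be sketched outside DataStage. This is a minimal vertical-pivot example, roughly what the Pivot Enterprise stage computes in vertical mode: multiple input rows per key are collapsed into one delimited output line. The input data and key/value layout are invented to match the snippet's sample output.

```python
from collections import defaultdict

# Input rows as (key, value) pairs, several rows per key.
rows = [("A", 1), ("A", 1), ("A", 1),
        ("B", 2), ("B", 2), ("B", 2),
        ("C", 3), ("C", 3), ("C", 3)]

# Group values under their key (the vertical pivot).
pivoted = defaultdict(list)
for key, value in rows:
    pivoted[key].append(value)

# Emit one pipe-delimited line per key, as a fixed-width file stage might receive.
lines = ["|".join([key] + [str(v) for v in vals]) for key, vals in pivoted.items()]
print(lines)  # ['A|1|1|1', 'B|2|2|2', 'C|3|3|3']
```

Note that the pivot assumes the input arrives grouped (or is grouped in memory, as here); in a DataStage job the data would typically be hash-partitioned and sorted on the key before the Pivot stage.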
- Tue Apr 14, 2015 1:05 am
- Forum: General
- Topic: How to configure ODBC DSN for Teradata database
- Replies: 2
- Views: 2760
How to configure ODBC DSN for Teradata database
Dear Team, Kindly suggest the steps for configuring an ODBC DSN for a Teradata database. OS: Windows. We have done the configuration, but the DSN is still not available for importing the metadata. Note: we need this configuration in order to import table definitions. Please suggest detailed steps. Thanks
- Tue Mar 03, 2015 1:11 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Unable to obtain DB2 partition information
- Replies: 4
- Views: 5907
- Sun Mar 01, 2015 12:58 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Unable to obtain DB2 partition information
- Replies: 4
- Views: 5907
Unable to obtain DB2 partition information
Hi Team, We have migrated the DB2 database, and while pointing the ETL to the new database we are facing the error below: The connector was not able to obtain partitioning information for the table <TABLE NAME> in the database <DATABASE NAME>. The method sqlugtpi returned reason code 0, SQLCODE -551. Ensure that ...
- Wed Dec 31, 2014 3:52 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Need information change data capture (DS8.7)
- Replies: 3
- Views: 3263
Hey, thanks for the reply. We cannot always truncate and reload, because the next dependent job would fetch the complete data again, and de-duplication can be an issue. We want to pass only the changed records to the end state. Question: 1> One file stage contains 150 files. While reading (750gb...
- Wed Dec 31, 2014 1:11 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Need information change data capture (DS8.7)
- Replies: 3
- Views: 3263
Need information change data capture (DS8.7)
Dear Team, We have 150 files, each containing 5 GB of data readable as a flat file. The target table contains about 50 crore rows. We are getting the full table dump in the form of flat files (150 files), and we need to update the target table with the changed records only. We have developed the job as follows: fil...
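The core of the requirement, keeping only new or changed records from a full dump, is what the Change Capture stage computes. A minimal sketch of that comparison, keyed by primary key (the records and keys here are invented for illustration):

```python
# Current target table contents, keyed by primary key.
target = {1: ("alice", "NY"), 2: ("bob", "TX")}

# Full dump from the incoming flat files, same keying.
dump   = {1: ("alice", "NY"),      # unchanged: dropped
          2: ("bob", "CA"),        # changed:   kept as an update
          3: ("carol", "WA")}      # new:       kept as an insert

# Keep only records that are new or differ from the target.
changes = {k: v for k, v in dump.items() if target.get(k) != v}
print(changes)  # {2: ('bob', 'CA'), 3: ('carol', 'WA')}
```

At the poster's volumes (150 files of 5 GB against a 50-crore-row table) an in-memory comparison like this is not feasible; the DataStage approach would be to sort both streams on the key and use Change Capture, or to push the comparison into the database.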
- Fri Nov 14, 2014 3:59 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: DS8.7:Need info about DATASET
- Replies: 3
- Views: 3743
DS8.7:Need info about DATASET
Dear Team, I need one piece of information. I am writing data into a dataset (say Dataset A). At the same time, can I read the data from Dataset A and write it into a table? Scenario: Job 1: Table A ---> Dataset A. Job 2: Dataset A ---> Table B. Can I run both jobs at the same time? Here Dataset A is common. Thanks.