Search found 186 matches

by srini.dw
Tue Feb 04, 2014 1:34 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Lookup output coming as Null
Replies: 4
Views: 2465

Thanks for the replies.

The Lookup stage is case sensitive.

Last_Update_Date and Last_UpDate_Date differed in letter case, so the records were not flowing to the target.
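The mismatch can be reproduced outside DataStage. A minimal Python sketch (with hypothetical reference values) of how a letter-case difference makes a lookup miss and emit null, and how normalizing case on both sides fixes it:

```python
# Reference data keyed by a column name; note the letter case.
reference = {"Last_Update_Date": "2014-01-31"}

# Probing with the variant spelling misses: the lookup yields None (null).
assert reference.get("Last_UpDate_Date") is None

# Normalizing case on both sides before the lookup makes the keys match.
reference_ci = {k.lower(): v for k, v in reference.items()}
assert reference_ci.get("Last_UpDate_Date".lower()) == "2014-01-31"
```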

Thanks,
by srini.dw
Mon Feb 03, 2014 8:13 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Lookup output coming as Null
Replies: 4
Views: 2465

Lookup output coming as Null

Hi guys, I need your help to fix a lookup issue. The job design:

DataSet
   |
   v
ODBC connector -> Lookup -> DataSet (SQL server)

The lookup reference is Entire partitioned, and the input is hash partitioned by column Field1. The join/lookup key is primary.Field1 = reference.col1. Both the below output...
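For readers following the design above, here is a minimal Python sketch (hypothetical keys and a toy two-way hash split, not DataStage internals) of why an Entire-partitioned reference still matches hash-partitioned input, so only genuinely absent keys come out null:

```python
# Simulate a 2-partition lookup: input rows are hash-partitioned by Field1,
# while the reference is "Entire" (fully copied to every partition).
reference = {1: "one", 2: "two", 3: "three"}   # col1 -> reference value
rows = [1, 2, 3, 4]                            # incoming Field1 values

n_parts = 2
partitions = {p: [r for r in rows if hash(r) % n_parts == p]
              for p in range(n_parts)}

# Each partition sees the whole reference, so a miss means the key really
# is absent; None models the null lookup output.
out = {}
for part_rows in partitions.values():
    for key in part_rows:
        out[key] = reference.get(key)

assert out == {1: "one", 2: "two", 3: "three", 4: None}
```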
by srini.dw
Tue Jan 21, 2014 11:55 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Source and target values not getting matched
Replies: 10
Views: 5213

Thanks for the reply. Dynamic PL/SQL is used in the 1st job; it is an implicit conversion there. In the 2nd job, I am trying to do an explicit conversion to get the original value, with the code below: if IsNull(Value_Id) then SetNull() else AsInteger(NullToEmpty(Value_Id)). Tried with AsDouble and AsFloat; here the v...
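As a Python analogue of the transformer expression above (the helper name is mine, and `int(float(...))` stands in for AsInteger applied to a possibly float-formatted string):

```python
def to_int_or_null(value_id):
    """Null-safe conversion analogous to:
    if IsNull(Value_Id) then SetNull() else AsInteger(NullToEmpty(Value_Id))
    """
    if value_id is None:           # IsNull(Value_Id) -> SetNull()
        return None
    return int(float(value_id))    # accepts "15419600" or "1.54196e+007"

assert to_int_or_null(None) is None
assert to_int_or_null("15419600") == 15419600
assert to_int_or_null("1.54196e+007") == 15419600
```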
by srini.dw
Tue Jan 21, 2014 11:03 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Source and target values not getting matched
Replies: 10
Views: 5213

Thanks for reply.

If I don't use AsInteger, I am getting the values below from the 1st job's target table.

1.12267e+007
1.54196e+007

I need to convert these values.

Dynamic PL/SQL is one in which the parameters are explicitly identified.
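The unconvertible values are consistent with an implicit number-to-string conversion that kept only six significant digits. A Python sketch (the original integer is an assumption for illustration) of why the round trip cannot recover the exact value:

```python
# An integer implicitly formatted through a float with ~6 significant
# digits (the shape of "1.12267e+007") cannot be recovered exactly.
original = 11226734                  # hypothetical source value
as_string = "%g" % original          # implicit conversion (Python pads the
                                     # exponent to 2 digits, not 3)
recovered = int(float(as_string))    # explicit conversion back

assert as_string == "1.12267e+07"
assert recovered == 11226700         # low-order digits are gone
assert recovered != original
```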

Thanks,
by srini.dw
Tue Jan 21, 2014 10:21 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Source and target values not getting matched
Replies: 10
Views: 5213

Thanks for the reply. My mistake; the job design is as follows.

1st Job (pivot job)
ODBC Connector -> Transformation -> Oracle Connector

2nd Job
Oracle Connector -> Transformer -> CDC -> Target (Oracle)

In the first job, we have used dynamic SQL to insert into the database, hence cannot use DecimalToStri...
by srini.dw
Tue Jan 21, 2014 12:09 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Source and target values not getting matched
Replies: 10
Views: 5213

Thanks for the replies.

1st Job (pivot job)
ODBC Connector -> Transformation -> DataSet

2nd Job
Dataset -> Transformer -> CDC -> Target (Oracle)

@Sanjay: Is there any specific conversion function I need to try?

Value_Id datatype is Varchar in Oracle.

Thanks,
by srini.dw
Tue Jan 21, 2014 8:06 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Source and target values not getting matched
Replies: 10
Views: 5213

Source and target values not getting matched

Hi guys, I need your help; source column and target column values are getting mismatched. In the 1st job, the source column Value_Id (datatype Integer) is mapped to Varchar. In the 2nd job, for the same column, I am using the logic below to format it again: if IsNull(Value_Id) then SetNul...
by srini.dw
Wed Jan 15, 2014 3:30 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Implementing Pivot
Replies: 1
Views: 1033

Implementing Pivot

Hi, I am trying to implement a pivot with the help of the Sort stage.

EmployeeID  ArchiveID  Empno  FieldName   DataValue  Version  DateCreated
11          81         111    First_name  RAM        1        2013-05-31 05:33:32.710
22          92         111    First_name  RAM        1        2013-04-16 10:06:32.710
33          71         111    First_name  RAM        1        2013-05-31 07:40:51.663

Ne...
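The sort-then-group idea can be sketched in Python (using a hypothetical subset of the columns shown); the sort plays the role of the Sort stage, and within each group later rows overwrite earlier ones:

```python
from itertools import groupby
from operator import itemgetter

# (Empno, FieldName, DataValue, DateCreated) -- subset of the columns above
rows = [
    (111, "First_name", "RAM", "2013-05-31 05:33:32.710"),
    (111, "First_name", "RAM", "2013-04-16 10:06:32.710"),
    (111, "First_name", "RAM", "2013-05-31 07:40:51.663"),
]

# Sort by Empno, FieldName, DateCreated so the last row in each group is
# the most recent -- this is the role the Sort stage plays.
rows.sort(key=itemgetter(0, 1, 3))

pivoted = {}
for empno, group in groupby(rows, key=itemgetter(0)):
    record = {}
    for _, field, value, _created in group:
        record[field] = value   # newer rows overwrite older ones
    pivoted[empno] = record

assert pivoted == {111: {"First_name": "RAM"}}
```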
by srini.dw
Mon Jan 06, 2014 1:55 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Key and PK columns
Replies: 5
Views: 3774

Thanks for the input.

I will change the partitioning of the jobs to Same.

Thanks,
Naveen
by srini.dw
Fri Jan 03, 2014 10:06 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Key and PK columns
Replies: 5
Views: 3774

Thanks for the reply. Hash partitioning is in the Copy stage because the records would be distributed based on 2 fields; is there any issue with this? Regarding the duplicate, I have checked and there are no columns like this; my mistake. I will re-phrase my scenario. Job 1: SQL Server -> Copy -> DataSet. Job 2: DataS...
by srini.dw
Fri Jan 03, 2014 12:40 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Key and PK columns
Replies: 5
Views: 3774

Key and PK columns

Hi, I have a doubt; can anyone please clarify? Suppose I have 5 columns A, B, C, D, E and 2 jobs. Job 1: SQL Server -> Copy -> DataSet. Job 2: DataSet -> Column_Generator -> Oracle. A and B are PK columns in the source (SQL Server), and column A is the PK column in the target table (Oracle). In the Copy stage, I have done hash pa...
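The crux of the question can be illustrated in Python (a toy hash partitioner, not DataStage's actual algorithm): rows that share column A can land on different partitions when the hash key is (A, B), but always stay together when the key is A alone:

```python
def partition(row, keys, n_parts=2):
    """Toy hash partitioner: pick a partition from the chosen key columns.
    (A sketch of the idea, not DataStage's real hash function.)"""
    return hash(tuple(row[k] for k in keys)) % n_parts

r1 = {"A": 1, "B": 10}
r2 = {"A": 1, "B": 11}

# Partitioning on (A, B) may send rows sharing A to different partitions...
p_ab = {partition(r, ["A", "B"]) for r in (r1, r2)}

# ...while partitioning on A alone is guaranteed to keep them together,
# which matters when the target PK (or any dedup/CDC logic) uses only A.
p_a = {partition(r, ["A"]) for r in (r1, r2)}
assert len(p_a) == 1
```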
by srini.dw
Fri Jan 03, 2014 12:09 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Job getting failed for 1 node
Replies: 7
Views: 3382

Thanks for the replies; I will try them out and let you know.
by srini.dw
Wed Jan 01, 2014 11:14 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Job getting failed for 1 node
Replies: 7
Views: 3382

Thanks for the reply. My mistake, 1node.apt is RAP. Twonode.apt:

{
  node "node1"
  {
    fastname "RAP"
    pools ""
    resource disk "/apps/IBM/dataset1" {pools ""}
    resource scratchdisk "/apps/IBM/scratch" {pools ""}
  }
  node "node2"
  {
    f...
by srini.dw
Mon Dec 30, 2013 6:01 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Job getting failed for 1 node
Replies: 7
Views: 3382

Job getting failed for 1 node

Hi, I need your help to resolve the issue below. Job design: ODBC Connector (SQL server) -> Copy Stage -> DataSet. The job runs fine in the 2-node environment, but when I run it on 1 node it fails with the error below: Parallel job reports failure (code 139). Below are the 2 ...
by srini.dw
Mon Dec 23, 2013 4:30 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Partition Issue
Replies: 2
Views: 1310

Thanks for the reply.

In the Oracle connector, the update query was not correct; I have corrected it and re-run the job. It's working fine now.

Thanks,