
DataStage Bulk Load to Oracle DB - JSON data

Posted: Tue Sep 04, 2018 1:37 pm
by DSFreddie
Hi All -

I am facing an issue with a DataStage job that reads data (including fields containing JSON, defined as a BLOB datatype in the Oracle table) and performs a bulk load into the Oracle table. Note that this table has no constraints defined, so in theory the load should run faster.

The job takes 15 minutes to load 5 million records. The commit count/array size gets defaulted to 1 because of the LongVarBinary field. Can someone please suggest a better way to handle this scenario so that the loads run faster?


Flow :

Sequential File ----> Oracle Load (using Oracle Connector)
JSON data ---> defined as LongVarBinary in DataStage

Thanks
Freddie

Posted: Tue Sep 04, 2018 4:26 pm
by ray.wurlod
JSON is usually pure text, so it could be defined as CLOB and treated by DataStage as a Long VarChar data type. Then you could probably increase your array size and commit count above 1.
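As a rough illustration of the idea (outside DataStage, using python-oracledb; the table name json_stage, the column names, and the connection details below are made up, not from this thread), keeping the JSON as character data lets the client array-bind many rows per round trip instead of one:

import oracledb

# Sample rows: an id plus a JSON document held as plain text.
rows = [(i, '{"id": %d}' % i) for i in range(100000)]

with oracledb.connect(user="scott", password="tiger", dsn="exadata/orcl") as conn:
    with conn.cursor() as cur:
        # Bind the JSON column as long character data (CLOB-compatible) so the
        # driver can array-bind it - roughly what a larger Array Size buys you
        # in the Oracle Connector once the column is no longer a binary LOB.
        cur.setinputsizes(None, oracledb.DB_TYPE_LONG)
        cur.executemany(
            "INSERT INTO json_stage (id, payload) VALUES (:1, :2)",
            rows,
        )
    conn.commit()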

Posted: Tue Sep 11, 2018 1:14 pm
by DSFreddie
Thanks for your reply, Ray. One of the reasons it is BLOB is to allow more flexibility in what we might store in the future.

Are there any other ways to perform a bulk load into an Oracle Exadata table with BLOB fields in it?

Please help.

Thanks

Posted: Thu Sep 13, 2018 6:58 pm
by ray.wurlod
Probably, but not through DataStage. BLOB is not a supported data type for DataStage.
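If the column has to remain a BLOB, one option outside DataStage is a scripted load. A minimal sketch with python-oracledb (again, the json_stage table, column names, and connection details are assumed for illustration): binding the bytes as DB_TYPE_LONG_RAW keeps the insert array-bound rather than paying a LOB round trip per row.

import oracledb

# JSON documents stored as raw bytes, matching the BLOB column.
payloads = [(i, ('{"id": %d}' % i).encode("utf-8")) for i in range(100000)]

with oracledb.connect(user="scott", password="tiger", dsn="exadata/orcl") as conn:
    with conn.cursor() as cur:
        # DB_TYPE_LONG_RAW lets the driver send the bytes array-bound rather
        # than creating a temporary LOB for every row.
        cur.setinputsizes(None, oracledb.DB_TYPE_LONG_RAW)
        cur.executemany(
            "INSERT INTO json_stage (id, payload_blob) VALUES (:1, :2)",
            payloads,
        )
    conn.commit()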