Hmm... sorry for shoving you back into the minefield, Arnd. Hopefully it's not an issue as (you know) it will complicate things greatly without an FD that exactly matches what is being received.
Exactly. It lists the maximum number of concurrent users allowed. Now, that can be a little tricky to define but from what I understand, up to 10 connections to the server from the same IP address are considered to be 1 user. I've heard of issues with certain versions where it can get 'confused' and...
Ok. There's nothing stopping you from running this query without error, just wanted to make sure that was really what you wanted to do. To reiterate, you need to make sure that:

* You are selecting three values in your query so you need three columns defined in the DRS lookup. Since this is a User D...
A limitation? No, just the way it works. I'd say that any solution will require a certain amount of work as all jobs will need to be edited in some fashion. There's no 'magic bullet', no switch to turn that functionality off. If you want to leave your current Reject functionality in place, it sounds...
Welcome! The answer is deceptively simple - don't check the Reject Row box. When it is checked, a Warning with the total rejected row count is written to the log. If you want the records written out without the warning, don't check the box but instead logically send the 'rejected' records down the l...
If you don't have an ORDER BY clause in your SQL then you shouldn't be asserting the data is sorted in the Aggregator. Unless there is a Sort stage involved... Why not let the database do the grouping for you? Unless you are grouping on fields derived in the job, your source database should be able ...
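A minimal sketch of the "let the database do the grouping" point, using Python's sqlite3 so it can be run anywhere; the table and column names are invented for illustration and your source database and SQL dialect will differ:

```python
import sqlite3

# Hypothetical example: push the grouping into the source SQL instead of
# sorting in the job and aggregating downstream.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("EAST", 100.0), ("WEST", 50.0), ("EAST", 25.0)])

# The database does the grouping; the job just receives one row per group,
# so no Sort stage or Aggregator is needed for this part.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('EAST', 125.0), ('WEST', 50.0)]
```

The exception noted above still applies: if the grouping key is derived inside the job, the database never sees it and can't do this for you.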
"So by checking the order in the sort stage and the properties box where we manually provide, say, Col1 ASC, Col2 ASC, the error can be resolved." Yes - as long as they match! Keep in mind your ORDER BY clause will need to do the NVL as well if you are sorting on that column. That's a long way too as I
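A runnable sketch of why the ORDER BY has to carry the same NVL. SQLite's COALESCE stands in for Oracle's NVL here, and the table and substitute value are invented; the point is only that a null-substituting sort key changes where the null rows land:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col1 TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("b",), (None,), ("a",)])

# A plain ORDER BY col1 sorts NULLs first in SQLite...
plain = [r[0] for r in conn.execute("SELECT col1 FROM t ORDER BY col1")]

# ...but ordering on COALESCE(col1, 'zzz') pushes the null row last.
# A downstream stage sorting on the substituted value would only agree
# with the second ordering, not the first.
subst = [r[0] for r in conn.execute(
    "SELECT col1 FROM t ORDER BY COALESCE(col1, 'zzz')")]
print(plain)  # [None, 'a', 'b']
print(subst)  # ['a', 'b', None]
```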
Sure. Proper sorting (and marking of same in the Aggregator) would make the aggregation go faster. It becomes more of an issue on larger data sets, perhaps when you have 100x that amount of data, as 'too much' unsorted data can crash the Aggregator stage.
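To see why sorted input keeps the aggregation cheap, here is a sketch using Python's itertools.groupby, which behaves like an aggregator that trusts its input to be sorted: each group is totalled and emitted as soon as the key changes, so memory stays bounded no matter the volume. The (key, value) row layout is invented for illustration:

```python
from itertools import groupby

# Rows already sorted on the grouping key, as a Sort stage would deliver.
rows = [("A", 1), ("A", 2), ("B", 5), ("B", 1), ("C", 7)]

# groupby, like the Aggregator with the sort assertion set, only holds one
# group at a time. Feed it unsorted data and it silently splits groups -
# the same class of trouble 'too much' unsorted data causes the Aggregator.
totals = {key: sum(v for _, v in grp)
          for key, grp in groupby(rows, key=lambda r: r[0])}
print(totals)  # {'A': 3, 'B': 6, 'C': 7}
```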
More specifically, the way you delete a hash depends on how it was created. For hash files created in an Account via CREATE.FILE, yes - use DELETE.FILE to delete them. Pathed hash files are created using the mkdbfile command and are deleted from the operating system using your command of choice...
Nope. Sort stage does sorting. Grouping is a function of the Aggregator.
What you've noticed is one of the joys of leveraging a common GUI across multiple stages - all the bits don't always apply to all of the stages you see the bits in.
"In my SQL Query I am using NVL(column,NULL)" I'm sorry, but what would be the point of doing that? NVL substitutes another value for nulls in the field you are querying, and here you are telling Oracle to pass the null through when your column is null. Might as well not have the NVL function in your query ...
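A quick demonstration of why NVL(column,NULL) is a no-op, again using SQLite's COALESCE in place of Oracle's NVL and an invented table; the function only earns its keep with a real replacement value:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (amt INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(5,), (None,)])

# Substituting NULL for NULL changes nothing - same as no function at all.
noop = [r[0] for r in conn.execute("SELECT COALESCE(amt, NULL) FROM t")]

# Substituting an actual value is what the function is for.
real = [r[0] for r in conn.execute("SELECT COALESCE(amt, 0) FROM t")]
print(noop)  # [5, None]
print(real)  # [5, 0]
```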