You have 11 digits to the left of the "." and zero digits to the right of the ".". Therefore, the internal binary representation is of a Decimal(11,0) number.
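The digits-versus-(precision, scale) relationship can be illustrated with Python's standard `decimal` module (this is just an illustration of the arithmetic, not DataStage's internal storage):

```python
from decimal import Decimal

# A Decimal(11,0) value: 11 digits left of the ".", none to the right.
value = Decimal("12345678901")

t = value.as_tuple()
print(len(t.digits))  # 11 -> precision (total digits)
print(t.exponent)     # 0  -> scale (digits right of the ".")
```

The tuple form makes the point directly: eleven significant digits and a zero exponent correspond to precision 11, scale 0.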
That's how it works. Live with it. It is a valid representation of the decimal number.
Forget UV.ACCOUNT for now. VERIFY.SQL has told you something you were able to establish by other means - that there is no record for the project in UV_SCHEMA (and therefore none in the other system tables). Try specifying the pathname of the project: VERIFY.SQL SCHEMA C:\path\projectdir. This is still a ...
No. The extraneous files in the hashed file directory will continue to prevent its being used as a hashed file. And those files presumably contain records that were intended to be written to the hashed file. Hashed files are not internal to jobs; they are external objects. Of course you could comp...
Before I start explaining the issue with the hashed files, I would like somebody to confirm that there is no way to extract a value from a text file and store it into a job parameter within the job itself? Confirmed. Use just a Sequential File stage. Use stage variables in the Transformer stage to ...
1. Create a .Type30 file. This needs to be an empty file.
   echo > .Type30
2. Create a "directory file" in DataStage.
   CREATE.FILE TempDirX 19
   This creates a directory that is a subdirectory of your project directory.
3. Using an operating system MOVE command, move all the illegal files from the hashed ...
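The three steps can be sketched in Python against a throwaway directory. All paths and file names here (MyHashedFile, stray.txt, TempDirX) are hypothetical stand-ins; in practice you would use echo, CREATE.FILE and your OS's MOVE command as described above:

```python
import os
import shutil
import tempfile

# Throwaway "project" containing a hashed file directory with a stray file.
project = tempfile.mkdtemp()
hashed = os.path.join(project, "MyHashedFile")
os.makedirs(hashed)
for name in ("DATA.30", "OVER.30", "stray.txt"):
    open(os.path.join(hashed, name), "w").close()

# Step 1: create an empty hidden .Type30 marker file (echo > .Type30).
open(os.path.join(hashed, ".Type30"), "w").close()

# Step 2: CREATE.FILE TempDirX 19 creates a subdirectory of the project;
# a plain directory stands in for it here.
tempdir = os.path.join(project, "TempDirX")
os.makedirs(tempdir)

# Step 3: move the illegal file(s) out of the hashed file directory.
shutil.move(os.path.join(hashed, "stray.txt"), tempdir)

print(sorted(os.listdir(hashed)))  # ['.Type30', 'DATA.30', 'OVER.30']
```

After the move, the hashed file directory again contains only the three files a Type 30 hashed file requires.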
SOMEONE has put another file in the hashed file directory (possibly by specifying the hashed file directory in the pathname in a Sequential File stage), or removed the hidden file .Type30 from the hashed file directory. A hashed file directory must contain precisely the three files DATA.30, OVER.30 ...
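A quick way to check that invariant is to compare the directory listing against the required set of names. This is a hedged sketch, not a DataStage utility; it simply encodes the rule stated above:

```python
import os
import tempfile

# A valid Type 30 hashed file directory contains exactly these three files.
REQUIRED = {"DATA.30", "OVER.30", ".Type30"}

def looks_like_type30(path):
    """Return True if the directory holds precisely the three required files."""
    return set(os.listdir(path)) == REQUIRED

# Demo with a throwaway directory.
d = tempfile.mkdtemp()
for name in REQUIRED:
    open(os.path.join(d, name), "w").close()
print(looks_like_type30(d))  # True

# One extra file breaks it - exactly the situation described above.
open(os.path.join(d, "stray.txt"), "w").close()
print(looks_like_type30(d))  # False
```

Anything extra in the listing, or the hidden .Type30 missing, and the directory can no longer be opened as a hashed file.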
Premium membership is not expensive, at less than 30c (Rs 12) per day. Premium membership is one of the ways that the hosting and bandwidth costs of DSXchange are defrayed. If Craig, or any of the five premium posters, were to accede to your request, then this would set a precedent that would under...
I prefer to use Usage Analysis on the table definition imported from the file. But, then, I am rigorous in preserving the links between the table definitions and the jobs that use them. Are you?
At one level it's correct because it works. However, you have not ascertained why what you tried earlier does not work. Knowing this would perhaps allow you to create more efficient jobs in future.
This can cause problems if one file has more rows than the other: the Link Collector waits for the never-to-arrive next row and eventually times out with an error. Better is to use a filter command in your Sequential File stage that creates a single stream of all lines from the two files. I prefer TYPE as t...
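The effect of such a filter command - two files concatenated into one continuous stream - can be sketched in Python (file names here are hypothetical; on Windows the equivalent filter would be TYPE file1.txt file2.txt, on Unix cat file1.txt file2.txt):

```python
import os
import tempfile

# Two small input files standing in for the real sources.
d = tempfile.mkdtemp()
for name, text in (("file1.txt", "a\nb\n"), ("file2.txt", "c\n")):
    with open(os.path.join(d, name), "w") as f:
        f.write(text)

# Concatenate the two files into one stream, as the filter command would.
combined = ""
for name in ("file1.txt", "file2.txt"):
    with open(os.path.join(d, name)) as f:
        combined += f.read()

print(combined, end="")  # a, b, c - one stream, no collector needed
```

Because the stage then reads a single stream, there is no second link for a Link Collector to block on.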
No idea. Where are the bottlenecks? What (more precisely than "medium complexity") is in the job design? Lots of Build or Transformer stages? These take longer to compile because of the need to generate C++ source code, then compile and link it in a way that is callable from the main step (job) flow. ...