Japanese Data in SQL Server 2000
Moderators: chulett, rschirm, roy
-
- Premium Member
- Posts: 55
- Joined: Tue Feb 24, 2004 8:17 am
Hi,
I am moving data from one table to another in a SQL Server 2000 database. The database collation is SQL_Latin1_General_CP1_CI_AS. When I move the data in an nVarChar(4000) column, the Japanese data gets garbled. My NLS is disabled, and the collations of the source and target are the same. I have not yet tried changing the OS environment variables, since I would like to keep that as my last option. Is there anything I can try?
Thanks,
Anand.
You haven't explained how you are moving the data (and how DataStage is involved). If you are reading from and then writing to SQL Server with the same settings and no NLS, it should work.
Hi ArndW,
ArndW wrote: You haven't explained how you are moving the data (and how DataStage is involved). If you are reading from and then writing to SQL Server with the same settings and no NLS it should work.
I have an ODBC stage followed by a Transformer and another ODBC stage. Both connections point to the same server and database. I am just reading from one table and writing it to the other table without any transformations. The only thing is that I mapped my nVarChar column to VarChar with double the width. I also tried casting to VarBinary, but that didn't work.
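Not from the thread itself, but the nVarChar-to-VarChar remapping described above is likely where the damage happens: SQL_Latin1_General_CP1_CI_AS maps VarChar data to single-byte code page 1252, which has no slots for Japanese characters. A minimal Python sketch of that lossy step:

```python
# Sketch of the lossy conversion: Unicode (NVARCHAR) Japanese text forced
# through the single-byte Latin-1 code page (cp1252) that the
# SQL_Latin1_General_CP1_CI_AS collation uses for VARCHAR columns.
text = "日本語"  # Japanese sample text
lossy = text.encode("cp1252", errors="replace").decode("cp1252")
print(lossy)  # '???' - every character is unmappable and becomes '?'
```

This mirrors the garbling seen in the job: once the characters have been squeezed through a code page that cannot represent them, the original text is unrecoverable.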
Thanks,
Anand
Then it would seem you do need to set your character set differently so that no conversion occurs in the ODBC stages.
What character set is in effect in your job's environment? Double-click on the second log entry for a complete list of all environment variables in that run. If that is different from your database's, then from what I've seen, conversions will be going on 'in flight'.
-craig
"You can never have too many knives" -- Logan Nine Fingers
"You can never have too many knives" -- Logan Nine Fingers
ArndW and chulett,
chulett wrote: What character set is in effect in your job's environment? Double-click on the second log entry for a complete list of all environment variables in that run. If that is different than your databases, then from what I've seen, conversions will be going on 'in flight'.
Thanks for your replies. As chulett said, I looked into the environment variables and found the following entry for NLS_LANG:
NLS_LANG=AMERICAN_AMERICA.ZHS16GBK
Now how do I change this so that it is equivalent to SQL_Latin1_General_CP1_CI_AS in SQL Server 2000? In one of Ray's posts he mentioned creating the OS environment variable NLS_LANG from the Control Panel and entering the appropriate value; can I proceed with that solution? If so, what should the variable value be? Once again, thanks for your help.
Anand.
Create a User Defined Environment variable in the Administrator for NLS_LANG. Set the value to $ENV so that it defaults to the current 'environment' value. Use the special button on the job's Parameter tab to add this new Environment Variable to your job and then supply the new, override value that should be in effect just for this job run.
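Not part of the thread, but the $ENV default described above can be sketched in Python. The function name and the JA16SJIS override value below are purely illustrative: a job-level parameter value, when supplied, wins; otherwise the value inherited from the process environment is used.

```python
import os

def resolve_nls_lang(job_override=None):
    # Mimics DataStage's $ENV default: a job-level parameter value,
    # when supplied, overrides the NLS_LANG inherited from the environment.
    return job_override or os.environ.get("NLS_LANG", "")
```

So a job that supplies no override keeps the administrator-set environment value, and only the job that needs a different character set passes one in.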
You may also need to do something similar to LC_CTYPE or whatever that other pesky variable is.
-craig
"You can never have too many knives" -- Logan Nine Fingers
"You can never have too many knives" -- Logan Nine Fingers
No, it doesn't. NLS is only necessary if you need to modify or otherwise manipulate data. In fact, having NLS and not doing it correctly is what usually causes the conversion errors you are seeing.
Try this - read the data in DataStage and write it to a sequential file - is the Japanese data still correct or has it been converted?
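The round-trip test ArndW suggests can be illustrated in Python (an illustration only, not part of the thread): a true Unicode path, such as the UTF-16 little-endian encoding SQL Server uses internally for NVARCHAR, survives the round trip, while a mismatched Latin-1 code page does not.

```python
# Round-trip the same Japanese string through two encodings.
data = "日本語テスト"

# Lossless: UTF-16 LE, the encoding SQL Server uses to store NVARCHAR data.
utf16_roundtrip = data.encode("utf-16-le").decode("utf-16-le")

# Lossy: a Latin-1 code page, as a mismatched client character set would apply.
latin1_roundtrip = data.encode("cp1252", errors="replace").decode("cp1252")
```

If the sequential file shows question marks, the conversion happened on the way out of the database, before the data ever reached the file.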
Hi ArndW,
ArndW wrote: No, it doesn't. NLS is only necessary if you need to modify or otherwise manipulate data. In fact, having NLS and not doing it correctly is what usually causes the conversion errors you are seeing.
Try this - read the data in DataStage and write it to a sequential file - is the Japanese data still correct or has it been converted?
Yes, the data is being converted to "?" in the sequential file. I opened it directly in WordPad, Notepad, and Word.
Thanks,
Anand.