Difference between NVARCHAR in Oracle and SQL Server?
We are migrating data from SQL Server to Oracle. For columns that were defined as NVARCHAR in SQL Server, we started creating NVARCHAR2 columns in Oracle, thinking they were similar... but it looks like they are not.

I have read a couple of posts on Stack Overflow and want to confirm my findings:

Oracle VARCHAR2 supports Unicode if the database character set is AL32UTF8 (which is true in our case). SQL Server VARCHAR does not support Unicode; SQL Server explicitly requires columns to be of NCHAR/NVARCHAR type to store data as Unicode (specifically, in the 2-byte UCS-2 format).

Is it therefore correct that SQL Server NVARCHAR columns can/should be migrated to Oracle VARCHAR2 columns?
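To see the UCS-2 storage behavior being described on the SQL Server side, you can compare LEN (characters) with DATALENGTH (bytes). A small sketch (the variable name is made up for illustration):

```sql
-- SQL Server: NVARCHAR stores each character in 2 bytes (UCS-2),
-- so a 5-character string occupies 10 bytes.
DECLARE @s NVARCHAR(20) = N'héllo';

SELECT LEN(@s)        AS char_count,   -- 5 characters
       DATALENGTH(@s) AS byte_count;   -- 10 bytes
```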
Yes. If the Oracle database is created using a Unicode character set, NVARCHAR in SQL Server should be migrated to VARCHAR2 in Oracle. In Oracle, the NVARCHAR2 data type exists to allow applications to store data using a Unicode character set when the database character set itself does not support Unicode.
One thing to be aware of in migrating, however, is character length semantics. In SQL Server, NVARCHAR(20) allocates space for 20 characters, which requires 40 bytes in UCS-2. In Oracle, by default, VARCHAR2(20) allocates 20 bytes of storage. In the AL32UTF8 character set, that is potentially only enough room for 5 characters, though it will usually hold more (a single character in AL32UTF8 requires between 1 and 4 bytes). You will want to declare the Oracle types as VARCHAR2(20 CHAR), which indicates that you want to allocate space for 20 characters regardless of how many bytes that requires. That also tends to be much easier to communicate than trying to explain why some 20-character strings are allowed while other 10-character strings are rejected.
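The difference between byte and character semantics can be demonstrated directly. A sketch, assuming the database character set is AL32UTF8 (table names are illustrative):

```sql
-- Byte semantics (the default) vs. character semantics:
CREATE TABLE t_bytes (val VARCHAR2(20));       -- 20 bytes
CREATE TABLE t_chars (val VARCHAR2(20 CHAR));  -- 20 characters

-- A 20-character string of 2-byte characters needs 40 bytes,
-- so it fits one column but not the other:
INSERT INTO t_chars VALUES (RPAD('é', 20, 'é'));  -- succeeds
INSERT INTO t_bytes VALUES (RPAD('é', 20, 'é'));  -- fails: ORA-12899
                                                  -- value too large for column

-- LENGTH counts characters; LENGTHB counts bytes:
SELECT LENGTH(val) AS chars, LENGTHB(val) AS bytes FROM t_chars;
```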
You can change the default length semantics at the session level, so that tables you create without specifying length semantics use character rather than byte semantics:

ALTER SESSION SET nls_length_semantics = CHAR;

That lets you avoid typing CHAR every time you define a new column. It is also possible to set this at the system level, but doing so is discouraged by the NLS team; apparently, not all the scripts Oracle provides have been thoroughly tested against databases where NLS_LENGTH_SEMANTICS has been changed, and few third-party scripts have been either.
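With the session-level setting in effect, a plain VARCHAR2(n) declaration picks up character semantics, which you can confirm in the data dictionary. A sketch (the table name is made up for illustration):

```sql
ALTER SESSION SET nls_length_semantics = CHAR;

-- No CHAR qualifier needed; the column still gets character semantics.
CREATE TABLE customers (name VARCHAR2(20));

-- CHAR_USED is 'C' (characters) rather than 'B' (bytes):
SELECT column_name, char_used, char_length
FROM   user_tab_columns
WHERE  table_name = 'CUSTOMERS';
```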