I have been developing software for years, but this year is the first time I have had the opportunity to work with an Oracle database (on Linux, no less). Now, I am no DBA, so most of the architectural and philosophical differences are lost on me, but several small things reminded me that there are indeed nuances within Microsoft SQL Server and its associated T-SQL scripting language that I take for granted.
So if I could go back six months, these are the differences I would highlight for myself to smooth the transition.
Identity and SEQ.Nextval
When creating tables in SQL Server, one of the easiest ways to create a primary key is to use the IDENTITY property. When scripting a table you have the opportunity to define the very first value (the seed) and the increment between each new row. In Oracle this is accomplished by using the CREATE SEQUENCE command.
First you would define a table:
CREATE TABLE SOMETABLE (
COL1 NUMBER NOT NULL,
COL2 NVARCHAR2(10) NOT NULL ENABLE,
COL3 NVARCHAR2(256) NOT NULL ENABLE,
COL4 RAW(20) NOT NULL ENABLE
);
Then you create the sequence; by convention we use TABLENAME_SEQ as the sequence naming pattern:
CREATE SEQUENCE SOMETABLE_SEQ
MINVALUE 1 MAXVALUE 9999999999999999999999999999
INCREMENT BY 1 START WITH 1
NOCACHE NOORDER NOCYCLE;
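As an aside, if you happen to be on Oracle 12c or later, the database also supports identity columns directly, which removes the need for a separate sequence altogether. A sketch of the same table using that feature:

```sql
-- Oracle 12c+ only: an identity column, no separate sequence required
CREATE TABLE SOMETABLE (
  COL1 NUMBER GENERATED BY DEFAULT AS IDENTITY (START WITH 1 INCREMENT BY 1),
  COL2 NVARCHAR2(10) NOT NULL,
  COL3 NVARCHAR2(256) NOT NULL,
  COL4 RAW(20) NOT NULL
);
```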
Your insert statement then uses the sequence you created earlier, which is literally tracking what the next value should be (note that in SQL Server you omit the identity column from the insert completely).
INSERT INTO SOMETABLE
(COL1, COL2, COL3, COL4)
VALUES
(SOMETABLE_SEQ.nextval, 'Ants', 'Pants', '00');
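If you want the sequence to behave more like SQL Server's IDENTITY, so that callers can omit the key column entirely, a common pattern is a BEFORE INSERT trigger. A sketch, reusing the table and sequence above:

```sql
-- Auto-populate COL1 from the sequence when the caller leaves it out,
-- mimicking SQL Server's IDENTITY behavior
CREATE OR REPLACE TRIGGER SOMETABLE_BIR
BEFORE INSERT ON SOMETABLE
FOR EACH ROW
WHEN (NEW.COL1 IS NULL)
BEGIN
  SELECT SOMETABLE_SEQ.nextval INTO :NEW.COL1 FROM dual;
END;
```

With the trigger in place, an insert listing only COL2 through COL4 gets its key assigned automatically.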
GETDATE and SYSDATE
In order to get the current system date in SQL Server you call GETDATE(); its Oracle doppelganger is SYSDATE. Now, dates are annoying to deal with in any environment, but T-SQL seems to do a decent job of guessing your intent from a string. I have personally found that Oracle is not nearly as forgiving (which may ultimately be to my advantage), so I have taken to using the TO_DATE function to define the date more explicitly. Here is an example:
TO_DATE('14-OCT-2015 06:30 PM', 'dd-mon-yyyy hh:mi PM')
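The same format model works in reverse with TO_CHAR, which is handy for checking what Oracle actually stored. A quick sketch:

```sql
-- SYSDATE is the Oracle counterpart of GETDATE();
-- TO_CHAR formats it out with the same format model TO_DATE uses
SELECT TO_CHAR(SYSDATE, 'dd-mon-yyyy hh:mi PM') FROM dual;

-- Round-trip: parse a string with TO_DATE, re-format it with TO_CHAR
SELECT TO_CHAR(TO_DATE('14-OCT-2015 06:30 PM', 'dd-mon-yyyy hh:mi PM'),
               'yyyy-mm-dd hh24:mi') AS parsed
FROM dual;
```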
RAW and GUIDs
In Oracle PL/SQL, RAW is a data type used to store binary, byte-oriented data (for example, graphics or audio files). One thing to note about RAW data is that it can only be queried or inserted; it cannot be manipulated. The RAW(16) data type, then, lends itself most appropriately to the storage of GUIDs (a 16-byte/128-bit value).
The most important thing to remember when dealing with GUIDs across platforms like this is that the byte order differs: the same GUID read from Oracle and from SQL Server/.NET has the bytes of its first three fields flipped.
If you are reading the RAW bytes from Oracle, the following code will work:
Guid dotNetConvertGuid = new Guid(rawBytesFromTheOracle).FlipEndian();
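Note that FlipEndian is not part of the .NET framework; it is an extension method you define yourself. A minimal sketch of what such an extension might look like, assuming the usual trick of reversing the three little-endian fields of the GUID:

```csharp
using System;

public static class GuidExtensions
{
    // .NET stores the first three GUID fields little-endian in ToByteArray(),
    // while Oracle's RAW(16) holds them big-endian. Reversing bytes 0-3,
    // 4-5 and 6-7 converts between the two layouts. The operation is its
    // own inverse, so the same method serves for both reads and writes.
    public static Guid FlipEndian(this Guid guid)
    {
        byte[] bytes = guid.ToByteArray();
        Array.Reverse(bytes, 0, 4); // Data1 (int)
        Array.Reverse(bytes, 4, 2); // Data2 (short)
        Array.Reverse(bytes, 6, 2); // Data3 (short)
        return new Guid(bytes);
    }
}
```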
When writing GUIDs back to Oracle you are simply writing the native .NET GUID to a byte array as follows:
byte[] oracleBytes = dotNetGuid.ToByteArray();
Oracle SQL Developer
I am going to be honest: the thing I miss the most in this transition is SQL Server Management Studio (SSMS), because the worst part about working with Oracle is not the subtle syntax differences but the really poor IDE, which looks like a relic of a forgotten age (think VB6). Everything about Oracle SQL Developer feels wrong … and old. It is so bad that I generally use LINQPad as my go-to database inspector.
On a more practical note, by default transactions remain uncommitted and you have to explicitly commit UPDATEs and INSERTs; this feature continues to provide me with endless opportunities for errors.
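Coming from SSMS, where autocommit is the default, it is worth internalizing the pattern. A sketch against the table above:

```sql
UPDATE SOMETABLE SET COL3 = 'Trousers' WHERE COL1 = 1;
-- The change is invisible to other sessions (and is discarded if the
-- session closes) until you explicitly commit:
COMMIT;
-- or back out:
-- ROLLBACK;
```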