As explained a bazillion times before on the net, the timestamp column in SQL Server is not a datetime and has nothing to do with time. It is a binary(8) column holding a big number that gets incremented on every insert or update of any row in any table in the database. It is also identical to the rowversion column; rowversion is just a synonym for the same type.
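To make the "big number" part concrete, here's a minimal sketch of turning the raw binary(8) value into an Int64 on the client. The helper name is mine, not anything built in; the byte reversal is there because SQL Server stores the counter big-endian.

```csharp
using System;

public static class RowVersionHelper
{
    // Sketch only: convert the eight bytes of a rowversion/timestamp value
    // into a number you can compare and sort on.
    public static long ToInt64(byte[] rowVersion)
    {
        var bytes = (byte[])rowVersion.Clone();

        // SQL Server stores the counter big-endian; BitConverter on x86/x64
        // expects little-endian, so reverse the bytes before converting.
        if (BitConverter.IsLittleEndian)
            Array.Reverse(bytes);

        return BitConverter.ToInt64(bytes, 0);
    }
}
```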
Now for my gripe. I've used strongly typed DataTables in past projects, and with some manipulation you can get the DataTable to treat the timestamp column as a bigint (Int64). Why does this matter? Because I've used it for some simple version tracking in the past, and that requires that I perform range checks on these versions. A simplified explanation (there's a code sketch after the list):
- Use the largest timestamp in my cached table
- Query the database table for any timestamps larger than this one
- Retrieve these new or updated items to be merged back into the original dataset
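In code, the client side of that loop might look something like this. IProductService, GetChangedProducts, and the Version column are hypothetical names, and Version is assumed to carry the rowversion already exposed as a bigint/Int64:

```csharp
using System.Data;
using System.Linq;

public interface IProductService
{
    // Hypothetical web service call: returns rows whose version is newer.
    DataTable GetChangedProducts(long sinceVersion);
}

public static class CacheSync
{
    public static void Refresh(DataTable cachedProducts, IProductService service)
    {
        // 1. Largest version already in the cache (0 when the cache is empty).
        long maxVersion = cachedProducts.Rows.Count == 0
            ? 0L
            : cachedProducts.AsEnumerable().Max(r => r.Field<long>("Version"));

        // 2. Ask the server only for rows newer than that...
        DataTable changes = service.GetChangedProducts(maxVersion);

        // 3. ...and merge them back into the cached copy.
        cachedProducts.Merge(changes);
    }
}
```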
This normally isn't useful for two- or n-tier apps, but in my case I am transferring data over the net via a web service, so I would like to retrieve only the new or changed records on the client.
Gripe time: why are these columns binary? The only way I was historically able to query on them was to force SQL Server to cast them as a bigint, through some magic in the typed DataTable. But now I'm using LINQ as my data access layer. I love LINQ so far, but I have tried every trick I can think of and I cannot make this work: LINQ always fails with an illegal cast error when it attempts to load the record.
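For reference, this is roughly what that old workaround looks like when done by hand with ADO.NET instead of a typed DataTable: let SQL Server cast the rowversion to bigint so the range check becomes an ordinary numeric comparison. Table and column names here are placeholders.

```csharp
using System.Data;
using System.Data.SqlClient;

public static class ChangeQuery
{
    public static DataTable GetChangesSince(string connectionString, long sinceVersion)
    {
        // Have SQL Server do the binary(8) -> bigint cast so we can range-check it.
        const string sql = @"
            SELECT ProductId, Name, CAST(RowVer AS bigint) AS Version
            FROM   Products
            WHERE  CAST(RowVer AS bigint) > @since
            ORDER  BY Version";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        using (var adapter = new SqlDataAdapter(command))
        {
            command.Parameters.Add("@since", SqlDbType.BigInt).Value = sinceVersion;

            var table = new DataTable();
            adapter.Fill(table); // Fill opens and closes the connection itself
            return table;
        }
    }
}
```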
So, why exactly are these binary? Wouldn't it be much easier to make use of rowversion and timestamp columns if they were represented as a real data type like Int64/bigint? I know I wouldn't have wasted half a day yesterday on this if that were the case.
So now it looks like I'll switch to using a datetime column that I hope gets updated with every edit or insert. It should work OK, but it's not my preference.
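One caution on that fallback: SQL Server won't touch a plain datetime column on UPDATE by itself, so either the data layer or a trigger has to set it on every insert and update. A sketch of the data-layer approach, with placeholder table and column names:

```csharp
using System.Data;
using System.Data.SqlClient;

public static class ProductWriter
{
    // Assumes the caller has already opened the connection.
    public static void Save(SqlConnection connection, int id, string name)
    {
        // The application bumps LastModified itself on every write; nothing
        // in SQL Server keeps a datetime column current automatically.
        const string sql = @"
            UPDATE Products
            SET    Name = @name,
                   LastModified = GETUTCDATE()
            WHERE  ProductId = @id";

        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.Add("@id", SqlDbType.Int).Value = id;
            command.Parameters.Add("@name", SqlDbType.NVarChar, 100).Value = name;
            command.ExecuteNonQuery();
        }
    }
}
```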