dashCommerce shut down. Now what?

Looks like the ASP.NET-based project dashCommerce has shut its doors... no warning that I know of, and the whole website (or websites) has been replaced with a simple notice that the project no longer exists. There must have been some strange circumstances, since I'm sure a lot of people would have been interested in taking over the project instead of seeing it (suddenly) close its doors. I have a store running on the dashCommerce source, so this is a bit of a concern.

So now that the pool of ecommerce solutions for ASP.NET has been reduced by one more, what is a .NET ecommerce type to do?

I'm actually in the process of porting my dashCommerce store over to nopCommerce, which now seems to be the big boy in the space (if not the only player?). Take a visit to www.nopcommerce.com to find out more.

What I'd really like to see is a .NET ecommerce store based on the new MVC framework. The projects I've seen in existence seem to get hung up on tuning architecture and have no real intention of ever releasing something useful. With this in mind, I started www.mvccommerce.com some time back... no content yet at the time of writing this. The project is registered with CodePlex (http://mvccommerce.codeplex.com) and will be looking for devs. If you just want to keep an eye on it, follow it on Twitter – http://twitter.com/mvccommerce

I'm also a fan of (and a dev on) MVCCMS, which has ecommerce capabilities coming soon, so you may want to take a look at it as well, at www.mvccms.com.

But for getting your (open source ASP.NET) store open ASAP, it looks like nopCommerce will be the way to go for a while. Add your comments if you have any other recommendations I may have overlooked.


WPF printing with XPS – avoiding rasterization

I've had a collection of headaches getting printing to work correctly in a WPF app that displays long vertical graphs of geological data. Paging is one topic I'll cover later (as I'm still trying some things to determine what works best), but a recent headache has been apparently random rasterizing of parts of the printout. I normally print using Acrobat to a PDF for testing purposes (don't wanna waste paper!), and I'd noticed that some parts of the graph seem to get rasterized while others stay in vector format. You probably already know this, but rasterized output will typically not look as good on most printers, plus it can rapidly increase the data size sent to the printer (or PDF). And, if you are using a PDF, you can zoom in on the file when viewing it, so any rasterized parts will get all jaggy and ugly.

So, my lessons learned so far in avoiding this:

- Don't use opacity settings. So far, I only had opacity levels set for some background fills inside a curve line. This resulted in rasterizing the curve line, the fill, and apparently anything drawn behind them – which in my case included the entire background grid.

- (I haven't verified this one, but) don't draw geometry that extends past a clipped edge. For example, my curve lines would draw past the edge of the canvas they were displayed on, and the canvas had clipping turned on. Again, not verified, but I think this was causing rasterization.

- Don't dump an oversized visual into a page that it doesn't fit in. This is another one I haven't confirmed, but it seemed that the longer the data I shoved into a page (continuing off the end of the page), the more likely some parts wound up growing jaggies.
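For the opacity case, one workaround (a sketch on my part – it assumes the semi-transparent fill sits over a single, known background color) is to pre-blend the color yourself and use a fully opaque brush, so the XPS path has no transparency to rasterize:

```csharp
using System.Windows.Media;

// Pre-blend a foreground color at the given opacity against a known,
// solid background color, returning a fully opaque color. Using this
// instead of setting Brush.Opacity keeps the print output vector-based.
static Color BlendOpaque(Color fg, Color bg, double opacity)
{
    return Color.FromRgb(
        (byte)(fg.R * opacity + bg.R * (1 - opacity)),
        (byte)(fg.G * opacity + bg.G * (1 - opacity)),
        (byte)(fg.B * opacity + bg.B * (1 - opacity)));
}

// e.g. a 30%-opaque fill over a white grid background:
// var fill = new SolidColorBrush(BlendOpaque(Colors.SteelBlue, Colors.White, 0.3));
```

Obviously this only works when you know what's behind the fill, but for a fill over a plain grid background it gets the same visual result without the transparency.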

If I find more, I'll add them here. Now I'm off to figure out efficient ways to chop my graph up into pages.

My favorite projects

These are a few of my current favorite programmer/nerdy type projects on the web-

ReactOS – I've been watching this one a long time and it seems to be building steam. This is a free/open source clone of Windows... While it's still in the alpha stage and crashes are frequent, it is truly amazing how much Windows functionality works. Even Firefox 2.0 runs on it. I've been contributing some bug reports and compatibility reports, and hope to get more deeply involved in the future.

Cosmos – A .NET-based OS... OK, that's a bit confusing, but basically you write .NET code and this project compiles the .NET into direct assembly language. Also a bit early, but also progressing nicely. I'm envisioning using this on some small and/or embedded systems in the future.

Phalanger – This is an awesome project that compiles PHP code to run on the CLR... so you can run all those PHP apps in an IIS/.NET environment, and even develop in Visual Studio. Unfortunately it has gone a bit stale; the rumor I heard is that one of the leads was hired by MS and no longer has the time to put into the project. I'm hoping for the best for the project.

Debug Windows Mobile webservices with Fiddler – part 2

The day following my posting of the original "using Fiddler" info (see this), I got to try my same setup using a Windows Mobile 5 PDA... and failed.

I won't go into all the things I tried to get this working again, but I FINALLY found the right combo to allow a locally connected PDA (via ActiveSync) to connect to a webservice running in the Visual Studio development web server.

WM5+ apparently no longer likes to use the ppp_peer name for accessing the locally connected PC. I've read there is a replacement called dtpt_peer that should work, but so far that one is batting zero for me. Instead, you can use an IP that gets configured automatically by ActiveSync. I wish there were a name to use in place of the IP, but if there is one, I haven't found it.

Basically, ActiveSync assigns both ends addresses in the 169.254.2.* range: one IP goes to the desktop PC and one to the PDA (check your connection settings to see which is which).

Why not just use this IP in place of ppp_peer and get busy? Well, Windows on the desktop also treats these IPs differently from the normal local IP, and when a connection is made through them, it is treated as a remote connection.

Step 1. Let's configure Fiddler to accept external IPs. Note: the method I used will allow connections from *anywhere*, which is a bit dangerous since it could allow others to use your proxy and do bad-people stuff through it. You should really configure Fiddler to only allow remote connections from the 169.254.2.* range.

Check this box:

enable remote connections

Restart Fiddler, and your personal firewall will likely warn that Fiddler wants to accept connections on port 8888; tell it that's cool. Really, the personal firewall is a better place to go set some rules regarding the allowable IP range mentioned above.

Step 2. Add some script code to Fiddler. Open the script editor and add this to the event called OnBeforeRequest, replacing port 1234 with the appropriate one (the one the web development server dynamically assigns):

if (oSession.host == "169.254.2.x:1234") {  // the 169.254.2.* IP ActiveSync assigned to the desktop
    oSession.host = "localhost:1234";       // rewrite so the dev web server sees a local connection
}

This entry takes the "remote" connection from the 169 IP and makes it look like a local connection that the dev server will be happy to accept.

Step 3. Set up the connection on the PDA to use a proxy, and put the desktop's 169.254.2.* address in for the proxy IP. For the HTTP proxy portion, specify port 8888.

Step 4. Now open a browser on the PDA and connect to http://localhost.:1234, again replacing port 1234 with your own. Yes, that is a "." between localhost and the ":". Go read about it on the Fiddler website; I don't understand it either. If things work, Fiddler should show your connection attempt in its log, and your mobile browser should show the locally hosted web content.

Now you can use this same URL scheme with your webservice calls from the PDA.
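For instance, a generated webservice proxy just needs its Url property pointed at the special address (MyService here is a hypothetical proxy class generated by "Add Web Reference" – substitute your own class name, service path, and port):

```csharp
// Point the generated proxy at the dev server, going through Fiddler's proxy.
MyService service = new MyService();
service.Url = "http://localhost.:1234/MyService.asmx"; // note the "." after localhost
// ...then call the service methods as usual.
```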

Warning! I also tried this same setup today with a Compact Framework 1.0 app. It *almost* worked, but the older CF.NET trimmed off the port number portion and thus was trying to hit port 80... and failed. As soon as I upgraded to CF.NET 3.5, everything worked fine, and I'm sure CF.NET 2.0 is OK as well, though I should know better than to be sure of anything at all with WinMo and CF.NET dev 😉

Nullable Datatables – another gripe

This is sorta old news, but my last ASP.NET gripe put me in the mood to continue with another longer-term gripe I've had. Don't misunderstand, I love .NET technology, but the few annoyances I have with it seem to drive me nuts.

So this time, I'm talking about DataTables, and specifically strongly typed DataTables via the new-as-of-2.0 TableAdapters.

Strongly typed DataTables work fine when bound to .NET UI components such as ASP.NET or WinForms grids, because the binding mechanism knows how to check for null values in the table without blowing things up. This is accomplished by calling an Is<ColumnName>Null() method for each column in the table row, and then only accessing the column value if it is not null.

The new TableAdapters allow attaching SQL queries directly to the strongly typed DataTable, so these become a quick-n-dirty data access method for things beyond just binding to visual components. I've built numerous apps that work with these adapters in business object layers that never see the UI. The pain is that any time I want to reference a column value in code, I'm back to writing code like this:

int? MyValue = SomeTableRow.IsSomeColumnNull() ? (int?)null : SomeTableRow.SomeColumn;

or alternatively:

int MyValue = 0; // assign a default value
if (!SomeTableRow.IsSomeColumnNull())
    MyValue = SomeTableRow.SomeColumn;

Let's do some math. If your table has 20 columns that are nullable (varchar/string columns can be configured to not need this check), and your DataTable is referenced in 10 places in your app, that's 200 places you have to write this mind-numbing code. My last database had about 10 tables to work with, so I'd estimate I wrote this about 2000 times. And each one is unique, so you can't cut and paste it very well (thank God for IntelliSense making this just a bit easier).

The bigger problem with this... the *HUGE* problem, actually... is that if you don't put this code in all the right places, you will produce an application that compiles fine but could have horrible bugs that may not show up for years. The following code compiles without problem:

int MyValue = SomeTableRow.SomeColumn;

SomeColumn is an int property on the DataTable's DataRow, and thus the compiler sees no problem with this line of code. And if your database always has data in SomeColumn and never encounters a null, this will run fine.

However, if the column does allow nulls in the actual database and a null winds up in there, this line will throw an exception at runtime when it tries to access the null value.
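One partial workaround (assuming .NET 3.5, where the DataRowExtensions in System.Data.DataSetExtensions.dll are available) is the Field<T> extension method, which maps DBNull to null for nullable types – at the cost of losing the strong typing, since you name the column with a string:

```csharp
using System.Data; // DataRowExtensions needs a reference to System.Data.DataSetExtensions.dll

// Field<T> returns null instead of throwing when the column holds DBNull.
int? MyValue = SomeTableRow.Field<int?>("SomeColumn");
```

It works on any DataRow, typed or not, but a typo in the column name won't be caught until runtime, so it trades one class of bug for another.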

With the release of Linq to Sql, we now have an easily generated object representation of the database tables, similar to what the typed DataTables used to provide. The Linq versions support nullable fields, so that any nullable database field is represented by a nullable version of the base type. So the equivalent of the code above, using a Linq table, is:

int? MyValue = SomeTableRow.SomeColumn;

This sample uses a nullable int for MyValue, so it handles a null value just fine. But even if you want to keep the original non-nullable MyValue variable, you can still very easily do this:

int MyValue = SomeTableRow.SomeColumn.GetValueOrDefault();  

And this uses the nullable type's built-in method to return a default int value if the column is null.

Not only are both of these nullable approaches much cleaner and easier to use, the (again, HUGE) benefit is that they are safe – if it compiles, it will not throw an exception at runtime... for nulls, anyway.

I had hoped that MS would add nullable column handling to these DataTables in Visual Studio 2008 when it was released... unfortunately not. And I've already had another project since the 2008 release where I had to cuss my way through a huge pile of Is<Column>Null()'s and the subsequent app blowups from the ones I managed to miss. I know developers have been asking Microsoft to add this feature for a long time, and it's not a difficult thing to do, PLUS it adds so much safety to the compiled code... If this is good enough for Linq to Sql, why not for our trusty old DataTables? I don't understand why this has dragged on so long.

ASP.NET FormView – gripe of the day

I upgraded an ASP.NET project today to switch from a DetailsView control to a FormView control. Obviously I needed editing capabilities, thus the reason.

The upgrade reminded me of a major gripe I have with the FormView control. When you associate the control with a datasource, it will auto-generate a tabular form view of your data, as a list in Name: Value format. Really what it is doing is just creating a quick view of the fields in your datasource and throwing the generated fields into the 3 templates needed (ItemTemplate, EditItemTemplate, and InsertItemTemplate) so you have them handy for all that extra formatting you are expected to want to perform.

One formatting difference between the auto-generated code this control produces and that of the simpler DetailsView: the DetailsView produces table elements so that all the "Name" elements show up in a single table column followed by a "Value" column. This is nicely formatted and is usually pretty close to what you would want in a list of values.

The FormView, however, assumes you are going to do a lot of editing of the templates, and so doesn't include any layout except the basics – just the name, a ":", and the bound field value. This results in the names and values getting bunched together on the left side, producing a truly ugly form.

Not too difficult to format these into a table if you want to, right? Well, sure... but after building a few apps this way, it can drive you nuts. Some of the apps I've worked on have, say, an average of 30 fields to present on the screen. After I generate the FormView from the datasource, I then have to go add table TR/TD code around every element... 3 times. So those 30 elements are tripled to 90 elements. Each of the 90 elements has to have an opening TR/TD, then a middle TD/TD, then a finishing TD/TR. So we are now looking at having to cut-n-paste about 270 items and try to make sure you didn't fubar anything.
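To make the tedium concrete, here's what each field ends up looking like once wrapped (a sketch – the "Name" field and label ID are made-up examples):

```aspx
<ItemTemplate>
  <table>
    <tr>
      <td>Name:</td>
      <td><asp:Label ID="NameLabel" runat="server" Text='<%# Eval("Name") %>' /></td>
    </tr>
    <!-- ...repeat for each remaining field, then do it all again (with
         Bind instead of Eval) in EditItemTemplate and InsertItemTemplate -->
  </table>
</ItemTemplate>
```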

To make things worse, if you make a schema change in your database (and who doesn't?), you pretty much need to regen the thing and start over.

AND: this is just ONE FormView. One app I recently finished had at least 20 FormViews in it. Neglecting any rebuilds, that's around 5000 elements I had to hand-edit. Deadline? You talk funny.

I understand MS's thinking in this, and I would probably be much worse off if I were in the other boat and had to *remove* all these unwanted table elements every time a FormView is generated... but why not offer an option to include basic table formatting? Or even allow custom formatting of the generated code so we can include our own tags or whatever markup is desired. I even researched whether it's possible to create my own generator to override or replace the one Visual Studio uses... found some obscure stuff but nothing really useful.

I’d like to hear if anyone has any pointers on this.

Linq and Sql Server Timestamp Columns – Why Binary?

As explained a bazillion times before on the net, the timestamp column in SQL Server is not a datetime and has nothing to do with time. It is a binary(8) column containing a big number that is incremented on every edit of any row of any table in the database. It is also identical to the rowversion column type.

Now for my gripe. I've used strongly typed DataTables in past projects, and with some manipulation you can get the DataTable to treat the timestamp column as a bigint (Int64). Why does this matter? Because I've used it for some simple version tracking in the past, and this requires I perform range checking on these versions. A simplified explanation:

- Use the largest timestamp in my cached table
- Query the database table for any timestamps larger than this one
- Retrieve these new or updated items to be merged back into the original dataset

This normally is not useful for 2- or n-tier apps, but in my case I am transferring data over the net via a webservice, so I'd like to retrieve only the new or changed records to the client.

Gripe time: why are these columns binary? The only way I was historically able to query on them was to force SQL Server to cast them to a bigint, through some magic in the typed DataTable. But now I'm using Linq as my data access layer. I love Linq so far, but I have tried every trick I can think of and cannot make this work – Linq always fails with an illegal cast type error when it attempts to load the record.
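For reference, on the SQL side the old trick boiled down to something like `SELECT ... WHERE CAST([ts] AS bigint) > @lastVersion`. On the client side, here's a sketch of converting the value yourself (assuming the column is mapped as System.Data.Linq.Binary, the default Linq to Sql mapping for timestamp) – though this only helps once the bytes are already on the client, not for filtering in the query itself:

```csharp
using System;
using System.Data.Linq;

// Convert an 8-byte SQL Server rowversion/timestamp to a long.
// SQL Server stores the value big-endian, so reverse the bytes
// on little-endian machines before converting.
static long RowVersionToInt64(Binary version)
{
    byte[] bytes = version.ToArray();
    if (BitConverter.IsLittleEndian)
        Array.Reverse(bytes);
    return BitConverter.ToInt64(bytes, 0);
}
```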

So, why exactly are these binary? Wouldn't it be much easier to make use of rowversion and timestamps if they were represented as a real datatype like int64/bigint? I know I wouldn't have wasted half a day yesterday on this if that were the case.

So now it looks like I'll switch to using a datetime column that I'll hope is updated with every edit or insert. Should work OK, but not my preference.