# Wednesday, 03 June 2009

Back in 2007 I was faced with designing a multi-currency catalog solution for Commerce Server. I knew from previous experience that we would need a general-purpose solution which we could employ for many clients.

Back then Commerce Server 2007 was new, so its performance characteristics were new territory as well. I ended up designing a solution based solely on virtual catalogs, basically one catalog for each price group/currency. You can read the details in my post on multi-currency/price groups, an approach we've employed successfully numerous times since then.

Since 2007 we've gotten a new version of Commerce Server, though no significant new features were added to the catalog system, and we've gained valuable new knowledge on both the pricing scenarios and the limitations of the Commerce Server catalog system. With that information in hand I'm going to try my hand at redesigning the original multi-currency catalog structure from 2007 to both address the performance issues and increase flexibility.

Virtual Catalogs are Slow

Virtual catalogs are slow, and so they make a poor choice for expressing what is essentially a single piece of data. You might say that virtual catalogs perform perfectly well if you materialize them, and you would be right. You gain on the order of 10x performance by materializing a virtual catalog, but lose the ability to make any changes in it in the process.

Basing my original multi-currency solution on virtual catalogs seemed to make perfect sense, but with the added requirement of a modifiable category hierarchy, two levels of modifiable virtual catalogs were needed, and thus the ability to materialize was lost.

To make matters worse we rarely deal with clients who are inclined to go with an Enterprise license for their Commerce Server solution, so we don't see very many full-fledged staged solutions, which would take care of the challenge handily.

Virtual Catalogs Two Levels Deep

Virtual catalogs are by nature limited to two levels, i.e. you can have a total of three levels of catalogs: one with base catalogs and two with virtual catalogs. If we use one of these levels of virtual catalogs for pricing, we lose it for other purposes and gain very little other than the ability to store another price group per catalog.

Pricing is a Separate Issue

What I've come to realize over the years is that product pricing is a completely separate issue from the product itself. While the two might seem like one and the same, in reality they aren't. Sure, the customer needs to be told a certain price, but more often than not the price is determined by the context surrounding the product rather than by the product itself.

An example would be a seasonal business which is highly dependent on calendar time: the product would probably sell at a higher rate at specific times of the year and at lower rates the rest of the year, e.g. ChristmasTreesOnline.com. The context in this particular instance is time.

Now for business-to-business scenarios the context might be especially convoluted, as you might go for pricing granularity which allows organizations, groups within an organization, or even individual people in an organization to have specific prices, e.g. gym memberships provided by your company or framework agreements made between supplier and customer. The context deciding the price in this case is who you are, not the product itself.

Pricing is a Service

To handle the separate issue of pricing we need something akin to a service in domain driven design parlance. The service is responsible for looking up the right price based on whichever context is present for a given customer request.

Of course we need some sort of structure to maintain the pricing for individual price groups, and catalogs come in handy to solve this as I’ll show you next.
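
To make the idea concrete, here is a minimal sketch of what such a service contract could look like in C#. None of these types exist in Commerce Server; the names are hypothetical.

```csharp
using System;

// Hypothetical sketch of a pricing service contract; the context
// carries whatever determines the price: time, price group, organization.
public interface IPricingService
{
    Money GetPrice(string sku, PricingContext context);
}

public class PricingContext
{
    public DateTime RequestTime { get; set; }   // seasonal pricing
    public string PriceGroup { get; set; }      // e.g. "Gold Customers"
    public string OrganizationId { get; set; }  // e.g. a B2B framework agreement
}

public class Money
{
    public decimal Amount { get; set; }
    public string CurrencyCode { get; set; }    // e.g. "DKK", "EUR"
}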

The Product Catalog

In the new scheme our product catalog would continue to exist as we know and love it, with one minor exception: the list price of the product is either to be ignored or used only to get an idea of what the pricing is like. The list price will not be picked up from the product catalog, which contains the marketing data for a product. Please note that I define marketing data in this instance as data used for display to potential customers which otherwise serves no purpose in the system.

The Pricing Catalog

In Commerce Server we have the notion of different types of catalogs, i.e. the product catalog and the inventory catalog. I’m going to introduce a third kind called the Pricing Catalog. As you might imagine a pricing catalog concerns itself only with price and as such contains only the bare minimum of data to identify a product.

The pricing catalog will have metadata to indicate that it is in fact a pricing catalog. Each pricing catalog reflects a single price group such as "Internet Users" or "Gold Customers", whatever makes sense for the particular scenario.

Having pricing split out like this means that we can price products based on the calendar-time context, as a standard Commerce Server catalog has dates associated with it which allow us to display it only within a given period of time.

For a pricing catalog these fields are used to determine whether a price is valid or not, so you could have the seasonal pricing expressed as two different pricing catalogs for our ChristmasTreesOnline.com, one for holiday pricing and one for the rest of the year. The pricing service would then grab the pricing from the proper Pricing Catalog and display it to the customer.
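
A sketch of how the service might pick the right catalog by time; PricingCatalogInfo is a hypothetical projection of the catalog's start and end dates, not a Commerce Server type.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class PricingCatalogInfo
{
    public string Name { get; set; }        // e.g. "HolidayPricing"
    public DateTime ValidFrom { get; set; }
    public DateTime ValidTo { get; set; }
}

public static class PricingCatalogSelector
{
    // Pick the pricing catalog whose validity window covers the request
    // time, e.g. "HolidayPricing" in December, "StandardPricing" otherwise.
    public static PricingCatalogInfo SelectValid(
        IEnumerable<PricingCatalogInfo> catalogs, DateTime when)
    {
        return catalogs.FirstOrDefault(
            c => c.ValidFrom <= when && when <= c.ValidTo);
    }
}
```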

Pricing Definitions

Finally I propose a new kind of definition called the Pricing Definition. It is a specialized Product Definition used for creating prices in the Pricing Catalogs for advanced scenarios, e.g. complex pricing matrices defined in external systems such as an ERP.

Products, i.e. Prices, created based on a Pricing Definition would contain at least the SKU, name, description, and of course the list price. These specialized products go into a Pricing Catalog as we discussed in the previous paragraph.
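
The shape of such a price record could be as simple as this sketch (field names are mine; the owning Pricing Catalog supplies the currency and price group):

```csharp
// Minimal "product" created from a Pricing Definition; it identifies
// the product by SKU and carries nothing but pricing data.
public class Price
{
    public string Sku { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
    public decimal ListPrice { get; set; }
}
```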

Tying it All Together

Another context we discussed in a previous section is the organizational context, which might also influence product pricing. Fortunately Commerce Server comes with CatalogSets, a neat way of bundling catalogs together. CatalogSets leveraged with our Product Catalogs and Pricing Catalogs would allow us to do multi-currency and, incidentally, a bunch of even more interesting scenarios.

Imagine, if you will, a scenario where our online retail outlet would like to give Internet customers access only to the currencies which make sense for their particular region, e.g. here in Denmark Euro and our national currency, Kroner, would make sense, while UK customers should be able to shop in either Pounds or Euro.

Simple! Create two catalog sets, one called Denmark and one called UK. For the Denmark catalog set select our one product catalog containing all products (or the one which reflects the range available in Denmark) and select the two Pricing Catalogs, Kroner and Euro. For the UK catalog set select the Euro and Pounds Pricing Catalogs.

By way of the metadata on the catalogs we're now able to display the same range to both UK and Danish customers, but in three different currencies, with Pounds available only to the British and Kroner available only to the Danish customers.
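
Roughly, the data involved might look like this; again, all types are hypothetical stand-ins, not the actual Commerce Server CatalogSet API.

```csharp
using System.Collections.Generic;

// Hypothetical shape of a catalog set pairing a product range with
// the pricing catalogs (currencies) a region is allowed to see.
public class RegionCatalogSet
{
    public string Name { get; set; }
    public List<string> ProductCatalogs { get; set; }
    public List<string> PricingCatalogs { get; set; }
}

public static class CatalogSetExamples
{
    public static RegionCatalogSet Denmark()
    {
        return new RegionCatalogSet
        {
            Name = "Denmark",
            ProductCatalogs = new List<string> { "AllProducts" },
            PricingCatalogs = new List<string> { "Kroner", "Euro" },
        };
    }

    public static RegionCatalogSet Uk()
    {
        return new RegionCatalogSet
        {
            Name = "UK",
            ProductCatalogs = new List<string> { "AllProducts" },
            PricingCatalogs = new List<string> { "Pounds", "Euro" },
        };
    }
}
```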

posted on Wednesday, 03 June 2009 14:21:57 (Romance Daylight Time, UTC+02:00)  #    Comments [2] Trackback
# Monday, 30 March 2009

So I've been migrating my life over to the Mac that I bought late last year and briefly mentioned in my summary post of 2008. Mostly I'm there but one aspect keeps tripping me up: Which blogging tool to use for posting to the couple of blogs I maintain?

On Windows I'm very happy with Windows Live Writer, and I figured that with all the creative writing people on the Mac it wouldn't be an issue at all to find a nice comparable tool on the other side.

Boy was I ever wrong in assuming that. For some reason there isn't really a very good tool which has feature parity with Live Writer on the Mac. The most prevalent tool out there is MarsEdit, which to me doesn't fit the bill. It does everything right in the technical department but lacks in one key area: The editor.

Over the years I've grown accustomed to having a couple of features which really help out in the process of writing a new post:

  • The tool must be desktop based. Web interfaces are handy but too cumbersome to work with
  • WYSIWYG editor
  • Auto creation of image thumbnails with links to the original
  • Image formatting tools like alignment and custom margins
  • Support for BlogEngine.NET and DasBlog (categories, upload images via MetaWeblog API)
  • Ideally rich image formatting features like drop shadow

MarsEdit 2
I don't know about you, but I expect to be able to edit my posts in a WYSIWYG interface, which might occasionally require me to drop into HTML view to do some of the more tricky stuff (read: I've done this maybe four times in the five years I've kept a blog). MarsEdit, however, is built on the notion that the writer should have complete control of the HTML and thus provides nothing but raw HTML editing, even billing it as a feature, not a bug. I'm sorry but in 2009 I expect so much more from a tool like that. A tool which even requires me to spend $29.95.

I read a review which described MarsEdit as being very windowy. I think you'll agree when you take a look at the screenshot below. Basically you've got a window for displaying previous posts, a window for the raw HTML editor, and a preview window to display what your HTML looks like. Nastylicious!

Qumana
Qumana was my second attempt at reaching a blogging solution on par with what I have on Windows. It was even free, so I was off to a great start. Qumana looks enough like Live Writer that I thought I was home free and stopped looking any further. Qumana is a pretty good tool which gets the job done. However, it lacks polish, which turned me away from it in the end. The lack of support for picture thumbnails was a huge point against it for me.

As far as windowyness goes it's far better than MarsEdit, and it does provide a WYSIWYG editor, which was sorely lacking from MarsEdit 2. To sum up, Qumana comes close but lacks thumbnail support.

Blogo
Now Blogo is a relatively new tool on the Mac as I understand it. I came across Blogo while listening to Leo Laporte's excellent MacBreak Weekly podcast, which has a segment where the panel picks their favorite tools. Blogo was in there and I decided to check it out.

My first encounter with Blogo was a nice one, up to a certain point where it failed one of my requirements miserably. Read on to find out how.

Hopes were not exactly high when I started using the tool the first time around, but that quickly changed as I set out to create my first blog post. Sure, image preview is sort of a strange feature in the sense that you get a little standard placeholder which shows you that an image is there; as for actual image preview you're out of luck.

Unfortunately Blogo doesn't fully support BlogEngine.NET 1.4.5. It seems like it's almost there, but posting doesn't happen when categories are in the mix. Editing a post after it's posted to BlogEngine.NET also presents some problems: Blogo "sees" the post, but when it's pulled down no content is present inside it. Too bad, as it's really the only piece of the puzzle missing for me to start using Blogo on the Mac instead of Live Writer inside my Fusion-virtualized Windows 7 install.

A particularly nice feature of Blogo is its fullscreen editing, which basically allows Blogo to take over the entire screen to focus your attention on the blog post and nothing else. Love it!

All in all I'm not quite there yet. I'm hoping for support for BlogEngine.NET in a future release of Blogo, although I'm not holding my breath on that one. I already contacted the good folks at Drink Brain Juice (yeah I know :)) but nothing has happened as of yet. Crossing fingers and toes, as it would mean an end to that particular dilemma of my migration.


posted on Monday, 30 March 2009 12:37:30 (Romance Daylight Time, UTC+02:00)  #    Comments [0] Trackback
# Wednesday, 28 January 2009

In case you're wondering why you're not receiving any updates from my blog in your favorite feed reader, wonder no more. First a little background.

Google last year acquired FeedBurner without much fanfare, and everything has pretty much been quiet since then, with the minor exception that some paid features became free.

This all changed recently when the great FeedBurner migration onto the Google platform started, which screwed people up in a number of interesting ways.

My first attempts at migration were unsuccessful due to the fact that one of my feeds did get migrated in the first go, not completely, mind you, just a little bit, leaving me with my feed both at the old FeedBurner site and at the new Google FeedBurner site.

Of course it gets quite tricky to determine automatically what to do when someone tries to migrate a feed onto a new platform where another feed with the same name exists. Luckily I figured out what was going on, and being in control of both ends of the equation I removed the duplicate feed and tried again…

Now for the reason why you're not receiving anything from this blog in your feed reader: the second time around the migration was successful, only Google for some reason can't access my original feed URL at my ISP, which means that the FeedBurner URL returns a nice HTTP 502 error whenever you, dear reader, try to access it.

Until this gets resolved between my ISP and Google (like in a million years) I've turned FeedBurner off for the site. Once it's fixed you will automatically receive the new feed. The downside is that you'll have to update your reader to use the old feed URL in the meantime: http://www.publicvoid.dk/SyndicationService.asmx/GetRss

I think this illustrates nicely why you should be wary of the cloud computing trend. Indeed it's a fine proposition, supposing that everything works as it should. It's quite another matter when the cloud turns out to be filled with hot air and starts failing. All you can really do is sit back and wait for someone, somewhere to do something.

posted on Wednesday, 28 January 2009 15:27:26 (Romance Standard Time, UTC+01:00)  #    Comments [0] Trackback
# Tuesday, 27 January 2009

Today I read a nice post by Brian Rasmussen in which he describes how to set up Visual Studio to generate class definitions which are sealed by default. I had to post my own point of view on the matter, although it is going to be awkward. Not in the teenage, "define me" sense, but in my choice of language, as I can't really quote him effectively, so you'll have to make do with me paraphrasing his post :)

Now I’d like to put myself in the I-could-not-disagree-more camp. The default choice in my humble opinion should be to leave classes open and have all members be virtual if you want to take it to the extreme. This would leave the system open for change just as the SOLID principles state. Java got it right in my opinion.

To be able to make the decision on whether a class should be open for inheritance you’d have to travel to the future to see what the class might be used for. If you’re anything like me you’re probably challenged in the time travelling department, and so I postulate that you can’t really make a good decision in the matter. More often than not closing the system for change will be the wrong choice as requirements and environments change.

I do agree with Brian's statement that sealing a class takes away options, thus creating a simpler API. I would, however, also like to point out that there are better ways of achieving a simple API. How about not exposing the type at all? Why not create a simple interface which exposes only what is needed for the task at hand?
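
As a sketch of that alternative: expose a narrow interface for the simple API while the class itself stays open and virtual (the types here are made up for illustration).

```csharp
// Callers see only the narrow contract...
public interface IOrderCalculator
{
    decimal CalculateTotal(Order order);
}

// ...while the implementing class stays open, with virtual members,
// so future requirements can refine behavior through inheritance.
public class OrderCalculator : IOrderCalculator
{
    public virtual decimal CalculateTotal(Order order)
    {
        return order.LineTotal + CalculateShipping(order);
    }

    protected virtual decimal CalculateShipping(Order order)
    {
        return 0m; // default: free shipping; subclasses can override
    }
}

public class Order
{
    public decimal LineTotal { get; set; }
}
```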

Please don't make sealed the default choice for your classes. Go with open classes and live a happy life with a system which is open for change. Trust me, I've seen systems which adopted a closed stance and it wasn't pretty. The team kept hitting the wall with the changes they wanted to make, simply because the original developer had no time machine enabling him to foresee the changes which future members of the team needed to implement.

posted on Tuesday, 27 January 2009 12:19:30 (Romance Standard Time, UTC+01:00)  #    Comments [0] Trackback
# Sunday, 25 January 2009

This is going to be the last post in which I mention Twitter… seriously. In fact I'm going to start right now by not talking about Twitter; instead I'm going to focus on a side effect of Twitter: Corporate Tweeting. (You would in fact be correct if you assume that I just made that term up :))

The Vertical Niche

Like Google in search, Twitter has the market for short public messages pretty much sewn up. Does that mean that there isn't a market for short public messages anymore? As Google so clearly has shown, sewing up the market doesn't mean that others can't compete in that same market. It's all about the vertical niche, baby!


Yammer is the New Black

What IMDB is to Google, Yammer is to Twitter. Before I dive into what Yammer is, let me start out with a challenge we have at Vertica: as we spread to different geographical locations, how do we keep the company spirit going strong? How do we make the departments one coherent company with the same values and a sense of collectiveness?


We spent a couple of meetings debating that very issue, and of course the good old ideas like company outings, shared social events, and waxing each other's backs all came up, but for me the most interesting one, aside from waxing each other's backs, was to try out a Twitter-like service which would also allow for the usual private chit chat that goes on inside a company. Some jokes are best kept inside the company… like, you know, that waxing one. You get my point right?

Yammer

Yammer has set up shop with a Twitter clone which is ideally suited for running private Twitter-like networks. Basically all you need are e-mail addresses on the same domain and you're golden. Sign-up is stupid easy: enter your e-mail and you're good to go.

From there it's smooth sailing with a nice Adobe AIR client (surprise, Adobe AIR is not just for Twitter clients!) which gives you the ease of posting new messages that you're familiar with from that other network which I won't mention from here on in.

At Vertica, Yammer is quickly turning into a question and answer service, which translates directly into increased productivity because A) you don't have to know who knows what, you just ask the question and someone will chime in, and B) you don't interrupt people who don't want to be interrupted, because if they're not looking they won't answer.

Now whether or not it will actually serve its original purpose remains to be seen. The new office in Zealand is still under a month old and quite small, so I guess we'll just have to wait and see. What's interesting though is how quickly people at the first office adopted Yammer.

posted on Sunday, 25 January 2009 07:00:00 (Romance Standard Time, UTC+01:00)  #    Comments [0] Trackback
# Monday, 19 January 2009


In a previous post I wrote about Twitter and what it means to the Danish developer community. The real value of Twitter, however, does not come from visiting the site from time to time. You have to participate actively to keep the conversation going, and that's where the Twitter clients come into the picture.

I’ve been through a bunch of them and ultimately decided which one I liked the best. I’ll try and spare you from doing the same all over.

Digsby

Digsby gets an honorable mention because it was my first Twitter client; this program is how I got started with Twitter, and in no small way the reason why I still use it.

Digsby is labelled a social network client which gives you access not only to Twitter, in fact that’s the least of it, but also to Messenger, LinkedIn, Facebook, Yahoo Chat, Google Talk, the list goes on and on but you get the point. Digsby speaks with most social networks out there.

That was my reason for trying it out, as I really didn't feel that I needed a dedicated program to try out Twitter. I spent quite some time with Digsby and felt for a long time that it was the way to go. In fact the reason I dropped it was not so much Twitter related as it was Messenger related. It simply didn't work as advertised; sending files, for one, was spotty.

As a Twitter client it performed admirably and for me at least it was a low cost to pay for trying out Twitter as I used it primarily as a Messenger client with the added benefit of being able to send out my tweets as well.

Twitterrific

Twitterrific is an interesting one, as it didn't start out on the desktop for me. It actually started out on my iPhone, and when I got a Mac late last year it was the natural choice for the desktop as well, as the iPhone experience with this thing is flawless as far as I'm concerned.

Now the application is pretty much the same on the Mac. Interestingly it turns out that the functionality doesn't quite cut it on the desktop. Due to the nature of tweets, messages need to be as compact as they can be.


Imagine that you're posting a link which can easily be 50-60 characters; at that point you really want to be able to shorten the link easily and post the short version instead. Unfortunately Twitterrific doesn't support this, which is fine on the iPhone where cut and paste is not to be found so you tend not to post links. On the desktop though, links are thrown left and right, so not having the feature is a real pain point – at least for me.

Thus Twitterrific was evicted from the Mac desktop but remains on the iPhone as one of the first apps I ever installed on that thing.

twhirl

Before I delve into twhirl, a word on Adobe AIR. Not so much because I find the platform interesting, but because I find it interesting that a lot of the platform's ecosystem is made up of … wait for it … Twitter clients. It's interesting to me that a service like Twitter can drive a platform like AIR and not the other way around.

twhirl is pretty much like Twitterrific, only the name is quite a bit easier to spell and it supports the link shortening feature I mentioned above. It being an Adobe AIR app also means that it's cross-platform for those of us running across platforms out there.

twhirl is like the girlfriend you can’t quite figure out if you want to spend your life with or leave for someone else. I left but ultimately came back so I guess it’s forever between us :)

And finally remember to follow me on Twitter once you get your favorite client up and running :)

posted on Monday, 19 January 2009 11:58:14 (Romance Standard Time, UTC+01:00)  #    Comments [3] Trackback
# Sunday, 04 January 2009

Back in May 2008 I wrote a short note about me trying out Twitter. At the time I just wanted to know more about what Twitter actually was, as I heard about it time and again on podcasts, blogs, everywhere really.

Interestingly whenever people talked about Twitter it was due to the service being down but still I felt compelled to take it out for a spin.

Twitter of course is the service which enables you to post little notices about what you're currently doing, which doesn't sound all that useful until you actually sit down and think about it. In reality it turns out that there are numerous applications for a service like that. The notices are limited to only 140 characters, which means that you have to be really short and sweet in the stuff you send to the service.

Fast forward to January 2009 with the experiment done and my conclusion is in: Twitter is indeed a service worth paying attention to. Read on to find out why.

Now what prompted this post is a question I got from Brian Rasmussen when I suggested that he take a look at it. Basically he asked why he should use Twitter, a question I didn't quite know how to answer with anything but "it's cool". Since that time I've been wondering what makes Twitter worth my while, and yours as well, dear reader.

Jesper-Blad-Jensen-Twitter

Twitter is a lot of things to a lot of people. The value to me and our little community in particular lies in tying together everybody in a more coherent way than what is possible today. To me at least Twitter is a place where I get to keep in touch with a number of the Danish .NET developers in a far more personal way than what is possible at DotNetForum, ActiveDeveloper, etc. because the service is geared for throwing stuff out there without thinking too much about it.

 Morten-Jokumsen-Twitter

Why do I call it the back channel of our community? Due to the nature of the messages you stick on Twitter, it quickly becomes little notices about what's going on right now. For example, Mads used it to get an idea of which IoC framework to go with, I recently got a Mac and had no clue where to start so I elicited suggestions for apps to use, Niels uses it for communicating with the Umbraco team from time to time, recently Jesper wanted to know what to include in his upcoming ASP.NET MVC presentation at ONUG in January, and Rasmus had a memory leak which he needed some input on fixing.

Mads-Kristensen-Twitter

Basically what you get is an inside look at the process leading up to a blog post, a presentation, the solution to a given issue, or whatever; something you don't really get from reading the final product, and oftentimes much more interesting.

I would encourage you to go create an account with Twitter and follow a bunch of people from the Danish .NET community. Morten from DotNetForum was even kind enough to create a wiki with the Twitter names of a bunch of the Danish .NET guys, which you can use as a starting point. You can follow me using my Twitter name publicvoid_dk.

Of course there are a number of people whom I'd like to see get Twitter accounts, like Brian Rasmussen, Søren Skovsbøll, Mark Seemann, Kasper Bo Larsen, and Martin Bakkegaard Olesen.

posted on Sunday, 04 January 2009 13:41:19 (Romance Standard Time, UTC+01:00)  #    Comments [0] Trackback
# Monday, 29 December 2008

I had grand goals for 2008 when we started out the new year last time around, only stuff happened and my activity level on this blog has not been up to the goals I initially set out to reach. In spite of that I'm very happy with my accomplishments for 2008. They just happened to occur in a slightly different way than I originally thought.

The Blog

Surprisingly the most visited and commented post on the blog during 2008 wasn't even written during 2008. It caters to the more mainstream internet users, was written in 2006, and is about an annoyance I had with Windows and the My Music folder which disappeared from time to time.

But we are looking back at 2008 here, so it's fitting to mention the posts I'm most proud of which were actually written during 2008. First up is my Developing with Commerce Server 2007 series, in which I dove into the development experience of Commerce Server. Also on the topic of Commerce Server 2007, I wrote a post on a generic mapping piece I did for a project early in the year which turns CS objects into nice POCO objects for better testability.

Work

Of course there was real work to be done, and 2008 brought some really interesting challenges, with me participating in one of the largest e-commerce projects I've ever had my hands on. Huge customer, international team of devs, traveling across the Atlantic to do some of the work. All in all a great learning experience, and as a result I'm now able to provide even better service to our customers. Oh, and it was kinda fun too :)

I got to attend a couple of conferences as well. First, Daniel from Microsoft was nice enough to invite me to JAOO, a conference I enjoy a great deal, and later in the year I had a unique chance to fly out to Los Angeles to participate in PDC 2008. I have to say that if you ever get a chance to participate in a conference like the PDC you really should jump at it. It's a spectacular show to be sure. I did a couple of podcast episodes about it too; in Danish mind you.

Finally I'm happy to report that we managed to add a number of very talented people both to my own team at Vertica and to the integration team as well. I'm proud to have such great colleagues and to be able to say that every day I learn something new as a result.

Aarhus .NET User Group

Now, I started this post out by saying that I haven't spent as much time on the blog as I would have liked, and there's a really good reason for that: Aarhus .NET User Group, which has sucked up a significant part of my time.

During 2008 the core group and I organized thirteen meetings; indeed we didn't miss a beat the entire year and even managed to do a bonus meeting in December with my good colleague Daniel about unit testing. Additionally we pulled off a code camp in the beginning of the year, the ANUG one-year birthday dinner, and a Christmas dinner. Not too shabby if I do say so myself.

Support for the user group during 2008 was tremendous, and I couldn't be happier about where we're at after just one and a half years of operation.

More importantly we've shown other .NET developers in the Danish community that a user group in Denmark is viable and as a result new groups have sprung up during 2008. As I write this groups are up and running in Odense (ONUG), Aalborg (AANUG), and Copenhagen (CNUG).

ANUGCast (www.anug.dk/podcast)

Ever since we started the user group we've had requests for putting the meeting content online somehow, be it video, audio, or something else entirely. What we did from the start was write meeting summaries, which aren't really the ideal way to bring the content online. They're adequate and we'll continue to write them, but it's been clear from the start that they were far from sufficient.

Late in 2008 it struck me that the podcast format might be the ideal way of addressing the requests. With that in mind I set out to create a podcast based on the topics of the meetings, and ANUGCast was born with the initial goal of bringing out an episode once a month. This quickly escalated to one per week, and so far it's gone really well. In fact episode thirteen was posted today, and I've got a bunch of episodes already in the can just waiting to be released.

The podcast is my little baby and I guess most of the time which would otherwise have been spent on the blog got diverted there. I enjoy hosting the podcast a great deal, so much so in fact that I'd do it full time if I could :)

Since starting the podcast I've gotten it registered with more than 50 aggregation sites, we're on iTunes, and we've had more than 5000 downloads since the pilot episode in September 2008, a number I'm particularly proud of. We've seen a steady climb in downloads since the pilot episode, and the past couple of months saw more than a thousand downloads each.

I guess I should do a couple of posts on how ANUGCast is made and some of the tricks I picked up wearing the hats of producer, sound engineer, basically every damn hat needed to make it happen :)

2009

The coming year will bring a similar activity level on the blog as 2008. It is my every intention to keep up my work with the user group and the podcast and even step it up a bit. 2009 will bring more real marketing of the user group to reach a new audience, which I'll write more about after we hold the first meeting of 2009. There's something to look forward to for sure. 2009 will also bring our first IT pro related meeting, which will cover Hyper-V. It's intended as a pilot to kinda test the waters for something like that.

Oh and I went and got myself a Mac so I guess I'm sort of a Mac switcher as of December 22nd... 2009 is going to be interesting for sure.

posted on Monday, 29 December 2008 22:41:45 (Romance Standard Time, UTC+01:00)  #    Comments [1] Trackback
# Tuesday, 11 November 2008

At Vertica we employ a wide range of Microsoft server products in our solutions to maximize customer value. To help us manage these often complex environments we rely heavily on virtualization. For the longest time the obvious choice was Microsoft Virtual PC, simply because it was there and freely available to use, and just being able to run a virtual machine was amazing in its own right.

Our default setup when developing in the virtual environment is to install everything needed inside the virtual machine and use that exclusively. Running IIS and a couple of server products alongside Visual Studio and ReSharper works well, but we've found that performance leaves something to be desired.

The obvious answer is to move Visual Studio out of the virtual environment, do development on the physical machine, and deploy the code to the virtual environment and test it there. Basically I require two things from this: 1) Pushing the build to the server should be very simple, 2) Debugging must be supported.

Pushing Code

We've got a bunch of options for pushing code to another environment: the Publish Wizard in Visual Studio, msbuild or nant tasks, PowerShell, and my personal favorite, bat files :)

I wanted to create a generic solution which doesn't dictate the use of msbuild or any other technology, so I went with a bat file which in turn calls robocopy. With this in place we're able to push files over the network to the target VM. Of course a one-time configuration of the virtual environment is needed, but that isn't in scope for this post.

Download my deploy bat file. Basic usage: Deploy c:\MyWebSite \\MyServer\MyWebSiteVDir.

Robocopy is part of the Windows Server 2003 Resource Kit.
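
For reference, such a deploy script boils down to something like this minimal sketch; the actual downloadable file may differ, but /MIR and /NP are standard robocopy switches.

```bat
@echo off
rem Deploy.bat - push a build to the target VM share.
rem Usage: Deploy c:\MyWebSite \\MyServer\MyWebSiteVDir
if "%~2"=="" echo Usage: Deploy ^<source^> ^<destination^> & goto :eof

rem /MIR mirrors the source tree; /NP suppresses per-file progress output.
robocopy "%~1" "%~2" /MIR /NP
```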

Remote Debugging

The second requirement is debugging. I want a solution which is on par with running Visual Studio inside the virtual environment, and that means debugging, people! :)

The steps for doing remote debugging are well documented, but for completeness' sake I will include them here with nice screenshots to go along.

1) Copy Remote Debugger from \program files\Microsoft Visual Studio 9.0\Common7\IDE\Remote Debugger to somewhere on the virtual machine, e.g. desktop.

2) Run Remote Debugger on virtual machine (msvsmon.exe).

3) Grab the qualifier from the Remote Debugger (You’ll need it in a second).

Remote-Debugger-Qualifier

4) Connect to the Remote Debugger from VS on the physical machine via Debug > Attach to Process (CTRL + ALT + P).

5) In the Qualifier input field enter the qualifier from Remote Debugger window.

Visual-Studio-Attach-To-Process 

Voila. Set a breakpoint on the remote machine and see the code break in Visual Studio.

VMWare

I stated earlier that we're using Microsoft Virtual PC, which is true, but it's also true that we're looking into VMWare Workstation. My first reason for doing so is the performance boost which comes from running in VMWare. I haven't done any sort of scientific testing of how much faster we're talking about; suffice it to say that it's enough that you notice it when you're going about your business in the virtual environment. VS is faster, compiles are faster, everything is just smoother. In my book that's the best sort of performance metric there is :)

Additionally VMWare provides other interesting features. The first one you'll see is that storing and restoring the state of a VM is blazingly fast. Enough so that you'll actually find yourself using the feature all the time. I know I am.

Secondly VMWare supports multiple monitors. That's right. Simply select how many monitors you want supported and it'll do it. You can even switch on the fly. In case you're wondering, yes, we do have three-monitor setups for all the developer machines in the office :)

VMWare-Workstation-Multiple-Monitor-Support

The final feature is significant enough for our story to warrant a paragraph of its own. I accidentally stumbled across it this morning when I upgraded VMWare to version 6.5.

Remote Debugging Support in VMWare

You read my earlier steps to get remote debugging working which will work for any sort of virtual environment. VMWare however brings some nice debugging features to the table available right there in Visual Studio.

1) Go to the VMWare menu and select Attach to Process.

VMWare-Workstation-Debug-in-Virtual-Machine

2) Select the VM you want to start debugging on and point to the Remote Debugger that you’ve got locally in \program files\Microsoft Visual Studio 9.0\Common7\IDE\Remote Debugger\x86.

VMWare-WorkStation-Attach-to-Process-in-Virtual-Machine

3) Click the Attach button and the Remote Debugger will launch inside the VM and you’re ready to debug.

No need to copy anything to the VM. It just works. You can even set up a config for this which enables you to attach to the debugger with F6. Nice!

In conclusion running Visual Studio outside of the VM is not only possible but with the right tools like VMWare in hand it’s even an enjoyable experience. Have fun!

posted on Tuesday, 11 November 2008 10:35:21 (Romance Standard Time, UTC+01:00)  #    Comments [0] Trackback
# Sunday, 05 October 2008

Day 3 of JAOO was a potpourri of topics for me: everything from JavaScript as an assembly language, JavaScript for building an OS, and developer best practices, to data synchronization with a shiny new Microsoft toy. If you didn't catch my summaries of Day 1 and Day 2, please make sure that you check them out.

Five Things for Developers to Consider

Last year I attended a couple of developer best practices sessions and came away liking them quite a bit, so I figured I should attend at least one this year as well. The first one this year was basically five things which Frank Buschmann and Kevlin Henney collectively consider to be important to developers.

Of all the things they pulled out of their hats I liked their points on expressiveness the most. They talked about bringing out concepts which are implied in both the architecture and the low-level design of a solution; something we strive to do as well. One of the key aspects when writing code, I find, is that more often than not code is written once and read dozens of times, which means optimizing for readability is not only a good thing to do but the only thing to do.

An example of the above is variables of type string. Usually these guys contain a lot more than mere strings, e.g. XML, social security numbers, etc. Instead of going with just the string, you could go for a class of type SocialSecurityNumber, which would be a lot more explicit. The little things count.
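
A quick sketch of the idea; the validation rule here is illustrative only:

```csharp
using System;

// Wrapping the raw string makes the concept explicit and gives
// validation a single home.
public class SocialSecurityNumber
{
    private readonly string value;

    public SocialSecurityNumber(string value)
    {
        if (string.IsNullOrEmpty(value))
            throw new ArgumentException("A social security number is required.");
        this.value = value;
    }

    public override string ToString()
    {
        return value;
    }
}

// A signature like RegisterCustomer(SocialSecurityNumber ssn) now
// documents itself in a way RegisterCustomer(string s) never could.
```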

Developer habitability is a term they touched on which I quite like. The idea is that if we create nice usable solutions which are easy to understand and simple in their composition, developer habitability is increased – basically the code base is a nice place to live :)

Keeping in Sync

Two-way synchronization is a notoriously difficult challenge to solve. Mostly when I've come up against this thing I've gone for a simpler solution, like selecting a data master which overrides the slaves. Naturally I was excited to learn that Mike Clark was giving a talk on the Microsoft Synchronization Framework, which tackles this very issue.

Sync Framework actually forms the backbone of a tool you might know already: SyncToy, which syncs files across the network, file system, or whatever. Certainly a neat tool, but Sync Framework is about much more than that. It basically enables us to synchronize our own custom data stores, which to me is very exciting.

Included in the box is support for all data stores which have an ADO.NET Data Provider so we’re talking all major databases here. Additionally the framework gives us rich hooks so we can grab any step in the pipeline and modify it to our heart’s content.

A JavaScript OS

Really? An OS done in JavaScript? Apparently so, if Dan Ingalls has his way. Actually he's already done some amazing work on this Sun project, which aims to liven up the web by doing away with HTML, replacing it with vector graphics rendered by a JavaScript engine.

Actually my words won’t really do it justice so instead take a look at this video; basically the entire talk. Once you’re done with that go play with Lively Kernel live on the web.

JavaScript as an Assembly Language

Keeping in the same vein, I decided to go take a look at Erik Meijer talking about his current project: Volta. Volta is a project aiming to allow us to defer decisions on the deployment model to a much later point in the project than we currently do today. The current state of affairs is pretty much that we need to decide very early in the project, which might or might not make sense. In any event, having the option to defer those kinds of decisions is always better, right?

Now the part which Erik focused on is the piece which allows us to run our apps on the web without actually coding for the web. The premise here is that we can view JavaScript as a language which we can target with the compiler, generating a web implementation of our app which then runs without any web-specific code ever written by us devs.

Last year Erik gave the keynote at JAOO and talked about Volta, at which time I was skeptical to say the least, so it was interesting to actually see that there's some meat on the project after all. The idea is intriguing, and I look forward to seeing where it goes from here.

With two "extreme" JavaScript sessions done I was all JavaScripted out for the day, but I will say this: my days of doubting JavaScript as a "serious" language are way behind me.

TDD Take 2

One of the big topics for me last year at JAOO was test driven development, so I was curious to see whether new stuff had come up in the intervening time. Giving the talk on TDD was Erik Doernenburg. I won't go into a lot of detail about the talk because, as it turns out, not much has changed in the span of a year.

What was interesting for me to note is that our work with unit testing and test driven development at Vertica has paid off handsomely, as everything that ThoughtWorks, which I would describe as the thought leaders in this space (no pun intended), is doing is basically what we've spent the last year implementing, and I'm happy to report that we're at the point where the culture is basically sustaining that particular way of doing code.

So a year and a half ago I set the goal of becoming better at unit testing, and my great colleagues have ensured success in that area. For the coming year the focus will be on builds, continuous integration, and release management. To me these are natural steps in the continued development of our way of doing things … and it's fun too :)

posted on Sunday, 05 October 2008 21:48:05 (Romance Daylight Time, UTC+02:00)  #    Comments [0] Trackback
# Friday, 03 October 2008

Day 2 of JAOO 2008 was all about architecture for me: agile architecture, architecture reviews, requirements gathering, architecture testing, and finally lessons learned in architecture. Be sure to catch my summary of JAOO 2008 Day 1 if you missed it.

Architecture Reviews

Frank Buschmann from Siemens in Germany was the track host and also the first speaker of the day. I caught a couple of talks with Frank last year and it's apparent that he knows his stuff. While hugely important, the architecture talks tend to be quite difficult to follow because the very nature of the topic is fluffy.

Most of the talk was pretty run of the mill in terms of how to conduct an architecture review. I've never formally conducted such a review, but we do do them at regular intervals at Vertica, just not in any sort of structured manner. We do them when they make sense, and they usually consist of peer reviews and initial design sessions.

Most interesting to me were a couple of techniques which Frank brought to light for doing a formal architecture review. It's not something you do every day, and it is certainly something which requires a lot of structure.

My key takeaway from the talk is the fact that preparation for an architecture review is essential. Basically you need to sit down and try to figure out what you or the client expect from the review, as the goal will impact the process of doing the review. This highlights why we can get away with very informal reviews: our goal is usually just to verify the selected architecture.

Now the situation changes rapidly when we're conducting architecture reviews for other companies. Here the objective is both to verify the architecture and, more importantly, to figure out what went wrong after the fact when a new system doesn't satisfy non-functional requirements, lacks adoption in the internal dev organization, lacks maintainability, or something else altogether.

So I took away the fact that I need to be a lot more conscious about what the client expects to get out of a review, and I must admit that I've taken a lot of satisfaction from going in and pointing out all the deficiencies in existing systems without giving thought to the fact that, more often than not, systems do have something good to bring to the table in spite of their deficiencies, perceived or otherwise.

Requirements Gathering

Next up was a talk which I didn't really know what to expect from. It turned out to be one of my favorites at this year's JAOO, due to the fact that it was very different from what I've seen at any other conference and it covered a topic the importance of which I can't stress enough: communication.

Chris Rupp is the CEO of and a business analyst at Sophist. Before I get started with my summary I must mention the fact that she spoke flawless English, a feat I've rarely seen performed by a German person. No hint of accent, nothing, just perfect English.

The meat of the talk was all about understanding what your client is telling you and, more importantly, filling in the blanks. The premise of the talk was basically something that we've known collectively in the software business for a while: the client doesn't know what he or she wants. She had a twist on this, though, that I couldn't agree with more, which went along the lines that we can't expect the client to know what they want. Building software is a complex task, and it's our responsibility as a community to help our clients figure out what they want.

Chris touched on quite a number of different techniques which we can employ to fill in the blanks. I was very pleased with the fact that she decided to focus on a single technique called Neuro Linguistic Programming (NLP). My very limited understanding of NLP is that it's basically a theory of the programming language of the human mind. What I took away from the talk is that NLP might be the key to picking up subtle nuances in the conversations I have with clients. Is a sentence phrased imprecisely? Maybe the client doesn't really know what the details should be in that particular case. Is the client using very general terms to describe a feature? That could mean that we're lacking some details; maybe we shouldn't really allow everybody to update everything.

As I stated, my understanding of NLP is very limited at this point, but I definitely see a lot of potential here, so I went ahead and suggested that we get some books on the subject so we can investigate further. I'm hooked at this point, no doubt about it.

Agile Architecture

James Coplien did a talk on what I thought would be a pretty standard only-design-and-build-the-architecture-you-need-right-now kind of talk. Indeed he started out like that, but he quickly went on to blowing our collective minds by proposing a new style of architecture where we separate the What and the Why more clearly. Now I won't claim that I understood half of what he was saying, but I got the general drift and I definitely need to look into this some more.

If I were to compare it with something I know from the domain driven design world, I'd compare it with the Specification pattern on steroids, but I feel that it's a poor comparison, as his ideas describe the overall solution architecture where the Specification pattern is just small bits and pieces of any given solution.

To better understand the concepts I need to see a lot more source code :) You can download the pre-draft book which James is writing on the subject; I think you'll enjoy the new ideas a great deal.

Software Architecture and Testing

…. zzZZZzz…. nuff said.

Top Ten Software Architecture Mistakes

Needless to say I was not in the most energetic of moods having sat through the snooze fest which was the previous talk. The guy in front of me must have agreed, as he actually nodded off there for a while during the testing talk. It was actually pretty entertaining watching him do battle with very heavy eyelids, the mightiest of foes :)

At least Eoin Woods (cool name or what?) took up the challenge and turned the whole mess around in the next talk, in which he discussed his list of top ten architecture mistakes. Being in the last slot of the day is no easy task, but he managed to get the entire room going: lots of laughs, lots of good stories, and lots of good information.

His talk basically served to highlight some of the mistakes that we’ve all made and continue to make from time to time. I believe that talks like this are invaluable as they serve to keep us mindful of at least some of the pitfalls of software architecture.

I liked the fact that this talk contained nothing but concrete examples and real-world tips and tricks which we could take home with us and use. My favorite takeaway is to always have a plan B. I think most good architects subconsciously have these hanging around, but I like the idea of making plan B very explicit. It helps the team decide if and when to enact it.

Just formulating plan B and sticking it into a document is hugely valuable to me; it gives you pause and helps you think through plan A, and should, God forbid, plan A turn out to be a dud, we've got something to fall back on. Having plan B be visible leaves more wiggle room for the client, and I firmly believe that it helps build trust, as the customer ultimately gets a better solution.

Continue to JAOO 2008 Day 3…

posted on Friday, 03 October 2008 22:33:17 (Romance Daylight Time, UTC+02:00)  #    Comments [0] Trackback
# Thursday, 02 October 2008

JAOO-logo Last year was my first JAOO experience and I was fortunate enough to get to attend this year as well. My first experience with JAOO was very positive so I was looking forward to this year quite a lot.

The Keynote

As always we started out with a keynote, which this year was held by Anders Hejlsberg from Microsoft, and of course a fellow Dane :). Mr. Hejlsberg talked about the future of CLR languages with three pillars forming the basis: Declarative, Concurrent, and Dynamic. Interestingly, functional languages like F# and new language features like LINQ seemed to fulfill this quite nicely and so played a central role in his talk.

Anders delivered a solid talk, and he even mentioned a new C# keyword which we can expect to see in the next incarnation of the language: dynamic. The idea is to declare a variable dynamic to enable easier lookup of methods than what we've got today with reflection. Sort of like the dynamic dispatch known from dynamic languages, but keeping everything around it statically typed. Powerful stuff.
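
As the feature eventually shipped in C# 4.0, the contrast looks roughly like this; a sketch with a made-up Calculator type:

```csharp
public class Calculator
{
    public int Add(int a, int b) { return a + b; }
}

public class Demo
{
    public static void Main()
    {
        // Today: looking up a member at runtime means reflection.
        object calc = new Calculator();
        object viaReflection = calc.GetType()
            .GetMethod("Add")
            .Invoke(calc, new object[] { 1, 2 });

        // With dynamic the same runtime lookup reads like an ordinary
        // call, while everything around the variable stays statically typed.
        dynamic dynamicCalc = new Calculator();
        int sum = dynamicCalc.Add(1, 2);
    }
}
```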

Interestingly he stopped by the Danish Microsoft HQ to give a similar talk the day before from which you can watch a clip which sums up his points.

CI and more CI

For the last year or so we've been hard at work introducing unit tests and, to some extent, test driven development. By introducing unit testing I don't mean just introducing the concepts and seeing what happens, but really having the concepts nested deeply in the way we develop software at Vertica. I'm proud to announce that we've had a great deal of success in doing so, in no small way due to my very talented colleagues and Daniel in particular.

The next logical step in this work is to introduce continuous integration: the act of building the software and running all the structured tests upon check-in to the source repository. Naturally I was keen to attend a couple of sessions on this very topic.

Unfortunately Chris Read from ThoughtWorks gave a very run-of-the-mill CI talk, covering the concepts and the benefits but never really digging down deep into any of the aspects. Not that the talk was bad, but he simply tried to do too much in the span of a very short time, which meant that he never really got around to talking about anything concrete. He did touch briefly on various client projects he'd been involved with, which gave some interesting insight into the problems we might face, and he mentioned a concept of creating CI pipelines which jived well with my idea of how it should work. I'd have liked to hear a lot more about actual practices, do's and don'ts, which would have made the talk immensely more engaging.

I followed up with what seemed to be a nice topic but turned out to be one of the pitfalls of JAOO. Not the presentation itself; I'd judge it to be quite useful … for Java developers. Basically it involved taking the build process a step further than Ant by introducing a scripting language on top of Ant. Powerful stuff, but sadly it didn't apply to me.

So, about the pitfalls of JAOO: basically it's important to be mindful of the fact that you can come across talks which are heavily based on some technology. For a .NET dev it's probably bad to walk in on a specific Java topic, and vice versa.

Cloud Computing and Insight into Google

Cloud computing is getting a lot of attention at the moment, and frankly I fail to see why, so I wanted to see if I could gain some insight into the world of cloud computing. I actually ended up getting an interesting insight into Google, as Gregor Hohpe discussed various in-house technologies they employ at Google to scale to the massive size required to run services on the level which Google does.

I was fascinated with BigTable, Google's distributed storage system, which can support tables larger than a terabyte. The Google File System was an interesting piece of kit as well, as it scales to sizes of lots and lots of petabytes. While Gregor told us about the Google File System he mentioned an internal joke which goes along the lines of, "What do you call 100 terabytes of free disk space?", "Critically low disk space". I'm a geek, so I find stuff like that funny you know :)

He did demo Google's cloud computing service, App Engine, which enables us to write Python code, deploy it to Google's infrastructure, and run it from there, basically allowing developers to scale apps to the same size as Google itself.

PowerShell Blows My Mind

A while back I listened to a Hanselminutes episode in which Scott talks about PowerShell, the ultimate scripting environment from Microsoft. Since then I've wanted to learn more, and I jumped at the chance to see Jeffrey Snover, creator of PowerShell, present it himself.

Basically his presentation blew my mind. From start to finish it was all PowerShell script flowing over the screen and I struggled to keep up with everything going on.

My interest in PowerShell comes from the fact that we're on the brink of introducing CI in our dev process, as I mentioned, and I figure that PowerShell will come in handy in automating some of the more tricky stuff. Also it's my firm belief that PowerShell is a technology most .NET devs will start using over the coming years, as it's simply the way to get things done or even test out small ideas without cranking up the entire VS IDE.

From the talk my impression is that we are in fact dealing with a very powerful scripting environment; not that I actually doubted that to begin with, but it's nice to get the point hammered home from time to time. The other aspect I came away with is that there's a lot to learn: what we've got is a new syntax to deal with and, even more importantly, a completely new mindset. PowerShell is modeled on UNIX commands, where everything can be piped together to produce interesting results; a way of thinking we're not really used to in Windows land, although I feel we can benefit tremendously from it.
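
A small taste of that pipe-everything mindset, using only standard cmdlets:

```powershell
# Compose small commands with pipes, UNIX style: the five processes
# using the most memory, showing just the name and working set.
Get-Process |
    Sort-Object WorkingSet -Descending |
    Select-Object -First 5 Name, WorkingSet
```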

Continue to JAOO 2008 Day 2…

posted on Thursday, 02 October 2008 22:06:02 (Romance Daylight Time, UTC+02:00)  #    Comments [0] Trackback
# Saturday, 13 September 2008

I was fortunate enough to attend a special event at Trifork at which Oleg Stepanov, manager of the JetBrains team creating ReSharper, gave a talk on ReSharper functionality. He basically demoed a bunch of R# features, most of which are pretty well known to the Vertica team and myself, but a couple of nuggets did present themselves, and I figured if we didn't know about them, probably others don't either.

Please note that all keyboard shortcuts mentioned in this post are based on the standard R# Visual Studio keyboard layout.

Smart Code Completion

On the light side I'll start with a feature I knew was in there but never quite got why it was useful. The feature in question is Smart Code Completion, or as I like to think of it, Smart Intellisense. You find the feature in the ReSharper menu under Code > Complete Code > Smart (CTRL + ALT + SPACE). You could say that it puts the "intelli" in intellisense :)

What it does is that when you activate the feature it suggests methods and properties based on the types in the local scope. So if you're in the process of assigning an int variable from somewhere, it will only suggest members with matching types, rather than everything in scope as standard Visual Studio intellisense does. Check out the screenshots below; the one to the left is standard Visual Studio intellisense (CTRL + SPACE), the right one is R# Smart Code Completion where the list is greatly reduced.

ReSharper-4x-Smart-Code-Completion-Normal-Intellisense  ReSharper-4x-Smart-Code-Completion 
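To make the difference concrete, here's a small hypothetical example of my own, not from Oleg's demo:

public class Person
{
    public int Age { get; set; }
    public string Name { get; set; }
}

public class Demo
{
    public void AssignAge(Person person)
    {
        // With the cursor after the '=', Smart Code Completion (CTRL + ALT + SPACE)
        // offers only int-compatible members such as person.Age, whereas standard
        // intellisense (CTRL + SPACE) lists everything on person.
        int age = person.Age;
    }
}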

Complete Statement

Probably the most useful feature that I picked up at the meeting is Complete Statement. Complete Statement is available from the R# menu under Code > Complete Code > Complete Statement (CTRL + SHIFT + ENTER).

It basically tries to complete the current statement that you're writing. So if, for example, you're writing a method signature and you use the feature, it will complete the signature and move the cursor to the method body, enabling you to write your code in a more fluent manner. It works in a number of situations, so you really want to learn the shortcut and start experimenting with it.

Complete Statement for if-statement. First step inserts the missing parenthesis and the curlies. Second step moves the cursor to the body of the if-statement.

ReSharper-4x-Statement-Completion-If-Step1  ReSharper-4x-Statement-Completion-If-Step2 ReSharper-4x-Statement-Completion-If-Step3

Complete Statement for method signature. Inserts the curlies and moves the cursor to the method body.

ReSharper-4x-Statement-Completion-Method-Step1 ReSharper-4x-Statement-Completion-Method-Step2

And for a string variable. Inserts the semi colon and moves the cursor to the next line.

ReSharper-4x-Statement-Completion-string-Step1 ReSharper-4x-Statement-Completion-string-Step2
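If the screenshots don't load, here's the method signature case in text form; the Calculator class is just an invented example:

public class Calculator
{
    // Type only the signature below and hit CTRL + SHIFT + ENTER after the
    // closing parenthesis: R# inserts the curlies and drops the cursor in the body.
    public int Add(int a, int b)
    {
        return a + b; // cursor lands here, ready for the implementation
    }
}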

Generate in Solution Explorer

You probably know about the ReSharper Generate feature which enables you to generate properties, constructors, etc. What I didn't know about this feature is that it's also available in the Solution Explorer, where it basically enables you to create a class, interface, struct, or folder. Very handy indeed.

Generate is available from the R# menu Code > Generate (ALT + INS).

ReSharper-4x-Generate-In-Solution-Explorer

Camel Case Navigation

I love the code navigation features of R#. They let me find my way around a code base very easily. I've found this particularly useful in code bases I don't know very well, because I usually have an idea of what another developer might choose to call something, so I just go looking for part of that type name. Anyway, a twist on the navigation features is the fact that you can navigate via camel casing: if you have a type named OrderManagementService you could look for it by typing the entire thing, but with camel casing you simply enter the upper case letters of OrderManagementService (OMS) and it will find the type for you. Very handy and my second favorite new feature of R# :)

BTW Navigate to Type is CTRL + T, Navigate to Any Symbol is CTRL + ALT + T, Navigate to File Member is ALT + <, and Navigate to File is CTRL + SHIFT + T. Learn 'em, love 'em.

ReSharper-4x-Navigate-by-CamelCase-Standard ReSharper-4x-Smart-Code-Completion

Coming Features

Oleg also told us a little bit about what we can expect to see in R# 4.5. The main "feature" of the 4.5 release is performance tuning and bringing down the memory footprint. They're looking at speeding up R# by a factor of 2 and bringing the footprint down by 100 MB. Certainly very welcome. They are sneaking in new features though; one of them is including "Find unused code" in Solution Wide Code Analysis.

Download ReSharper 4.1

posted on Saturday, 13 September 2008 15:37:02 (Romance Daylight Time, UTC+02:00)  #    Comments [0] Trackback
# Monday, 08 September 2008

When I first saw the var keyword in C# 3.0 I was excited, my body tingling with excitement for the possibilities this keyword would bring to the world of many a .NET developer: Productivity, clarity, fame, and fortune. Unfortunately, now that C# 3.0 has been with us for a while, I feel that I must warn the public of the evil that is the var keyword. Productivity, clarity, fame, and fortune have succumbed to mind boggling spaghetti code, confusion, and, let's be honest, fame and fortune were never really on the table to begin with :)

What then is this evil of which I speak? Massive overuse of the var keyword. Observe the following hot dish of spaghetti bolognese:

public void SpaghettiBolognese()
{
    var calculator = new Calculator();
    var taxLevel = GetTaxLevels();
    var person = GetPerson();
    var tax = calculator.CalculateTaxLevel(person, taxLevel);

    person.Tax = tax;
}

All kidding aside, this piece of code breaks one of my most fundamental rules for reading and writing code: Don't make me think. Grokking a piece of code is tough enough as it is; having to keep types and variables in memory (read: the developer brain) will slow down code reviews and debugging.

For now I'm using a couple of rules to keep the var silliness at manageable levels.

1) Always use proper types for variables which are set from a method or property. It makes the code so much more readable.

Tax tax = calculator.CalculateTaxLevel(person, taxLevel);

2) Do use the var keyword when there is no question about which type will be inferred.

var calculator = new Calculator();
var i = 100;
var s = "Søren";

While the var keyword does offer a nice productivity gain, it's important to realize when to use it and, more importantly, when not to. Also it would seem that the var keyword is in cahoots with the good folks at Jetbrains, as ReSharper is very eager to convert perfectly well formed type declarations to implicitly typed ones. As I started out by saying, be wary of the var keyword - it's one sneaky bastard :)

Var-keywords-is-a-sneaky-bastard

posted on Monday, 08 September 2008 21:57:14 (Romance Daylight Time, UTC+02:00)  #    Comments [10] Trackback

This summer I spent two weeks in southern England vacationing with my family; beautiful countryside, though not really known for its blazing high speed broadband connections. Chance would have it that Opera Mobile 9.5 got released while I was there, and naturally I faithfully downloaded it and checked it out. I did not come to regret it.

Before we continue I have to put up the standard issue beta disclaimer for the browser: Opera Mobile 9.5 is not in final form. Be aware that it might bring your Windows Mobile phone to its knees, even destroy it completely; also bear in mind that there's a slight chance that the beta status of the product will affect your ability to have children. Consider yourself warned! :)

Before we dive in you should know that Opera Mobile 9.5 will only work on Windows Mobile devices equipped with a touch screen. Why, you ask? The reason is that navigating web pages in Opera Mobile 9.5 is pretty much driven by the touch screen alone. While that is a bit of a letdown for non-touch screen devices, it's also the main reason why you should care about Opera Mobile as a Windows Mobile user. What it means is that you can basically get an iPhone-like browsing experience on Windows Mobile. Simply put, you don't have to use those annoying little scroll bars in Pocket IE to get around the page; instead you swipe your finger across the screen, which produces a nice scroll complete with the rubber band effect found on the iPhone.

Auto full screen is a great feature. It does exactly what you'd expect it to do: When displaying a web page Opera will switch to full screen mode automatically and leave it there until you click the little transparent menu icon, which brings up the browser chrome and enables you to enter a new URL, view your favorites, or switch tabs.

Opera-Mobile-95-Full-Screen

The tab feature is a particularly nice touch, at least for me, as I've often found myself needing to browse away from a particular page because I needed to look something else up. A very annoying problem that Opera Mobile does away with just like that. Now there's a small caveat, as only three tabs are supported at any one time. At first I thought that would be a big deal, but as it turns out you really never need that many active tabs; at least I never hit the upper limit during my normal usage of the browser.

Opera-Mobile-95-Tabs

Add to this the fact that Opera Mobile will display web pages in their complete form, like you're browsing from a PC, and we're in business. Much like Safari on the iPhone, Opera will display the web page in its entirety on the tiny screen and allow you to zoom in on areas of interest by double tapping the screen.

All is not well in Opera land though, and the beta part of the product did rear its ugly head. While touch scrolling works fine most of the time, I did find that it would screw up during page load. Trying to scroll on a page while it loads would more often than not send the browser scrolling to the bottom of the unfinished page, warp speed and all. Also on the subject of weird behavior with loading pages, I found that the auto full screen feature would mess up when trying to scroll during page load and continually switch on and off until the page finished loading. The bug is very pronounced on a slow connection like GPRS, but you hardly notice it browsing on a WLAN or using 3G.

It does seem like a couple of issues are related to trying to navigate pages while Opera is loading, as I found it exceedingly difficult to tap the links. This wasn't helped any by the fact that no visual cues are provided to tell you whether or not the link you just tapped actually registered. Again, on slow connections this is more pronounced, as the only cue you get that you actually hit the link is when the destination page finally turns up. The same goes for buttons on web pages, which don't produce any cues to the fact that they were actually clicked.

Opera-Mobile-95-Menu-Options

Opera Mobile does lack a little polish here and there, which I discovered on my phone. It's outfitted with a hardware keyboard that Opera unfortunately doesn't seem to be aware of, as it helpfully kept popping up the software keyboard whenever I tapped a text field. Just a minor annoyance that I hope Opera will fix with the next release.

On a nice side note, I became aware of the bug because Opera is a damned fast browser at loading content. Even on very slow connections text starts displaying very quickly, and were it not for the scrolling bug you'd be reading web content in a matter of seconds on a GPRS connection. After a while I simply turned off the auto full screen mode while on GPRS so I could browse away to my heart's content. The feature is very good and I did turn it back on when I was able to connect via a faster connection.

What this boils down to is basically that the combination of touch scrolling and PC-like rendering of web pages makes for a usable web browsing experience on Windows Mobile, where previously I'd say it was lacking in a number of ways, though better than what you'd find on comparable Symbian-based phones. During my vacation I found myself browsing the web more than I ever did. Even after I returned home I found myself reaching for the phone where I previously would have gone and fired up the desktop to browse. Opera Mobile is that good, missing polish aside.

If you own a touch screen Windows Mobile phone I highly recommend that you go and give Opera Mobile 9.5 a spin. You won't regret it.

posted on Monday, 08 September 2008 20:26:12 (Romance Daylight Time, UTC+02:00)  #    Comments [3] Trackback
# Tuesday, 17 June 2008

Need I say more? Be one of the first to grab it from FTP.

posted on Tuesday, 17 June 2008 19:46:10 (Romance Daylight Time, UTC+02:00)  #    Comments [0] Trackback
# Wednesday, 11 June 2008

In Windows XP and Windows Server 2003 I used the "Run As" command religiously for testing various stuff. Today I needed the same thing in Windows Vista, right clicked a program in the start menu and ... nothing. No Run As command. Confused, I held down the shift key in the hope that it would appear. Again nothing.

Turns out the only thing you get in Windows Vista is "Run as Administrator". Oh, you can have Vista prompt you for credentials every time you select Run as Administrator by changing the local group policy, but I really don't want to spend the time changing the configuration or put up with the hassle of entering credentials every time I want to run something as admin. I'm lazy like that.

Sysinternals to the rescue with ShellRunAs. It adds a new item to the right-click menu which allows you to enter a different set of credentials to run the application under. Nice! No hacking of group policy required.

 

ShellRunas
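If I remember the options correctly, you register the menu item once from a command prompt like so (check shellrunas /? to be sure):

shellrunas /reg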

Download ShellRunas

posted on Wednesday, 11 June 2008 15:29:34 (Romance Daylight Time, UTC+02:00)  #    Comments [0] Trackback
# Tuesday, 10 June 2008

My favorite Visual Studio add-in just got revved to version 4.0. Full LINQ support included along with a number of other goodies. I may have to update my ReSharper review now :)

Although I've got to say that an install screen looking like this would scare me just a little if I didn't know the product already. Busy, Busy, Busy!

 

ReSharper-40-install-screen

Download ReSharper 4.0

posted on Tuesday, 10 June 2008 09:20:01 (Romance Daylight Time, UTC+02:00)  #    Comments [7] Trackback
# Thursday, 05 June 2008

In my first post I covered the Why? of community and ended up with this mission statement: "The Danish .NET community is an open platform through which developers meet as equals to share experiences and inspire each other through enthusiasm".

With the Why? in place I followed up with What? and came up with my personal idea for the Danish .NET community: "The Danish .NET community is about face to face meetings where people participate on equal terms and secondarily about online activities to make up for the intervening periods."

How?

And now for my favorite part of the series: The practical aspect. The how!

How do we go about creating an open platform through which developers meet to share experiences? In many ways I already feel we've made good inroads on that one. Naturally I'm a little bit colored here due to my involvement in ANUG, but I honestly feel that the user groups out there are the very best vehicle for getting developers together. Especially with user groups popping up in major cities across Denmark and the possibility of cooperation between them.

That's why I'm taking the initiative to bring the core groups of the Danish .NET User Groups together on a regular basis to knit the enclaves of .NET community better together.

The NUGs also create a nice platform from which to arrange the informal gatherings that are the geek dinners. I like the idea of geek dinners and I feel that the informal nature of such gatherings helps people let their guard down a bit and talk more freely about whatever challenges they're facing day to day.

Microsoft of course is playing their part in this with the TechTalks which I feel are much better than the Meet Microsoft events of yesteryear due to their clearer focus. Although I feel that Jutland is left out in the cold a bit.

Microsoft is very keen to help out and I've wracked my brain to come up with ideas for places where they can help out because basically the .NET community seen with my eyes is better than ever.

One way to help out the NUGs is by helping us put together large scale shared events, maybe full day events with specific themes and who knows, maybe in the long term we can go even bigger and create a yearly .NET conference? Microsoft has experience with this kind of stuff with the Meet Microsoft events and I feel it could work even better with the special sauce that the NUGs bring to the table.

Also I'd like to see large scale events based on the open space principle. Simply bring together a bunch of enthusiastic and opinionated people and have them go at it. We've discussed doing this within ANUG but we feel that the scale is too small to do it without any sort of structure. But imagine gathering people from across the country for a day of open space discussion; I see some magic happening there.

We need to take a long hard look at what's already out there and not try to create new initiatives. Otherwise what will happen is that we'll water down the community until relevant information is scattered across its ruins, useless to all. In that vein I propose that we start using some of the prominent .NET sites out there to share information, like DotNetForum.dk. More specifically I'd like Microsoft to not try to reinvent the wheel by creating their own platform for sharing content. Use what's out there, use DotNetForum.dk, ActiveDeveloper.dk, or whatever else. Please don't try to do something completely new. Just get the content out there and back the existing efforts by doing so.

I was surprised to find that people place an enormous value on web casts, and specifically on web casts created here in Denmark. I partially agree that they are a good vehicle for information, but only for some information. I've given Daniel a tough time in the past, but he has proven that web casts are the way to go for personal interviews with people in the community. His unique position with Microsoft along with his outgoing personality makes him perfect to go out there and do just that.

These are some of my opinions and ideas on how we can make the .NET community even better. In short we need to create more opportunities for us to meet face to face and use the existing platforms to promote new content.

I'd like Morten Jokumsen's opinion on where he sees DotNetForum.dk, I'd like to hear from Daniel Mellgaard Frost and Bo Drejer whether we can establish a strategy based on some of this stuff, I'd like to hear from the powers that be at ONUG Jesper Blad Jensen, Joachim Lykke Nielsen, and Kasper Bo Larsen what their opinions on this are, and the same thing goes the KNUG guys Jakob T. Andersen and Mads Kristensen.

posted on Thursday, 05 June 2008 19:48:20 (Romance Daylight Time, UTC+02:00)  #    Comments [3] Trackback

This is my second post in the series Do! Community! Why? What? How?. In this post I'll try to address the What based on the mission statement from the previous post: "the Danish .NET community is an open platform through which developers meet as equals to share experiences and inspire each other through enthusiasm".

What?

What makes a community? I guess it's different for each individual. For me it's all about meeting people and doing so continually. I first started feeling part of a community with my involvement in Århus .NET User Group and Danish Forum for .NET Architects.

Meeting the same people again and again, getting a sense of what they're about, and why they care about the things that they do, that's what community is for me.

Blogs, web casts, online articles, never really did it for me. To me it's very impersonal although once I've met a person I usually follow their blog religiously.

Everyone should have the chance to participate in this on the level he or she desires, be it as an attendee at a meeting, as a speaker, posting to a blog, whatever, and everybody should have equal opportunity to do so.

The Danish .NET community is about face to face meetings where people participate on equal terms and secondarily about online activities to make up for the intervening periods.

Read part 3 Do! Community! How?

posted on Thursday, 05 June 2008 19:47:18 (Romance Daylight Time, UTC+02:00)  #    Comments [0] Trackback

In my last post I was pretty harsh in my statements about Microsoft and Daniel in particular, but I felt it necessary to get out there in order to spark a debate, or at least get the right people thinking about what's going on.

Now that said I also feel that whenever someone puts forth criticism it's vital to back it up with something substantial to address the situation. That's what I intend to do with my next couple of posts.

First I'd like to address why we should care about the community at all: the why of it. Second, what we can do about it: the what. And finally I'll talk about ways to get the community where I'd like to see it go: the how.

Background

I never felt part of any community in my years working with Microsoft technology; not when I spent a lot of time answering questions on news groups, not when I spent time on Eksperten.dk, and not even when I attended the Meet Microsoft events regularly while they were still running.

During the last year though, that started to change. Along with the other members of the core group I've busied myself with getting Aarhus .NET User Group off the ground. Right around the launch of ANUG I was invited to be part of the Danish Forum for .NET Architects. Both initiatives have changed the way I think about the Danish community. With that in mind I'll try to explain why we should care, or at least why I care.

Why?

To me community is inspiration, participation, enthusiasm. At the core of each of these words are people. Interaction with people, knowing people, sharing experiences with others.

I care about the community because I care about people. I care about creating something which benefits others, not just myself. That's why I blog, that's why I spend my spare time helping out with ANUG, that's why I take the time to answer every comment and e-mail I receive.

Simply put you should care about the community because it provides developers a great way of inspiring each other, of sharing the enthusiasm that most of us feel every day when we go to work, and finally because community knits together competency centers across the country which otherwise wouldn't benefit from each other.

In short I feel that we should care about the community because the Danish .NET community is an open platform through which developers meet as equals to share experiences and inspire each other through enthusiasm.

Read part 2 Do! Community! What?

posted on Thursday, 05 June 2008 19:46:35 (Romance Daylight Time, UTC+02:00)  #    Comments [0] Trackback
# Monday, 02 June 2008

I seldom take the time to respond to a blog post directly, but in this case I feel that I must.

Before I get to the actual commentary, a little background on what's going on in the Danish Microsoft developer community: Microsoft Denmark is very eager to reboot their community effort. In that vein they're trying to engage the people who are active in the community. Central to this initiative is Daniel Mellgaard Frost, the new developer evangelist with Microsoft. Since he came on board two months ago he's been very visible and has shown lots of energy and enthusiasm, for which I have nothing but praise to sing. All is well and good up to this point.

As part of this effort a number of people were named Microsoft Designated Information Providers, of which I am one. This Wednesday all the MDIPs were pulled together for the first time in a community event set up by Daniel Mellgaard Frost.

I honestly didn't know what to expect, so I was rather shocked when Daniel stood up first thing and started rattling off all sorts of demands for content delivered by the MDIPs. Now don't get me wrong, I'm happy to help out, but I do so on my own time and because I enjoy the work I do with ANUG a great deal. Not because I seek to please Microsoft, thank you very much. I'm sure Daniel meant well when he stood up and tried to take control of the meeting, but he came off very matter of fact and became defensive when challenged on his points.

Bad start aside, we did get a good discussion going and it seems that Microsoft is very keen to help us out. Now my only problem is that when we get right down to it, all we got from the meeting was a whole bunch of fluff. I understand that we're in the early phase of this thing, but honestly, if the MS evangelists are so eager to make stuff happen in the community it would have been so much better to come to the meeting with concrete initiatives instead of a lot of "we'd like to do this...", "we could do that...", "we don't want to step on anybody's toes...". In short, I'm missing purpose and direction on this one. I simply didn't take away any sense of an overall strategy for the initiative, which is a crying shame given all the energy put into it.

Case in point: we wanted to create a place where the MDIPs could communicate about ideas, which everybody felt would be a good thing. Now the MS guys seemed at a loss as to how to make this happen. While we were discussing various avenues for making it happen, Morten Jokumsen simply whipped out his iPhone and created a new group on DotNetForum. See, here's an example of "Do! Community!". Don't talk about it. Do it!

Another example is the community event scheduled for the next day, open to anybody and everybody. A meeting set up by Daniel, although he apparently didn't deem it necessary to come prepared or even well rested. He spent five minutes there before leaving the scene to the attendees. What happened after he left? Odense .NET User Group was formed by the attendees, web sites went up, and a core group of people committed themselves to getting the group off the ground. That's "Do! Community!". Don't set up a meeting like that, sit back, and wait to see if something might happen. Set something up and make it happen!

There are a lot, a lot! of good intentions within Microsoft to do good in the community, but I feel that they're paralyzed from taking action. Everything seems to go through a committee and they don't want to cause a stir by favoring one initiative over another. That's not doing. That's not even trying.

And finally we come to my point with all this. I'm not trying to bash Microsoft, the evangelists, or Daniel specifically. What I am trying to get across is the fact that before you can start acting up in the community you need to prove yourself. Prove that you want to make a difference. Even more importantly make an actual difference.

I know that Daniel is very active with ActiveDeveloper.dk, both now and prior to his job with MS as evangelist, and he is trying to do good, no doubt about it. His latest post though seems to indicate that he feels he personally is the driving force behind the Danish .NET community. I'm flabbergasted when I see comments like these: "you just have to kick people over the knee to make things happen", "the new Odense .NET User Group that I helped kick-start", and my personal favorite, "it's incredible how much I've accomplished over the last two months".

Now Daniel, I personally don't feel that you've accomplished anything as of yet. Yes, you've put heavens and seas in motion but that's a simple matter. Before putting comments like those online I'd like to see some follow through on the initiatives. Essentially it's all for naught until something is proved viable in the long term and we have yet to see that.

Do! Community!

posted on Monday, 02 June 2008 10:19:56 (Romance Daylight Time, UTC+02:00)  #    Comments [3] Trackback
# Friday, 30 May 2008

As an experiment I'm trying out Twitter, a service for spewing your thoughts at the internet with little or no filtering. Kind of a public chat room.

http://twitter.com/publicvoid_dk

posted on Friday, 30 May 2008 14:32:52 (Romance Daylight Time, UTC+02:00)  #    Comments [0] Trackback
# Friday, 16 May 2008

At Vertica we do our very best to keep everybody happy. We pull out all the stops with an inspiring work environment, nice desktops, multiple monitors, top-notch salaries, food, private healthcare, physio, massages, etc. But best of all, like Apple we think differently too; the fruit of the day says it all.

Fruit-of-the-Day-2 

Yes that indeed is a rhubarb, and no I have no idea why the fruit company would think it a good idea to include it in the delivery :)

posted on Friday, 16 May 2008 15:30:36 (Romance Daylight Time, UTC+02:00)  #    Comments [0] Trackback
# Wednesday, 14 May 2008

Getting "Failed to load the supplementary package '0c6e6407-13fc-4878-869a-c8b4016c57fe'" every time you fire up Visual Studio will make for veeery long dev days, I'll tell you that. Luckily I was able to find the issue details in Jetbrains' JIRA, which told me the issue was resolved in version 3.0. Doh! Reinstalling everything in sight won't help either...

A thread on the Jetbrains forum however led me to the solution. You simply run the following from the Visual Studio 2008 command prompt:

devenv /ResetSkipPkgs

We were going through a rocky patch there, ReSharper and I, but I'm happy to report that we're back together and all is well :)

posted on Wednesday, 14 May 2008 15:14:57 (Romance Daylight Time, UTC+02:00)  #    Comments [0] Trackback
# Tuesday, 13 May 2008

During my ANUG talk about the ASP.NET MVC Framework a question came up regarding what the landscape of ASP.NET land would look like with ASP.NET MVC being open source. Would we start to see lots of different branches floating around out there?

The answer to this is a resounding no, as the license model of ASP.NET MVC only allows you to download the code off of CodePlex and make changes, but not redistribute those changes.

What this means is that you'll be able to take the code and make tweaks here and there if you're not satisfied with how a particular aspect of ASP.NET MVC works.

The Gu has the word.

posted on Tuesday, 13 May 2008 09:23:26 (Romance Daylight Time, UTC+02:00)  #    Comments [4] Trackback
# Sunday, 04 May 2008

Our eleventh meeting is over with, and I'm particularly relieved as I was doing the talking the entire evening this time around. Before I tell you more about it, first let me take you through the user group news.

2008 Booked

I'm happy to announce that we've booked speakers for the rest of 2008, which means that we'll be able to take a breather until 2009 :) Getting people to present is just half the battle though; we need a place to hold the meetings as well, and we're not completely set on that front.

So which topics are we going to cover in the meetings to come? By popular request here's the complete list:

  • May, Windows Communication Foundation, Klaus Hebsgaard
  • June 20th, Geek Dinner
  • June, Test Driven Development, Mark Seemann
  • July, Open Space: Pro Tools
  • August, Subsonic, Lasse Eskildsen
  • September, F#, Christian Holm Nielsen
  • October, Usability, Søren Skovsbøll
  • November, BlogEngine.NET, Mads Kristensen
  • December 12th, Geek Christmas Dinner, *.*

I'm very happy about this list I'll tell you that :)

New Web Site

We talked about this forever and now, thanks to Peter, our new web site is a reality. Peter did a bang up job with the new web site and I sincerely hope that it becomes the hub of information on ANUG in the future. LinkedIn will still be the way to gain membership and Facebook will still drive sign ups for the meetings, but anug.dk will bind the information together for us.

To keep non-Facebook users up to date with new meetings we hope to integrate the new web site with the ANUG backend system: Google Docs :) We keep information about upcoming events and ideas in a Google Docs spreadsheet, which I created some code to access via the Google API. Peter is integrating my code into the web site as I type this :)
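I won't post the actual integration code here, but the general shape of reading from Google Docs with the GData .NET client library looks roughly like the following; the account is a placeholder, and the details may well differ from what ends up on the site:

using System;
using Google.GData.Spreadsheets;

public class AnugSpreadsheetReader
{
    public static void ListSpreadsheets()
    {
        // Authenticate against the Google Spreadsheets service.
        SpreadsheetsService service = new SpreadsheetsService("anug-web-site");
        service.setUserCredentials("someone@example.com", "password");

        // Query for all spreadsheets the account can see.
        SpreadsheetQuery query = new SpreadsheetQuery();
        SpreadsheetFeed feed = service.Query(query);

        foreach (SpreadsheetEntry entry in feed.Entries)
        {
            Console.WriteLine(entry.Title.Text);
        }
    }
}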

On a related note, anug.dk was hacked due to a security issue in BlogEngine.NET. Kudos to Peter for applying the security patch literally within minutes of learning of the hack. Everything is back in full operation as of this writing. If you're running a BlogEngine.NET site, be sure to check out the official response to the security issue.

Milestone Reached

ANUG reached a significant milestone during April: Our 100th member. Actually we're up to 107 members right now, a number I'm very pleased with. What does this mean for ANUG?

It means two things: First, we're doing something right here, which is nice to know :) and second, we're suffering from our own success, as finding a place to hold our meetings is becoming increasingly difficult with more and more people attending.

A positive problem to be sure, but something we have to deal with nonetheless, so I'd like to take this opportunity to call for help from companies with ample space to house 30 - 40 .NET devs talking tech. Please e-mail me if you're able to house us for a meeting or two.

Vertica, Kristelig fagbevægelse, Ditmer, Scanvaegt, Systematic, Up-Site are all examples of companies which are supporting Aarhus .NET User Group and by extension the .NET community. A few new companies are lined up to help in the future including iPaper, Suzlon, and Vesta.

ASP.NET MVC Framework Presentation

It was with some trepidation that I looked at the number of people signed up for the ASP.NET MVC Framework presentation. For one thing I wasn't sure that we'd be able to fit everybody in the Up-Site offices, and secondly I was giving the presentation :)

As it turned out we were able to fit everybody; barely. We had the highest turnout ever with approximately 35 people in attendance. I was particularly pleased with the fact that a number of new companies were represented at the meeting, including Mjolner Informatics and Vola. The Nordic Company guys even made the trip all the way from Copenhagen to Aarhus to attend for the second time.

I'm very pleased with the way my presentation went. From the very start we had good interaction with lots of questions and remarks about the new web framework from Microsoft. Even my demos went off without a hitch, incredible! :) My Poor-Mans-Update-Panel seemed to come across particularly well.

Thanks to everybody there I had a good time presenting this stuff. Oh yeah I thought I'd better link to the most popular slide of the evening (You probably had to be there to get it though) :)

Tour de Up-Site

Up-Site is an interesting company I'd never heard of before we started ANUG. Actually Morten Bock, a developer with Up-Site, was one of the very first people I shook hands with at the very first user group meeting. Morten was kind enough to facilitate some nice surroundings for our April meeting.

Now what is it that makes Up-Site so interesting? For starters they have a very clear idea of their business model, and it seems that that was the case from the very beginning. As CEO Lars Henrik Larsen told us, Up-Site specializes in helping companies select the right content management system; very competently I might add. Up-Site specializes in no less than five different CMSes, ranging from the high-end right down to the free and open source.

Their offices are among the nicest I've visited yet. I could really tell that the guys at Up-Site have an attention to detail, with nothing left to chance decor-wise. Very nice and definitely not something you see every day, as developers tend towards the functional and not much else :)

A nice touch is their wall of fame which holds a little plaque for each of their delivered solutions. A pretty comprehensive wall of fame too.

posted on Sunday, 04 May 2008 22:50:29 (Romance Daylight Time, UTC+02:00)  #    Comments [0] Trackback
# Friday, 25 April 2008

My good colleague Daniel has joined the blogging fray. He's been at it for a while before telling anyone about it, so he's got a nice bunch of posts up already and is keeping up the pace. He's been working on a Commerce Server/SharePoint project for the past couple of months, and of course the regular C# stuff, so expect to see more posts about those subjects.

Subscribed!

posted on Friday, 25 April 2008 15:30:57 (Romance Daylight Time, UTC+02:00)  #    Comments [0] Trackback

With an e-commerce solution, online payment naturally follows. Recently I've been involved in a couple of e-comm projects which needed integration with a payment provider.

In the good old days integration was a no-brainer: you'd simply go with the API, HTTP-RPC or web services, whichever made for the nicest solution design-wise and development-wise.

Last year though, VISA/MasterCard introduced the PCI compliance requirement for businesses handling credit card information. A move which in theory was good in that it limits the number of businesses which handle the information, and by extension limits the chance of leaked information via security breach.

Now I write in theory because what happened is that VISA/MasterCard went a bit too far in their requirements. Basically, if a credit card number is ever entered on your server you need to be PCI compliant; while this makes sense for a payment provider, for a store it means that you can't use the APIs as you normally would. Instead you have to use a payment window provided by the service provider.

If you want to handle credit card information at all you have to submit your system for quarterly reviews by external security consultants, and your system will have to comply with the same standards as the payment providers themselves, adding a yearly running cost of $5,000 - $30,000. Did I mention that VISA/MasterCard is bumping up the requirements on a quarterly basis? All in all this adds up to the conclusion that not handling credit card information on your servers at all is becoming the default choice, and by extension the payment window becomes the default choice.

The payment window adds another layer of complexity to your solution in that you have to redirect your customer completely over to the payment provider's site to process the credit card, after which point the customer returns to your site to view a confirmation if everything went well. My main complaint about the payment window is the lack of continuity for the customer. A great many sites here in Denmark use payment windows with more or less success in that department.

Payment-Windows-Redirection-Flow
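In code the hand-off itself is simple enough; the complexity is all in the user experience. A hypothetical hand-off could look like this; the parameter names are invented, since every provider defines its own:

using System.Web;

public static class PaymentWindow
{
    // Hypothetical redirect to a payment window; DIBS and friends each
    // define their own required parameters and URL format.
    public static void RedirectToProvider(HttpResponse response, string orderId, decimal amount)
    {
        string url = "https://payment.example.com/window"
                     + "?merchant=" + HttpUtility.UrlEncode("my-shop")
                     + "&orderid=" + HttpUtility.UrlEncode(orderId)
                     + "&amount=" + HttpUtility.UrlEncode(amount.ToString("0.00"))
                     + "&accepturl=" + HttpUtility.UrlEncode("https://shop.example.com/confirmation");

        // The customer completes payment off-site and returns to accepturl.
        response.Redirect(url);
    }
}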

With the payment window being the default choice for now, it's important that Danish online e-tailers figure out how to integrate the window in the most user friendly manner. Not doing so signals lack of professionalism in the best case, and in the worst case they could lose customers who are confused by a completely different look and feel in the most critical part of the checkout flow: Payment.

A company which does this extremely well is Trollbeads.com. Their integration with the payment window is seamless; in fact, when I shopped there a couple of weeks ago I didn't even notice that I was redirected to their payment provider for processing my payment information. And I should have noticed, I do this for a living :)

Take a look at the following screenshots to see what I mean:

Trollbeads-basket

(Basket)

Trollbeads-checkout-DIBS

(Checkout)

That's how it should be done and I didn't even do it myself :)

posted on Friday, 25 April 2008 15:14:53 (Romance Daylight Time, UTC+02:00)  #    Comments [1] Trackback
# Monday, 14 April 2008

Back in 2005 I wrote about the "magic" third connection available with Terminal Services via the -console switch, and I've happily used it ever since. Everything was dandy until recently, when it suddenly stopped working for me.

It turns out that with Windows Vista Service Pack 1 Microsoft deemed it necessary to rename the switch to -admin. Only for Windows Vista mind you, and only for Service Pack 1.

So the command for getting at the console of a Terminal Services enabled machine is now

mstsc.exe -admin

Thanks Microsoft!

posted on Monday, 14 April 2008 10:36:45 (Romance Daylight Time, UTC+02:00)  #    Comments [0] Trackback
# Friday, 04 April 2008

My good friend at Scanvaegt, Henrik Kristensen, has talked about Windows Workflow Foundation ever since I first met him, so naturally we had to invite him to give a talk on it for ANUG. We were fortunate enough that he accepted, and he even provided a nice canteen for the ANUG boys to use for the duration.

User Group News

The March meeting opened as usual with myself giving everybody a quick rundown of what's going on with ANUG, mainly focusing on meetings to come. We've been fortunate enough to stay ahead of the curve on planning meetings and I'd like it to stay that way, so we're planning the next batch of meetings for April, May, June, and we've even got January 2009 booked, so stay tuned :)

Future topics include ASP.NET MVC, which I'll do a little song and dance about, Windows Communication Foundation, which Klaus Hebsgaard has been good enough to agree to do a talk on, and finally a talk on TDD which Mark Seemann agreed to do.

The not so set in stone topics that we'd like to see presentations on include WPF, to kind of come full circle on .NET 3.0, Pro Tools, where we discuss the various tools and utilities that devs use in their day to day work to get the job done, Subsonic, the darling of the ORM world, and a bunch of other topics. We'd like to bring in more open source tools and topics like DotNetNuke and NHibernate, so don't be shy, please contact me with your ideas.

The meeting marks the second outing of our new format. We noticed that we got quite a good buzz among people during the first break of the evening but not so much during the second, so we decided to break up the main presentation in two pieces to allow people more time to talk and get to know each other. Like the previous time at Systematic it worked out nicely, with people clustering around and discussing a number of different subjects. The new format is definitely a keeper.

IMG_2247 IMG_2248

Download my slides

Windows Workflow Foundation w. Henrik Kristensen

The main event of the evening was of course Windows Workflow Foundation, a tool that I myself didn't know a lot about. Mostly high level stuff, but I definitely see the potential there.

Henrik made a good showcase of WF, the rules engine and the workflow engine. He even brought out his LEGO Mindstorms robot to demo how you could use a workflow to actually control it. The little bugger had other ideas though, as it cruised along and right over the edge of the table. In spite of the rebellious little robot ("is this how it begins?", he asked with the killer Mindstorms robot fast approaching) the point came across very clearly.

Pitfalls in WF were highlighted neatly by way of the robot. For example, WF has a notion of a parallel activity which at first glance might persuade you to think that you get some kind of multi threading for free. Unfortunately the only thing we get is a kind of pseudo parallelism where the parallel activities are executed in order, as the sketch below illustrates.
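You can see the pseudo parallelism for yourself with a code-only workflow along these lines; this is my own sketch against WF 3.x, not Henrik's demo code. Both branches print the same managed thread id:

using System;
using System.Threading;
using System.Workflow.Activities;
using System.Workflow.Runtime;

public class PseudoParallelWorkflow : SequentialWorkflowActivity
{
    public PseudoParallelWorkflow()
    {
        ParallelActivity parallel = new ParallelActivity();
        parallel.Activities.Add(CreateBranch("Left"));
        parallel.Activities.Add(CreateBranch("Right"));
        Activities.Add(parallel);
    }

    private static SequenceActivity CreateBranch(string name)
    {
        // Each CodeActivity reports which thread it runs on; with
        // ParallelActivity both branches report the same thread id.
        CodeActivity step = new CodeActivity();
        step.Name = name + "Step";
        step.ExecuteCode += delegate
        {
            Console.WriteLine("{0} runs on thread {1}", name, Thread.CurrentThread.ManagedThreadId);
        };

        SequenceActivity branch = new SequenceActivity();
        branch.Name = name + "Branch";
        branch.Activities.Add(step);
        return branch;
    }
}

public class Program
{
    public static void Main()
    {
        using (WorkflowRuntime runtime = new WorkflowRuntime())
        {
            AutoResetEvent done = new AutoResetEvent(false);
            runtime.WorkflowCompleted += delegate { done.Set(); };
            runtime.CreateWorkflow(typeof(PseudoParallelWorkflow)).Start();
            done.WaitOne();
        }
    }
}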

IMG_2258 IMG_2257 IMG_2261

In all, Henrik did a great job of highlighting why we should care about WF. Basically every LOB application today has some kind of workflow built in, be it by way of statuses, custom state machines, whatever. The rules engine in particular was interesting to me, as I've lacked such a thing a number of times in the past. What I've done before is basically forego the better solution for something simpler to implement because I didn't want the hassle of building a rules engine myself. No more: the next time such a need arises I'll definitely have a go at using the WF rules engine. The configurability alone is worth it.

For the workflow engine I noticed an interesting thing, which is the fact that it inherently makes you think in components. Basically a workflow is built from a number of activities, which are stand alone tasks that can manipulate the data flow in some manner. Pretty much we're dealing with components here, and the model makes you go into a certain mode where you naturally try to decouple the activities from each other, making for much better reusability.

Workflow Foundation may not be the sexiest of the four pillars of .NET 3.0 but it does provide some real value to the developer toolbox.

Download Windows Workflow slides by Henrik Kristensen

Scanvaegt International Redux

Like the last time we were fortunate enough to be able to use the Scanvaegt offices, Henrik also gave us a rundown of Scanvaegt. We both figured that the audience would have changed a bit, so it was safe to do the presentation once more. Last time around he brought a great video of a machine sorting chicken (yes, I know it sounds lame, but you have to see this thing in action :)). This time he brought a different one showing a machine 3D scanning salmon and cutting it into equal sizes in seconds. This time around you can even check it out yourself, as Henrik provided me with the video to share with you guys.

Interesting stuff has happened at Scanvaegt since the last time we were there. Last year they'd pretty much just been bought by a competing company by the name of Marel. Since then they've worked on sorting out their product lines, as there was a lot of overlap. Among other things this means that Henrik is now associated with their Icelandic development department, and he's even had the pleasure of going to Iceland a couple of times. He briefly outlined some of the unique challenges of working in a very distributed environment. Interesting stuff for sure.

Download Scanvaegt International slides by Henrik Kristensen

posted on Friday, 04 April 2008 12:39:15 (Romance Daylight Time, UTC+02:00)  #    Comments [2] Trackback
# Thursday, 03 April 2008

I'm now officially a statistical anomaly. Only last November I encountered a ghost driver, and in my post Could Have Been Me I wrote about what could have happened if I'd decided to overtake another car at the very moment the ghost driver came upon me.

Well, tonight turned out to be that night. On my way home from Århus I encountered a different ghost driver, only this time I was actually in the same lane as the other guy. Luckily I had my wits about me and managed to swerve out of his way.

So please, just for a year or so, give me a break. I really think that I've had to deal with enough ghost drivers for one lifetime. WTF!

posted on Thursday, 03 April 2008 23:16:00 (Romance Daylight Time, UTC+02:00)  #    Comments [2] Trackback
# Saturday, 15 March 2008

Saturday the 15th marks the date of the first ever code camp held by Aarhus .NET User Group. We went with ASP.NET for beginners as the theme, building a small blog application with Visual Studio 2008 and ASP.NET 3.5. Due to space constraints we'd set the upper limit for attendees at 12, and I'm happy to report that we had a full house. The skill levels of the attendees varied from people who'd never opened Visual Studio before to people with some knowledge of ASP.NET.

Brian put together a nice program which took the attendees through creating various new features for the blog application, like adding comments and membership support for login. To get the attendees coding the interesting parts of the application, Brian provided a nice starting point for the code camp with a blog application laid out nicely in a Visual Studio solution along with a database. Due to differences in how we usually work with this stuff we encountered some interesting problems with getting the database up and running in SQL Server Express. Turns out that the SQL Server engine is prohibited from accessing user folders on the machine it's running on.

Coaches were in place to help out with questions the attendees might have: Søren Lauritsen, who signed up at the last minute and provided valuable help during the day, Brian himself between the short tech briefings, and myself. Every single attendee came well prepared and had all the prerequisites installed before showing up for the code camp; thank you all for being so well-prepared.

 

 IMG_2234 IMG_2228 IMG_2235

 

During the day we had nice discussions on various aspects of ASP.NET and .NET in general, and we provided a number of tips and tricks like looking at compiled assemblies with Reflector, and taking that a bit further with TestDriven.NET, which enables you to simply right-click a referenced assembly in Visual Studio and open it up in Reflector; no more digging around the file system to find that pesky assembly.

URLRewriting.NET was discussed along with ASP.NET MVC for creating friendly URLs. On that note I should mention that I'm doing a presentation on ASP.NET MVC in April if you want to know more about the framework in general. I'll post more information about that meeting once I know the particulars. The date and time are set though, so make sure to mark your calendar for April 30th 18:00 if you wish to attend.

Thanks to all who attended the code camp; my impression is that of a successful day during which the attendees learned a lot. With that I'll leave you with some more pictures from the day. As you can see, people were deeply focused but still very eager to help each other out. Nice work everybody!

Be sure to grab Brian's source code and presentations. You can also check out all the pictures taken at the meeting.

IMG_2238 IMG_2243 IMG_2241 IMG_2231 IMG_2240 IMG_2237

posted on Saturday, 15 March 2008 12:35:50 (Romance Standard Time, UTC+01:00)  #    Comments [0] Trackback
# Thursday, 13 March 2008

In my mini series about the Commerce Server development experience I did a piece called Magic Strings Galore, which describes the general tendency to have all data in the various CS objects accessible via strings. Imagine a product with a rich description. You would access that like a hashtable, e.g. product["RichDescription"]. No way of knowing the return type, no discoverability via intellisense, poor refactoring support. Sure, ReSharper takes care of some of that by looking at string literals when doing refactoring, but surely there must be a better way to fix this. It turns out there is, and I'm going to let you in on the secret :)

In my previous post .NET Framework 3.5 and Microsoft Commerce Server 2007 A Match Made in Heaven I discussed some options for using extension methods to add missing functionality to the built-in Commerce Server classes. Using this method you can augment the existing interface of Commerce Server, but it doesn't really provide you with a nice place to put all the domain logic that is bound to turn up eventually. To solve this problem I came up with an automatic mapping layer sitting on top of the Commerce Server Profile System which translates the stock Profile into rich domain entities, filling in the gap, giving you that place to put your custom logic, and at the same time doing away with all the problems with magic strings that I described above. I call it the ProfileRepository.

The ProfileRepository is not a true object relational mapper in the sense that I'm not really converting from the relational model. Luckily all that is taken care of for me by the profile system of Commerce Server so I pretty much just have to provide the type safe abstraction.

Requirements

My requirements for the ProfileRepository are the following: I want the developer using the framework to easily be able to map an entity, say User, to a profile, say UserObject.

Additionally I want to abstract the actual implementation of the entity by only working with interfaces so the consumer of the framework has the freedom to switch implementations, e.g. for unit testing or later in the development phase of the application.

Finally I want the consumer of the entity to be blissfully unaware of Commerce Server sitting underneath the repository; basically a full implementation of the repository pattern as outlined by Martin Fowler.

With that in mind we end up with a basic class hierarchy in place as depicted in the class diagram in the picture to the right.

Mapping Engine

With the basic class hierarchy in place, let's take a look at how the actual mapping of a Commerce Server profile to an entity happens.

Interestingly, the Profile System operates with a type system completely separate from .NET and indeed COM, which makes mapping a challenge. The type system is pretty weak and doesn't express everything needed to map all data types. For example, there's no way of telling the difference between an association between two profiles and a string; they both turn up as a string.

To work around this limitation I went with assumptions based on the target of the mapping. From the type of the actual target property on the entity I can deduce that we're dealing with an association, because the actual type is IProfile and not string. The same thing goes for GUIDs and strings, which also show up as the same thing: a string. Now I love the string type as much as the next guy, but this is borderline ridiculous :)

To perform the mapping I employ a mapping engine which knows about all the supported mapping rules: rules for handling primary keys, one-to-one relationships, one-to-many relationships, value types, DateTimes, Guids, etc.

Each rule is an implementation of the specification pattern, meaning that the engine will evaluate each rule against each mapped property of the target entity and determine whether that particular rule is applicable to the current property. Each rule employs reflection to determine whether that is the case, so the GuidMappingRule would use reflection to determine whether the type of a property on the entity is in fact a Guid.
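To illustrate, a rule could look something like this; the interface and names are a simplified sketch of the idea, not the actual framework code, and the profile value is reduced to a plain object:

using System;
using System.Reflection;

public interface IMappingRule
{
    // The specification part: does this rule apply to the target property?
    bool IsSatisfiedBy(PropertyInfo targetProperty);

    // The mapping part: copy the raw profile value onto the entity.
    void Apply(object entity, PropertyInfo targetProperty, object rawProfileValue);
}

public class GuidMappingRule : IMappingRule
{
    public bool IsSatisfiedBy(PropertyInfo targetProperty)
    {
        // The profile system hands us a string; the entity property
        // tells us the value is really a Guid.
        return targetProperty.PropertyType == typeof(Guid);
    }

    public void Apply(object entity, PropertyInfo targetProperty, object rawProfileValue)
    {
        targetProperty.SetValue(entity, new Guid((string)rawProfileValue), null);
    }
}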

Creating a Mapped Entity

To create a mapped entity you need to perform three simple steps: Create the interface which will expose the entity, e.g. the IUser interface. Second, create the actual implementation of that interface, e.g. the UserObject class. The third and final step is to decorate the properties of the implementation with mapping information. Simple and easy. The code for IUser and UserObject might look like this:

 

public interface IUser : IProfile
{
    string FirstName { get; set; }
    string LastName { get; set; }
    IAddress PreferredAddress { get; set; }
}

[Profile("UserObject")]
internal class UserObject : IUser
{
    [ProfileProperty("first_name")]
    public string FirstName
    {
        get { ... } set { ... }
    }

    [ProfileProperty("last_name")]
    public string LastName
    {
        get { ... } set { ... }
    }

    [ProfileProperty("PreferredAddress")]
    public IAddress PreferredAddress
    {
        get { ... } set { ... }
    }
}

 

Loading an Entity

With the mapping complete, loading an entity is pretty straightforward: You new up the profile repository and call the generic method Get<T> with the key of the profile you want, and presto, you get an instance of the IUser interface returned to you, complete with associated entities, the preferred address in this case. The Key class might seem superfluous, but there's a point to it as it enables support for multiple key types like Guid, int, etc.

 

IProfileRepository profileRepository = new ProfileRepository();
IUser user = profileRepository.Get<IUser>(new Key("{EEDA89C9-E231-4002-AC24-7FD7FAB2F2FD}"));

 

All the Rest

Your spider sense is probably tingling by now. How is the ProfileRepository able to figure out which implementation of the IUser interface to instantiate? The answer is a piece that I omitted in my previous description: Sitting inside the ProfileRepository is an inversion of control (IoC) container, in this case Windsor from the Castle project, which dynamically instantiates the correct type based on a configuration file.
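The wiring might look roughly like this; a sketch of how Windsor resolves the configured implementation, my guess at the shape rather than the actual source:

using Castle.Windsor;
using Castle.Windsor.Configuration.Interpreters;

// The container reads its component registrations from an XML file which
// maps IUser to UserObject; swap the file to swap implementations.
IWindsorContainer container = new WindsorContainer(new XmlInterpreter("windsor.config"));
IUser user = container.Resolve<IUser>();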

Interestingly, Windsor will be a key component in coming features of the ProfileRepository. As it stands today there are a number of improvements that can be made. Most prominent is lazy loading. All associations are eager loaded today, which means that if you ask for any one profile entity you'll get a complete object graph back. That might not be suitable for all scenarios, especially if we're dealing with many associated profiles.

With Windsor in place I intend to employ dynamic proxies to instantiate modified types with the lazy loading pattern injected into the relevant properties. Thanks go out to Søren Skovbøll, who came up with the idea for this and even provided me with POC code. His general knowledge of ORMs came in handy for a couple of things on this too :)

There are several opportunities for other performance improvements. The ProfileRepository uses reflection quite extensively to perform the automatic mapping, which as you know is a costly operation. For a future release I'd like to throw in some caching for the rules which employ the reflection routines. The net result would be that each rule is evaluated once per property and entity; from that point on, reflection is only used for actually initializing the values of the properties.
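Reusing the IMappingRule shape from the sketch above, the caching could amount to remembering which rule matched a given property; again this is just an illustration of the idea, not the actual implementation:

using System.Collections.Generic;
using System.Reflection;

public class RuleCache
{
    private readonly IList<IMappingRule> rules;
    private readonly Dictionary<PropertyInfo, IMappingRule> matched = new Dictionary<PropertyInfo, IMappingRule>();

    public RuleCache(IList<IMappingRule> rules)
    {
        this.rules = rules;
    }

    public IMappingRule RuleFor(PropertyInfo property)
    {
        IMappingRule rule;
        if (!matched.TryGetValue(property, out rule))
        {
            foreach (IMappingRule candidate in rules)
            {
                if (candidate.IsSatisfiedBy(property))
                {
                    rule = candidate;
                    break;
                }
            }
            matched[property] = rule; // the reflection check runs once per property
        }
        return rule;
    }
}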

Finally, the ProfileRepository is load-only at this point, and naturally I'd like to get create and update functionality in there as well. A customer self-service module would definitely need this feature in place to enable users to edit their profiles, sign up for newsletters, etc.

With the ProfileRepository I've tried to bring the full power of the profile system to bear as a general purpose data access layer. What we've got with the profile system is very cool and flexible, but it needs just that little bit extra to provide a nice development experience and to support the overall maintainability of the system.

posted on Thursday, 13 March 2008 21:46:16 (Romance Standard Time, UTC+01:00)  #    Comments [0] Trackback
# Wednesday, 12 March 2008

Commerce-Server-2007-Logo Imagine this scenario: The year is 2003 and you're a Commerce Server developer who is setting out on the first Commerce Server 2002 and .NET project. Excited? You bet I was :) Way back then we were asked to subcontract on a large ticket booking system, developing the user profiling piece of the solution. With my background in OO I naturally wanted to go ahead and create business entities from the profiles in Commerce Server by inheriting from them, building entities specific to my needs.

Now you know what I ran into trying to do that, right? Pretty much a brick wall behind yet another brick wall. What I ran into was the fact that the Profile class is sealed, and the brick wall behind it was the ProfileContext, which is also sealed. My idea was pretty much dead in the water at that point. So to return to the point of this post: today, several years after the fact, it dawns on me that I have the perfect new tool to actually get some of what I wanted back then: extension methods.

As you probably know, extension methods are a way of spot welding new functionality onto existing classes, whether they are sealed or not. Basically a handy way of making external functionality available where you need it. Lots and lots of articles have been written about the topic, so suffice it to say that you put your extension methods in a particular namespace, and to make them available on the target types you simply import them into your current context via a using or Imports statement, depending on your particular language fetish :) With that out of the way let's skip right ahead to how you might use them in Commerce Server.

You can envision a scenario where you augment the profiles with specific functionality based on what you're doing. Say you're creating a product review profile in which you store the customer's opinion about a particular product. Use extension methods on that guy to render a star rating, set the customer review description, or add administrative operations like approving/rejecting the review.
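A hedged sketch of what that could look like; the review profile definition and its property names are made up for the example, only the Profile indexer and Update call are the standard API:

using System;
using Microsoft.CommerceServer.Runtime.Profiles;

public static class ProductReviewExtensions
{
    public static string RenderStarRating(this Profile review)
    {
        // "GeneralInfo.rating" is a hypothetical property on the review profile.
        int rating = Convert.ToInt32(review["GeneralInfo.rating"].Value);
        return new string('*', rating);
    }

    public static void Approve(this Profile review)
    {
        review["GeneralInfo.status"].Value = "Approved"; // hypothetical property
        review.Update(); // persist the change
    }
}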

For the product classes like ProductFamily, Category, etc., which are also sealed, you could opt for something else entirely. From time to time we utilize the concept of a search category, i.e. a category where the child categories and/or products are determined by a search criterion. Instead of putting this functionality in a helper somewhere you could go for an extension method to load the child objects of your current search category.

But why go for the individual entities themselves, why not go for the services published by Commerce Server like ProfileContext or CatalogContext to provide enhancements to the core functionality? I can certainly see some interesting scenarios enabled in the Marketing system, a subsystem which traditionally hasn't been very open to extensions.

To sum up, extension methods are a brand new extension point for Commerce Server. Commerce Server has traditionally been very extensible, but some areas are completely locked down. Until now that is. The scenarios I describe above are one way to do away with a lot of the helpers floating around. Of course a much better way of accomplishing this is fashioning a facade layer on top of Commerce Server which will accommodate not only your custom logic but, more importantly, support your automated test suite (you do have one, don't you? :)). The facade layer will be the subject of a future post as I feel very strongly that architecturally this is a great thing to have in place to support you now and in the future.

Also let this post be a cry to the powers that be at Cactus to open up the APIs for inheritance. I certainly understand the need for sealing stuff especially with COM lurking under the covers but please, pretty please don't let that baggage spill over in future versions when COM is completely removed from the product. Finally sprinkle some interfaces on top and I'll promise to be a good boy from here on in :)

posted on Wednesday, 12 March 2008 16:28:20 (Romance Standard Time, UTC+01:00)  #    Comments [0] Trackback
# Tuesday, 11 March 2008

Final proof that spammers are the root of all evil? Should I be worried at all that diabolical powers are taking over my inbox? Or even more worrisome that I took the time to actually make a post out of this? :)

Spam-666

posted on Tuesday, 11 March 2008 19:44:38 (Romance Standard Time, UTC+01:00)  #    Comments [0] Trackback
# Monday, 10 March 2008

aspnet A quick note to let you know that this Saturday we open the doors for the first code camp in the history of Aarhus .NET user group. The fun starts at 9:00 and we're digging into ASP.NET and building a blog application, just because we can :) The goal of this code camp is to give you a sense of what's available in ASP.NET and how to use some of it.

During the day you'll be able to ask the experts for help and meet some of your fellow aspiring ASP.NET developers.

Please note that the number of attendees is capped at 12 as we can't seat any more than that.

Read more and sign up.

posted on Monday, 10 March 2008 21:35:43 (Romance Standard Time, UTC+01:00)  #    Comments [0] Trackback

CarboniteLogo For a while now I've had a nice file server sitting under my desk at home for storing everything in a nice central location. One problem with the solution though is the complete lack of backup on that particular box. I really wanted to run Mozy on the box to get everything backed up off site in a simple and cost effective manner. One problem though: Mozy Personal doesn't support Windows Server which is running on my file server.

In my post Online Backup: Carbonite vs. Mozy I declared Mozy the winner due to a number of factors like features, performance, and configurability. What I failed to mention though is the fact that you cannot use Mozy Personal for backing up a Windows Server box. For that you have to spring for a professional plan where you're charged by the gigabyte instead of a flat rate. For me that's not really an option as I've got a lot of stuff to back up. For a couple of gigabytes it's probably fine.

Carbonite however allows you to back up a Windows Server box with the personal edition, which makes it a more cost effective solution for the home. Since I did the comparison Carbonite has added the number one feature I felt was missing: version history. Mozy had this and Carbonite now does too. One thing hasn't changed though: Carbonite is still pretty slow, to the tune of 100 KB/s when you back up. Mozy will complete the initial backup approximately four times faster than Carbonite. Not a big deal once you're past the initial backup and into regular running mode where only updated files are transferred, but getting over that initial hump does take a long while.

So to sum it up: With Carbonite allowing you to run the client on a server box it's a compelling offer for those of us running home servers wanting offsite backup. For everything else I'd still say that Mozy is superior. Alternatives like Amazon S3, JungleDisk (using S3), etc. are still way more expensive to use since they too charge by the gigabyte.

Mozy allows you to pay by the month, so you can opt out immediately should you find a better solution; with Carbonite you're locked in for a minimum of a year. Network performance with Mozy is a lot better than what you get with Carbonite, though that might just be me being located far away from the Mozy data center. The Mozy client still offers oodles more configuration options than its Carbonite counterpart.

posted on Monday, 10 March 2008 21:06:49 (Romance Standard Time, UTC+01:00)  #    Comments [0] Trackback

SharedView-Start-Session It's been a while since I updated my toolbox so I thought I'd do a little post about my most recent addition: Microsoft SharedView.

What is SharedView? As the name implies it's a way of sharing what you see on your computer with others remotely. I've had to muck around with LiveMeeting a lot lately, and boy is that a piece of work, in the sense that I couldn't get anything done within a reasonable amount of time. Setting up a shared desktop experience with that stuff is like swimming in molasses: a lot of effort and very little reward. You have to pay to use it too.

Contrast this experience with SharedView where you're up and running in a matter of a couple of minutes. Did I mention that it's completely free to use? To test SharedView I tried it out with a partner abroad. I hadn't mentioned anything about it before the meeting, but we literally had the program up and running within two minutes, sharing the presentation that I needed him to see. Cool stuff! The experience has that Apple feel to it: I got the job done, nothing more, nothing less.

SharedView allows you to share a single program window or the entire desktop and you can hand over control of the window to any participant in your current session. Setting up a new session is a simple matter of you logging in with your Live ID and clicking "Start new session" which will provide you with a link you can send to the participants of the meeting. That's it.

I definitely see this little gem coming in handy with customer meetings where we need to do sprint demos but can't come on site to do so. With a new delivery going down each month having the entire dev team on site is somewhat of a drain on the customer.

Download Microsoft SharedView

posted on Monday, 10 March 2008 20:44:54 (Romance Standard Time, UTC+01:00)  #    Comments [0] Trackback
# Monday, 18 February 2008

Microsoft_Team_System_Logo I'm proud to announce a very exciting meeting for February: The guys at Systematic are going to tell us all about their experience with Team System. Topics for the meeting include:

  • General introduction
  • Configuration management
  • Continuous integration

Following that we'll get some insight into the world of software engineering at Systematic, working with CMMI level 5 and agile processes like SCRUM.

The meeting will take place February 27th 18:00 at:

Systematic Software Engineering A/S
Søren Frichs Vej 39
8000 Århus C.

Signup and more information

posted on Monday, 18 February 2008 11:15:49 (Romance Standard Time, UTC+01:00)  #    Comments [0] Trackback
# Wednesday, 06 February 2008

anug_logo_200x85 With the release of Visual Studio 2008, C# 3.0, and VB 9 in November last year we felt that it would be nice to get some information out there on what to expect of the new language features available in the new versions.

As always I started out the meeting by summing up what the core group has done since the last time around; we've been quite busy too. I'm very proud to announce that we've booked meetings at various companies around Aarhus until May. We do have a gap in April but I expect to put on a little song and dance about the MVC framework for ASP.NET.


Future Meetings

So what have we got planned for you guys to enjoy? February will bring us a talk from the guys at Systematic where they'll tell us all about their experience working with Team System, CMMI, and SCRUM. Personally I'm looking forward to this one quite a bit as we're using Team System internally at Vertica as well; I know that many of you guys are too, so I expect that a lot of very useful information will come from this one.

March brings us not one but two events: First there's our code camp for people who want to know more about ASP.NET where we'll continue building our blog application. I expect that we can accommodate approximately ten people, and those attending will have to bring their own laptops. We'll do a full Saturday getting our hands dirty in the code. More information on this will follow shortly so stay tuned.

Later in March we'll head back out to Scanvaegt where Henrik Kristensen will give a talk on his work with Workflow Foundation. He's been doing a number of POC applications and is very eager to share his experiences with the rest of us.

ASP.NET MVC Framework has garnered a lot of attention lately, but what is it and why should you care? I'll try and explain this with my talk on it in April. I'm doing an internal talk at Vertica this Friday and I figured that you guys would find it interesting as well. There are certainly some nifty techniques at play in the framework that I'm looking forward to sharing with you.

Finally Klaus Hebsgaard from Kristelig fagbevægelse will head up the April meeting with his talk about WCF. He's doing some very interesting work with WCF in conjunction with a large SOA project at KF and I know for certain that he'll have a lot to say about the topic.

Facebook

As an experiment we decided to do sign-up for the meeting via Facebook. While it's been quite a success there are a couple of flies in the ointment: 1) Some people really dislike Facebook and outright refuse to use it. 2) Some companies actively block Facebook in their firewalls.

In light of this we won't do exclusive sign-ups on Facebook in the future. Going forward we'll do Facebook sign-ups primarily but also allow e-mail sign-up, so that everyone has full access to the user group. Additionally we'll make sure that all relevant information is available from the anug.dk web site.

Jobs

Interestingly we were contacted by a company owner who wanted to make our members aware of a job opening at his particular company. Our stance on this is that we won't advertise job openings in the interest of keeping our purpose clear and keeping a neutral position with respect to the companies kind enough to let us use their offices for the meetings.

Meetings Outside Aarhus

We've discussed the idea of holding meetings outside of Aarhus as a number of interesting companies exist in the vicinity. When asked though, the members of the group indicated to me that they weren't willing to travel too far outside of the city to attend our meetings. So we'll try to keep the meetings local so as not to impose too much of a travel burden on the attendees.

Language Features in C# 3.0 and VB 9, Henrik Lykke Nielsen, Captator

The main attraction of the evening was of course Henrik, Microsoft's RD for Denmark and part owner of Captator. I asked Henrik to give this particular talk because I know he's very fond of VB and I really wanted to see equal attention given to both languages. It turned out though that when asked, the attendees were interested in C# 3.0 only, so VB was mentioned only in passing. The upshot of course was that Henrik was able to gauge the interest of the attendees and adapt his talk accordingly. Tip of the hat for that.

Henrik gave a very detailed talk on C# 3.0 and we even got into some IL discussions along the way, which was a nice twist on the evening. I must say that I'm impressed with Henrik's deep knowledge of the subject. Having given a similar talk myself internally at Vertica I figured that I knew most of what he was going to say; still I got a couple of nuggets of gold to take home from the meeting.

To understand many of the new features of C# 3.0 you need to understand what was already put in place in previous versions of the language, and again Henrik did an admirable job of getting everyone up to speed before moving on to the new features.

Slides are forthcoming as I'm still waiting to receive them from Henrik. While you're waiting for those why not head on over and take a look at his blog?

posted on Wednesday, 06 February 2008 21:32:18 (Romance Standard Time, UTC+01:00)  #    Comments [2] Trackback
# Tuesday, 05 February 2008

Commerce-Server-2007-Logo A while back a friend of mine posted a comment here asking me to describe what it's like developing with Commerce Server 2007. Initially I wanted to reply to him in the comments, but thinking more on it I really want to provide a different and real perspective on what Commerce Server is like to work with as a product, a perspective in which I want to dig a little deeper than the usual how-tos and tutorials you see on the various Commerce Server blogs; mine included.

Check out part 1 Secure By Default where I discuss the security aspects of Commerce Server, part 2 Three-way Data Access in which I write about the various ways of getting data into your applications, part 3 Testability which not surprisingly is all about how CS lends itself to unit testing, part 4 Magic Strings Galore where I take on low level aspects of the APIs, part 5 Pipelines where COM makes a guest appearance in our mini series, part 6 which is all about getting your solution into production, and part 7 where I rip into the reference shop implementation: The Starter Site.

The Good Stuff

In this, the final part of my mini series about developing with Commerce Server, I'm going to cover the stuff that I love about working with Commerce Server 2007. While I didn't start out with a particular roadmap for this series of articles, I've noticed a trend when I look back over the posts: they aren't very positive about Commerce Server. Why is that? Does it mean that Commerce Server is a bad product? The answer eluded me for a while until our salesman pointed out a particular fact about engineers: our job is to know the weak spots of the technology we're working with in order to produce the best possible solutions. While this is a great trait for an engineer, it certainly doesn't make for a great salesperson :). I guess my negative slant stems from this fact: for me and my team to deliver the very best Commerce Server solutions we have to be constantly aware of any and all weaknesses of the product, which is why I naturally gravitate towards that mode of describing the development experience.

So to answer the question posed above: Is Commerce Server a bad product? Certainly not; I actually enjoy working with a very mature platform which provides a lot of great features out of the box. I've found myself in the fortunate situation of being able to tell a customer, "yes, we can do that out of the box", more often than not. I truly enjoy that part of my job because I find that customers are used to not getting anything out of the box if they're coming from a traditional business which started out on the web with a custom solution.

I come across two distinct types of businesses when I go out and do Commerce Server work in the field: the business which grew out of the web with the webshop at the core, and the traditional business with the ERP at the center. As I mentioned above, Commerce Server is a very compelling offer for the webshop-centered business because it provides a much sounder foundation than the custom built solution. The benefits for the traditional business are of course the same, but interestingly I've found that Commerce Server is aligned very well with the way ERP guys typically think about a business. A good example of this is the rich way in which we can express business data in Commerce Server: in the areas of the ERP which concern a webshop we're able to not only match the capabilities of ERP systems but in some cases even surpass them, e.g. the richness of the order schema and the way shipping is handled, the flexibility of the catalog, etc.

Were I to use a single word to describe Commerce Server it would be "flexible". Flexible in every sense of the word, as you can customize every aspect of the product to suit the needs of the customer. Pretty much the only limitation you'll come across is your own knowledge of the platform. With the right knowledge you can shape Commerce Server to suit the particular requirements of your customer, which is why getting the right people on your Commerce Server project is essential for its success. You might argue that this is the case for all types of projects, but I've seen how badly a project can go if the people working on a Commerce Server project lack the proper skills. The sound foundation I wrote about previously suddenly starts to look pretty wobbly and you end up in a situation where the platform is working actively against your business instead of with it.

So what happens if you get the right people working on your project creating the right architecture? Something akin to magic, that's what. With the projects we've got going on right now I see one particular trend: the architecture that we're putting in place on top of Commerce Server, leveraging the platform without working against it, is actually opening up new avenues of possibilities for us as the projects move forward. Instead of feeling boxed in by the choices we make, I increasingly find that our solutions just support new requirements from the customer, either "automagically", with reconfiguration of the existing system, or with very little modification, because the features were built on the sound foundation that is Commerce Server. That and of course the fact that I've got the privilege of working with the best damn e-commerce team out there :)

posted on Tuesday, 05 February 2008 21:21:13 (Romance Standard Time, UTC+01:00)  #    Comments [0] Trackback

Commerce-Server-2007-Logo In light of the success of the Aarhus .NET User Group on Facebook I went ahead and created a Facebook group for Microsoft Commerce Server for all of us working with the product. If you have an interest in getting in touch with people with deep Commerce Server knowledge please don't hesitate to join the group. Prominent people like Ryan Donovan and Max Akbar are already in there so why aren't you? ;)

Microsoft Commerce Server Facebook group

posted on Tuesday, 05 February 2008 19:42:14 (Romance Standard Time, UTC+01:00)  #    Comments [0] Trackback
# Wednesday, 23 January 2008

Commerce-Server-2007-Logo A while back a friend of mine posted a comment here asking me to describe what it's like developing with Commerce Server 2007. Initially I wanted to reply to him in the comments, but thinking more on it I really want to provide a different and real perspective on what Commerce Server is like to work with as a product, a perspective in which I want to dig a little deeper than the usual how-tos and tutorials you see on the various Commerce Server blogs; mine included.

Check out part 1 Secure By Default where I discuss the security aspects of Commerce Server, part 2 Three-way Data Access in which I write about the various ways of getting data into your applications, part 3 Testability which not surprisingly is all about how CS lends itself to unit testing, part 4 Magic Strings Galore where I take on low level aspects of the APIs, part 5 Pipelines where COM makes a guest appearance in our mini series, and part 6 which is all about getting your solution into production.

The Starter Site

Ah, the fabled Starter Site... When I look at my search logs for this blog I see that people are very interested in the Starter Site and are doing lots of searches for it. Ever since Commerce Server 2000 Microsoft has provided a reference implementation of a commerce site for developers to learn from. And ever since Commerce Server 2000 it's been an all-around bad idea to actually use the Starter Site in production. As a Commerce Server developer you'll cross paths with these reference implementations, so it's important for you to know what they're all about.

So is it still a bad idea to put the reference implementation for Commerce Server 2007 in production? Yes and no. While the Starter Site is a step up from previous reference implementations, it's still not what I'd call production ready. The Starter Site provides great insights into the workings of the Commerce Server APIs, but it's not exactly a shining example of web application architecture.

The Starter Site is done as a web site project in Visual Studio which by itself is not an issue. The problem though is that there is no separation between UI and application logic. All business logic is placed in the App_Code folder of the web site which means that it lacks reusability completely.

Additionally the code which is there lacks support for testing as well: all components are implemented directly on top of the CS APIs, which as I discussed in part 3, Testability, means that we have no means of creating unit tests for our custom code. Not only that, but the entry point to the subsystems used is the CommerceContext, which is initialized by a number of HTTP handlers during execution of the ASP.NET pipeline. This means that we're effectively bound to an ASP.NET context, which in turn blocks the ability to test anything.

Now the abstraction provided for the Profile System does show some good ideas. Profiles are abstracted into nice type-safe objects which in turn are mapped to the underlying Profile System by use of attributes. Great idea, but I'd like to have seen it carried a few steps further. For instance, relationships are implemented by a pattern that the developer needs to redo for every single property that references another profile or list of profiles.
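To illustrate the kind of repetition I mean, here's a hypothetical rendition of the pattern, not the actual Starter Site code; every association property ends up hand-rolling the same fetch-by-key plumbing (ProfileHelper is a made-up stand-in):

// Hypothetical illustration: the same lookup dance repeated for
// every property which references another profile.
public Address PreferredAddress
{
    get
    {
        string addressKey = (string)this["preferred_address"]; // stored as a key
        return ProfileHelper.GetProfile<Address>(addressKey);  // repeated per association
    }
}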

Oh, and some corners are cut here and there. We have a nice abstract BaseProfile class which serves as the base for all profile implementations. Perfect, but in an especially grievous example, a method in BaseProfile is implemented with knowledge about one of its inheritors; see, that's bad OO design right there.

Finally the profiles, which in most Commerce Server projects serve as the data store for domain objects, are bound to ASP.NET like the rest of the business logic of the Starter Site, meaning you can't reuse them in other contexts like WinForms apps.

All that said, the Starter Site is a very nice running reference from which you can learn a great deal. There's no code like running code; all else aside, the Starter Site is probably the best reference and source for learning Commerce Server, especially if you're already familiar with earlier versions of the product.

Provided with the code base is the control gallery which is a collection of ASP.NET controls that you can use in your own site. They're implemented as server controls which means that they're easily transferred to other sites. That's ten points right there :)

So while the Starter Site is an exercise in bad web application architecture, there are very good reasons for downloading it and taking a look. Learning is the obvious route, but if you find yourself tight for time or money on your particular e-commerce project you can get good results with the Starter Site as a foundation, provided you take care to identify the weaknesses of the code base and rearchitect accordingly.

posted on Wednesday, 23 January 2008 20:42:01 (Romance Standard Time, UTC+01:00)  #    Comments [0] Trackback
# Monday, 14 January 2008

My favorite news reader of all time, FeedDemon, is now free for anyone to download and use. What makes this guy stand out from the competition is not the simple and easy to use UI, it's not the fact that you get a nice Hot/Not list of feeds, nor the fact that you can subscribe to any quirky feed on the planet.

No, what really makes FeedDemon shine, and what made me cough up $29.95 having tried the product only a couple of times, is the synchronization feature. Simply put, FeedDemon has made me use RSS more than I did in the past because I don't have to worry about reading my feeds in multiple locations. Now to be fair Google Reader does provide the same feature, but I simply can't bring myself to read my feeds in a web interface. With lots of information rolling by I need a nice workflow to process everything; while Google has done everything possible to make this happen in their web interface, it's simply no match for a well designed desktop application.

In short: download FeedDemon, synchronize, and be happy. Even if you don't read feeds in multiple locations you'll still have off site backup for your feeds ;)

posted on Monday, 14 January 2008 21:52:25 (Romance Standard Time, UTC+01:00)  #    Comments [0] Trackback
# Sunday, 13 January 2008

I'm starting to look into the ASP.NET MVC Framework and needed to download the CTP just now, which is not all that interesting. What is interesting though is the fact that I was greeted with a dialog asking me whether I wanted to try the new Silverlight version of the MS Download site. Naturally I couldn't resist :)

Check out the MS Download Center beta done in Silverlight.

posted on Sunday, 13 January 2008 14:42:45 (Romance Standard Time, UTC+01:00)  #    Comments [0] Trackback
# Monday, 17 December 2007

Ever since beginning my work with Commerce Server it was apparent that we needed some way to link the disparate subsystems with each other in a uniform way. Sure, there are lots of links between catalog, orders, and even profiles out of the box, but the problem is that they're all done in different ways.

My colleague Brian found an excellent solution to this problem by introducing a concept he calls Extension Profiles which is basically a profile you tag on to other data objects in Commerce Server. With this in place you can use the extension profile in a number of ways like mapping objects or extending non-extendable CS objects like ShippingMethods and Payments.

I've been bugging Brian to write about them for a while and during the weekend it seems that he finally got around to it.

Check out How to extend Commerce Server Payment Methods and Shipment Methods

posted on Monday, 17 December 2007 08:29:05 (Romance Standard Time, UTC+01:00)  #    Comments [0] Trackback
# Friday, 14 December 2007

Commerce-Server-2007-Logo A while back a friend of mine posted a comment here asking me to describe what it's like developing with Commerce Server 2007. Initially I wanted to reply to him in the comments, but thinking more on it I really want to provide a different and real perspective on what Commerce Server is like to work with as a product, a perspective in which I want to dig a little deeper than the usual how-tos and tutorials you see on the various Commerce Server blogs; mine included.

Check out part 1 Secure By Default where I discuss the security aspects of Commerce Server, part 2 Three-way Data Access in which I write about the various ways of getting data into your applications, part 3 Testability which not surprisingly is all about how CS lends itself to unit testing, part 4 Magic Strings Galore where I take on low level aspects of the APIs, and part 5 Pipelines where COM makes a guest appearance in our mini series.

Deployment

When the time comes to deploy your application we've got a number of options when it comes to custom apps created purely on top of the .NET Framework: installers, xcopy deployment, automatic build processes, etc. When it comes to Commerce Server the deployment procedure is a bit more involved, but aspects of it are supported by some interesting tools, as we'll see in a minute.

Commerce Server comes with a handy tool for publishing your application in a single file called a PUP file. This works great the first time around and greatly simplifies first-time deployment. Unfortunately it only works for the initial deployment; subsequent deployments are more involved because manual deployment is required, unless you're fortunate enough to be working with the Enterprise edition.

Let's first deal with the manual deployment because that's frankly the most fun to write about :) Commerce Server is split across a number of different subsystems, each running on top of a separate database. Each subsystem has different deployment requirements and steps that you'll need to follow. I won't bore you with the actual steps here; just know that deploying a Commerce Server application requires a lot of steps, and I do mean a lot, involving a number of disparate tools.

The security requirements of Authorization Manager further complicate deployment because the business data is protected by an additional layer of security, different from what's found at the system level. All of this has to be created either manually or via a command line tool provided with the product.

One alleviating factor to the long list of manual deployment steps is the fact that Commerce Server is split across a number of different databases, one for each subsystem. You can isolate changes to each subsystem, thus easing deployment by dealing with a subset of your Commerce Server application at a time.

It becomes really interesting when we start talking about the Enterprise edition, which brings another tool to the table that will automate deployment of most of your application: Commerce Server Staging (CSS). The staging tool allows you to move business data and files from one server to the next. This means that you can enforce a pretty much hands-off policy on your production server and only have your business users work in a staging environment for creation and testing purposes.

The only caveat to staging is that it doesn't support profiles; you can however use a cruder approach to deploying your profiles automatically. Notice that I wrote business data and files: you can basically have CSS move binary files to production and have it execute a command pre and post transfer, which could be a bat file or a custom executable. This essentially makes CSS a very unique and useful tool, and not just in conjunction with Commerce Server.

Just think about what can be done with a tool like CSS and a regular old ASP.NET app. You could basically have CSS move your compiled ASP.NET app and SQL scripts to production, have it move the assemblies in place, and finally execute your SQL scripts. Voila, automatic deployment; the only downside is that you need an Enterprise edition of Commerce Server around :) I really think that Cactus should look into making this a standalone tool.

posted on Friday, 14 December 2007 07:00:06 (Romance Standard Time, UTC+01:00)  #    Comments [0] Trackback
# Wednesday, 12 December 2007

Did you know that each field you post using the HTML form element is limited to 100 KB? I sure didn't, and it can cause trouble if you still have to deal with Commerce Server 2000 and 2002 because the Bizdesk relies heavily on XML islands on the client to create a rich client side experience.

Specifically this can cause trouble when you add a large number of variants to a product all at once. You can work around it by creating a limited number of variants at a time.

PRB: Error "Request Object, ASP 0107 (0x80004005)" When You Post a Form

posted on Wednesday, 12 December 2007 08:45:44 (Romance Standard Time, UTC+01:00)  #    Comments [0] Trackback

Commerce-Server-2007-Logo A while back a friend of mine posted a comment here asking me to describe what it's like developing with Commerce Server 2007. Initially I wanted to reply to him in the comments, but thinking more on it I really want to provide a different and real perspective on what Commerce Server is like to work with as a product, a perspective in which I want to dig a little deeper than the usual how-tos and tutorials you see on the various Commerce Server blogs; mine included.

Check out part 1 Secure By Default where I discuss the security aspects of Commerce Server, part 2 Three-way Data Access in which I write about the various ways of getting data into your applications, part 3 Testability which not surprisingly is all about how CS lends itself to unit testing, and part 4 Magic Strings Galore where I take on low level aspects of the APIs.

Pipelines

When I first encountered pipelines in Commerce Server 2000 they were a nice feature to have available, and they made sense because they could handle a much bigger load, being essentially COM objects executed in an ordered fashion. All this made a great deal of sense back in the day when we were dealing with plain old VBScript and ASP.

When Commerce Server 2002 came out it still made sense that they stuck around because the .NET support in Commerce Server 2002 came in the form of managed wrappers for the COM objects which came with the product.

Would you be surprised to learn that COM based pipelines stuck around for Commerce Server 2007 too? Well they did, which means that you have to know a little something about COM to get them going, especially when it comes to debugging problems with a server setup. Weird HRESULTs are something you still have to contend with, although the situation is vastly improved from the older versions.

Fortunately you can go ahead and build your pipeline components in .NET and expose them to COM, so all is not lost. It does however mean that you need to make sure that your pipeline components behave as expected at runtime in order to avoid cycling objects in and out of the GAC. The keyword is developer productivity: you don't want to spend too much time mucking about getting everything good to go for every little change you make to your pipeline components.
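For reference, a bare-bones managed component might look something like the sketch below. I'm quoting the interop interface from memory, so treat the exact signatures as assumptions and verify against the Commerce Server SDK; the GUID is a made-up example.

using System;
using System.Runtime.InteropServices;
using Microsoft.CommerceServer.Interop.Orders; // interop assembly shipped with CS

[ComVisible(true)]
[Guid("9F2A41D7-6C1B-4E58-9C33-2B7D1A5E8F01")] // example GUID, generate your own
public class NoopComponent : IPipelineComponent
{
    // Called for every order form passing through the pipeline stage.
    public int Execute(object dispOrder, object dispContext, int flags)
    {
        // 1 = success, 2 = warning, 3 = failure by pipeline convention.
        return 1;
    }

    public void EnableDesign(int fEnable)
    {
        // No design-time behaviour needed for this sketch.
    }
}

After building you'd register the assembly for COM and put it in the GAC, which is exactly the cycle that eats into productivity for every little change.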

Traditionally pipelines are the area where people ask the most questions because they're a pretty opaque topic to dive into at first. Every time I create a new pipeline component it pains me to know that we have the nice System.Transactions namespace available to us in .NET while the pipeline system remains rooted in COM.

Luckily Cactus feels our pain and has a replacement on the roadmap for the next version of Commerce Server, but until then you'd better get those interop skills up to speed. Alternatively you can choose to forego the pipeline system altogether and do any custom business logic outside pipeline components, but that's not always an option.

Developing with Microsoft Commerce Server 2007 Part 6: Deployment

posted on Wednesday, 12 December 2007 07:00:23 (Romance Standard Time, UTC+01:00)  #    Comments [0] Trackback