# Monday, 19 January 2009


In a previous post I wrote about Twitter and what it means to the Danish developer community. The real value of Twitter, however, does not come from visiting the site from time to time. You have to participate actively to keep the conversation going, and that’s where the Twitter clients come into the picture.

I’ve been through a bunch of them and ultimately decided which one I liked best. I’ll try to spare you from doing the same all over.


Digsby gets an honorable mention because it was my first Twitter client; this program is how I got started with Twitter and in no small way the reason why I still use it.

Digsby is labelled a social network client which gives you access not only to Twitter (in fact that’s the least of it) but also to Messenger, LinkedIn, Facebook, Yahoo Chat, Google Talk; the list goes on and on, but you get the point. Digsby speaks with most social networks out there.

That was my reason for trying it out, as I really didn’t feel that I needed a dedicated program to try out Twitter. I spent quite some time with Digsby and felt for a long time that it was the way to go. In fact the reason I dropped it was not so much Twitter related as it was Messenger related. It simply didn’t work as advertised; sending files, for one, was spotty.

As a Twitter client it performed admirably and for me at least it was a low cost to pay for trying out Twitter as I used it primarily as a Messenger client with the added benefit of being able to send out my tweets as well.


Twitterrific is an interesting one as it didn’t start out on the desktop for me. It actually started out on my iPhone, and when I got a Mac late last year it was the natural choice for the desktop as well; the iPhone experience with this thing is flawless as far as I’m concerned.

Now the application is pretty much the same on the Mac. Interestingly it turns out that the functionality doesn’t quite cut it on the desktop. Due to the nature of tweets messages need to be as compact as they can be.


Imagine that you’re posting a link which can easily be 50-60 characters; at that point you really want to be able to shorten the link easily and post the short version instead. Unfortunately Twitterrific doesn’t support this, which is fine on the iPhone where cut and paste is not to be found so you tend not to post links. On the desktop though links are thrown left and right, so not having the feature is a real pain point, at least for me.
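The missing feature is simple to picture: a client runs the long URL through a shortening service and posts the result. Here is a toy in-memory sketch of the idea (the class name, domain, and six-character code scheme are all made up for illustration; real clients call a service like TinyURL over HTTP):

```python
import hashlib

class ToyShortener:
    """Toy in-memory URL shortener, for illustration only."""

    def __init__(self, domain="http://sho.rt/"):
        self.domain = domain
        self.table = {}

    def shorten(self, url):
        # Derive a stable six-character code from the URL's hash.
        code = hashlib.sha1(url.encode()).hexdigest()[:6]
        self.table[code] = url
        return self.domain + code

    def expand(self, short_url):
        return self.table[short_url[len(self.domain):]]

s = ToyShortener()
long_url = "http://www.publicvoid.dk/some/very/long/post-title-goes-here.aspx"
short = s.shorten(long_url)
print(short, len(short))  # well under the 140-character budget
```

The point is just that the shortened form leaves room in the 140 characters for actual words, which is why a desktop client without the feature hurts.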

Thus Twitterrific was evicted from the Mac desktop but remains on the iPhone as one of the first apps I ever installed on that thing.


Before I delve into twhirl, a word on Adobe AIR. Not so much because I find the platform interesting but because I find it interesting that as a platform a lot of the ecosystem is made up of … wait for it … Twitter clients. It’s interesting to me that a service like Twitter can drive a platform like AIR and not the other way around.

twhirl is pretty much like Twitterrific, only the name is quite a bit easier to spell and it supports the link shortening feature I mentioned above. It being an Adobe AIR app also means that it’s cross platform for those of us running other platforms out there.

twhirl is like the girlfriend you can’t quite figure out if you want to spend your life with or leave for someone else. I left but ultimately came back so I guess it’s forever between us :)

And finally remember to follow me on Twitter once you get your favorite client up and running :)

posted on Monday, 19 January 2009 11:58:14 (Romance Standard Time, UTC+01:00)  #    Comments [3] Trackback
# Sunday, 04 January 2009

Back in May 2008 I wrote a short note about me trying out Twitter. At the time I just wanted to know more about what Twitter actually was, as I heard about it time and again on podcasts, blogs, everywhere really.

Interestingly whenever people talked about Twitter it was due to the service being down but still I felt compelled to take it out for a spin.

Twitter of course is the service which enables you to post little notices about what you’re currently doing which doesn’t sound all that useful until you actually sit down and think about it. In reality it turns out that there are numerous applications for a service like that. The notices are limited to only 140 characters which means that you have to be really short and sweet in the stuff you send to the service.

Fast forward to January 2009 with the experiment done and my conclusion is in: Twitter is indeed a service worth paying attention to. Read on to find out why.

Now what prompted this post is a question I got from Brian Rasmussen when I suggested that he take a look at it. Basically he asked why he should use Twitter, a question I didn’t quite know how to answer with anything but “it’s cool”. Since that time I’ve been wondering what makes Twitter worth my while and yours as well, dear reader.


Twitter is a lot of things to a lot of people. The value to me and our little community in particular lies in tying together everybody in a more coherent way than what is possible today. To me at least Twitter is a place where I get to keep in touch with a number of the Danish .NET developers in a far more personal way than what is possible at DotNetForum, ActiveDeveloper, etc. because the service is geared for throwing stuff out there without thinking too much about it.


Why do I call it the back channel of our community? Due to the nature of the messages you stick on Twitter, it quickly becomes just little notices about what’s going on right now. For example, Mads used it to get an idea of which IoC framework to go with; I recently got a Mac and had no clue where to start, so I elicited suggestions for apps to use; Niels uses it for communicating with the Umbraco team from time to time; recently Jesper wanted to know what to include in his upcoming ASP.NET MVC presentation at ONUG in January; and Rasmus had a memory leak which he needed some input on fixing.


Basically what you get is an inside look at the process leading up to a blog post, a presentation, the solution to a given issue, or whatever; something you don’t really get from reading the final product and oftentimes much more interesting.

I would encourage you to go create an account with Twitter and follow a bunch of people from the Danish .NET community. Morten from DotNetForum was even kind enough to create a wiki with the Twitter names of a bunch of the Danish .NET guys which you can use as a starting point. You can follow me under my Twitter name publicvoid_dk.

Of course there are a number of people whom I’d like to see get Twitter accounts, like Brian Rasmussen, Søren Skovsbøll, Mark Seemann, Kasper Bo Larsen, and Martin Bakkegaard Olesen.

posted on Sunday, 04 January 2009 13:41:19 (Romance Standard Time, UTC+01:00)  #    Comments [0] Trackback
# Monday, 29 December 2008

I had grand goals for 2008 when we started out the new year last time around, only stuff happened and my activity level on this blog has not been up to the goals I initially set out to reach. In spite of that I'm very happy with my accomplishments for 2008. They just happen to have occurred in a slightly different way than I originally thought.

The Blog

Surprisingly the most visited and commented post on the blog during 2008 wasn't even written during 2008. It caters to the more mainstream internet users, was written in 2006, and is about an annoyance I had with Windows and the My Music folder which disappeared from time to time.

But we are looking back at 2008 here, so it’s fitting to mention the posts which I’m most proud of that were actually written during 2008. First up is my Developing with Commerce Server 2007 series, in which I dove into the development experience of Commerce Server. Also on the topic of Commerce Server 2007, I wrote a post on a generic mapping piece I did for a project early in the year which turns CS objects into nice POCO objects for testability.


Of course there was real work to be done, and 2008 brought some really interesting challenges with me participating in one of the largest e-commerce projects I've ever had my hands on. Huge customer, international team of devs, traveling across the Atlantic to do some of the work. All in all a great learning experience, and as a result I'm now able to provide even better service to our customers. Oh, and it was kinda fun too :)

I got to attend a couple of conferences as well. First Daniel from Microsoft was nice enough to invite me to JAOO, a conference I enjoy a great deal, and later in the year I had a unique chance to fly out to Los Angeles to participate in PDC 2008. I have to say that if you ever get a chance to participate in a conference like the PDC you really should jump at it. It's a spectacular show to be sure. I did a couple of podcast episodes about it too; in Danish, mind you.

Finally I'm happy to report that we managed to add a number of very talented people both to my own team at Vertica and to the integration team as well. I'm proud to have such great colleagues and to be able to say that every day I learn something new as a result.

Aarhus .NET User Group

Now, as I said at the start of this post, I haven't spent as much time on the blog as I would have liked, and there's a really good reason for that: the Aarhus .NET User Group, which has sucked up a significant part of my time.

During 2008 the core group and I organized thirteen meetings, indeed we didn't miss a beat the entire year and even managed to do a bonus meeting in December with my good colleague Daniel about unit testing. Additionally we pulled off a code camp in the beginning of the year, the ANUG 1 year old birthday dinner, and a Christmas Dinner. Not too shabby if I do say so myself.

Support for the user group during 2008 was tremendous and I couldn't be happier about where we're at after just one and a half years of operation.

More importantly we've shown other .NET developers in the Danish community that a user group in Denmark is viable and as a result new groups have sprung up during 2008. As I write this groups are up and running in Odense (ONUG), Aalborg (AANUG), and Copenhagen (CNUG).

ANUGCast (www.anug.dk/podcast)

Ever since we started the user group we've had requests for putting the meeting content online somehow, be it video, audio, or something else entirely. What we did from the start was write meeting summaries, which weren't really the ideal way to bring the content online. It's adequate, and we'll continue to do so, but it's been clear from the start that it was far from sufficient.

Late in 2008 it struck me that the podcast format might be the ideal way of addressing the requests, so I set out to create a podcast based on the topics of the meetings. Thus ANUGCast was born, with an initial goal of bringing out an episode once a month. This quickly escalated to one per week and so far it's gone really well. In fact episode thirteen was posted today and I've got a bunch of episodes already in the can just waiting to be released.

The podcast is my little baby and I guess most of the time which would otherwise have been spent on the blog got diverted there. I enjoy hosting the podcast a great deal, so much so in fact that I'd do it full time if I could :)

Since starting the podcast I've gotten it registered with more than 50 aggregation sites, we're on iTunes, and we've had more than 5,000 downloads since the pilot episode in September 2008, a number I'm particularly proud of. We've seen a steady climb in downloads since the pilot, and the past couple of months each saw more than a thousand downloads.

I guess I should do a couple of posts on how ANUGCast is made and some of the tricks I picked up wearing the hats of producer, sound engineer, basically every damn hat needed to make it happen :)


The coming year will bring a similar activity level on the blog as 2008. It is my every intention to keep up my work with the user group and the podcast, and even step it up a bit. 2009 will bring more real marketing of the user group to reach a new audience, which I'll write more about after we hold the first meeting of 2009. There's something to look forward to for sure. 2009 will also bring our first IT pro related meeting, which will cover Hyper-V. It's intended as a pilot to test the waters for something like that.

Oh and I went and got myself a Mac so I guess I'm sort of a Mac switcher as of December 22nd... 2009 is going to be interesting for sure.

posted on Monday, 29 December 2008 22:41:45 (Romance Standard Time, UTC+01:00)  #    Comments [1] Trackback
# Tuesday, 11 November 2008

At Vertica we employ a wide range of Microsoft server products in our solutions to maximize customer value. To help us manage these often complex environments we rely heavily on virtualization. For the longest time the obvious choice was Microsoft Virtual PC simply because it was there and freely available to use and just being able to run a virtual machine was amazing in its own right.

Our default setup when developing in the virtual environment is to install everything needed inside the virtual machine and use that exclusively. Running IIS and a couple of server products alongside Visual Studio and ReSharper works well, but we’ve found that performance leaves something to be desired.

The obvious answer is to move Visual Studio out of the virtual environment, do development on the physical machine, and deploy the code to the virtual environment and test it there. Basically I require two things from this: 1) Pushing the build to the server should be very simple, 2) Debugging must be supported.

Pushing Code

We’ve got a bunch of options for pushing code to another environment: Publish Wizard in Visual Studio, msbuild or nant tasks, Powershell, and my personal favorite bat files :)

I wanted to create a generic solution which doesn’t dictate the use of msbuild or any other technology, so I went with a bat file which in turn calls robocopy. With this in place we’re able to push files over the network to the target VM. Of course a one-time configuration of the virtual environment is needed, but that isn’t in scope here.

Download my deploy bat file. Basic usage: Deploy c:\MyWebSite \\MyServer\MyWebSiteVDir.
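For the curious, the gist of the bat file can be sketched in a few lines. This is not the actual script (that one calls robocopy); it’s a cross-platform illustration of the same push-and-mirror idea using Python’s standard library:

```python
import pathlib
import shutil

def deploy(source: str, target: str) -> int:
    """Mirror the build output into the target folder and return the file count.

    Same idea as the post's Deploy bat file, minus robocopy.
    """
    src, dst = pathlib.Path(source), pathlib.Path(target)
    if dst.exists():
        shutil.rmtree(dst)   # mirror semantics: stale files on the target go away
    shutil.copytree(src, dst)
    return sum(1 for p in dst.rglob("*") if p.is_file())
```

In the real setup the target is a UNC share exposed by the VM, so pushing a fresh build is a single command.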

Robocopy is part of the Windows Server 2003 Resource Kit.

Remote Debugging

The second requirement is debugging. I want a solution which is on par with running Visual Studio inside the virtual environment, and that means debugging, people! :)

The steps for doing remote debugging are well documented, but for completeness’ sake I will include them here with nice screenshots to go along.

1) Copy Remote Debugger from \program files\Microsoft Visual Studio 9.0\Common7\IDE\Remote Debugger to somewhere on the virtual machine, e.g. desktop.

2) Run Remote Debugger on virtual machine (msvsmon.exe).

3) Grab the qualifier from the Remote Debugger (You’ll need it in a second).


4) Connect to the Remote Debugger from VS on the physical machine via Debug > Attach to Process (Ctrl+Alt+P).

5) In the Qualifier input field enter the qualifier from Remote Debugger window.


Voilà. Set a breakpoint on the remote machine and see the code break in Visual Studio.


I stated earlier that we’re using Microsoft Virtual PC, which is true, but it’s also true that we’re looking into VMWare Workstation. My first reason for doing so is the performance boost which comes from running in VMWare. I haven’t done any sort of scientific testing of how much faster we’re talking about; suffice it to say that it’s enough that you notice it when you’re going about your business in the virtual environment. VS is faster, compiles are faster, everything is just smoother. In my book the best sort of performance metric there is :)

Additionally VMWare provides other interesting features. The first one you’ll see is that storing and restoring state of a VM is blazingly fast. Enough so that you’ll actually find yourself using the feature all the time. I know I am.

Secondly, VMWare supports multiple monitors. That’s right. Simply select how many monitors you want supported and it’ll do it. You can even switch on the fly. In case you’re wondering, yes, we do have three-monitor setups for all the developer machines in the office :)


The final feature is significant enough for our story to warrant a paragraph of its own. I accidentally stumbled across it this morning when I upgraded VMWare to version 6.5.

Remote Debugging Support in VMWare

The earlier steps for getting remote debugging working apply to any sort of virtual environment. VMWare however brings some nice debugging features to the table, available right there in Visual Studio.

1) Go to the VMWare menu and select Attach to Process.


2) Select the VM you want to start debugging on and point to the Remote Debugger that you’ve got locally in \program files\Microsoft Visual Studio 9.0\Common7\IDE\Remote Debugger\x86.


3) Click the Attach button and the Remote Debugger will launch inside the VM and you’re ready to debug.

No need to copy anything to the VM. It just works. You can even set up a config for this which enables you to attach to the debugger with F6. Nice!

In conclusion running Visual Studio outside of the VM is not only possible but with the right tools like VMWare in hand it’s even an enjoyable experience. Have fun!

posted on Tuesday, 11 November 2008 10:35:21 (Romance Standard Time, UTC+01:00)  #    Comments [0] Trackback
# Sunday, 05 October 2008

Day 3 of JAOO was a potpourri of topics for me, everything from JavaScript as an assembly language, JavaScript for building an OS, developer best practices, and data synchronization with a shiny new Microsoft toy. If you didn’t catch my summaries of Day 1 and Day 2 please make sure that you check them out.

Five Things for Developers to Consider

Last year I attended a couple of developer best practices sessions and came away liking them quite a bit, so I figured I should attend at least one this year as well. The first one this year was basically five things which Frank Buschmann and Kevlin Henney collectively consider to be important to developers.

Of all the things they pulled out of their hats I liked their points on expressiveness the most. They talked about bringing out concepts which are implied in both the architecture and the low level design of a solution; something we strive to do as well. One of the key aspects when writing code, I find, is that more often than not code is written once and read dozens of times, which means optimizing for readability is not only a good thing to do but the only thing to do.

An example of the above is variables of type string. Usually these guys contain a lot more than mere strings, e.g. XML, social security numbers, etc. Instead of going with just the string you could go for a class like SocialSecurityNumber, which would be a lot more explicit. The little things count.
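As a sketch of that idea (the type name is from the example above, but the ten-digit format and validation rule below are made up for illustration):

```python
import re
from dataclasses import dataclass

@dataclass(frozen=True)
class SocialSecurityNumber:
    """A string with intent: the type itself documents what the value is."""
    value: str

    def __post_init__(self):
        # Hypothetical DDMMYY-XXXX format, for illustration only.
        if not re.fullmatch(r"\d{6}-\d{4}", self.value):
            raise ValueError(f"not a social security number: {self.value!r}")

# A signature like register(ssn: SocialSecurityNumber) now says far more than
# register(ssn: str) ever could, and invalid values fail fast at construction:
ssn = SocialSecurityNumber("010180-1234")
```

The payoff is in readability: the next developer reading a method signature no longer has to guess what the string is supposed to contain.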

Developer habitability is a term they touched on which I quite like. The idea is that if we create nice usable solutions which are easy to understand and simple in their composition developer habitability is increased – basically the code place is a nice place to live :)

Keeping in Sync

Two-way synchronization is a notoriously difficult challenge to solve. Mostly when I’ve come up against this thing I’ve gone for a simpler solution like selecting a data master which overrides the slaves. Naturally I was excited to learn that Mike Clark was giving a talk on the Microsoft Synchronization Framework, which tackles this very issue.

Sync Framework actually forms the backbone of a tool you might know already: SyncToy, which syncs files across the network, file system, or whatever. Certainly a neat feature, but Sync Framework is about much more than that. It basically enables us to synchronize our custom data stores, which to me is very exciting.

Included in the box is support for all data stores which have an ADO.NET Data Provider so we’re talking all major databases here. Additionally the framework gives us rich hooks so we can grab any step in the pipeline and modify it to our heart’s content.
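To see the shape of the problem, here is a toy two-way merge over stores of `{key: (value, timestamp)}` pairs. It’s a last-writer-wins illustration of the problem space, not the Sync Framework API; the real framework tracks changes with metadata and lets you plug in your own conflict resolution:

```python
def sync(store_a, store_b):
    """Two-way last-writer-wins merge; both stores converge to the newest value."""
    for key in set(store_a) | set(store_b):
        versions = [v for v in (store_a.get(key), store_b.get(key)) if v is not None]
        winner = max(versions, key=lambda v: v[1])   # newest timestamp wins
        store_a[key] = store_b[key] = winner

laptop = {"customer-42": ("Jensen", 10), "order-7": ("open", 12)}
server = {"customer-42": ("Jensen A/S", 15)}
sync(laptop, server)
print(laptop == server)  # True: both now hold the newest version of each key
```

Even this toy shows why the problem is hard: real stores also have to handle deletes, clock skew, and conflicts where "newest wins" is the wrong answer, which is exactly where the framework's hooks come in.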

A JavaScript OS

Really? An OS done in JavaScript? Apparently so, if Dan Ingalls has his way. He’s already done some amazing work on this Sun project, which aims to liven up the web by doing away with HTML, replacing it with vector graphics rendered by a JavaScript engine.

Actually my words won’t really do it justice so instead take a look at this video; basically the entire talk. Once you’re done with that go play with Lively Kernel live on the web.

JavaScript as an Assembly Language

Keeping in the same vein, I decided to go take a look at Erik Meijer talking about his current project: Volta. Volta is a project aiming to allow us to defer decisions on the deployment model to a much later point in the project than we currently do today. The current state of affairs is pretty much that we need to decide very early in the project phase, which might or might not make sense. In any event, having the option to defer those kinds of decisions is always better, right?

Now the part which Erik focused on is the piece which allows us to run our apps on the web without actually coding for the web. The premise is that we can treat JavaScript as a target language for the compiler, generating a web implementation of our app which then runs without any web-specific code ever written by us as devs.

Last year Erik gave the keynote at JAOO and talked about Volta, at which time I was skeptical to say the least, so it was interesting to see that there’s actually some meat on the project after all. The idea intrigues me and I look forward to seeing where it goes from here.

With two “extreme” JavaScript sessions done I was all JavaScripted out for the day, but I will say this: my days of doubting JavaScript as a “serious” language are way behind me.

TDD Take 2

One of the big topics for me last year at JAOO was test driven development, so I was curious to see whether new stuff had come up in the intervening time. Giving the talk on TDD was Erik Doernenburg. I won’t go into a lot of detail about the talk because, as it turns out, not much has changed in the span of a year.

What was interesting for me to note is that our work with unit testing and test driven development at Vertica has paid off handsomely: everything that ThoughtWorks, which I would describe as the thought leader in this space (no pun intended), is doing is basically what we’ve spent the last year implementing, and I’m happy to report that we’re at the point where the culture is basically sustaining that particular way of doing code.

So a year and a half ago I set the goal of becoming better at unit testing, and my great colleagues have ensured success in that area. For the coming year the focus will be on builds, continuous integration, and release management. To me these are natural steps in the continued development of our way of doing things … and it’s fun too :)

posted on Sunday, 05 October 2008 21:48:05 (Romance Daylight Time, UTC+02:00)  #    Comments [0] Trackback
# Friday, 03 October 2008

Day 2 of JAOO 2008 was all about architecture for me: agile architecture, architecture reviews, requirements gathering, architecture testing, and finally lessons learned in architecture. Be sure to catch my summary of JAOO 2008 Day 1 if you missed it.

Architecture Reviews

Frank Buschmann from Siemens in Germany was track host and also the first speaker of the day. I caught a couple of talks with Frank last year and it’s apparent that he knows his stuff. While hugely important, architecture talks tend to be quite difficult to follow because the very nature of the topic is fluffy.

Most of the talk was pretty run of the mill in terms of how to conduct an architecture review. I’ve never formally conducted such a review, but we do do them at regular intervals at Vertica, just not in any sort of structured manner. We do them when they make sense, and they usually consist of peer reviews and initial design sessions.

Most interesting to me were a couple of techniques which Frank brought to light for doing a formal architecture review. It’s not something you do every day, and it’s certainly not something which requires a lot of structure.

My key takeaway from the talk is that preparation for an architecture review is essential. Basically you need to sit down and figure out what you or the client expect from the review, as the goal will impact the process of doing the review. This highlights why we can get away with very informal reviews: our goal is usually just to verify the selected architecture.

Now the situation changes rapidly when we’re conducting architecture reviews for other companies. Here the objective is partly to verify the architecture, but more importantly to figure out what went wrong after the fact when a new system doesn’t satisfy non-functional requirements, lacks adoption in the internal dev organization, lacks maintainability, or something else altogether.

So I took away the fact that I need to be a lot more conscious about what the client expects to get out of a review. I must admit that I’ve taken a lot of satisfaction from going in and pointing out all the deficiencies in existing systems without giving thought to the fact that, more often than not, systems do have something good to bring to the table in spite of their deficiencies, perceived or otherwise.

Requirements Gathering

Next up was a talk which I didn’t really know what to expect from. It turned out to be one of my favorites at this year’s JAOO, due to the fact that it was very different from what I’ve seen at any other conference and it covered a topic the importance of which I can’t stress enough: communication.

Chris Rupp is the CEO and a business analyst at Sophist. Before I get started with my summary I must mention that she spoke flawless English; a feat I’ve rarely seen performed by a German person. No hint of accent, nothing, just perfect English.

The meat of the talk was all about understanding what your client is telling you and, more importantly, filling in the blanks. The premise of the talk was basically something that we’ve known collectively in the software business for a while: the client doesn’t know what he or she wants. She had a twist on this, though, that I couldn’t agree with more, which went along the lines that we can’t expect the client to know what they want. Building software is a complex task, and it’s our responsibility as a community to help our clients figure out what they want.

Chris touched on quite a number of different techniques we can employ to fill in the blanks. I was very pleased that she decided to focus on a single technique called Neuro Linguistic Programming (NLP). My very limited understanding of NLP is that it’s basically the theory of the programming language of the human mind. What I took away from the talk is that NLP might be the key to picking up subtle nuances in the conversations I have with clients. Is a sentence phrased imprecisely? Maybe the client doesn’t really know what the details should be in that particular case. Is the client using very general terms to describe a feature? That could mean that we’re lacking some details; maybe we shouldn’t really allow everybody to update everything.

As I stated, my understanding of NLP is very limited at this point, but I definitely see a lot of potential here, so I went ahead and suggested that we get some books on the subject so we can investigate further. I’m hooked at this point, no doubt about it.

Agile Architecture

James Coplien did a talk on what I thought would be a pretty standard only-design-and-build-the-architecture-you-need-right-now kind of talk. Indeed he started out like that, but he quickly went on to blow our collective minds by proposing a new style of architecture where we separate the What and the Why more clearly. Now I won’t claim that I understood half of what he was saying, but I got the general drift and I definitely need to look into this some more.

If I were to compare it with something I know from the domain driven design world, I’d compare it with the Specification pattern on steroids, but I feel that it’s a poor comparison, as his ideas describe the overall solution architecture where the Specification pattern is just small bits and pieces of any given solution.

To better understand the concepts I need to see a lot more source code :) You can download the pre-draft of the book which James is writing on the subject; I think you’ll enjoy the new ideas a great deal.

Software Architecture and Testing

…. zzZZZzz…. nuff said.

Top Ten Software Architecture Mistakes

Needless to say I was not in the most energetic of moods having sat through the snooze fest which was the previous talk. The guy in front of me must have agreed, as he actually nodded off there for a while during the testing talk. It was actually pretty entertaining watching him do battle with very heavy eyelids, the mightiest of foes :)

At least Eoin Woods (cool name or what?) took up the challenge and turned the whole mess around in the next talk, in which he discussed his list of top ten architecture mistakes. Being in the last slot of the day is no easy task, but he managed to get the entire room going: lots of laughs, lots of good stories, and lots of good information.

His talk basically served to highlight some of the mistakes that we’ve all made and continue to make from time to time. I believe that talks like this are invaluable as they serve to keep us mindful of at least some of the pitfalls of software architecture.

I liked the fact that this talk contained nothing but concrete examples and real world tips and tricks which we could take home with us and use. My favorite takeaway is to always have a plan B. I think most good architects subconsciously have these hanging around, but I like the idea of making plan B very explicit. It helps the team know if and when to enact it.

Just formulating plan B and sticking it into a document is hugely valuable to me; it gives you pause, helps you think through plan A, and means the customer ultimately gets a better solution; should plan A, God forbid, turn out to be a dud, we’ve got something to fall back on. Having plan B visible leaves more wiggle room for the client, and I firmly believe that it helps build trust.

Continue to JAOO 2008 Day 3…

posted on Friday, 03 October 2008 22:33:17 (Romance Daylight Time, UTC+02:00)  #    Comments [0] Trackback
# Thursday, 02 October 2008

Last year was my first JAOO experience and I was fortunate enough to get to attend this year as well. My first time at JAOO was very positive, so I was looking forward to this year quite a lot.

The Keynote

As always we started out with a keynote, which this year was held by Anders Hejlsberg from Microsoft, and of course a fellow Dane :). Mr. Hejlsberg talked about the future of CLR languages with three pillars forming the basis: Declarative, Concurrent, and Dynamic. Interestingly, functional languages like F# and new language features like LINQ fulfill this quite nicely and so played a central role in his talk.

Anders delivered a solid talk and he even mentioned a new C# keyword which we can expect to see in the next incarnation of the language: dynamic. The idea is to declare a variable dynamic to enable easier lookup of methods than what we’ve got today with reflection. Sort of like the dynamic dispatch known from dynamic languages, but keeping everything statically typed. Powerful stuff.
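For a feel of what that buys you, here is the same idea in Python, where every call site already works this way: looking a method up by its name at runtime is what C#’s dynamic promises to make as convenient as an ordinary call (the class and method below are of course made up):

```python
class Greeter:
    def hello(self, name):
        return f"Hello, {name}"

obj = Greeter()

# Reflection style: resolve the method by its string name, then invoke it.
# This two-step dance is roughly what 'dynamic' hides behind normal call syntax.
method = getattr(obj, "hello")
print(method("JAOO"))  # Hello, JAOO

# The trade-off: a typo like getattr(obj, "helo") only fails when the
# line actually runs (AttributeError), not at compile time.
```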

Interestingly he stopped by the Danish Microsoft HQ to give a similar talk the day before from which you can watch a clip which sums up his points.

CI and more CI

image For the last year or so we've been hard at work introducing unit tests and, to some extent, test-driven development. By introducing unit testing I don't mean just introducing the concepts and seeing what happens, but really embedding them deeply in the way we develop software at Vertica. I'm proud to announce that we've had a great deal of success in doing so, in no small part due to my very talented colleagues and Daniel in particular.

The next logical step in this work is to introduce continuous integration: the act of building the software and running all the structured tests upon check-in to the source repository. Naturally I was keen to attend a couple of sessions on this very topic.

Unfortunately Chris Read from ThoughtWorks gave a very run-of-the-mill CI talk, covering the concepts and the benefits but never really digging down deep into any of the aspects. Not that the talk was bad, but he simply tried to do too much in the span of a very short time, which meant that he never really got around to talking about anything concrete. He did touch briefly on various client projects he'd been involved with, which gave some interesting insight into the problems we might face, and he mentioned the concept of creating CI pipelines, which jibed well with my idea of how it should work. I'd have liked to hear a lot more about actual practices, do's and don'ts, which would have made the talk immensely more engaging.

I followed up with what seemed to be a nice topic but turned out to be one of the pitfalls of JAOO. Not the presentation itself; I'd judge it to be quite useful … for Java developers. Basically it involved taking the build process a step further than Ant by introducing a scripting language on top of it. Powerful stuff, but sadly it didn't apply to me.

Which brings me to one of the pitfalls of JAOO: it's important to be mindful of the fact that you can come across talks which are heavily based on some specific technology. For a .NET dev it's probably a bad idea to walk in on a specialized Java topic, and vice versa.

Cloud Computing and Insight into Google

Google App Engine Cloud computing is getting a lot of attention at the moment and frankly I fail to see why, so I wanted to find out if I could gain some insight into the world of cloud computing. I actually ended up getting an interesting insight into Google itself, as Gregor Hohpe discussed various in-house technologies they employ at Google to scale to the massive size required to run services at the level Google does.

I was fascinated with BigTable, Google's distributed storage system, which can support tables larger than a terabyte. The Google File System was an interesting piece of kit as well, scaling to lots and lots of petabytes. While Gregor told us about the Google File System he mentioned an internal joke which goes along the lines of, “What do you call 100 terabytes of free disk space?”, “Critically low disk space”. I'm a geek so I find stuff like that funny you know :)

He did demo Google's cloud computing service, App Engine, which basically enables us to write Python code, deploy it to Google's infrastructure, and run it from there, allowing developers to scale apps towards the same size as Google itself.

PowerShell Blows My Mind

image A while back I listened to a Hanselminutes episode in which Scott talks about PowerShell, the ultimate scripting environment from Microsoft. Since then I've wanted to learn more, so I jumped at the chance to see Jeffrey Snover, the creator of PowerShell, present it himself.

Basically his presentation blew my mind. From start to finish it was all PowerShell script flowing over the screen and I struggled to keep up with everything going on.

My interest in PowerShell comes from the fact that we're on the brink of introducing CI in our dev process, as I mentioned, and I figure that PowerShell will come in handy in helping us automate some of the more tricky stuff. Also, it's my firm belief that PowerShell is a technology most .NET devs will start using over the coming years, as it's simply the way to get things done, or even to test out small ideas without cranking up the entire VS IDE.

From the talk my impression is that we are in fact dealing with a very powerful scripting environment; not that I actually doubted that to begin with, but it's nice to get the point hammered home from time to time. The other aspect I came away with is that there's a lot to learn: what we've got is a new syntax to deal with and, even more importantly, a completely new mindset. PowerShell is modeled on UNIX commands, where everything can be piped together to produce interesting results. It's a way of thinking we're not really used to in Windows land, although I feel we can benefit tremendously from it.
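As a small taste of that piping mindset, here's the kind of one-liner that was flowing over the screen. These are all standard cmdlets; the specific filtering thresholds are just made up for illustration.

```powershell
# Find the five most CPU-hungry processes among those holding more
# than 50 MB of memory. Each cmdlet passes live .NET objects,
# not text, down the pipeline.
Get-Process |
    Where-Object { $_.WorkingSet -gt 50MB } |
    Sort-Object CPU -Descending |
    Select-Object -First 5 -Property Name, CPU
```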

Continue to JAOO 2008 Day 2…

posted on Thursday, 02 October 2008 22:06:02 (Romance Daylight Time, UTC+02:00)  #    Comments [0] Trackback
# Saturday, 13 September 2008

ReSharper-Logo I was fortunate enough to attend a special event at Trifork at which Oleg Stepanov, the manager of the JetBrains team creating ReSharper, gave a talk on ReSharper functionality. He basically demoed a bunch of R# features, most of which are pretty well known to the Vertica team and myself, but a couple of nuggets did present themselves, and I figured that if we don't know about them, probably others don't either.

Please note that all keyboard shortcuts mentioned in this post are based on the standard R# Visual Studio keyboard layout.

Smart Code Completion

On the light side I'll start with a feature I knew was in there but never quite got why it was useful. The feature in question is Smart Code Completion, or as I like to think of it, Smart Intellisense. You find the feature in the ReSharper menu under Code > Complete Code > Smart (CTRL + ALT + SPACE). You could say that it puts the "intelli" in intellisense :)

What it does is that when you activate the feature, it suggests methods and properties based on the types in the local scope. So if you're in the process of assigning an int variable from somewhere, it will only suggest members with matching return types, not just matching names as is the case with standard Visual Studio intellisense. Check out the screenshots below: the one on the left is standard Visual Studio intellisense (CTRL + SPACE), the one on the right is R# Smart Code Completion, where the list is greatly reduced.

ReSharper-4x-Smart-Code-Completion-Normal-Intellisense  ReSharper-4x-Smart-Code-Completion 
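To spell out what gets filtered, here's a hypothetical Person class; the names are made up purely for illustration.

```csharp
// A hypothetical class to illustrate the type-based filtering:
public class Person
{
    public int Age { get; set; }                        // int: stays in the list
    public string Name { get; set; }                    // string: filtered out
    public int GetYearOfBirth() { return 2008 - Age; }  // returns int: stays
}

// With the caret after the dot, Smart Code Completion (CTRL + ALT + SPACE)
// offers only Age and GetYearOfBirth(), while plain intellisense would
// also list Name:
//     int age = person.
```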

Complete Statement

Probably the most useful feature that I picked up at the meeting is Complete Statement. Complete Statement is available from the R# menu under Code > Complete Code > Complete Statement (CTRL + SHIFT + ENTER).

It basically tries to complete the current statement that you're writing. So if, for example, you're writing a method signature and you use the feature, it will complete the method signature and move the cursor to the method body, enabling you to write your code in a more fluent manner. It works in a number of situations, so you really want to learn the shortcut and start experimenting with it.

Complete Statement for an if-statement. The first step inserts the missing parenthesis and the curlies. The second step moves the cursor to the body of the if-statement.

ReSharper-4x-Statement-Completion-If-Step1  ReSharper-4x-Statement-Completion-If-Step2 ReSharper-4x-Statement-Completion-If-Step3

Complete Statement for method signature. Inserts the curlies and moves the cursor to the method body.

ReSharper-4x-Statement-Completion-Method-Step1 ReSharper-4x-Statement-Completion-Method-Step2

And for a string variable. Inserts the semicolon and moves the cursor to the next line.

ReSharper-4x-Statement-Completion-string-Step1 ReSharper-4x-Statement-Completion-string-Step2

Generate in Solution Explorer

You probably know about the Generate feature in Visual Studio which enables you to generate properties, constructors, etc. What I didn't know is that it's also available in the Solution Explorer, where it enables you to create a class, interface, struct, or folder. Very handy indeed.

Generate is available from the R# menu Code > Generate (ALT + INS).


Camel Case Navigation

I love the code navigation features of R#. They let me find my way around a code base very easily. I've found this particularly useful in code bases I don't know very well, because I usually have an idea of what another developer might choose to call something, so I just go look for part of that type name. Anyway, a twist on the navigation features is that you can navigate via camel casing: if you have a type named OrderManagementService, you could look for it by typing the entire thing, but with camel casing you simply enter its upper case letters (OMS) and it will find the type for you. Very handy, and my second favorite new feature of R# :)

BTW Navigate to Type is CTRL + T, Navigate to Any Symbol is CTRL + ALT + T, Navigate to File Member is ALT + <, and Navigate to File is CTRL + SHIFT + T. Learn 'em, love 'em.

ReSharper-4x-Navigate-by-CamelCase-Standard ReSharper-4x-Smart-Code-Completion

Coming Features

Oleg also told us a little bit about what we can expect to see in R# 4.5. The main "feature" of the 4.5 release is performance tuning and bringing down the memory footprint. They're looking at speeding up R# by a factor of 2 and bringing the footprint down by 100 MB. Certainly very welcome. They are sneaking in new features though, and one of them is the inclusion of "Find unused code" in Solution Wide Code Analysis.

Download ReSharper 4.1

posted on Saturday, 13 September 2008 15:37:02 (Romance Daylight Time, UTC+02:00)  #    Comments [0] Trackback