Windows 8 Pricing

Based on a blog entry by Brandon LeBlanc, the Windows 8 upgrade (via Windows.com) will probably be available for $39.99. This is quite a smart move by Microsoft, considering that you can get Apple’s Mac OS X Snow Leopard from their store for about $29.00.

If the upgrade really works for everything from Windows XP up to Windows 7, Microsoft might be able to shift many users to the next generation of Windows. Because the price for previous Windows upgrades was quite high, many users kept their former version of Windows until they bought a new personal computer, which came with a new OEM version of Windows.

Nowadays, home computers definitely last longer than the average Windows life cycle, and quite a lot of personal computers out in the wild might see at least two versions of Windows before being replaced.

Therefore, I personally welcome this move by Microsoft and am considering upgrading more than one of my licenses to the new Windows version. Even though this might be a strategic move by Microsoft to keep market share, existing customers definitely gain from the relatively low price, as the upgrade works even from previous Windows versions. This is quite an advantage compared with Mac OS X, where one has to buy each and every version along the way when upgrading from older releases.

Total Cost of Ownership

If you can’t let things go because they cost a lot of money at some point in time, consider the following:

  • What was the value of this item when you purchased it?
  • What would it cost if you had to buy it again today?
  • Would you spend that money on this item today?
  • Do you still use this item, or do you keep it just because it cost money some time ago?
  • What are the costs of keeping it?

Of all these questions, I propose the last is the most important. Keeping things comes with a cost (time for organizing, time for tidying up, …). If you are not using something at all, and probably will not use it anytime in the future, while you spend time carrying it from one place to another, there is no rational reason to keep it. Calculate the costs of doing so, and you will realize that the original investment of purchasing it was only a minor portion of the total cost of ownership.

2011 Spam Statistics

As I recently moved my mail server to a new cloud provider, I used this opportunity to check the mail statistics for 2011. Altogether I had

  • 64,097 mails processed, of which
  • 42,469 were spam mails and
  • 375 included viruses.

Altogether this makes 66.25% spam. However, only 0.58% of the processed mails included viruses, which is quite a surprising fact. Over the last year I encountered only two to five spam mails a day in my inbox, and I have not reported any false positives at all. In contrast, I pick up false positives in my GMX junk mailbox on a weekly basis. For my personal mail server I am using SpamAssassin, with settings I have tuned over two to three years, as well as a set of various DNS blacklists.
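
To give an idea of what such a configuration looks like, here is a minimal sketch of a SpamAssassin local.cf. The options and rule names are real, but the threshold and score values are illustrative assumptions, not the actual settings described above:

# local.cf – a minimal sketch; the values are examples only
required_score   5.0    # score above which a mail is tagged as spam
use_bayes        1      # enable the Bayes classifier
bayes_auto_learn 1      # let obvious ham/spam train the filter automatically
skip_rbl_checks  0      # keep DNS blacklist (RBL) lookups enabled
score URIBL_BLACK 3.0   # example: raise the weight of a URIBL blacklist hit
score RCVD_IN_XBL 2.5   # example: raise the weight of a Spamhaus XBL hit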

Quite a share of the legitimate mails originates from various mailing lists and newsletters, which leaves me with about 60 mails a day to process: a number definitely to be improved (i.e. reduced) in 2012.

Cross-domain Mash-up using Google Feed API

If you want to retrieve cross-domain content via AJAX/JavaScript to build a mash-up client, browsers might restrict these calls for security reasons.

Digging through the resources on the Web, you will figure out that there are various approaches. I decided against any server-side processing of the request, as I did not want to make an extra call to my server. Also, any jQuery-plugin-based approach would not work at the moment due to the recent unavailability of jQuery plugins.

Looking for an alternative approach, I came across the Google Feed API. Basically, it allows you to download any public Atom or RSS feed and consume it in your JavaScript.

Once you have got your API key, which is bound to the domain you want to call the API from, you can start using it immediately. The key is valid for all pages within this domain. Using the API involves adding the script to the head of your HTML, loading the API using Google Loader’s load() call and finally hooking up your code as a callback in the setOnLoadCallback function. The feed is then provided either as JSON or as XML by the Google Feed API and can easily be used within your code without any cross-domain restrictions, as sketched below.
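
Put together, a minimal page might look like the following sketch. The feed URL and the feed element are placeholders, and YOUR_API_KEY stands for your own key; the calls themselves follow the pattern described above:

<html>
  <head>
    <script type="text/javascript" src="https://www.google.com/jsapi?key=YOUR_API_KEY"></script>
    <script type="text/javascript">
      // Load version 1 of the Feed API through the Google Loader.
      google.load("feeds", "1");

      function initialize() {
        // Any public Atom or RSS feed will do here.
        var feed = new google.feeds.Feed("http://example.com/feed.rss");
        feed.load(function(result) {
          if (result.error) return;
          // result.feed.entries holds the parsed feed items (JSON format).
          var container = document.getElementById("feed");
          for (var i = 0; i < result.feed.entries.length; i++) {
            var div = document.createElement("div");
            div.appendChild(document.createTextNode(result.feed.entries[i].title));
            container.appendChild(div);
          }
        });
      }

      // Hook up the callback once the API has been loaded.
      google.setOnLoadCallback(initialize);
    </script>
  </head>
  <body>
    <div id="feed"></div>
  </body>
</html>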

Google Plus Operator

Google has replaced the + (plus) operator in its search. When looking for an exact expression (using the plus operator), Google now tells you that from now on double quotation marks are necessary to find an exact expression.

Google Search Hint Suggesting Double Quotation Marks Instead of the Plus Operator

I am not sure if I like this; however, it looks like there are not many options other than accepting this change. It probably has to do with the G+ notation. To me it feels as bad as product and event names like .net or build, which in combination with the new double quotation mark operator return some 2,490,000,000 results that are not relevant at all.

Search Results for Such a Quoted Term, Showing Some 2,490,000,000 Hits

Restoring NT Backups using Windows 7

After a recent data loss, I had to restore several backups from various sources. Unfortunately, some of these backups were made on a Windows Server 2003 machine, and Windows 7 does not come with any way to restore these backups out of the box.

The Windows NT Backup – Restore Utility seems to be the solution for this issue. During installation, you might get a notification asking you to turn on Removable Storage Management – at least that is the case on Windows Vista.

Enable Removable Storage Management Dialog

However, this is one of the features no longer available in Windows 7. Fortunately, Microsoft did release another version of this tool for Windows 7. Even though the tool itself is called Windows NT Backup Restore Utility for Windows 7 and for Windows Server 2008 R2, you will find it on the Web only as Update for Windows 7 x64-based Systems (KB974674) – of course, exactly what you would have looked for, yes?

Once you have downloaded and installed the right bits, you will see the familiar UI of the former backup tool.

Windows NT Backup Restore Utility for Windows 7 and for Windows Server 2008 R2

Simply follow the Restore Wizard to access your old backups.

The Cleaner Coder

I recently finished the latest book from Robert C. Martin, aka Uncle Bob, called The Clean Coder.
Now that I have finished it, I can see many pros and cons to this book. At the beginning I was quite skeptical, but in the end I am glad I read it to the very last page.

The book is neither a set of rules that will make you a better software developer, nor does it really provide a code of conduct to follow in your professional life. However, if you have spent some time in this industry, you will have many déjà vu moments while reading it.

Very positive (if you are a frequent reader and interested in the people behind the books) is the fact that you will learn a lot about Martin as a person. Each chapter is more or less a short essay about a past project or part of his former work, some experiences he had and the (not so surprising) conclusions he drew. There are quite a few sentences worth remembering, sometimes things you have thought about many times but never found the right words to write down. You might also find some interesting anecdotes to learn from (or did you know where carriage return and line feed ‘\r\n’ come from and why they vary between operating systems?).

One very positive aspect is that he points out what a professional (software developer) is, how he should behave and what he can do to be recognized as such. In our industry you are still perceived as some kind of nerd, a geek who codes 24 hours a day, does not sleep, consumes a lot of caffeine and plays video games, with a lack of social skills. While some of these things might be true, it is often expected that you work more than the regular hours, solve each and every problem without failure and come up with miracles and wonders, performing magic, voodoo and code kung-fu each and every day (often for a very conservative salary). Nothing of the kind would be expected from other professionals (lawyers, doctors, etc.).

Eventually, he writes about many things that I, and probably you too, experience each and every day in our daily work. In the end it is a nice book you might read over a few evenings. Only the very final chapter about the tools he uses in his work (vi, Emacs, Eclipse, etc.) and the frequent mentions of FitNesse (which is Martin’s project) are quite unnecessary.

I am not fully convinced that the book is a must-read; however, I have worked in this industry for nearly 13 years in various projects, in research and product development, in large and small companies, in consulting and academia, with different teams in different countries. In the very end it is quite calming to see once more that my problems are the same everywhere, have been the same for a long time and probably will stay the same for a long time in this industry.

Teamwork in Scrum

One aspect that has attracted my attention since my very first Scrum training is how a Team actually handles teamwork.

In a conventional software development team you will probably find a hierarchically organized group including managers, an architect, engineers and developers. While the architect figures out what to do with respect to the managers’ needs, the engineers define how to do it, and the developers eventually do whatever the hierarchy above them has defined.

In a Scrum Team, the Team is told what to do directly by the ProductOwner in the form of stories. Consequently, how to do it and the act of doing it are completely within the responsibility of the Team. In the conventional approach, each level can blame the level above in case of a failure: the developers blame the engineers for insufficient specifications, the engineers blame the architect for a faulty architecture or misleading tracer bullets, and the architect might blame the managers for strict budgeting, too much pressure and so on. Of course, each manager might blame his manager in turn, and so on.

So, how do we end up with a team effect? I have seen teams where each developer picks a story from the task board, teams where each developer had his or her own area of stories (server, client, user interface, etc.) and teams where the whole team works on one story at a time. Boris Gloger holds the view that the entire Team (of developers) should work on the same story. I am not absolutely sure yet how this works in larger teams (5+ developers) with small stories; however, in theory, if a story is done by the whole team, it succeeds as a team.

Regarding each developer working on a separate story, I recall a passage from Robert C. Martin’s latest book saying that whoever breaks the code owns it. What is the logical implication of this? If a developer breaks code, he might get blamed by other team members. That is a quick death to a team. If the team splits into smaller groups which then blame this developer, this might be an even quicker death to the team. If the blaming happens in front of the ProductOwner, it might fundamentally harm the overall Scrum Team, in particular the Team – ProductOwner relationship. Suspicion, maybe shorter sprint durations and additional tension within the whole Scrum Team might be the result. Even the very best ScrumMaster will have a hard time undoing this. Considering the fact that a really good team might take six to twelve months to form, blaming should be avoided under all circumstances.

So what is the right attitude in this case? If something goes wrong, a story is not fully implemented, code is broken, previous functionality is lost or whatever else goes to the dogs, the whole team should stand up and work together to fix the issue as quickly as possible, without caring who introduced it. Of course the team should perform a root cause analysis of what exactly happened; however, as in retrospectives, the topic should not be what went wrong and whose fault it was. Instead, it should be about how to avoid such an issue in the future. That way it is all about improvement, and that is what Scrum is about in the very end.

Processing a Larger Pair

Yesterday, I had my first poker round in a very long time with two good friends of mine. A couple of years ago we started playing Texas Hold’em – as computer scientists, of course, just because of the maths and statistics.

During yesterday’s game, we had a great hand facing a pocket pair of sevens against a pocket pair of aces. On the flop and turn two more sevens came up, providing me with four of a kind and eventually the pot. Afterwards we had a nice chat about when and how to play a pocket pair, as with three or fewer players at the table one would play a pocket pair to the very end most of the time. However, with yesterday’s hand in mind, I was quite interested in the statistics: the probability that an opponent holds a larger pair than 77.

I thought this would make a nice exercise for today: visualizing it using Processing. As for any visualization, I needed some data, so I picked the corresponding table of probabilities from the Poker probability page on Wikipedia.

The visualization itself is straightforward: drawing the probabilities, the axes and finally the labels. I decided to draw the axes after the probability curves simply to keep them on top of any other element on the canvas.

void draw()
{
    // Draw one probability curve per data column (column 0 is skipped).
    for (int col = 1; col < colCount; col++) {
      drawProbability(col);
    }
    // Draw the axes and labels last so they stay on top of the curves.
    drawAxis();
    drawLabels();
}

Finally, the result looks like the following. Indeed, you can see that in a game of three people there is only a 12% chance that someone gets a larger pair, even if you hold a pocket pair of twos.

Processing chart for Poker Probabilities holding a Pair

Of course, you could create such a chart using Microsoft Excel, and there is no rocket science in this visualization. However, it was quite a nice exercise to reactivate my Processing skills. Labels are positioned relative to the size of the canvas and the length of the text, and the color for each number of opponents is chosen dynamically, as sketched below. The whole example is available at http://aheil.codeplex.com.
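
As a rough illustration of that idea, the positioning might look like the following sketch. This is not the actual code from the repository; drawLabel, the margin and the color endpoints are made up for illustration:

void drawLabel(String label, int opponents, float y) {
  // Right-align the label relative to the canvas width and the text length.
  float x = width - textWidth(label) - 10;
  // Derive the label color from the number of opponents,
  // fading from blue to red (assuming at most nine opponents).
  fill(lerpColor(color(0, 0, 255), color(255, 0, 0), opponents / 9.0f));
  text(label, x, y);
}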

I See Clouds of White

For several years, I have run my own local server as well as a root server hosted online. I ran all kinds of services: some I used on a regular basis, some I used from time to time and others I just set up to learn and experiment with. However, most of these were set up at a time when there were not many choices if you wanted to host something online. So I ran my local repository, Microsoft Team Foundation Server, my own mail server, my FTP and Web server and many other services.

As maintaining all these services became almost a full-time job, I finally decided to move everything into the Cloud – or at least somewhere online. In some ways this is an experiment, as I am trying to run everything I need (or let’s say, want) somewhere online while staying within the budget of my Web and local servers.

For the local server I calculate $42 a month for maintenance and electricity, while the monthly rent for the Web server is $70. Altogether I face average fixed costs of nearly $1,350 a year (12 × ($42 + $70) = $1,344), not including software licenses and the time invested to maintain and update the servers.

Step by step, I am now moving my services to various online offerings (free and paid). First of all, I moved my blog to wordpress.com. That was a rather easy decision, as I had already switched to the WordPress software on my own server several months ago. Exporting and importing the content was therefore quite an easy job. Finally, I picked domain mapping for http://www.hack-the-planet.net, which is about $12 a year.

To keep track of stuff to do, I have been using Remember The Milk for quite a while now. $25 a year is not that cheap for a simple list of todos; however, I get the Web application, a fine app for iPhone and iPad, as well as Gmail and Google Calendar gadgets, synced all over the place.

A critical step, however, is my source code repositories. I have maintained all the code I have ever written in CVS and Subversion for ages. Without your own server it is not that easy to grant rights on repositories to friends and colleagues you work with. Here, I decided to move to two different platforms. First of all, I started a new project called aheil code (to keep the corporate identity in sync with aheil blog) at CodePlex. That is the place where I plan to share everything under the Ms-PL license. Closed source, however, I am going to store with Assembla. They provide a limited but cost-free plan for private repositories, which should be sufficient for my needs.

Instead of using my own FTP server to exchange files between machines (and people), Dropbox appeared to be a great solution. I joined Dropbox in a very early beta state and I am still very happy. (If you don’t have a Dropbox account yet, follow http://db.tt/kNZcbyI, which gives you and me 250MB of extra free space.) I use about 4GB of space at the moment for free. However, once you need more, there is always the possibility of switching to a paid account. The client is available for almost every platform, and I use it for various scenarios across many of them, including Web, Windows, Mac and iOS. Before that I used Microsoft Live Mesh; however, canceling the beta, changing the name, running two Microsoft services (Mesh and SkyDrive) at the same time that could not be combined, and finally changing the software drove me to Dropbox in the end.

In terms of productivity tools, I completely switched to Google Calendar, as it syncs nicely with iPhone and iPad and even iCal on my Mac. I used (and really liked) Outlook for many years, but the lack of syncing with third-party online services seems to be an epic fail in 2011. I can tell you that you won’t notice this fact within Microsoft (living in a happy Exchange- and SharePoint-equipped world), but out there in the wild World Wide Web, connectivity is all that counts.

Also, I joined Evernote to sync, copy and share notes and documents. Again, client software is available for all major platforms, including iOS, Windows and Mac. I am still trying to figure out how to use Evernote on a daily basis, but at the moment the maintenance costs (manual sorting, organizing, etc.) outweigh the benefit.

So far, I have not been able to cover all the services I need. For example, I am still looking for a good (and secure) online backup solution, a way to host my IMAP and Web servers, as well as a local storage solution. At least the last point seems almost solved by my new router, which allows you to use an external HDD as a network drive. With my previous setup, I was able to connect to my local network from anywhere using OpenVPN in a very convenient way. Here too, I am looking for an alternative where maybe the router might take care of it.

So far, the experiment of moving everything to the “cloud” has been quite a success. I have been able to migrate quite a lot of my services and have spent only 3% of my available budget on services so far.