Revolutionary becomes Evolutionary

Recently, I have been discussing new mobile devices a lot with friends and colleagues. After using Windows Mobile for years, I switched to Apple's iPhone 3GS three years ago. Before that, I talked quite a lot with a friend who had just bought one at the time.

Before that, I had used an iPod for listening to music and Windows Mobile devices such as the HTC Hermes or the HTC Touch Pro for quite a few years. Over time, I got annoyed by always carrying two devices, two power plugs and two connector cables, and by managing at least two different applications to sync both devices. Eventually, my decision to buy an iPhone was driven by quite rational considerations.

I was quite pleased with the hardware and never worried about the processor, RAM or other components of the device. The only drawback for me as a developer was the fact that you cannot simply deploy your home-brewed applications to the device.

I skipped two generations of the iPhone and finally started thinking about a new device. What should it be? Meanwhile, I am quite out of touch with developing for Windows Mobile. Also, the hardware fragmentation for Windows devices is considerable, and the situation is similar with Android-based devices: which one is the reference hardware to buy? While the idea of developing for the Android platform is tempting, there are more facts to consider.

After three years, I have to admit, the ecosystem lock-in is quite a reason. An iPad (first generation, but with 3G), a MacBook bought a few months too early to get a Retina display, and quite a lot of peripherals to use with my devices are good reasons to stay. Nevertheless, with the new Lightning connector many peripheral devices became obsolete.

Much stronger than the hardware lock-in is definitely the data lock-in. Dozens of apps with your data, synced address books and calendars, and lifelogging and quantified-self data collected over the years are good reasons to stay with the current platform.

With the release of the new iPhone there is a lot of making fun of the new iPhone going on; however, considering the facts above, you see there are simple reasons to stay with a platform. This is definitely a goal of every manufacturer, and Apple plays this game very well.

Looking at the new hardware, iOS 6 as well as the new Mac OS, there is no rocket science; there is no Star Trek communicator and no universal translator coming with the new iPhone. There is no revolution, simply a technological evolution of a system designed long ago, a system that has grown over five generations.

Personally, I think a steady evolution of technology is worth quite a lot. I don't want to migrate all my data, I don't want to worry about which hardware to buy, and I don't want to learn new user interfaces and usability concepts for now. I want a device that is part of my daily (business) life, easy to use, sitting in my pocket and available when needed. With the current evolution of the iPhone this should be possible for the next one or two hardware generations.

That said, it will be a 64GB iPhone in white for me, while it will be a Nokia Lumia 900 or a Google Galaxy Nexus for others, for the same or similar reasons mentioned above.

iPhone 5

The Role of an Architect in Scrum

Over the last few days I have (re-)thought a lot about the role of an architect in a Scrum team. I tried to avoid reading other opinions and just reflected on my own experience in agile teams over the last few years.

There should be an architect in your organization; however, he should not be part of the Scrum team itself. Scrum is about the development process in your organization or project, and as such it is a given constraint under which the architecture has to develop.

The architect deals with non-technical issues, so-called facts of life: politics, organizational constraints, budget restrictions and so on. No team member could address these in the regular development sprints.

As an architect one should analyze and manage risks – again, nothing a team member could do during a regular two- or four-week sprint. Considering risks probably begins long before the team is assembled and the project is started. In fact, as an architect one could even point out that the project might not be doable for various reasons.

An architect has to work in iterative cycles as well, since constraints change, customers come up with new and modified requirements, and organizational goals might shift over time. All this might require some redesign or evolution of the architecture. However, these architectural cycles are not bound to the development cycles of the team. They are more related to the business, the customers and the organization.

The architect is a technical leader. Using prototypes or tracer bullets, he shows how hard parts of the system can or will be addressed. As such he provides a basis for the team to better estimate the complexity of the upcoming implementation. Eventually, as an architect it is important to address the hardest issues first, while the Scrum team addresses the tasks with the highest business value first.

As an architect you teach and coach your team. In his article Who Needs an Architect?, published in IEEE Software, Martin Fowler points out that a good architect mentors his team instead of sitting in an ivory tower.

Dealing with uncertainty is probably one of the most underestimated aspects of an architect's day job. In the role of an architect, I have to make decisions under a high degree of uncertainty. As many aspects of a project change over time, early decisions are uncertain. However, as an architect, you have to consider the consequences of these decisions. It is a major part of your job to consider risks and work on plans for any event that affects your project, whether as a threat or as an opportunity.

In contrast, in the role of a developer, you should not be confronted with uncertainty. You have a tight schedule and well-defined tasks to fulfill, technology and tools are in place, and in the best case you have all the knowledge to perform your task. If the requirements for your task are not clear, you probably cannot fulfill it.

I have experienced this very issue several times during the last few years. Most of the time it was caused by the lack of an architect, or by someone holding the position of architect without filling the role properly – usually by spending their time writing code themselves.

So, do architects write code? This might be one of the most discussed topics in software engineering. Personally, I have experienced both: architects writing (production) code all the time as well as architects who never worked in product development. As an architect you should probably have certain coding skills; earning your street cred before becoming an architect is inevitable. You must be able to read, understand and improve the code. However, you have to delegate the actual task of building the code base to others. Nevertheless, you probably have to improve your coding skills permanently. Therefore, an architect must be able to write excellent code, but he should not write the bits that are delivered, apart from prototypes and tracer bullets.

In fact, running an agile environment does not make an architect superfluous. Neither does it render the overall product planning and design process redundant. The architect is not part of the Scrum team, as he does not deliver within the Scrum team. His position exists before the Scrum team is assembled; he leads the direction and can hold a consulting position for the team. However, the architect is not part of the sprint plan and therefore not part of the Scrum team as such.

Future Software Architectures

Looking at myself, I see how differently I work with devices nowadays compared to almost 30 years ago. In the early days of personal computers you spent a lot of time figuring out what you could actually do with your Commodore C64 or your very first 286 hardware, while knowing each component's specification. Nowadays it is simply about the available software. Most users probably do not even know any technical details of the device they are using, beyond whether it is slow or fast.

If you look at professionals who use computers, they often use one specific application, which is perhaps shut down only once at the end of the week. Home users probably don't know that there are more applications on the computer than the web browser.

As computer professionals we tend to forget to think about why others use computers. We see the full potential of the latest programming language, the computing power, the maximum available bandwidth and all the fancy features we know about.

Tablets such as the iPad or the new Nexus are great for end users: quite intuitive to use, and no need to worry about the hardware. Whatever users want to do, they simply have to find the right app. In fact, I use my iPad for many common tasks; even for writing, blogging and editing images the apps are quite well done by now.

Specialized applications used by various professionals do not need a fully equipped personal computer. Ever looked at a doctor's office? In every surgery you might find a personal computer, often running just one program. Or have a look at a common electronics or furniture megastore: each information desk will probably have one personal computer running one program. Typically, these are host applications where the client continually requests information from a server application.

There is no reason to put a fully equipped computer in every room for a single application. A thin client or a lightweight tablet might be the answer here. Either a web-hosted application or a small application communicating with a server (e.g. in the cloud) might be a good solution.

Cloud Hosted App

As professional software architects and designers we should consider this while designing applications, even if stakeholders still request old-fashioned desktop applications.


The XBOX Blackout

After dealing with a RROD (Red Ring of Death) a couple of years ago, my replacement Xbox also went to the dogs a few weeks ago. No replacement this time; actually, it seemed some solder joint was broken.

With a new box, this time an Xbox 250GB Slim, I was confronted with the data migration issue. Luckily, the new Xbox already has the drivers for the migration kit built in, which means you can connect your old HDD directly to the new box. Therefore I made use of my old Data Migration Kit.

XBOX 360 Data Migration Kit

To make the process of upgrading easier, first set up your new box with a temporary account. Connect the old HDD to the Data Migration Kit and plug the kit into the new Xbox. The console will recognize the kit and ask whether to copy from or to the new console.

Copy Content to XBOX

Afterwards you can select what to copy. The only drawback is that you cannot copy games already installed from disc; these you have to reinstall at a later point in time.


Once started, this process might take quite a while. I haven't found anything on the Xbox site about data migration to a 250GB disc or the new boxes. However, luckily the guy in the following video pointed out how it works and that the software/drivers are part of the new Xbox. Ten minutes worth watching.

Once accomplished, it might be necessary to transfer the rights for digital content to your new Xbox by following the steps on http://www.xbx.com/drm.

O'Reilly Books at Your Fingertips

O'Reilly's camel book was one of the programming books I bought quite some years ago. Since then I have been a big fan of O'Reilly books. Eventually, O'Reilly started to provide books in various digital formats. As the owner of various e-book readers, I was quite pleased when O'Reilly started to offer their books for download. Purchasing books not only from O'Reilly but from a whole bunch of publishers, downloading, updating and copying the books from all these websites almost became a day job over time.

Even more, I was pleased by O'Reilly recently offering a beta service to synchronize purchased books to your Dropbox account. In your Personal Info area, you'll find the Dropbox settings. Once you have authorized the app and selected the file formats to sync, you can start syncing your books.

O'Reilly Dropbox Settings

While newly bought books will be synced automatically, previous purchases need to be synchronized manually. For this, you'll find a Sync to Dropbox button in the Your Products area where you can select which previously purchased e-books to download.

Sync to Dropbox

After Dropbox has finished, you have all your selected books as well as future purchases in your local Dropbox\Apps\O'Reilly Media folder. No worries if you delete one of these files; you can initiate the synchronization again as described above.

O'Reilly Media Folder in Dropbox

Not only are your e-books synced to your computer; once available in Dropbox, the files are also available on all supported devices. Eventually, this means you can easily access your books on iPad, iPhone or Android devices. As Dropbox even supports the Kindle Fire, this might be a good reason to pick up this device. Based on rumors, it might be available in early September. Until then, the Kindle remains the last device to which I have to copy my books manually. However, due to the fact that they are synced to a dedicated folder, it is easy to pick them up.

O'Reilly Media on iOS

Actually, I am so pleased with this kind of integration that I have asked Manning (another publisher I own a lot of e-books from) about a similar feature. Eventually, it was confirmed that such a feature is currently being developed.

If you have no Dropbox account yet, you can support this blog by following this referrer link when signing up for a free account.

WordPress Internal Server Error 500 for Uploaded Images

After upgrading WordPress on my Windows Server 2008 to version 3.4.1, I encountered quite strange behavior. Using the Add New Post functionality, images uploaded via the multi-file uploader were no longer displayed, neither in the editor nor in the post itself.

Add New Post Editor

Once you have finished your article, your blog will end up with an Internal Server Error 500 for the uploaded image. However, all thumbnails created by WordPress can be requested without any problems.

500 - Internal Server Error Message

Eventually, I started to do some research on this issue, ending in quite exhaustive digging down to the metal of Windows, however with a quite surprising outcome. In this article I'll try to give an overview of this issue, explaining why it occurs on Windows and how to solve it with almost a single click.

There are quite a lot of blog entries and Stack Overflow answers about this topic with more or less useful steps. If you are just looking for the quick answer, without needing to understand what causes this particular problem, here it comes:

Change the permissions of the system's default temp folder (C:\Windows\Temp) by granting rights to the IIS_IUSRS group and you are probably done.

Temp Properties
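If you prefer the command line over the Properties dialog shown above, the same change can be made with icacls. This is only a minimal sketch, assuming the default temp location and the built-in IIS_IUSRS group:

rem grant the IIS_IUSRS group modify rights on the default temp folder,
rem inherited by new files (OI) and subfolders (CI)
icacls "C:\Windows\Temp" /grant "IIS_IUSRS:(OI)(CI)M"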

The WordPress image upload uses standard PHP functionality, which relies on the temporary upload folder specified in your php.ini file. By default, PHP uses the system's temporary directory (e.g. C:\Windows\Temp) for the initially uploaded image.

php.ini File Uploads Settings

Eventually, using the system's temp folder is the root cause of the issue described in this article. When the image is uploaded to the temp folder, the file is initially created there and consequently inherits that folder's security settings. After uploading, the original file is copied into the destination folder, e.g. \wp-content\uploads\2012\08, where all thumbnails are generated from it. As the thumbnails are created in the destination folder, they inherit the security settings from this folder, resulting in two different sets of permissions applied to the original file and the thumbnails. This explains why you will only receive an error code 500 for the original file, while all the thumbnails can be requested without any problems.

In case you already have images in the destination folder causing an error code 500, you can reapply the permissions of the wp-content folder to its children, which will probably fix the problem.

In case changing the permissions of the system's temp folder does not fix the issue, check your php.ini file to see whether another upload folder is specified in the file uploads section. Bear in mind that PHP will fall back to the system's temp folder for uploads if it has no access rights to the folder specified in the php.ini file.

If you consider granting IIS_IUSRS access to the system's default temp folder a security risk, you might want to specify an alternate upload folder anyway.
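A minimal sketch of such a php.ini change follows; the folder D:\PHP\uploads is just an example of my own choosing and has to be created and granted write access for IIS_IUSRS first:

; File Uploads section in php.ini
file_uploads = On
; use a dedicated folder instead of the system's temp directory
upload_tmp_dir = "D:\PHP\uploads"

After changing the setting, restart IIS or recycle the application pool so PHP picks up the new configuration.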

Restore Desktop Layout on Windows

Works on my machine!

I continually move between different offices, using different monitor setups with my laptop. The sizes, numbers and arrangements of the monitors vary from place to place. As a consequence, you either deal with a complete mess on your desktop or you spend several hours per week rearranging icons on your desktop.

Tired of doing so, I was looking for a nice, easy-to-use tool for Windows. Desktop Restore by Jamie O'Connell is such a tool, available for Windows x86 and x64 systems, and it is free to use (while the author appreciates donations). For me, it works fine on a Windows 7 64-bit machine.

It integrates well with the Windows Explorer context menu, where you can save and restore layouts for different resolutions. This even allows you to set up your desktop for different locations and restore the layouts with a single mouse click.

Desktop Restore

I have used it for ages; however, I did not really realize how great this tool is until I set up my machine from scratch recently.

Understanding the Average Performance Counter in .NET

For the current project I am working on, I recently had to implement an easy way of adding and using performance counters in .NET. While working on the code base, I implemented various counters as examples of how to use the new infrastructure and how to implement counters in the code base.

While investigating performance counters, I've seen quite a few posts and articles describing the usage of the AverageTimer32 and AverageTimer64 counter types. However, all the examples there seemed to be wrong. One of these examples was a question I answered on stackoverflow.com, which led to this post.

Basically, all the examples I have seen propose throwing a set of measurements into the counter, expecting that it provides the average of these measurements. The AverageTimer32/64, however, does not calculate the average of all measurements you perform. Instead it provides the ratio of your measurements to the number of operations you provide.

To understand how the AverageTimer32/64 works, it might be helpful to understand the formula behind it. This also answers why one needs an AverageBase to use an AverageTimer32/64.

The formula the AverageTimer32/64 is based on is as follows:

((N1 - N0) / F) / (B1 - B0)

given
N1 = current reading at time t (provided to the AverageTimer32/64)
N0 = previous reading at time t – 1 (provided to the AverageTimer32/64)
B1 = current base counter at time t (provided to the AverageBase)
B0 = previous base counter at time t – 1 (provided to the AverageBase)
F = factor to convert ticks into seconds

In a nutshell, the formula takes the current reading in ticks and subtracts the previous one. The result, divided by the factor F, gives you the time your operation ran since the last measurement taken at t – 1. Usually, this factor should be 10,000,000 ticks per second.

Now you divide this by the difference between the current base counter and the previous one, which usually is one. As a result you get the average time of your operation for a single measurement.

Using the AverageBase you can now step over several measurement points. Think of a case where you can set the counter only after every tenth operation you perform. Since your last measurement you would increment the AverageTimer32/64 by the time measured for all ten operations, while incrementing the AverageBase by ten. Eventually, you will receive the average time for one operation (even though you have measured across all ten operation calls).
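For completeness, here is a minimal sketch of how such a pair of counters could be registered. The category name, the counter names and the class name are of my own choosing; the important detail is that the AverageBase entry immediately follows its AverageTimer32 entry in the collection:

using System.Diagnostics;

public class AverageTimeMeasurement
{
    private readonly PerformanceCounter _avgTimeCounter;
    private readonly PerformanceCounter _avgTimeCounterBase;
    private readonly Stopwatch _stopwatch = new Stopwatch();

    public AverageTimeMeasurement()
    {
        // register an AverageTimer32 together with its AverageBase counter;
        // the base entry must immediately follow its timer entry
        if (!PerformanceCounterCategory.Exists("MyAppCounters"))
        {
            var counters = new CounterCreationDataCollection
            {
                new CounterCreationData("AvgOperationTime",
                    "Average time per operation", PerformanceCounterType.AverageTimer32),
                new CounterCreationData("AvgOperationTimeBase",
                    "Base for AvgOperationTime", PerformanceCounterType.AverageBase)
            };

            PerformanceCounterCategory.Create("MyAppCounters", "Counters of my application",
                PerformanceCounterCategoryType.SingleInstance, counters);
        }

        // writable instances as used by the Compute() example below
        _avgTimeCounter = new PerformanceCounter("MyAppCounters", "AvgOperationTime", false);
        _avgTimeCounterBase = new PerformanceCounter("MyAppCounters", "AvgOperationTimeBase", false);
    }
}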

In most examples, a set of time spans is provided to this counter to calculate an average value. Let this be a series of numbers like 10, 9, 8, 7, 6, while increasing the AverageBase by 1 every time one of these figures is provided.

For the second measurement you will receive the following result:

(9 – 10) / F / (1 – 0) = -1 / F / 1

With F being 1 for simplicity, you will get -1 as the result. Given measurements that provide similar results most of the time, for a large number of experiments you will end up with an average value near zero.

Based on the previous example, the correct values to submit, however, should be 10, 19, 27, 34, 40 – the cumulative readings. The same example now shows a different result:

(19 – 10) / F / (1 – 0) = 9 / F / 1

With F being 1 again, you will have an average time of 9 for your second measurement. As you can see from the formula, every value measured needs to be greater than the previous one to avoid the effect shown previously.

You might use a global Stopwatch to achieve this goal. Instead of starting a new one, use Start() – not Restart() – for each measurement. As seen above, the counter will calculate the time difference internally. That way you will get correct measurements.

public void Compute()
{
    _stopwatch.Start(); // do not restart the Stopwatch used for average counters

    // code to be measured
    // ...

    _stopwatch.Stop();

    // report the cumulative reading and increment the base by one operation
    _avgTimeCounter.IncrementBy(_stopwatch.ElapsedMilliseconds);
    _avgTimeCounterBase.Increment();
}


Even if it is called AverageTimer32/64, this type of counter is not strictly restricted to time. You can think of using this counter for a variety of measurements, for example 404 responses in relation to the total number of HTTP requests, disk transfer ratios and so on.

Clean Interface Inheritance

Recently, I had an interesting conversation about interface inheritance with one of my colleagues. The reason was a decision on how to implement basic behavior based on a set of interfaces for a number of classes. At first, I was not comfortable letting one interface inherit from another; I was quite biased by the design of the code base I am currently working on.

Generally speaking, in that code base each class inherits from its very own interface. In addition, the inheritance scheme of the classes is mirrored in the interface inheritance, as seen in the example below. Why would one come up with such a design at all?

Complex Interface Inheritance

To understand this design (it is still a design), some more context is required. In this particular code base a dependency injection container is used to resolve instances of a particular type. To do so, a unique identifier is required; this could be a string but also an interface. For example, the Managed Extensibility Framework (MEF) makes use of interfaces for resolving. Using MEF it is quite easy to get a set of components implementing a particular interface (e.g. some kind of IPlugin interface).

The issue I've seen is that there are only a few common interfaces in the code base. Instead of collecting all types implementing a particular interface, dedicated interfaces are used to resolve particular instances of types. However, by using interfaces this way, the focus is on identifying types by their interfaces, not on using interfaces as contracts.

Back to the issue of how to design the interface inheritance: we came up with two alternative approaches, both valid indeed, but with very different design goals.

Implementing or Inheriting interfaces

In the left-hand approach two types inherit from the same base type, which implements a particular interface. Both types also implement another interface, i.e. both types fulfill the same contract. In the right-hand approach we see both types inheriting from the same interfaces as well. While in the end both approaches produce the same or a similar result, there is a significant difference in the semantics.

Based on both approaches, we came up with two possible solutions for a .NET implementation. While this might seem quite academic to you, there is quite a difference in how one might use these types.

Inheriting Interfaces vs Implementing Interfaces

As in the example before, both approaches end up with two classes implemented identically; however, the two implementations show semantic differences that are best seen when considering the usage of these classes. In the left-hand example one could iterate through a typed list of IAlgorithm, calling the Dispose method required by the IDisposable interface. This implementation is obviously contravariant. Following the right-hand scheme, you might still iterate through the list; however, before accessing the Dispose method it is necessary to cast the concrete instance to IDisposable. While still being contravariant, it is not implicitly possible.
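To make the difference concrete, here is a minimal sketch of both schemes in C#. The IAlgorithm name follows the diagrams above, while SortAlgorithm, SearchAlgorithm and the two namespaces are names of my own choosing:

using System;
using System.Collections.Generic;

namespace InheritingInterfaces
{
    // Left-hand scheme: the interface inherits from IDisposable,
    // so every algorithm is an IDisposable by contract.
    public interface IAlgorithm : IDisposable
    {
        void Run();
    }

    public class SortAlgorithm : IAlgorithm
    {
        public void Run() { /* ... */ }
        public void Dispose() { /* release resources */ }
    }

    public static class Usage
    {
        public static void DisposeAll(IList<IAlgorithm> algorithms)
        {
            // Dispose is available directly on the interface
            foreach (IAlgorithm algorithm in algorithms)
                algorithm.Dispose();
        }
    }
}

namespace ImplementingInterfaces
{
    // Right-hand scheme: only some algorithms fulfill the IDisposable contract.
    public interface IAlgorithm
    {
        void Run();
    }

    public class SearchAlgorithm : IAlgorithm, IDisposable
    {
        public void Run() { /* ... */ }
        public void Dispose() { /* release resources */ }
    }

    public static class Usage
    {
        public static void DisposeAll(IList<IAlgorithm> algorithms)
        {
            // an explicit cast is required before Dispose can be called
            foreach (IAlgorithm algorithm in algorithms)
            {
                var disposable = algorithm as IDisposable;
                if (disposable != null)
                    disposable.Dispose();
            }
        }
    }
}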

The question now is to decide when to use the first or the second approach. Letting interfaces inherit from other interfaces is absolutely valid once you can answer the question "is your type some kind of ...?" with yes. In the given example each algorithm is an IDisposable – no exception, no excuse. Choosing the second approach, you should be able to answer the question of whether your type needs to fulfill a particular contract with yes. If only a few algorithms need to fulfill the contract given by the IDisposable interface, and an algorithm is not an IDisposable by default, the second approach might be the right one to choose. While each algorithm is still an IAlgorithm, only some of them would implement the IDisposable interface.

Maybe this seems obvious to you; however, I still see quite experienced developers having significant problems choosing appropriate inheritance structures – from avoiding inheritance at all to using the most complex inheritance structures you might ever have seen in your programming life. There is no right or wrong, but there might be a best solution suitable to your problem. So never hesitate to question your current design and look for a better approach.

Windows Metafile Preview on Windows 7

Works on my machine!

The visual preview of files in Windows Explorer is one of the great features of Windows when looking for a certain file. Unfortunately, with Windows Vista Microsoft disabled the preview for Windows Metafile Format (.WMF) and Enhanced Metafile Format (.EMF) files. As I needed to work a lot with EMF files during my latest book project with Springer, I was looking for a way to enable the preview of the file types mentioned above in Windows Explorer.

Windows 7 without EMF/WMF preview

Fortunately, there is a great plugin called emfplugin, written by Daniel Gehringer, that enables the preview. The plugin is available for x86 and x64 machines and should work on both Windows Vista and Windows 7. Once installed (and the machine rebooted), Windows Explorer is capable of displaying EMF and WMF files.

Windows Explorer EMF and WMF thumbnail preview plug-in

The plugin is licensed under the MIT license, so it is safe to go with it. In the end, this raises the question of why Microsoft actually disabled the preview for two formats developed by Microsoft itself, and whether they might work again with Windows 8.