Friday, August 31, 2007

Windows Genuine Software Crap®

Damn, do I have a low tolerance for Microsoft® marketing tricks nowadays. They sell you a notebook with pre-installed Vista Home Premium®, which is the crappiest® OS I've seen since Windows 95® pre-SE, and tell you that, of course, you can buy it clean, but it's not gonna affect the price. They evidently hand exactly that Membership Provider part, the one you're most interested in, to the stupidest® intern on the team, who forgets to un-comment a property, so passing True or False to the constructor doesn't make any difference. Of course, of course, they cut costs by cutting QAs, while project BAs have never been a Microsoft® top priority.

And finally, the ingenious piece of corporate brainwork - the Windows Genuine Software® checker. Great - it didn't work for my perfectly LEGITIMATE Windows Server 2003®. I couldn't download an important patch, because the checker kept failing and complaining that the system is not supported. I've seen this thing fly on pirated® systems (not mine - I've just cast a glance at the computers of some naughty, naughty people, who have long since been cured and own legitimate copies now), so why can't it reward me® - such a genuine LEGITIMATE user?!

OK, you don't like me - I don't like you: there is a way to avoid this nastiness (when you're rightly frustrated over your LEGITIMATE software), and here is the community's answer to corporate greed® and stupidity®: Greasemonkey + a script.

I am confident that information provided in this post will not be used for anything but Cultural Learnings of Legitimate Software for Make Benefit Glorious Nation of Microsoft®.

Saturday, August 25, 2007

Configurable deployment of deployable configuration

Doing automated builds, you will inevitably encounter the problem of deploying an application to different environments. You can only avoid it if you live in the happy world of a single box, but then you most likely do not bother with continuous integration in the first place. The worst of deployment configuration is unleashed during the "death race", when all those nasty wrong-configuration bugs are discovered during the client presentation.

If the application is built the smart way, all environment-specific details are encapsulated. In the case of an ASP.NET application it is most likely the web.config file. No doubt the solution is to hand the deployment task to an automated script. Technical details aside, the trick is to have some kind of configuration template in one hand and environment-dependent variables in the other, and melt them together when the time comes - automatically rather than manually. What are our choices to settle the configuration mess once and live happily ever after?

The easiest way seems to be keeping multiple instances of the configuration file, so the relevant one is pulled to the deployment environment. There is a huge disadvantage, though, as you become a victim of the main copy-paste curse: the synchronization problem. It will quickly get out of hand if you have more than one project and more than two developers to worry about. Changes have to be tracked and reproduced scrupulously - and that defeats the whole idea of laziness.

Another, slightly exotic but viable, method is to create a single config file with placeholders inside:

<add name="MainConnectionString" connectionString="[[MainConnectionString]]" providerName="System.Data.SqlClient" />

Then the deployment relies heavily on the build script, which holds all the necessary variables for the different environments. The disadvantage is that the raw web.config file is unusable: you cannot run the application without deploying it properly (even to the local environment) or tweaking the project build events to run it from Visual Studio. On the bright side, the configuration lies in the hands of the Jedi build master, and the real production settings are hidden from the rest of the team (attention, Sarbanes-Oxley-compliant companies!).
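At deployment time the build script melts the template and the variables together; for example, a NAnt copy task with a token filter could look like this (a sketch only: the file names and property names are my assumptions, not a recipe from the post):

```xml
<!-- expand [[...]] placeholders while copying the template into place -->
<copy file="web.template.config" tofile="${deploy.dir}\web.config" overwrite="true">
  <filterchain>
    <replacetokens begintoken="[[" endtoken="]]">
      <token key="MainConnectionString" value="${main.connection.string}" />
    </replacetokens>
  </filterchain>
</copy>
```

The per-environment values (here ${main.connection.string}) live in properties files loaded by the same script, one per target environment.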

The third choice is to create a custom configuration section controlled by a single key, changeable by the build script. It may be a full-scale class or something more lightweight. The first approach will give you all the flexibility you may need, but would require some kind of common library if you have multiple projects. The second approach would require developers to learn a new way of retrieving configuration values.
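The lightweight end of the spectrum can be as simple as one switch key plus environment-prefixed settings (a sketch: all key names and values here are made up):

```xml
<appSettings>
  <!-- the only value the build script has to flip -->
  <add key="Environment" value="dev" />
  <!-- every environment keeps its own prefixed copy of each setting -->
  <add key="dev.MainConnectionString" value="Server=DEVBOX;Database=App;Integrated Security=SSPI" />
  <add key="production.MainConnectionString" value="Server=PROD01;Database=App;Integrated Security=SSPI" />
</appSettings>
```

A small helper (or the custom section) then reads the "Environment" key first and prefixes every subsequent lookup with its value.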

Whichever way we choose, we should keep in mind: it is the laziest way that will be favored by your fellow developers. If they can access the configuration through the ConfigurationManager class (which is 100% customary) and create the relevant configuration sections in a way only slightly different from what they are used to (let's say 80% customary) - that will be the preferred combination. After all, developers spend more time using the configuration than creating or changing it.

I would love to hear about the other ways to automate configuration deployment.

UPDATED: Another approach is to have the most environment-dependent sections (e.g. connectionStrings and appSettings) "outsourced" to satellite files grouped by environment, using the "file" attribute:

<appSettings file="config\production\connectionStrings.config" />

The build script runs over the web.config file and, depending on a parameter, simply replaces the middle part of the path - from "production" to "bat" or "dev". It is a relatively small amount of automatic changes, and a deployment error is visible right away. This is the approach we are using now, and it seems to work well. At least we can be sure that once we have perfected the web.config, it is unlikely that somebody will mess it up with environment changes, which are encapsulated and independent from each other. The production settings can be SOX-friendly: isolated and hidden. The downside - the good ol' synchronization problem.
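Each environment folder then carries its own satellite file of the same shape (a sketch: the server name is made up; note that since the file is referenced from the appSettings "file" attribute, its root element must be appSettings):

```xml
<!-- config\production\connectionStrings.config -->
<appSettings>
  <add key="MainConnectionString"
       value="Server=PROD01;Database=App;Integrated Security=SSPI" />
</appSettings>
```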

Death Race

The plot of the future Jason Statham movie bears the most similarity to the project stage which, I recently realized, is an unmistakable benefit of waterfall development. It occurs at the very end of the project and consists of spasmodic attempts to do last-minute (and often the only) QA, bug fixing, deployment and redeployment. Unlike the "Death March", when the bleeding troops are more or less steadily approaching a distant milestone, the Race is packed within a very tight timeframe, usually hours, when the client is arriving shortly or the CEO is about to drop by between golf games. Nobody can stay sane even if they want to. It is too late to do the right things, and it is time for hacking, shortcutting, patching and failure acceptance. It is even too late for fresh cannon fodder, and asking "Didn't I tell you so?" will bring deserved wrath upon your head.

The worst residue of the Death Race (and the Death March) is the wrong idea that it actually worked, if the project was small enough to be wrestled into place with some degree of success. Good practices and patterns used during the whole project seem unnecessary, since they weren't needed for the last-second hacks, so the activists are often blamed for time-wasting (which led to the Race, of course). Next time the team will try to undertake a larger task with the same approach. That would be a good time to shake the dust off your resume...

Friday, August 10, 2007

Authorization with multiple role providers

Provider-based authorization and authentication is definitely a huge step forward from the primitive forms authentication we got in ASP.NET 1.0. Unfortunately, Microsoft again didn't do a great job of unifying the approaches. It seems that the Membership, Role and Profile projects were handled by three geographically and developmentally separated teams who had no desire to communicate with each other. The Membership provider architecture and implementation is easily the best of the three, while Profile is the worst (it is noteworthy that even a quite thorough book on security from Microsoft itself ignores Profile features). Despite the similarities in implementation, all three provider models are frustratingly different. As in the case of Team System, the problem seems to be a feeble BA job.

Out-of-the-box implementations work well (Membership) or tolerably (the others) for a standard project. But what if we need an admin-type application which can service a few client apps? Client membership databases can be either shared or separate, while the admin application will be able to access them all. That can give us great flexibility and cut the amount of code tremendously. And even outside this scenario, it is still nice to be able to handle multiple providers.

Membership works out of the box without a glitch (surprise, surprise) - it is a no-brainer to authenticate against an arbitrary provider using the Membership.Providers collection. Profile sucks as usual - from my point of view it doesn't make any sense to use multiple providers if we cannot inherit the profile from different base classes. The Role provider poses some challenge, though. Wouldn't it be nice to change the default provider programmatically, once for the whole application?
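With the provider collection, authenticating against a specific client application is nearly a one-liner (a sketch: the provider name "ClientApp1" and the surrounding variables are my assumptions, and the name has to match a provider registered in web.config):

```csharp
// Pick a non-default membership provider by name and validate against it
MembershipProvider provider = Membership.Providers["ClientApp1"];
if (provider != null && provider.ValidateUser(userName, password))
{
    // issue the forms authentication ticket as usual
    FormsAuthentication.SetAuthCookie(userName, false);
}
```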

The static Roles class will serve you a RolePrincipal associated with the default Role Provider, thus Context.User.IsInRole() will not give us what we want. There are plenty of ways around this, but it is a good idea to let developers keep using this customary method. The following code is based on a snippet from "Professional ASP.NET 2.0 Security, Membership, and Role Management" - an excellent (but quite heavy, literally :) book by Stefan Schackow. The idea is to replace the IPrincipal in Context.User with a RolePrincipal, whose constructor accepts a Role Provider name - exactly what we need. Stefan proposes to hook the method to the GetRoles event of the RoleManagerModule. In that case you should consider the tricky business of passing along the desired provider name: the Session object is not accessible this early in the pipeline, and the other possible means - query string and cookie - still may not be the weapon of choice. The query string will add an extra headache if we use URL rewriting or serve an extensive amount of dynamic pages from a Content Management System. A cookie would have to be guaranteed to stay untouched for the whole user session, or something nasty could happen. The code can be placed in the application's page controller class. If you use a back door for a seamless login, the gateway and the front login page should run exactly the same logic.

public static void SetRoleProviderForCurrentUser(string applicationName, HttpContext context)
{
    if (!context.User.Identity.IsAuthenticated) return;
    if (string.IsNullOrEmpty(applicationName)) return;

    RolePrincipal newPrincipal = null;
    if (Roles.CacheRolesInCookie)
    {
        if (!Roles.CookieRequireSSL || context.Request.IsSecureConnection)
        {
            try
            {
                HttpCookie cookie = context.Request.Cookies[Roles.CookieName];
                if (cookie != null)
                {
                    string cookieValue = cookie.Value;
                    // ignore the cookie if it is suspiciously large
                    if (cookieValue != null && cookieValue.Length <= 4096)
                    {
                        // ensure proper casing
                        if (!String.IsNullOrEmpty(Roles.CookiePath) && Roles.CookiePath != "/")
                            cookie.Path = Roles.CookiePath;
                        cookie.Domain = Roles.Domain;
                        // create a new principal from the role cache cookie
                        newPrincipal = new RolePrincipal(
                            GetRoleProviderName(applicationName), context.User.Identity, cookieValue);
                    }
                }
            }
            catch { /* no cookie? no problem, ignore the error */ }
        }
        else if (context.Request.Cookies[Roles.CookieName] != null)
        {
            // the cookie cannot be trusted without SSL - get rid of it
            Roles.DeleteCookie();
        }
    }
    if (newPrincipal == null)
        newPrincipal = new RolePrincipal(
            GetRoleProviderName(applicationName), context.User.Identity);
    context.User = newPrincipal;
    Thread.CurrentPrincipal = context.User;
}
Note that we manually synchronize Thread.CurrentPrincipal with Context.User - a job normally done by the DefaultAuthenticationModule when we log in to the application. The GetRoleProviderName() helper is simply your own mapping from the client application name to a provider name configured in web.config. Dominick Baier had a great article about the differences and similarities of these two objects quite a while ago.

Resharper 3.0 has memory leaks?

Somehow it seems that after installing Resharper 3.0 my computer started choking on multiple open instances of Visual Studio. I assume that VS accumulates some information while being used, but it never was a problem before - the memory usage seemed to be capped. Now, if not recycled routinely, the memory utilization grows enormously - right now I have two VS instances open, they occupy more than 900 MB in total, and the number is still increasing.
I also noticed that after a few days of work the ASP HTML layout becomes unbearably slow, and it definitely looks like Resharper is struggling to provide Intellisense.
I won't give up on Resharper anyway, but is this really the case? Has anyone experienced the same?
UPDATE: Another suspect is TestDriven.NET. So Resharper may be innocent after all...
.NET Frameworks, their patches and Silverlight Betas are the usual suspects too.
UPDATE: It is Resharper. Possibly with some help from the .NET components, but definitely not TestDriven.NET. It looks like there are some problems with the Intellisense which Resharper tries to provide for HTML layout...

Monday, August 06, 2007

Cleaner page validation testing with generics

A refactored and (hopefully) less confusing version of the page validation test fixture. Now there is no need to implement any abstract methods in your test fixture - just inherit from the base class with the proper generic type. A much lazier way to do things...

public abstract class BaseValidationTestFixture<T> where T : System.Web.UI.Page, ITestableValidationContainer
{
    protected T testPage;

    [SetUp]
    public virtual void SetUp()
    {
        testPage = Activator.CreateInstance<T>();
        ResurrectControls();
    }

    private void ResurrectControls()
    {
        BindingFlags flags =
            BindingFlags.Instance | BindingFlags.NonPublic | BindingFlags.FlattenHierarchy;
        foreach (FieldInfo field in testPage.GetType().GetFields(flags))
        {
            if (typeof(WebControl).IsAssignableFrom(field.FieldType))
            {
                ConstructorInfo constructor = field.FieldType.GetConstructor(new Type[0]);
                if (constructor != null)
                {
                    // recreate the control field exactly as the page runtime would
                    WebControl toAdd = (WebControl)constructor.Invoke(new object[0]);
                    toAdd.ID = field.Name;
                    field.SetValue(testPage, toAdd);
                    // validators also have to be registered on the page
                    if (toAdd is BaseValidator) testPage.Validators.Add((BaseValidator)toAdd);
                }
            }
        }
    }
}

As you can see, the ITestableValidationContainer interface is still in use. A concrete validation test will look something like this:

class ValidationFixture : BaseValidationTestFixture<MyPage>
UPDATE: We have to add the validation controls to the Page's Validators collection manually. It also makes sense to expose control accessor methods, like void SetControlValue(string id, object value) and WebControl GetControlValue(string id), so we can access controls which are protected on the page.
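A concrete fixture then needs little beyond the inheritance declaration (a sketch: MyPage, the control ID and the SetControlValue accessor are the assumptions mentioned in the update above):

```csharp
[TestFixture]
public class MyPageValidationFixture : BaseValidationTestFixture<MyPage>
{
    [Test]
    public void EmptyRequiredFieldFailsValidation()
    {
        // drive a protected control through the accessor exposed by the base fixture
        SetControlValue("nameTextBox", "");
        testPage.Validate();
        Assert.IsFalse(testPage.IsValid);
    }
}
```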

Wednesday, August 01, 2007

Multithreading publishing tendency

Some time ago I decided to close the gaps in my multithreading knowledge once and for all (though I am still pretty sure I would stay away from multiple threads like from regular expressions :). I looked for the ultimate book on multithreading and, to my surprise, there were not a lot of them around. Eventually I found what I was looking for, but the research gave me some interesting thoughts.

The graph (I like graphs!) represents the publishing dates of the first 20 books on multithreading. This is an interesting tendency - the peak is in 1997-1999. Of course there are some distracting factors, like IDE standardization and the Internet - blogging seems to choke the life out of technology publishing.

So the (admittedly biased) conclusion seems to be this: when powerful means to build software complex enough to warrant multiple threads were unleashed upon the programming public, interest in multithreading books rose. Operating systems with parallel-processing abilities became more affordable, and more programmers were summoned to feed the software hunger. Multithreading ceased to be the sacred clandestine knowledge of a chosen few. Here is the essential timeline of operating system and language progress:

Year: Languages | Operating systems
1991: Visual Basic | Linux, Macintosh OS 7
1992: Borland Pascal | Solaris 2.0, Windows 3.1
1993: Ruby | FreeBSD, Windows NT 3.1
1995: Borland Delphi, Java, Ruby, ColdFusion | Windows 95
1996: - | Mac OS 7.6
1997: PHP 3, JavaScript, J2SE 1.1 | Mac OS 8, Windows NT 4
1998: ANSI/ISO standard C++ | Solaris 7, Windows 98
1999: XSLT, GML, J2SE 1.2 | Mac OS 9, Windows 98 SE
2000: .NET 1.0 Beta, J2SE 1.3 | Windows 2000
2001: Ruby goes public | Mac OS X v10.0, Windows XP
2002: .NET 1.0 RTM, J2SE 1.4 | Mac OS X v10.2, Windows XP x64
2003: .NET 1.1 RTM | Mac OS X v10.3, Windows Server 2003
2004: Ruby on Rails, J2SE 5.0 | -
2005: .NET 2.0 RTM | Mac OS X v10.4
2006: .NET 3.0 RTM, Java SE 6.0 | -
2007: .NET 3.5 Beta 1 | Windows Vista

By 2000, IDEs seemed to have made multithreading easy enough to implement without a fundamental understanding of the processes behind it, and provided enough guidance through their own help systems. And once again - blogging provides more timely information on the subject than any book.

© 2008-2013 Michael Goldobin. All rights reserved