Posted by: craigtech | April 1, 2011

Libyan domain names

Much has been made of “Gaddafi taking down the .ly domains” over the last few weeks, with many wondering what would happen to the likes of bit.ly, and to our own service. The general consensus was not to worry: of the five authoritative nameservers for the .ly zone, only two are based in Libya, with two in the US and one in the Netherlands. All five would need to be out of service for .ly domains to start failing.

But it turns out this was missing the point; starting at around 4pm yesterday (31st March) all traffic to our site stopped, and no emails have arrived since. We don’t host anything in Libya, nor do we have any affiliation with it. We use Google Apps and have a dedicated server, both hosted in the US.

A US hosting company called Softlayer host Libyan Spider, a Libyan ISP who manage a huge number of domain registrations for the .ly space. Softlayer have stopped all of Libyan Spider’s services, citing the recent UN sanctions on Libya.

From @LibyanSpider:
Our service provider Softlayer has shut down all our dedicated servers with them because of the UN/US sanctions, which don’t apply to us.

LibyanSpider is a private company and NOT a government affiliated company. Softlayer has falsely shut us down for the wrong reasons! Softlayer shut down our servers; this is why you can not access your website, nor can we access ours.

We are sorry guys, but this is out of our hands. Please contact Softlayer and let them know that YOU are affected by this.

bit.ly is still up, which could be for a number of reasons: either their DNS is managed in a different system, or their DNS records have been set up to be cached for a long period of time. (Unfortunately we have just completed a migration to a new web host, and our TTL – the time a DNS setting should be cached for – was only 1 hour.) If bit.ly were to be affected it would have a huge effect on internet usage, particularly Twitter. I’m not sure who hosts bit.ly’s DNS, but I hope it’s not Libyan Spider, and I hope whoever is hosting their DNS doesn’t get the same idea!

I have contacted Softlayer, but have had no response yet. I hope they work this out quickly! Otherwise many innocent lives (startups) may be lost… If you want to support us and them, drop Softlayer an email.

It’s also great timing following the launch of Startup UK and Startup America!!


I generated a pretty straightforward schema in the EF designer & generated the SQL.  If I refresh the model from the DB it throws errors along the lines of:

Condition cannot be specified for Column member <member name> because it is marked with a ‘Computed’ or ‘Identity’ StoreGeneratedPattern.

What’s that about?  I now need to make any changes manually in both EF & the DB.  What a pain!

Posted by: craigtech | March 20, 2010

Simple OAuth integration for Twitter in ASP.NET MVC

Speaking to @JacoPretorius earlier in the week, it turned out the samples that come with the DotNetOpenAuth libraries weren’t clear.  To be honest I couldn’t even get the samples to run, firstly because the project types weren’t up to date, and also because the references to the MVC libraries were wrong.  I also wanted to get it running in VS2010.  Rather than labour with getting it running in VS2008, I had a quick scan of the source code & set to work figuring it out.  The article that follows describes what needed to be done to get the MVC sample application integrating with Twitter’s OAuth mechanism.

I’m not going to detail the OAuth process here; there are plenty of articles around that do that already.  I will summarise it from a dev perspective though – you redirect the user to a 3rd party auth provider (e.g. twitter, google, and many others) using a standard HTTP redirect.  The details at the auth provider end will vary, but may include the user authenticating themselves & authorizing the calling application (i.e. your app) to access their identity.  The auth provider then redirects back to the calling application passing an encrypted string, which contains the user’s identity.  The details here will vary depending on the auth provider.

I do have a couple of gripes with the implementation of the OAuth protocol, based on what I’ve seen with twitter & google (this may not apply to all OAuth providers).  It seems you need to register with each provider you want to support (I couldn’t even register with google, because it’s integrated into the google webmaster tools & needs to verify against a real site).  You then need to store credentials for each of those providers in your application.  Finally, there doesn’t seem to be a standard set of attributes in the identity that gets returned to allow you to extract info such as the user ID – you basically get a set of name/value pairs in an "Additional parameters" collection; on twitter the username is in a property called screen_name.

The code below is just to illustrate the approach; it will be refactored as I add more providers, but for now it does the job.

Onto the real work…

I’ve started with the basic MVC starter site:

ASP.NET MVC Starter site

First you need to set yourself up as an OAuth client with twitter.  The details here don’t seem to matter – I used a website URL that isn’t live yet & it was fine – and when you’re debugging locally the redirect back from twitter after authenticating will still reach your local machine.

Then reference the correct assemblies – you’ll need to find them in the samples, and put them wherever you’d normally put dependency libs:


I then set up the references I need:

using DotNetOpenAuth.ApplicationBlock;
using DotNetOpenAuth.OAuth;
using DotNetOpenAuth.OpenId;

And a static field & property to store the twitter credentials:

private static InMemoryTokenManager _tokenManager = new InMemoryTokenManager("Twitter Consumer key", "Twitter Consumer secret");

private InMemoryTokenManager TokenManager
{
    get { return _tokenManager; }
}

Then modify the login view (views/account/logon.aspx) to include a "login via twitter" option as a simple link (I used an action called TwitterOAuth on the Account controller to wrap up my logic for preparing the request to twitter):

<a href="<%= Url.Action("TwitterOAuth") %>"><img src="../../Content/images/oauth/twitter/twitter_button.gif" alt="twitter oauth button" /></a>

It may look something like this:

Screenshot of login page

The server side component consists of two requests: one when the user clicks the button & one when the auth provider redirects the user back.  I have a TwitterOAuth action & an OAuth action.

In the TwitterOAuth action I use the DotNetOpenAuth library to construct a wrapper around the twitter OAuth API, prepare the URL that I want the user to be redirected back to, & send the request, which redirects the user to twitter to authenticate.

public ActionResult TwitterOAuth()
{
    var twitter = new WebConsumer(TwitterConsumer.ServiceDescription, this.TokenManager);

    // Create the URL that we want Twitter to redirect the user to
    var oAuthUrl = new Uri(Request.Url.Scheme + "://" + Request.Url.Authority + "/Account/OAuth");

    // If we don't yet have access, immediately request it.
    twitter.Channel.Send(twitter.PrepareRequestUserAuthorization(oAuthUrl, null, null));

    // This shouldn't happen - Send() ends the response by redirecting the user to Twitter
    return null;
}

In the OAuth action I’m going to process the response from Twitter and, if the user is successfully authenticated, store the identity in the ASP.NET forms authentication cookie & redirect to the home page:

public ActionResult OAuth()
{
    var twitter = new WebConsumer(TwitterConsumer.ServiceDescription, this.TokenManager);

    // Process the response
    var accessTokenResponse = twitter.ProcessUserAuthorization();

    // If the request doesn't come with an auth token, redirect to the login page
    if (accessTokenResponse == null)
    {
        return RedirectToAction("Login");
    }

    // Twitter is calling back with authorization - extract the access token
    // & username for use throughout the site
    string accessToken = accessTokenResponse.AccessToken;
    string username = accessTokenResponse.ExtraData["screen_name"];

    CreateAuthCookie(username, accessToken);

    // Authentication successful, redirect to the home page
    return RedirectToAction("Index", "Home");
}

private void CreateAuthCookie(string username, string token)
{
    // Get ASP.NET to create a forms authentication cookie (based on settings in web.config)
    HttpCookie cookie = FormsAuthentication.GetAuthCookie(username, false);

    // Decrypt the cookie
    FormsAuthenticationTicket ticket = FormsAuthentication.Decrypt(cookie.Value);

    // Create a new ticket using the details from the generated cookie, but store the
    // token passed in from the authentication method as the user data
    FormsAuthenticationTicket newticket = new FormsAuthenticationTicket(
        ticket.Version, ticket.Name, ticket.IssueDate, ticket.Expiration,
        ticket.IsPersistent, token);

    // Encrypt the ticket & store it in the cookie
    cookie.Value = FormsAuthentication.Encrypt(newticket);

    // Update the outgoing cookies collection
    Response.Cookies.Add(cookie);
}

That’s it! You should now see the logged in user in the top right corner & the logout method will work as expected:

Screenshot of user logged in
Posted by: craigtech | March 10, 2010

Mobile web application acid test

For me the acid test for any mobile web service revolves around how much value I can get from it in around 5 minutes: I need to be able to sign up, log on & get some noticeable benefit from it in the time it takes me to queue, order & wait for a coffee (or my favourite time – while sat on the toilet) for it to catch my attention & warrant further investigation.

Yesterday I attempted to start using gowalla & imeem as alternatives to foursquare; both were tested as native android apps on an HTC Hero running Android 2.1.

The sign up process on gowalla was a bit verbose, with a few more fields than I would have liked, but it completed OK & I was online in a minute or so. Unfortunately, despite the fact that foursquare had picked up my location & offered nearby places, gowalla wasn’t able to (maybe due to all that SXSW traffic?). FAIL.

The sign up form on imeem was better, although I’m not sure why it needs my sex & DOB, but when submitting the form I got “account creation failed.” – with no more info on the problem, so I’m not sure if it was connectivity, invalid data or something else. FAIL.

Looks like I’m sticking to foursquare for now!

Posted by: craigtech | January 26, 2010

Providing analysis on top of the BrowserMob data

One of the things that the BrowserMob system doesn’t do too well is provide analytics on top of the data generated during their load tests.  Luckily they do give you a MySQL install script that allows you to get the data into a local database & use whatever tools you want to get the data you need.

Unfortunately, being a Microsoft guy, I don’t want to use MySQL if I can help it! Luckily, it isn’t too difficult to modify the script so you can use it in SQL Server.

I also want to keep the historical data, so that I can compare test runs against each other & track progress towards our goals over time.

To start with I’ve just created the tables by slightly modifying the CREATE TABLE scripts:


CREATE TABLE test (
    test_id int NOT NULL,
    [name] varchar(255),
    unique_id varchar(255),
    PRIMARY KEY (test_id)
);

CREATE TABLE run (
    run_id int NOT NULL,
    starttime datetime DEFAULT GETDATE(),
    comments varchar(255),
    run_by varchar(255),
    PRIMARY KEY (run_id)
);

CREATE TABLE tx (
    [tx_id] [bigint] NOT NULL,
    [run_id] [int] NULL,
    [browser_num] [int] NOT NULL,
    [bytes] [bigint] NOT NULL,
    [end_time] [datetime] NOT NULL DEFAULT (getdate()),
    [err_line_num] [int] NULL DEFAULT (NULL),
    [err_msg] [varchar](255) DEFAULT (NULL),
    [err_screenshot_id] [varchar](255) DEFAULT (NULL),
    [instance_id] [varchar](255) NOT NULL,
    [script_id] [varchar](255) NOT NULL,
    -- SQL Server can't store MySQL's zero date, so default to its minimum instead
    [start_time] [datetime] NOT NULL DEFAULT ('1753-01-01 00:00:00'),
    [step_count] [int] NULL DEFAULT (NULL),
    [success] [bit] NOT NULL,
    [time_active] [bigint] NOT NULL,
    [time_paused] [bigint] NOT NULL,
    PRIMARY KEY (tx_id),
    FOREIGN KEY (run_id) REFERENCES run (run_id)
);

CREATE TABLE step (
    step_id bigint NOT NULL,
    bytes bigint NOT NULL,
    end_time datetime NOT NULL DEFAULT GETDATE(),
    obj_count int DEFAULT NULL,
    start_time datetime NOT NULL DEFAULT '1753-01-01 00:00:00',
    step int NOT NULL,
    time_active bigint NOT NULL,
    time_paused bigint NOT NULL,
    tx_id bigint NOT NULL,
    PRIMARY KEY (step_id),
    FOREIGN KEY (tx_id) REFERENCES tx (tx_id)
);

CREATE TABLE [object] (
    [obj_id] [bigint] NOT NULL,
    [bytes] [bigint] NOT NULL,
    [dns_lookup_time] [bigint] NULL DEFAULT (NULL),
    [end_time] [datetime] NOT NULL DEFAULT (getdate()),
    [err_msg] [varchar](256) NULL DEFAULT (NULL),
    [host] [varchar](256) NOT NULL,
    [ip_address] [varchar](256) NULL DEFAULT (NULL),
    [method] [varchar](16) NULL DEFAULT (NULL),
    [obj_num] [int] NOT NULL,
    [path] [varchar](4096) NOT NULL,
    [protocol] [varchar](16) NOT NULL,
    [query_string] [varchar](4096) NULL DEFAULT (NULL),
    [start_time] [datetime] NOT NULL DEFAULT ('1753-01-01 00:00:00'),
    [status_code] [int] NOT NULL,
    [time_active] [bigint] NULL DEFAULT (NULL),
    [time_to_first_byte] [bigint] NULL DEFAULT (NULL),
    [url] [varchar](4096) NOT NULL,
    [step_id] [bigint] NOT NULL,
    PRIMARY KEY (obj_id),
    FOREIGN KEY (step_id) REFERENCES step (step_id)
);

Then start cleaning the insert scripts & running them in, in the same order as the tables above.  To do this I’ve broken them out so that I just have the insert commands for each table in text files, and start doctoring them like so:

  1. SQL Server doesn’t support the ` character, i.e. INSERT INTO `object` – easily solved by a quick find and replace, replacing it with nothing.
  2. Some of my data contains \', and SQL Server escapes a quote by doubling it, so I replace \' with ''.
  3. SQL Server doesn’t support multiple sets of values as part of INSERT INTO, so in each file I replace ),( with a new INSERT INTO statement, e.g.
    Replace : ),(
    With : )\nINSERT INTO step VALUES (
  4. The tx table’s inserts have a problem with the success bit value (the last column but two), so replace the failure value with 0 and the success value with 1.
  5. As I have added a run table to track the historical data, I need to modify the tx insert script to populate the new run_id column, so I do another replace on those lines:
    Replace : INSERT INTO tx VALUES (
    With : INSERT INTO tx ([run_id],[tx_id],[browser_num],[bytes],[end_time],[err_line_num],[err_msg],[err_screenshot_id],[instance_id],[script_id],[start_time],[step_count],[success],[time_active],[time_paused])\nVALUES (1,
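These find-and-replace steps are easy to script. Here’s a rough sketch in Python (the function name, the table-name argument and the run_id value are my own assumptions to adjust for your export, and it doesn’t attempt the success-bit fix from step 4):

```python
# Columns for the tx table, with the added run_id column first (matches step 5 above)
TX_COLUMNS = ("[run_id],[tx_id],[browser_num],[bytes],[end_time],[err_line_num],"
              "[err_msg],[err_screenshot_id],[instance_id],[script_id],[start_time],"
              "[step_count],[success],[time_active],[time_paused]")

def clean_insert_script(sql, table, run_id=None):
    """Apply the MySQL -> SQL Server fixes described above to one insert file."""
    sql = sql.replace("`", "")        # 1. strip MySQL's backtick quoting
    sql = sql.replace("\\'", "''")    # 2. MySQL escapes quotes as \', SQL Server wants ''
    # 3. split multi-row inserts into one INSERT statement per row
    sql = sql.replace("),(", ")\nINSERT INTO {0} VALUES (".format(table))
    if run_id is not None:
        # 5. rewrite the tx inserts to name their columns and supply run_id
        sql = sql.replace(
            "INSERT INTO tx VALUES (",
            "INSERT INTO tx ({0})\nVALUES ({1},".format(TX_COLUMNS, run_id))
    return sql
```

Run once per extracted file, reading the text in and writing the doctored version back out before executing it against SQL Server.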
I will upload a zip file with the 3 files when I have a moment.
Posted by: craigtech | January 26, 2010

Load testing in the cloud with BrowserMob

I’m currently looking at how we can improve our load test process to dial out some of the problems we have & also to extend the coverage of our load test.

Most organisations have the same problem when it comes to load testing – it’s done at a specific part of the project as a one-time exercise, and when complete the tests are left to rot until the next time a load test is needed, often months or years later, when they bear little or no relation to the application under test any more.

Many of the tools available for web testing contribute to this problem by forcing you to work at the HTTP layer, often resulting in brittle tests that break on the smallest change to the implementation of the web application. Functional test tools like selenium provide a much easier abstraction, allowing a simple point-and-click browser experience to record & validate tests.  With a functional test suite in place, surely a load test is just a matter of selecting the ratios at which those tests need to run to simulate your real/estimated usage patterns, and away you go?  This has the added benefit of not needing to maintain two sets of scripts, so that as long as you’re keeping your functional test suite up to date, your load test is being kept up to date pretty much for free. The problem is that operating at this layer of abstraction brings with it a significant performance overhead: instantiating browser instances for each simulated user in the load test.
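As a sketch of that idea – allocating virtual users to functional tests at weighted ratios – something like this (the scenario names and ratios are invented purely for illustration):

```python
# Hypothetical functional tests and the share of virtual users each should get,
# e.g. 70% of simulated users browse, 20% search, 10% check out.
usage_ratios = {
    "browse_catalogue": 0.7,
    "search": 0.2,
    "checkout": 0.1,
}

def build_load_mix(total_users, ratios):
    """Allocate virtual users to functional tests according to the usage ratios."""
    mix = {name: int(total_users * share) for name, share in ratios.items()}
    # hand any rounding shortfall to the most common scenario
    shortfall = total_users - sum(mix.values())
    mix[max(ratios, key=ratios.get)] += shortfall
    return mix

print(build_load_mix(500, usage_ratios))
```

The load tool then just runs each functional test with its allocated number of simulated users; update the ratios and the same functional suite doubles as the load suite.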

Cloud computing/virtualisation is a game changer here; BrowserMob will import your selenium scripts & allow you to configure a load test in their system, running it for you on the Amazon Elastic Compute Cloud (EC2).  This is a great option & I will be posting more on how our efforts in this area pan out, but moving forward maybe this type of technology can be brought in house too – being a primarily UK-based organisation, we have a significant infrastructure investment sized to deal with the peak load during the UK day (and the peak days of the year).  During the quieter times of year, and the quieter times of the day, this infrastructure could be used instead of the EC2 cloud to run our load tests.  I’m not sure if BrowserMob are looking at this type of solution yet, but if not it’s only a matter of time before someone does…

Posted by: craigtech | January 10, 2010

I read about this boxee alternative, Zinc, from the xconomy feed earlier today when I was away from home & rushed back to try it out.  An internet TV aggregator with a yahoo widget – exactly what I was looking for!

Sadly, I shouldn’t have bothered – the windows application looks very dated & I haven’t been able to get the yahoo widget working.

The proposition is good – a profile in the cloud that you can share across all your devices, including your TV – and this is still a beta, so I expect it’ll improve; but right now I’m stuck downloading what I want onto a memory stick & plugging it into the TV.  It’s 2010 guys!  How about a neat Silverlight/AIR client rather than the current clunky interface?

As for the yahoo TV widget, I’m yet to be convinced by this platform – again, the idea is great but I’ve yet to get anything useful out of it.  A boxee or zinc TV widget, or even netflix or hulu directly, giving access to the plethora of internet TV, is exactly what I want in the living room, but it’s not there yet.  It’s probably not even zinc’s fault, but who do you contact when these things aren’t working?  Who’s accountable for the experience?  The TV manufacturer, yahoo or the widget author?  Who controls which widgets are available for me to enable?

I’ve had a real problem with perceived “quality” on a project I’ve been working on recently – this is no slant on the developers, but in my opinion stems from the fact that the customer organisation has a testing team larger than our development and test teams put together, and this has put an unnecessary strain on the working relationship.

The only way I could think of to get on top of this was to take as much of the repetitive regression testing as possible (a manual regression cycle was lasting around 4 hours), automate it, and run it frequently from our CI server.

My requirements are:

  • The tests themselves need to be maintainable by our testers (who have minimal development knowledge), with a simple point and click interface.
  • The tests need to run in Firefox, IE6 & IE7.
  • I need to be able to collate and track the testing metrics: test counts, passes, fails, etc.
  • I need to capture screenshots during the test process so that testers can spot-validate the results of the automated tests.

Having researched the options out there, Selenium seemed a good fit, so I began by writing a simple one-test suite.  I’ll describe the process I went through, the problems I came across, and the solutions I found.

Selenium IDE is a great little Firefox plugin that gives exactly what we need as far as the testers are concerned.  My only slight concern is that some of our applications are IE-only, so we’ll need to look at a recording application for IE further down the line.

The first problem I came across was when attempting to run this HTML suite against the selenium server jar.  No matter how hard I tried I couldn’t get them to run.  I googled around and found it was a recurring theme; there was a suggested patch for selenium, which I attempted to apply, but got no joy.  As a quick alternative I tried using the C# generated NUnit code instead, and it worked a treat.  Rather than waste too much time trying to track down the issue I decided that using C#-coded tests would be a workable option – I made the simple change to move from NUnit to MSTest though.

Having decided this, I then had the problem of how to allow the testers to maintain the tests in the selenium IDE HTML format, but have C# tests at run-time.  I wrote a simple parser that searches a given directory for test suite files, follows the links to the tests themselves, and generates the tests.  The generated code is then compiled down to a .NET assembly to be run by MSTest.
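I’m not sharing the parser itself yet, but the gist of the discovery step can be sketched like this (the *Suite.html naming convention and the link-scraping regex are assumptions based on Selenium IDE’s suite format, which lists each test case as a link in an HTML table):

```python
import os
import re

def find_test_cases(suite_dir):
    """Walk a directory of Selenium IDE suite files and collect the linked test cases."""
    cases = []
    for root, _dirs, files in os.walk(suite_dir):
        for name in files:
            if not name.endswith("Suite.html"):
                continue  # only process suite files, not the test cases themselves
            with open(os.path.join(root, name)) as f:
                html = f.read()
            # each test case appears in the suite as <a href="SomeTest.html">...</a>
            for href in re.findall(r'<a href="([^"]+\.html)"', html):
                cases.append(os.path.join(root, href))
    return cases
```

Each discovered case is then fed through the HTML-to-C# transformation, and the generated tests compiled into an assembly for MSTest to run.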

So, I now had my automated tests in place; the next problem was to get them running under CruiseControl.  This brought another set of challenges:

  • Selenium RC server currently runs from the command line, but I’m intending to initially have two projects running these selenium tests; rather than fire up a selenium server for every project, or attempt to share the same server across multiple projects, my preferred approach was to run selenium as a windows service.
  • Our cruisecontrol server is intended to be a lightweight system and doesn’t have visual studio installed.  I could have ducked out of this problem & used NUnit, but I’d read about how to get MSTest running standalone and decided to get that up & running at the same time (it turned out someone else had already tried to do this on the server, but not got it to work).
  • Our selenium tests write out images to disk, and these images need to be included as part of the build report.  It turns out cruisecontrol doesn’t deal with non-XML output very well.

So getting selenium running as a windows service was relatively easy in the end, using the Java Service Wrapper (JSW).  The two problems I came across here were:

  • You need to put the selenium jar inside the bin directory of the JSW – this was counter-intuitive to me, but it worked.  So I ended up copying the contents of the JSW zip file into a directory D:\selenium\selenium-server-1.0-beta-2, and then copied the selenium-server.jar from the Selenium RC download inside the bin directory of that, i.e. D:\selenium\selenium-server-1.0-beta-2\bin\selenium-server.jar.
  • The conf file D:\selenium\selenium-server-1.0-beta-2\conf\wrapper.conf required changes:
    –  Add another wrapper.java.classpath entry for selenium-server.jar below the existing wrapper.jar entry
    –  Modify the application parameter so the wrapper launches the Selenium server rather than the JSW test app
    –  Modify the service name, display name & description to be something relevant, e.g. “wrapper.ntservice.displayname=Selenium Server Beta 2”, “wrapper.ntservice.description=Selenium Server Beta 2”

With that setup you can run the InstallTestWrapper.bat file and start the service.  Selenium is then running and can be connected to using the .NET client (ThoughtWorks.Selenium.Core.dll, another part of the Selenium RC download).

This article described how to get MSTest up and running standalone.  My only concern with this was the need to use the /noisolation flag when invoking MSTest, but for now this will be fine.

[2009-05-06 Update]
I’ve found the reason for needing the /noisolation flag – I was getting an error:

Failed to queue test run ‘cedmunds@ZEUS 2009-05-06 14:41:24′: Failed to get host
 process location. Host process is not available.

It was because the registry key below did not update correctly when I ran the reg file; after re-entering the key manually all worked fine.



This article described how to include images in the cruise control build report.  I haven’t actually gotten around to including this yet, but I’ll report back when I do.

It’s also worth pointing out that the system uses projects in SVN to store the tests; the testers need to keep the SVN repository up to date (we use TortoiseSVN for this purpose), and I think it would be a great enhancement for Selenium IDE to integrate with SVN/Team System.

In short, with a relatively small amount of custom code I’ve managed to get these technologies working together in around 2-3 days.  If that can free up just 2 testers from half a day of mindless regression testing each, twice a week, the effort will have repaid itself in a couple of weeks!  As long as you’re able to write your tests in Firefox (or hand-craft them), this is a perfect approach.

P.S. The selenium HTML -> C# transformation code isn’t complete yet; in fact I’m going to see if I can harness the code within Selenium IDE to do the transform. When I’ve finalised the approach I’ll update this, and share the code via CodePlex if applicable.

Posted by: craigtech | April 28, 2009

Moving home…

So I’ve decided to pick up my blogging a bit – maybe more frequently than once a year from here on in… but I’ve not been able to even sign into my old account, oh well.

I’m working on an automated functional test framework at the moment, more on that later.