Architecture Posts

Ideas, Experiences & thoughts worth Sharing!!!

It Depends … (really???)

This is a very short post, and more of a rant, really!

Nothing gets me more annoyed than people answering “It depends…”

Especially when they are asked what their organization's reference architecture is, along with the associated principles, guidelines, recommendations, etc.  To me it is one of the laziest answers, showing no foresight whatsoever.

To start with, it depends on what?  What are the variables that control the decision-making process?  What are their ranges?  What are the constraints that could influence these decisions?  Considering the underlying infrastructure, have we done any benchmarking exercise and produced recommendations, so that not every project ends up doing the same thing over and over again?  What are the limits of the current systems, frameworks & infrastructure?  How do you govern architecture compliance if all you have is "It depends" – especially in a multiple-vendor scenario?

Some of the key problems with leaving decisions to individual projects (otherwise called the "it depends" operating mode) are:

  • Deviations from the organization-level architecture strategy.
  • The quality & effectiveness of a solution is left to the competency of an individual/team, despite having an architecture practice (or capability).
  • The by-product of the above two is "technical debt".
  • Wasted effort repeating the same work, and worse still, ending up with different solutions each time.

So what can you do?

No individual organization needs to solve an endless spectrum of problems (so there is no reason to leave every solution until the problem arises).  Depending on the organization's business, domain, capabilities, products, infrastructure, etc., the spectrum can be well defined along various dimensions.  Some of these dimensions/variables could be:

  • CRUD transaction volumes (low, med, high – sync/async – consistency requirements, etc.),
  • Rule engine,
  • Calculation engine,
  • Integration models (p2p, pub/sub, req/response),
  • User interface models (web, mobile, desktop, apps, etc.),
  • MI capture/reporting,
  • …and the list could go on.

Based on their values, and the current capability status, have recommended solutions (templates/patterns/POCs) readily available, so that individual projects can simply adopt them rather than trying to reinvent the wheel every time, or worse still, ending up with completely different, inconsistent solutions to a single problem.

One may question the need to solve, or have a solution for, a problem that hasn't even arisen yet.  That is the investment you make in order to avoid the longer-term problems stated above.  The level of detail one needs to go into can be controlled by the likelihood of a particular problem occurring.  For example, a small-scale retail organization need not go all the way to a fully working proof of concept for high-volume, real-time information management.  It could simply define the parameters that would require such a solution, so that when a project team encounters the requirement, they know they need to build something new.
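The dimensions-to-recommendations idea can be sketched as a simple lookup. This is a hypothetical illustration: the dimension names, values and pattern names below are invented for the sketch, not a real reference architecture.

```python
# Hypothetical sketch: map pre-defined architecture "dimensions" to
# pre-agreed reference solutions, so projects don't re-decide from scratch.
RECOMMENDATIONS = {
    ("crud_volume", "high"): "CQRS template + async messaging POC",
    ("crud_volume", "low"): "Standard layered CRUD template",
    ("integration", "pub_sub"): "Enterprise message bus pattern",
    ("integration", "req_resp"): "REST API template",
}

def recommend(dimension: str, value: str) -> str:
    """Return the pre-agreed solution, or flag that a new decision is needed."""
    return RECOMMENDATIONS.get(
        (dimension, value),
        "No reference solution - raise with architecture practice",
    )
```

A project hitting a combination outside the defined spectrum gets an explicit "go build something new" signal, which is exactly the parameter-defining approach described above.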

Version Controlling SharePoint Documents

With some of the clients I have worked with, I have noticed that the "Edit Document" option (below) is widely used for editing documents stored on SharePoint. This is okay, but not ideal, given they have Office 2010 (on Windows 7) as the minimum software configuration.


[Screenshot: the "Edit Document" option]

The following are some of the issues/restrictions in the above approach.

  1. Creation of a new history version is not exactly in your control – SharePoint decides this!
  2. There are no comments/notes added to historic versions about the updates made.
  3. No control over semantic versions (i.e. when to overwrite an existing version, increment the minor version, or increment the major version number, etc.).

In order to overcome these issues and truly leverage the integration potential Office 2010 provides with SharePoint, instead of selecting the "Edit Document" option when editing a document, open the document in read-only mode.

  1. When you need to make a change, select the File menu at the top and check out the document using the "Manage Versions" button (below).

    [Screenshots: check-out via the "Manage Versions" button]

  2. This will check the document out on SharePoint (exclusively, so others can't make updates at the same time) and enable it for editing. Once done, to check the changes back in, again select the File menu (as above) and select the check-in option (below).

    [Screenshot: the check-in option]

    This will prompt a dialog with version options (depending on the changes made), as below.

    [Screenshot: the check-in version options dialog]

    Semantic versioning guidelines state that:

  3. Any insignificant changes (typo fixes, cosmetic updates, etc.) can simply re-use the existing version – no need to increment the minor version (re-use 0.3 in the screenshot example above).
  4. Any minor, non-breaking changes (i.e. API changes that don't break existing clients, design changes at that level without impact on other modules, and no new functionality) can be tracked with a minor version increment (0.4 in this case).
  5. Any major changes (a new functional module, major breaking changes to a library, or TD approval of a version) can be tracked as a major version increment (1.0 in this instance).

The version comments will also provide crucial information on exactly what was updated, and why.
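As a sketch of the guideline above (a hypothetical illustration of the decision rule, not anything SharePoint provides), the version-bump choice can be written as:

```python
# Hypothetical sketch of the document-versioning guideline:
# decide whether an edit re-uses the version, bumps the minor
# number, or bumps the major number.
def next_version(current: str, change: str) -> str:
    """change is one of 'cosmetic', 'minor' or 'major'."""
    major, minor = (int(part) for part in current.split("."))
    if change == "cosmetic":   # typos, cosmetic updates: re-use the version
        return f"{major}.{minor}"
    if change == "minor":      # non-breaking change: increment minor
        return f"{major}.{minor + 1}"
    if change == "major":      # new module / breaking change / approval
        return f"{major + 1}.0"
    raise ValueError(f"unknown change type: {change}")
```

So a cosmetic fix to 0.3 stays 0.3, a non-breaking change takes it to 0.4, and an approved or breaking change takes it to 1.0, matching the screenshot examples above.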

All of these versions can then be retrieved from the SharePoint version history, as below.

Every Office application provides this feature (i.e. Excel, Word, PowerPoint, Visio, etc.), with some subtle differences in the check-out and check-in options.

There are also other features available, like comparing versions, as you can see in the check-out screenshot above.

[Edit] – You can also enforce version control using the "Require Check Out" option on any SharePoint library; this prevents users from editing a document directly (without checking it out).

Behavior Driven Development (BDD) – Best Practices

BDD is a relatively new method for developing software, essentially a refined version of test-driven development (TDD). Where TDD uses more traditional tools for developing the tests (QTP, etc.), BDD introduces a new paradigm of testing frameworks & tools. In BDD, tests are written in a language called Gherkin, which follows a high-level "Given… When… Then…" format to define the test (requirement) criteria.

This framework is now widely adopted, with tool support in Java, .Net and open-source technologies. While I initially had reservations about its practicality, after using it for a few weeks (on a real-world financial-domain project), it became clear that there is real value in adopting BDD.

One striking aspect was the discipline required for it to be effective and re-usable in large enterprises. The discipline comes from well-established best practices and a governance model (to ensure compliance) supporting the framework & tools. This post doesn't go into the technical framework and tools, which I found are mostly covered in a number of blogs. I intend to cover some of the best practices we created and followed to make it as beneficial as it can be.

Best Practices

Best practices are guidelines that apply at a certain point in time and can change over time depending on the organization strategy, project context, etc. This section should list the best practices that are currently active.

Test pyramid


BDD supports automating component & UI testing. As a general good practice for maintenance and future enhancement, it is recommended to follow the test pyramid approach in terms of the number of tests and the coverage across the spectrum of automated tests (unit, component (API) and UI tests).




Example: Not applicable.

Test Background


The test background should be limited to pre-requisite activities that don't involve any test-specific action or data.

These would be activities such as logging in, or cleaning up and setting up base data (not specific to individual tests).



Test-specific data setup is likely to change for individual test cases, resulting in complex background feature statements.



Consider a feature (test pack) that requires keying in screens 1 to 5 with 5 different combinations of data (scenarios).

If screens 1–3 remain identical for all scenarios, there would be an urge to move these statements into the feature's background, resulting in less duplication.

But this is not recommended: even though the steps appear identical, they may be processing different data sets, and in future one of these screens might need a different action for some of the scenarios.  That would result in reworking the entire feature, rather than just modifying the impacted scenarios.

The duplication can instead be avoided by abstracting at the right level. I.e. instead of having separate statements to key screen 1, screen 2 and screen 3, there could be one generic feature statement that enters all of these details.
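As a sketch (the feature statements here are hypothetical), the recommendation keeps the shared keying out of the background and collapses it into one abstracted step per scenario:

```gherkin
# Not recommended: shared keying steps in the background
Background:
  Given I am logged in
  # Keying screens 1-3 here couples every scenario to identical steps

# Recommended: one generic, abstracted step, repeated per scenario
Scenario: Combination A
  When I complete screens 1 to 3 with "data set A"
  And I key the remaining screens with "data set A"
  Then the application is accepted
```

If screen 2 later needs a different action for one combination, only that scenario's step definition changes, not the whole feature.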

Test Objectives


Establishing the test objectives is key. In principle, each scenario in a feature should cover a single objective. However, this need not be a strict rule if it is more suitable for specific scenarios to cover multiple objectives (validation & results, for example).

As a guideline, UI tests should only cover end-to-end happy paths (warning messages can be included as applicable).

Test objectives are not to be confused with test conditions. Each objective can have multiple test conditions (verification points).


There is also a general myth that each scenario should only have one Then statement. There is no valid reason not to have more than one; i.e. a scenario could have When <do something> Then <test something> When <do something more> Then <test something else>.  While it's true that we want to keep scenarios small and manageable, restricting them to a single Then statement is not a best practice: it just increases the number of scenarios and the maintenance overhead.
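For example (a hypothetical sketch), a perfectly reasonable scenario with two Then statements:

```gherkin
Scenario: Submit with a missing mandatory field, then complete
  When I submit the details with a mandatory field missing
  Then the missing information warning is displayed
  When I complete the missing field and submit again
  Then the confirmation page is displayed
```

Splitting this into two scenarios would force the first half of the flow to be repeated, with no gain in clarity.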

Rationale: Separation of concerns and easier maintenance.

For a scenario where details are keyed with missing information and submitted to continue, there are potentially two objectives:

a) Test that the missing-information warnings appear as expected.

b) Test that the submitted details are saved and a confirmation is displayed.

As the warning is part of the same end-to-end process, where no additional actions are required, it can be included within the same scenario.

However, if the system requires keying in additional details and clicking separate buttons that are only needed for testing the warning, and not for completing the end-to-end flow, it is good practice to keep these as two different scenarios.

Process reuse


A library of actions at the individual component or UI-object level is maintained, in order not to repeat the same actions.

The library should provide the feature statement and details of the corresponding actions. The SpecFlow framework (Visual Studio) already provides a natural hierarchy, but the delivery process must ensure compliance.

Rationale: Prevents TAs/BAs from writing the same actions in different formats. Since the BDD feature file is written in an English-like language (Gherkin), the same action could be written in a number of different ways.

Avoid scenarios like the below, where product details are entered via multiple different feature statements even though the underlying action remains the same.

Scenario: One
When I enter Product details
And I enter Contact details

Scenario: Two
When I enter Contact and Product details and click on Continue button

Scenario: Three
When I enter spare parts details


To prevent this, a library becomes essential. Visual Studio does provide a lookup-like option to support it, but having the library pre-defined helps better management of common actions.

Data driven (inline vs external)


Ensure the feature scenarios are data driven as much as possible. Whenever multiple scenarios vary only by the data being entered or the validation conditions, they must be designed as a data-driven scenario.

There are two approaches to driving tests with data.

a) External data as CSV (spreadsheet) – to be used when a large amount of input data is to be keyed in for test scenarios (like keying product details, etc.).

b) Inline data tables (part of the Gherkin language) – recommended for verifying data-driven expected results, or minimal data keying.


Rationale: Avoids duplication of scenarios and allows greater flexibility for maintenance.



Instead of the below,


Scenario: Add Spare parts
When I enter product details manually and navigate to results page
Then Verify that Product type is SpareParts

Scenario: Add Grocery
When I enter product details manually and navigate to results page
Then Verify that Product type is Grocery

Scenario: Add Interior decorator test
When I enter product details manually and navigate to results page
Then Verify that product type is Interior Decorator

use the data-driven approach as below:

@smoke @AddWorker
Scenario Outline: Add Product Manually
When I enter product details manually and navigate to results page
Then Verify that Product type is '<Product_Type>'

Examples: Add Spare parts
| Product_Type |
| Spare Parts  |

Examples: Add Grocery
| Product_Type |
| Grocery      |

Examples: Add Interior decorator test
| Product_Type       |
| Interior Decorator |

This results in a common definition file and common page objects, hence less maintenance. Additional tests can also be added with ease.



Principles

Principles are like best practices, but more generic. They apply across the entire organization, at all times, without any time bounds, while best practices can be specific to a division and may apply only during a certain period. Best practices also evolve over time, whilst principles are fairly stable.


Independent scripts

Description: Scripts should be independent of other scripts, so that they can be run on their own.
Rationale: As the test pack keeps growing, it is important to be able to investigate individual failures & enhance the pack for future changes. If complex inter-dependencies are created, this becomes more difficult, and the pack may end up unmanageable for newer project teams.


Re-runnable scripts

Description: A test pack, or an individual script, should be re-runnable on a target environment, resulting in exactly the same behaviour as the first run.
Rationale: It is often necessary to run a test pack multiple times in an environment, and having to undo (clear) data updates would prove unmanageable in the longer run. Hence a script should either be designed to clean up its own updates, or ensure the required state of data exists as part of the background before carrying out the test. This ensures a user who may be unfamiliar with a particular functionality is still able to run it and analyse the results.
Example: Not applicable.
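A minimal Python sketch of the re-runnable principle (all names here are hypothetical): the script ensures its base data exists, performs its action, and cleans up its own updates, so a second run behaves identically.

```python
# Hypothetical sketch of an idempotent (re-runnable) test script.
class FakeStore:
    """Stands in for the system under test's data store."""
    def __init__(self):
        self.records = {}

def run_scenario(store: FakeStore) -> bool:
    # Ensure the required base state exists; don't assume a prior run left it.
    store.records.setdefault("base_customer", {"status": "active"})
    # Perform the test action.
    store.records["order-1"] = {"customer": "base_customer"}
    ok = store.records["order-1"]["customer"] == "base_customer"
    # Clean up what this run created, so a re-run starts from the same state.
    del store.records["order-1"]
    return ok
```

Running the scenario twice against the same store leaves the data in exactly the same state, which is the property the principle asks for.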

Business language

Description: The feature file should be written in business language, without reference to technical objects.
Rationale: The ultimate objective of BDD is that business-understandable specifications are actually used to validate the system being built. When these specifications become technical, it defeats the whole purpose of a behaviour-driven test pack.

The following Add Product scenario enters product details manually by selecting the relevant option. The incorrect way of writing this would be:


Scenario: Add product manually
When I Click on Add product image on Admin dashboard
And I Click on Add product button in Add individually radio option
Then Product Details page is displayed
When I enter Product details and click on Continue button

In this particular instance, references to browser page elements like image and button, and actions like click, are very low level and technical. They make the scenario harder to understand and also increase maintenance, as these details can change over the application lifecycle. Hence it could be written as below.


Scenario: Add product manually
When I select Add product individually option
Then Product details form is presented
When I enter product details and submit

This is more open and does not refer to any technical objects. The abstraction level also depends very much on the objective of the test. In this particular instance, since the objective is to test the validation, the steps are scripted individually. If the objective is just to test the end-to-end process, the keying-in process could be written as a single feature statement, as required.



Middle Management

Over the years, I have had the opportunity to work with various organisations & stakeholders. I was lucky to have worked with some of the best, and also some of the worst. You might wonder what one gains from working with the worst. It actually helps you identify the distinguishing factors that make an organisation what it is. If one has only worked with best-in-class, streamlined organisations, it would be impossible to visualise what is missing elsewhere, and the reasons behind some of the practices being followed. You might get there in the long run, but seeing it first hand is a huge learning experience: watching how one organisation thrives on challenges and issues while another disintegrates and requires a reboot.

In my experience, while it is clear that senior leaders (Directors, VPs, CxOs) are key to setting strategy and cascading the organisation's vision and mission into practical, achievable targets for departments, a lot of the responsibility still lies in how effective the middle management actually is in implementing them. By middle management, I mean department heads and programme/project managers: how effective they are at understanding the short- & long-term goals, setting out plans, and utilising all available resources. They also play a major role in identifying and building new organisational capabilities in order to meet the objectives.

But in some of the organisations I have seen, middle-management ineffectiveness was apparent, and although the impact may not be visible at individual departments, when looking at the organisation as a whole it proves disastrous. The ineffectiveness itself can be due to a number of reasons, and cannot all be attributed to the competency of the individuals playing the role. That is a separate subject, and we can discuss it in another post. This post focuses specifically on the symptoms that identify such a scenario and the likely impact it creates.


These are some of the symptoms of ineffectiveness.

Find a responsibility and hide behind it

As the name suggests, they are happy doing routine jobs; they actively look for routine & mundane tasks and take ownership of them. For example:

  • Filling in forms & doing the day-to-day paperwork required for things like onboarding new staff, granting & revoking access permissions, etc.
  • Taking ownership of circulating auto-generated reports.
  • Providing approvals for day-to-day tasks.

While some of these are essential, the key difference is in holding onto them rather than looking at the long run and proactively automating or delegating them. If you can sense a bit of insecurity in these communications, it is a good sign that the role is not adding the required value to the organisation.

No Initiatives / Innovation

There are unlikely to be requests for additional budget for new initiatives, and innovation will be close to nil for the individual department. Even where there are a few, they might exist just to meet the objectives rather than being whole-hearted, passionate efforts.

Ego & Power struggle

There is likely to be a degree of tension between department leads, resulting in a lack of co-operation between them. The lack of co-operation shows in pouncing on others' mistakes while playing down (or keeping quiet about) their own, making very strict and impractical demands on other departments to meet their own objectives, etc.

Conflict of interest

Individuals' job security will take precedence over the organisation's objectives and interests. This results in a lot of effort going into demonstrating their own importance rather than actually helping the organisation. For example, a potential error that could have been caught early is left until too late, to get maximum exposure when finally finding it.

Unable to get the best out of Vendors

In situations like these, the third-party vendors will usually hold the upper hand over the in-house department leads. The business (stakeholders) seems to have more trust in vendors than in its own departments and their leads. This impacts the organisation badly, as it is unlikely to get value out of its spending on vendors.

Poor or No decision support system

When some of these departments need to work together to meet an immediate & urgent business need, there is likely to be chaos. For example, if the business needs a report that requires provisioning a new server (IT), installing applications maintained by separate departments, and creating and releasing a new report, this will likely result in:

  • Out-of-control email threads, with most of the executives/stakeholders on the distribution list for every step of the journey.
  • Dependency on a key individual within each department to run the show.
  • Senior leaders being required to sort out basic issues (statements like "I am happy to do it, provided x is happy to approve" – this is a killer. As a department head, one is expected to be an expert, to understand the impact of one's actions and the overall organisation objectives, and to be able to make a judgement. When this fails, the obvious question is whether the role itself is needed).


This is not a complete list, but it shows some of the most dangerous symptoms and their effects. It is up to senior leaders to look for these and align their organisations effectively in order to eradicate the root causes. I will try to list some of the root causes and recommend corrective actions in subsequent posts.

Open 2 Test Vs Telerik Studio

Recently I had to do a test-tool evaluation to establish the right approach for a client. This involved comparing the Open 2 Test framework (using Selenium WebDriver) with the Telerik test suite, which the client already had in place for some of their projects. The objective was to help establish the key drivers and customise a solution that achieves the maximum benefit for the investment. This was specifically done for testing web applications, hence the focus on web functional testing; however, I have also listed other aspects separately below.

Web functional testing



| Criteria | Open 2 Test (Selenium WebDriver) | Telerik Test Studio | Comments |
| --- | --- | --- | --- |
| Licensing cost | Free, open-source based. Uses common office software like MS Excel for scripting tests. | Commercial product, licensed per duration. (To check whether it supports unlimited users.) | Open source is not all free: it requires an initial investment to extend and customise the tool as required, so this needs to be considered in the right context. |
| Skills & learning requirement | Predefined set of simple keywords used for scripting. | UI-based scripting language. Record & play mechanism. | Both are intuitive to a large extent and may require some learning & basic training. |
| Integration with TFS | Extensible. Java based, hence can leverage the numerous open-source libraries available. | Built-in support. | While Telerik supports full integration with Visual Studio & TFS, O2T can be extended & customised to integrate as per specific requirements (logging defects in a specific TFS project, etc.). |
| Multi-browser support | Existing support for IE & FF; can be extended for others, but this requires framework enhancement effort. IDE support only for FF. | Built-in support for all major browsers, for both recording (scripting) and testing. Telerik's feature enables cross-browser testing automatically. | |
| Test pack maintainability | Fully hand-crafted scripting, hence relatively easy to update & maintain. Decouples page DOM elements via an object repository, making page element changes easier to maintain. Supports script re-usability. | Recorded scripts, hence requiring effort to update for maintenance. Allows in-line script editing. Allows grouping of tests into a re-usable block. | As the scripts are fully hand-crafted in an easy-to-use Excel format, the test scripts are more effective to manage and maintain. |
| Reporting | Basic reporting, being enhanced to provide interactive links. Fully customisable & extendable. | Basic built-in reporting; not much option to extend the contents & format of the report. | O2T is slightly better off due to the enhancement and customisation support for reporting. |
| | Requires enhancements to the framework. | Built-in support. | |
| Data-driven testing | Basic support. | Full support. | |



Other Aspects



| Criteria | Open 2 Test (Selenium WebDriver) | Telerik Test Studio |
| --- | --- | --- |
| Load & performance test support | Not supported. | Built-in support, albeit with additional licensing cost. |
| Support for non-browser-based testing | Requires enhancement. | Built-in support. |



In summary, the tools are pretty evenly matched on the attributes measured. While Open 2 Test (Selenium) scores on licensing cost and script maintainability, it requires some (one-off) enhancements & customisation to the underlying framework to make it fully operational and integrated with the project delivery framework.

The recommendation is to apply weightings to the comparison parameters, depending on the organisation context (strategy, goals, immediate and long-term delivery plans), and then arrive at a scorecard.
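One way to turn the comparison into a scorecard is a simple weighted sum. The weights and scores below are purely illustrative, not the actual evaluation results.

```python
# Illustrative weighted scorecard: weights and 0-10 scores are
# made-up examples, to be replaced with the organisation's own.
WEIGHTS = {"licensing": 0.4, "maintainability": 0.3, "browser_support": 0.3}

def weighted_score(scores: dict) -> float:
    """scores: criterion -> 0-10 rating for one tool."""
    return round(sum(WEIGHTS[c] * s for c, s in scores.items()), 2)

# Hypothetical ratings for each tool against the weighted criteria.
open2test = weighted_score({"licensing": 9, "maintainability": 8, "browser_support": 5})
telerik = weighted_score({"licensing": 4, "maintainability": 6, "browser_support": 9})
```

Shifting the weights (say, towards browser support for a cross-browser-heavy portfolio) can flip the outcome, which is exactly why the weighting must reflect the organisation context.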

Open 2 Test

On a proof of concept I was working on for a prospective client, I had to get hands-on with the Open 2 Test framework. This was an open-source initiative sponsored by my organisation (NTT DATA) a few years back, and I had never actually bought into the value it could bring to an organisation. The main advantage, as portrayed by the team, is interoperability across test tools (i.e. you can run the same test script using QTP or Selenium without any(?) changes, which protects the organisation from vendor lock-in). But seriously, that alone didn't convince me to go and learn a new scripting syntax, a new tool, etc. When I finally looked at the tool hands-on, though, I realised its true benefits.

This is not a tutorial; it just discusses the key benefits of the framework. For a detailed tutorial, there is plenty of documentation available on the web site.


Simplicity

The first thing that stands out is the simplicity of the scripts. It uses a tabular form and works with spreadsheets. The following screenshot shows a simple script.

This simple script launches a web browser (configurable), performs a click, and checks the values of a couple of div tags. As can be seen, it is very self-explanatory once the convention is understood, and the convention is minimal. This is a significant step compared to tool-specific scripting languages. And the fact that it can be done in a spreadsheet means even business users could be trained to script tests directly, rather than needing a dedicated testing-tools team.


Extensibility

Another key feature is the ability to extend the keywords and testing functionality as required by an individual application and environment. For example, on the POC I was working on, there was a requirement to test for a specific piece of text that gets updated via an Ajax call. The use case was adding a book to the shopping basket, which invokes an Ajax call; once the book has been added to the basket (at the server), the Ajax call updates the text on a particular (progress-bar-like) div tag to "Item added to basket". The time lag between invoking the request and the text finally updating is unknown, due to the nature of Ajax requests. This needed a wait function that waits until a particular text appears on the div tag. This is supported by the underlying Selenium WebDriver, so I introduced a new action keyword (waitfortext), which waits until the element's text changes to an expected value. There is an option to specify a maximum timeout.

Looking at the source, this was very straightforward, as below.

It finds the element using one of the identifiers or an XPath, then simply invokes the WebDriver method to wait for its text value to become the expected one (with a maximum time limit).
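Conceptually, the new keyword is just a polling wait. A minimal, framework-agnostic sketch (Selenium's own WebDriverWait does the equivalent; the names here are hypothetical) might look like:

```python
import time

# Hypothetical sketch of a "waitfortext" action: poll an element's
# text until it matches the expected value, or time out.
def wait_for_text(get_text, expected, timeout=10.0, interval=0.1):
    """get_text is a callable returning the element's current text."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_text() == expected:
            return True
        time.sleep(interval)
    raise TimeoutError(f"text never became {expected!r}")
```

In the real framework, `get_text` would read the located element's text via the WebDriver, and the keyword's optional timeout column would feed the `timeout` parameter.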

Maintainability & Re-usability

Another key concept is the decoupling of web elements from logical identifiers. In the test script, you refer to web page elements using logical identifiers, and then link these to the specific web elements using another spreadsheet called the Object Repository. This gives a maintainability advantage and also allows re-usability. You maintain a spreadsheet, as below, where you associate logical names with web elements.

The object name is then used in the scripts, which prevents script updates whenever a page element changes. This also enables re-using the object name across pages, and the repository serves as a complete data dictionary.
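The object repository idea can be sketched as a lookup from logical names to locators. The names and locators below are hypothetical examples, not the framework's actual spreadsheet format.

```python
# Hypothetical sketch of an object repository: scripts refer to
# logical names; only this mapping changes when page elements change.
OBJECT_REPOSITORY = {
    "add_to_basket_button": ("id", "btnAddBasket"),
    "basket_status": ("xpath", "//div[@class='basket-progress']"),
}

def locate(logical_name: str) -> tuple:
    """Resolve a logical name to its (strategy, locator) pair."""
    try:
        return OBJECT_REPOSITORY[logical_name]
    except KeyError:
        raise KeyError(f"{logical_name!r} missing from object repository")
```

If a developer renames the button's id, only the repository entry changes; every script that says `add_to_basket_button` keeps working unmodified.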

Open Source

Above all, the framework is open source and very lightweight. This allows complete freedom in making changes and customising it for specific needs. I don't have to explain the benefits of open source at length. One downside, however, is that it doesn't have a very active community, probably because of low awareness. But a small team still works on it actively and publishes periodic updates.


It is quite awesome but, due to its age, a bit old school. Nowadays there are other open-source alternatives available, .Net Canopy being one. But its ability to keep test scripts in spreadsheets, and to load & process data from spreadsheets, makes it very useful indeed. Perhaps it will require a bit of modernisation to work with the latest web technologies.


Pros

  • Conceptually strong
  • Open source
  • Others as above (simple, extensible, etc.)


Cons

  • A bit old style (although this could be improved)
  • The open-source code isn't very clean

