
Moving to the Cloud – Part 3 of 3

Part 3 – Implementing the Hybrid Cloud for Dev and Test

In Part 2, I presented an overview of the main benefits and drawbacks of using a hybrid cloud infrastructure for Dev and Test environments, whilst Part 1 defined my interpretation of a hybrid cloud in modern-day parlance. In this third and final part, I will talk about the processes involved in implementing Dev and Test cloud-based environments and how they can be integrated to achieve application release automation through continuous build and testing.


An obvious starting point is the selection of a public cloud provider, and it appears that Amazon is currently winning that race, though Microsoft, HP and Google are in contention, creating the ‘big four’ up front, with a multitude of SME cloud providers bringing up the rear. Before selecting a public cloud vendor there are a number of important aspects (based on your requirements) to consider and decisions to be made around value for money, network and/or VM speed (and configuration), datacentre storage and so on.

Perhaps a simple pay-as-you-go model will suffice, or alternatively there may be benefits to be had from reserving infrastructure resources up front. Since the public cloud offers scaling, some sort of inherent and easily invoked auto-scaling facility should also be provided, as should the option to deploy a load-balancer, for example. Even if it initially appears that the big players offer all of the services required, the final choice of provider is still not all plain sailing, since other factors can come into play.
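To make the auto-scaling idea concrete, here is a minimal sketch of the threshold-based logic such a facility typically implements. All names and thresholds are illustrative; a real provider exposes this as a configurable policy rather than code you write yourself.

```python
# Toy illustration of a threshold-based auto-scaling policy.
# Function name and thresholds are hypothetical, for illustration only.

def desired_instance_count(current: int, cpu_percent: float,
                           scale_up_at: float = 75.0,
                           scale_down_at: float = 25.0,
                           minimum: int = 1, maximum: int = 10) -> int:
    """Return the new instance count for a simple scale-up/scale-down policy."""
    if cpu_percent > scale_up_at:
        return min(current + 1, maximum)   # add capacity, within limits
    if cpu_percent < scale_down_at:
        return max(current - 1, minimum)   # shed capacity, keep a floor
    return current                         # load is in the comfort zone
```

A real policy would also include cooldown periods and step sizes, but the decision at its core is this simple comparison.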

For example, whilst Amazon is the clear market leader and an understandable vendor choice, if conforming to technology standards is a requirement this could pose a problem, since large vendors can and do impose their own standards. On top of that, SLAs can be unnecessarily complicated, difficult to interpret and unwieldy. Not surprisingly, to counter the trend of large consortium vendors, there has been substantial growth in open source cloud environments such as OpenStack, CloudStack and Eucalyptus. OpenStack, for example, describes itself as “a global collaboration of developers and cloud computing technologists producing the ubiquitous open source cloud computing platform for public and private clouds” [1].

By its very nature, IaaS implies that many VMs exist in a networked VLAN and that there is an innate ability to share and clone VM configurations very quickly. This implies a need for some sort of API which supports creating VMs and sharing them (as whole environments) via REST-based web services. This point retraces its way back to my remark in Part 2, where I mentioned that new infrastructures should be built with automation in mind. This approach would utilise the customisable APIs that vendors generally provide and would normally support automatic provisioning, source control, archive and audit operations.
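As a sketch of what scripting against such a REST API looks like, the snippet below assembles and posts a ‘create VM’ request. The endpoint, field names and token handling are entirely hypothetical; every real vendor API differs, but the shape of the interaction is much the same.

```python
import json
from urllib import request

# Hypothetical provisioning endpoint -- illustrative only.
API_BASE = "https://cloud.example.com/api/v1"

def build_vm_request(name: str, image: str, cpus: int, ram_gb: int) -> dict:
    """Assemble the JSON body for a 'create VM' call (field names invented)."""
    return {"name": name,
            "image": image,
            "flavor": {"cpus": cpus, "ram_gb": ram_gb}}

def provision_vm(payload: dict, token: str):
    """POST the request to the (hypothetical) provisioning endpoint."""
    req = request.Request(
        f"{API_BASE}/vms",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
        method="POST")
    return request.urlopen(req)  # returns the HTTP response object
```

The value of this style is that the same script that provisions one Dev VM can provision, clone or tear down a whole environment, and can itself live in source control alongside the application.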

Having settled upon a public cloud provider, the private cloud is likely to be created using whatever means are available; Windows or Ubuntu Server, for example, could serve as a basis for creating the infrastructure, though other tools such as VirtualBox or VMware may be required. In an ideal world the technology stack in the private cloud should be the same as that in the public cloud, so examining the in-house technology stack could shape the decision about the choice of public vendor.

‘Integrate at least daily’ has become one of the mantras of the proponents of new agile methodologies, and, as with cloud vendors, there is a wealth of continuous integration and delivery (CI/CD) tools on the market. It isn’t easy to choose between them, and whilst some general considerations should be taken into account, the online advice seems to be to ‘dive in’, see what works and what doesn’t.

A lot of the tools are free, so the main cost is the time taken for setup and benefit realisation; however, the advantages of any CI/CD system that works properly will almost always outweigh the drawbacks, whatever the technology. Jenkins and Hudson appear to be market leaders, but there are a number of others to consider, and quite often they will include additional components to configure for continuous delivery.
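Stripped of triggers, agents and dashboards, what a CI server does on every commit can be sketched in a few lines: run each pipeline stage in order and stop at the first failure. This is a schematic stand-in, not how Jenkins or Hudson are implemented.

```python
import subprocess

def run_pipeline(stages):
    """Run (name, shell_command) stages in order; stop at first failure.

    Returns (ok, log) where log is a list of (stage_name, return_code).
    """
    log = []
    for name, cmd in stages:
        result = subprocess.run(cmd, shell=True,
                                capture_output=True, text=True)
        log.append((name, result.returncode))
        if result.returncode != 0:
            return False, log   # fail fast, as CI servers do
    return True, log

if __name__ == "__main__":
    ok, log = run_pipeline([
        ("build", "echo compiling"),
        ("test", "echo running tests"),
    ])
    print(ok, log)
```

Everything a full CI/CD product adds — commit triggers, build history, artefact storage, deployment gates — is layered on top of this basic loop.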

Test automation is clearly fundamental to moving to a CI/CD approach and is key to accelerating software quality. Assuming that development is test-driven, enterprises implementing the hybrid cloud architecture can expect to produce higher quality software faster by eliminating traditional barriers between QA, developers, and ops personnel. In instances where there is substantial code development, several test environments may be required in order to profit from the expandable nature of the public cloud by running several regression test suites in parallel.
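Fanning regression suites out across several cloud test environments at once can be sketched like this. Each “suite” here is a placeholder function; in practice each worker would target a freshly provisioned environment in the public cloud.

```python
from concurrent.futures import ThreadPoolExecutor

def run_suite(suite_name):
    """Placeholder for 'run this suite against its own environment'."""
    return suite_name, "passed"

def run_in_parallel(suites, workers=4):
    """Dispatch each regression suite to its own worker and collect results."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(run_suite, suites))
```

Because the public cloud lets you provision as many identical environments as you need, the wall-clock time for regression testing approaches the duration of the slowest single suite rather than the sum of them all.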

Again, there is a large number of test automation tools (or frameworks) available on the market. Selenium WebDriver, Watir and TFS (coded UI tests) are three of the more widely used. For testing APIs there are SoapUI and WebAPI, and for load testing, JMeter. The frameworks and associated tools selected will likely complement available team skills and the current technology stack. Whatever the choice, there is still the significant challenge of integrating and automating tools and frameworks effectively before the benefits of automation will be properly realised.

As well as a fairly large set of development, source control, build, release and test automation tools, a typical agile team will also require some sort of project management tool, which should ideally have a method to track and monitor defects as well as plan and control sprints during the lifecycle of the application. Tools such as Rally or Jira are suitable for this and offer varying levels of complexity based on project requirements and available budget.

Clearly, there is a lot to consider when making the move to cloud development, and this is likely to be one of the reasons why more businesses have not embraced cloud technologies for anything other than storage. My advice would be to think big but start small, taking it one step at a time; understanding and integrating each new element of technology along the way is key to the final setup. Ultimately, the end goal should be well worth it and may shape your business for years to come. The cloud technology curve is here and here to stay; the question is, are you on it?

Gartner Webinars – Ten Trends and Technologies to Impact IT Over the Next Five Years


The web is full of great ways to learn. All the information you could ever need is out there and it is continually accumulating. In its current state, with around 1.5 billion web pages, it would take many lifetimes to read all of what has been posted online to date. It’s an incredible wealth of data, and sifting out what is useful is becoming increasingly taxing on one’s filtering skills. So how best to gather relevant, useful and interesting information? Well, one way I have found particularly useful to grab snippets of information, distilled and presented by people who should know what they are talking about, is to register for and watch webinars. Webinars are essentially ‘seminars on the web’: presentations, if you like, given by ‘experts’ in the field to an audience of listeners who can ask questions and interact in the usual way. In theory this should be time better spent than trawling the web attempting to collate the same information, some of which may be incorrect or outdated.

Recently, I logged into Gartner [1] and watched a webinar about anticipated trends that would change technology over the next five years. Gartner described the webinar in the following summary paragraph; “Strategic planners have long realized that efficient planning must be accomplished by looking from the outside in. Internal trends, market trends and societal trends are rapidly converging, and many of these will have dramatic effects on infrastructure and operations planning. This presentation will highlight the most crucial trends to watch over the next five years.”

The pace of change of technology never ceases to amaze me. In the mobile device era, for example, it is customer demand that is driving a lot of that change, and this demand has inevitably made its presence felt in the workplace. However, the method by which IT (in general) moves forward in time isn’t just about technology; it’s also about market forces, social trends and even climate change. There are many factors to consider, and from the bottom up people should continually look for ways to broaden their understanding of the multitude of influencing factors. It has been shown that the most desirable and useful IT staff have a broad-ranging skill set, and whilst they may have cut their teeth in development, database management or networking, having the ability to look across verticals, organise people and ultimately know where to look for problems is potentially more important to a business. In doing so one must also consider the future: the technologies, the demands and the trends. Where are they likely to come from, and how can you, as a business, best position yourself to reap maximum reward? Here, I put forward my spin on David Cappuccio’s excellent webinar and present my thoughts in response to the topics discussed on the day.

1. Organisational Entrenchment and Disruptions
This is clearly a two-point problem. On the one hand, this is about an organisation’s ability to respond positively to disruptive technology and use it to good effect. Of course, this also means a certain amount of risk-taking, perhaps going out on a limb to embrace new tech, train staff and develop new business with interested customers. It’s a big ask, but one I feel is worth it, since the alternative is not pretty, i.e. to remain rooted in old technology, potentially lose custom and nourish a culture of nonchalance in the workplace. Cultural changes are definitely required for success in a world where technology is the product; however, things can go wrong and move backwards. For example, Cappuccio noted that “By 2014, 30% of organizations using SaaS Operations Management tools will switch to OnPremise due to poor service levels.” And this is predicted at a time when we really should be seeing continual growth in this area.

2. Software Networks
The first technology point in the series concerns SDNs, or software-defined networks, which abstract away elements of the network. This means entire networks can be built on the fly without having to provision them manually, node by node. Parameters for monitoring and controlling information flow can be set via a centrally located software program, and there are a number of advantages to having the control logic removed from the actual network. Another example of technology driven by customer demand, the SDN offers less time to provision, better up-time performance, infrastructure savings and so on, so it is definitely one to look out for in the near future.
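The core idea — control logic held centrally rather than in each switch — can be illustrated schematically. This toy is purely for intuition; real SDN controllers speak protocols such as OpenFlow to the switches they manage.

```python
# Toy illustration of the SDN principle: the forwarding decisions
# (which port a destination maps to) live in one central table,
# not inside each network device. Entirely schematic.

class Controller:
    def __init__(self):
        self.flow_table = {}            # destination address -> output port

    def install_flow(self, dst, port):
        """Provision a path in software, no node-by-node configuration."""
        self.flow_table[dst] = port

    def forward(self, dst):
        """Look up where traffic for dst should go; unknown traffic is dropped."""
        return self.flow_table.get(dst, "drop")
```

Changing the network then becomes a matter of updating one table in one program, which is exactly why provisioning time drops so dramatically.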

3. Bigger Data and Storage
Big data has been around for a while, but what does this really mean for us? Well, from the perspective of a business, data continues to grow regardless of budget and is effectively never-ending. From a user perspective, as more people move to the internet and mobile device usage, the increase in demand will in turn generate an increase in data. What does all this mean? The answer is big data, i.e. data “so large and complex that it becomes difficult to process using on-hand database management tools or traditional data processing applications” [http://en.wikipedia.org/wiki/Big_data]. It’s pretty obvious that this brings its own problems: auditing, back-up and of course analysis. Big data is not industry specific and spans many verticals including defence, academia, banking and other private sector industries. Big data will change how data is managed and stored, but it should also offer up many advantages. Bigger is better, right?
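One standard tactic when a dataset outgrows the tools (or memory) to hand is to stream it in chunks and keep only running aggregates. A minimal sketch, with a generator standing in for a data source far too large to load at once:

```python
# Chunked streaming aggregation: never hold the whole dataset in memory.

def running_mean(stream, chunk_size=1000):
    """Consume an iterable of numbers chunk by chunk; return the mean."""
    total, count = 0.0, 0
    chunk = []
    for value in stream:
        chunk.append(value)
        if len(chunk) == chunk_size:
            total += sum(chunk)    # fold the chunk into the running totals
            count += len(chunk)
            chunk = []
    total += sum(chunk)            # don't forget the final partial chunk
    count += len(chunk)
    return total / count if count else 0.0
```

Frameworks built for big data apply the same principle at scale, partitioning the work across many machines instead of many chunks.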

4. Hybrid Cloud Services
It is anticipated that private clouds will dominate in the next five years, but there will still be a requirement for public clouds, and this combination of private, public (and/or community) cloud-based services from vendors, each tailored to individual organisations, is known as the ‘hybrid cloud’. The general advantages are pretty much the same as for any cloud offering, but there are some specific ones: the private cloud will be more versatile, responsive and secure. For example, organisations who previously couldn’t leverage cloud services at all due to regulatory or compliance issues should be able to utilise the private cloud and still comply with regulation, whilst at the same time making use of the public cloud for data that is not subject to compliance requirements.
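The placement decision a hybrid setup implies can be sketched as a simple routing rule: workloads touching regulated data stay on the private cloud, everything else may use the public cloud. The tag names below are invented for illustration.

```python
# Hypothetical workload placement rule for a hybrid cloud.
# Tag names are illustrative; real policy engines are far richer.

SENSITIVE_TAGS = {"pii", "financial", "health"}

def placement(workload_tags):
    """Return 'private' or 'public' based on a workload's data tags."""
    if SENSITIVE_TAGS & set(workload_tags):
        return "private"   # regulated data never leaves the private cloud
    return "public"        # everything else can burst to the public cloud
```

However it is expressed, codifying this rule is what lets an organisation satisfy its regulators while still enjoying public-cloud economics for the rest of its estate.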

5. Client & Server Architectures
The development of both client and server architectures will continue, and the variation will be celebrated. It is accepted that one size does not fit all and that there is a need for specialised clients (and servers), and for the OSs that run on them. One approach for servers is to make them more modular, so that individual components can be swapped out for new versions without having to upgrade the whole machine. A driving force will also be environmental considerations; exceptionally low-power machines will be in demand, as will the development of specialist tools to monitor and report on energy usage. With BYOD also coming more and more into play, the client/server partnership has never been more varied, and this should be extremely beneficial to both business and consumer.

6. The Internet of Things
What does this mean? Well, simply, it means that in the future many ‘things’ will be connected to the internet via smart objects, monitoring devices, radio transmission, near-field devices and so on. At the moment, within the sporting community, many athletes regularly collect, monitor and upload data and compare it with other athletes in the same sport, for example. Imagine the same principle applied to numerous other household devices: the fridge that orders food automatically, the heating system that is controlled from a mobile phone, the car that emails you when it is due for a service, and so on. Note the feedback loop to big data and potentially the hybrid cloud; it goes without saying that many of the points in this list are interdependent. This particular point is the one that the consumer will be most aware of, the one that truly disrupts their lives and delivers a society that is ‘always on’.

7. IT/OT and Appliance Madness
This point refers to the sheer multitude of appliances that are currently used in the industry and the trend that has seen that number explode in fairly recent times. From consumer-based PCs, Macs, laptops, tablets and mobile devices, to business-focused backend machines like standard servers and blade servers, the growth has been phenomenal. It also includes devices that can be virtualised using software from the ever-growing number of vendors; essentially, if it can be built, it can be simulated. This growth is set to continue, and it is again driven by consumer demand. It is not without its concerns, however, since it is estimated that “Through 2014, employee-owned devices will be compromised by malware at more than double the rate of corporate-owned devices.” Clearly there are new challenges to be met, but knowing that this explosive trend in appliance diversification is set to continue will no doubt encourage new and innovative ways to offset these problems.

8. Virtual Data Centres
This is really the next logical step in virtualisation and the advantages it offers. With virtualised data centres, workloads could be moved from one site to another, literally anywhere on the globe, in response to demand. Virtual storage is combined with virtual servers and networking to generate an entire data centre that can be accessed through a single portal, and parameters such as capacity and pooling of resources can all be changed in real time. This is a powerful resource and will surely be at the forefront of virtualisation trends in the next few years.

9. Operational Complexity
Points 1 through 8 have all contributed to operational complexity in one way or another, and according to Glass’ Law (applied to IT), “for every 25% increase in functionality in a system there is a 100% increase in the complexity of that system” [http://www.examiner.com/article/breaking-glass-s-law-of-complexity]. I don’t find this statement too surprising, but it does raise a conundrum: just how complex can systems get and still be usable? It’s an interesting point, and one I think could be defended by NASA during their operation of the space shuttles, cited by many as the single most complicated system ever built. Nevertheless, complexity is par for the course during periods of rapid development, and it should be recognised that the IT industry is no exception.
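It is worth seeing how quickly that rule compounds. Taking the quoted law at face value (25% more functionality doubles complexity) and compounding it — an interpretation of mine, not something from the webinar — complexity grows as the functionality ratio raised to the power log(2)/log(1.25), roughly 3.1:

```python
import math

# Glass' Law, compounded: a 1.25x functionality step doubles complexity,
# so an arbitrary functionality ratio r scales complexity by
# r ** (log 2 / log 1.25). This compounding reading is my own.

def complexity_factor(functionality_ratio):
    """Complexity multiplier implied by a given functionality multiplier."""
    return functionality_ratio ** (math.log(2) / math.log(1.25))
```

So merely doubling a system’s functionality (two 25% steps get you to about 1.56x; doubling takes just over three) implies roughly an eightfold increase in complexity, which puts the usability conundrum into sharp relief.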

10. IT Demand
A really simple one to finish with, and I will summarise with Cappuccio’s bulleted list of web stats:
Over 1.5 billion Web pages (and growing)
450,000 iPhone apps
Over 200,000 Android apps
10,500 radio stations
5,500 magazines
Over 300 TV networks

This is a trend that even the most dispassionate of internet futurists couldn’t fail to see, the question is; how do we respond?

Consulting – Build Your Own Bap!


Sometimes there are things in life, work and business that remind me of a funny anecdote I occasionally recount to friends over a beer or two, and it relates to a previous place of employment, its rather antiquated basement, a military-style canteen that opened every morning for breakfast, and the humble bap. You may or may not be familiar with the term ‘bap’; Merriam-Webster’s online dictionary offers up a somewhat short definition: ‘a small bun or roll’. In my world baps are a little more involved than that, and they are definitely not small; especially not breakfast baps or late night, on the way home from a night out, scooby-snack type baps. No. The baps I am talking about are designed to comfortably accommodate a fairly substantial meal whilst at the same time acting as a field-dressing type of absorbent material for the interlaced layers of ketchup, mustard and whatever selection of condiments may have taken your fancy. That is what I call a bap.

The funny part comes (hopefully) when I describe the notice board which was presented each morning in that same basement. It was a chalk board, looking rather resplendent and sitting just beside the breakfast bar. The board was there to helpfully inform hungry breakfasters what was on the menu that particular morning. This in itself was actually a little bit odd, since the breakfast was the same every single morning: a cooked breakfast consisting of the ever faithful bacon, sausage, eggs, mushrooms, toast, beans, hash browns and a military-standard mug o’ tea. Below that, however, and this is where the bap comes in, there was a fairly long list of items from the breakfast menu that could also be obtained ‘in a bap’. The board appeared to be lovingly recreated every morning, and a breakfaster typically had the choice of:

Bacon bap
Bacon and egg bap
Sausage bap
Sausage and egg bap
Bacon, sausage and egg bap
Egg bap
Double egg bap
Bacon and hash brown bap
Hash brown bap
Egg and hash brown bap
And so on…

The absolute killer blow for me was, at the bottom of this list, there was another option mentioned…

“Build your own bap!” … Ha!

Now, having seen the various bap options, combinations and permutations on the board, I was always in fits of laughter when I saw that final option, giving an adventurous breakfaster the ability to create their own, fully customised breakfast bap. Later, when I gave this some consideration and pondered the reasons why this extra option was there, I began to realise what was actually on offer. The simple fact is that even though there were numerous suitable options available, one additional option provided the customer with the ability to create a unique option, a special bap that was particular to each individual. Very cool indeed when you apply the same premise to other scenarios.

Some years later, having worked as a consultant for numerous clients on-premise, with near-source teams, as a developer, as a manager and so on, I am beginning to see the value of extending the bap analogy to my clients. Clients may well know what they want, and a good consultancy may know how to offer it, but truly excellent consultancies offer that something extra, that ‘build your own bap’ option. Consultancies can provide the ingredients (people, skills and technology, for example) and they can even provide the pre-made baps (outsourcing, managed services, procurement, development teams, BI, mobile apps, for example), but ultimately they should also provide the ability for a customer to ‘build their own bap’. So what are we talking about here?

Well, simply, we are talking about consultancies having the ability to release resources as and when they are required, so that clients can pick and choose the resources, skills and technologies they require, when they require them and for the desired length of engagement. We are also talking about the ability to ramp up teams for short or long assignments, and about individual consultants working closely with clients on-premise or remotely. It’s about going the extra mile to provide that something extra for clients and giving them the freedom to choose.

Hopefully you can now see my simple but effective analogy, and that the important point is that any consultancy worth its salt should offer the ‘design your own solution’ as well as the ‘pre-made’ or ‘bundled’ solutions. Clients should have the freedom to choose as they see fit; the consultancy business is, after all, full of choices. Now go, and build your own bap!