Category Archives: Cloud Computing

Blockchain For Dummies


There has been quite a lot of chatter on the net over the last 12 to 18 months about blockchain. In this post I'll attempt to demystify some of the concepts as well as outline some potential applications. Blockchain for Dummies does what it says on the tin: an introductory, lightweight guide that will hopefully whet your appetite for an often misunderstood technology.


What is Blockchain?
Wikipedia’s definition of a blockchain is given as… “A blockchain, originally, block chain is a distributed database that maintains a continuously-growing list of data records secured from tampering and revision. It consists of data structure blocks—which hold exclusively data in initial blockchain implementations and both data and programs in some more recent implementations—with each block holding batches of individual transactions and the results of any blockchain executables. Each block contains a timestamp and a link to a previous block.” [1]

So, what does this mean in 'dummies' terms? Basically (and this is my definition in as much of a nutshell as I can make it): a blockchain is a mechanism that allows businesses separated across a network to instantaneously complete and verify transactions without having to refer to a central overseeing authority. It really doesn't sound like a big deal, but as more and more applications built on blockchain technology emerge, it appears set to revolutionise the internet (again).

Delving into the weeds a little more, so that we can get a grasp of the underlying technology, we can say that a blockchain is a data structure which enables a digital record of commercial accounts (a ledger) to be created and shared across a number of computers, usually located some distance apart and connected via a network. In semi-tech parlance this is known as distributed computing. So, basically, we are saying that a ledger is created and shared simultaneously with a number of partners across a network. This is more or less the basic concept. Remember this is a 'Blockchain for Dummies' guide, and whilst there are various flavours of blockchain implementation, they won't be covered here. One question remains, then, in this brief introduction: if it is such a simple concept, and clearly devoid of rocket science, what exactly is all the noise about?

The key word of course is 'distributed', or to put it another way, decentralised. Looking at the vast majority of technology-enabled businesses these days, the status quo suggests that many platforms rely on databases that are centralised, with a single point of failure. Yes, of course we have many measures in place to prevent the loss or theft of data, but each database, regardless of whether it is an original, a copy, a backup or a cloud-based replica, is in itself a centralised container which is potentially vulnerable to failure, tampering, theft etc.

The big advantage that blockchain technology brings to the table is a means to ensure freedom from third parties and complete control over who has access to our data. The effect of decentralisation is a powerful one: reducing the use of intermediaries in record keeping increases security and control. To extend the blockchain, each block must be signed and verified by multiple verification agents, who must also agree upon the transaction time stamp, which is indelible. The odds stacked against forgery are gargantuan, since the sheer amount of data being processed simultaneously creates an obstacle that is nigh on impossible to overcome.
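The hash-linking and tamper-evidence just described can be sketched in a few lines of Python. This is a toy illustration only: real blockchains add distributed consensus, digital signatures and proof-of-work, none of which is modelled here.

```python
import hashlib
import json
import time

def block_hash(block):
    """Hash the block's contents (excluding its stored hash field)."""
    payload = {k: block[k] for k in ("timestamp", "transactions", "previous_hash")}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(transactions, previous_hash):
    """Create a block holding a batch of transactions, a timestamp and a link back."""
    block = {
        "timestamp": time.time(),        # the time stamp agreed upon at creation
        "transactions": transactions,    # batch of individual transactions
        "previous_hash": previous_hash,  # link to the previous block
    }
    block["hash"] = block_hash(block)
    return block

def chain_is_valid(chain):
    """Recompute each hash and check every block still links to its predecessor."""
    for prev, curr in zip(chain, chain[1:]):
        if curr["previous_hash"] != block_hash(prev):
            return False
    return True

genesis = make_block(["genesis"], "0" * 64)
chain = [genesis, make_block(["Alice pays Bob 5"], genesis["hash"])]
```

Editing any field of an earlier block changes its recomputed hash, so the next block's `previous_hash` no longer matches and the whole chain fails validation, which is the essence of the tamper-resistance discussed above.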

Who Invented it?
From what I can tell, the true inventor(s) of blockchain and its most famous (to date) product, bitcoin, have never been publicly identified. Some say this is because of the far-reaching consequences that blockchain could deliver, which might endanger the life of the inventor(s). It is generally recognised that a person or group of people known as Satoshi Nakamoto were the first to publish a paper describing bitcoin. [2] Whatever the case, most media articles seem to agree that it represents a substantial opportunity to change the way we do business across the internet.

How is it used Today?
Almost every day now, we see newly emerging ideas and applications of the blockchain model. At the time of writing, a very recent article in the media from a government source states – "blockchain technology is going to become more important if the UK is to be fully automated in the future, including delivery fulfillment and increased proliferation of the internet of things". [3]

It's difficult to see just how far-reaching blockchain will be, but for sure it will, at least initially, be inextricably tied up with a number of financial, contractual and payments-related sectors, including the obvious one: currency (bitcoin and others). Blockchain can be used to ensure that data is verifiable. Take a look at 'Proof of Existence' to see how one simple application works, for example.

Onename is a web app built on blockchain that allows a unique and verifiable identity to be registered for purposes such as digitally signing documents, safely and securely signing into websites and apps etc. Here is mine. Others such as real estate are relatively new to blockchain, but will soon leverage some of its unique application selling points such as smart contracts.

The financial sector is the one making the most noise, since it may be set to reap the initial rewards. The Financial Times recently reported – "A group of seven banks including Santander, CIBC and UniCredit is claiming a breakthrough, ranking among the first financial institutions in the world to move real money across borders using blockchain-based technology." [4] Forbes is posing the question "Will Blockchain Become The Internet Of Finance?" [5] and has suggested that as much as $1 billion has already been invested in the technology since its inception.

How will it be used Tomorrow?
Looking to the future, a number of other areas have been identified as possible applications. Indeed, startups have already begun to exploit opportunities in car rental, internet-ready home appliances, cyber-risk reduction, social welfare, stock market prediction, salary administration and others. The CEO and founder of Everledger was quoted in 'Wired' as saying "We can apply this technology to solve very big problems: ivory poaching, blood diamonds, all these big 'blood problems' that are helping cartels, terrorists and criminals". [6] It would be amazing if there really are real-world applications that not just disrupt industries, but change lives at the granular level.

What's clear is that many of the applications are under-developed. Some are just ideas; others have attracted millions in start-up funding. The next few years will really see the technology develop through experimentation. Blockchain is a game changer and it's here to stay. Because of its very nature, 'certainty-as-a-service', it has to be a power for good. How it will affect me personally, I am not yet sure, but if it provides guarantees, increases transparency and evolves security along the way, it's definitely worth investigating further.

Moving to the Cloud – Part 3 of 3

Part 3 – Implementing the Hybrid Cloud for Dev and Test

In Part 2, I presented an overview of the main benefits and drawbacks of using a hybrid cloud infrastructure for Dev and Test environments whilst Part 1 defined my interpretation of a hybrid cloud in modern day parlance. In the third and final part, I will talk about the processes involved when implementing Dev and Test cloud-based environments and how they can be integrated to achieve application release automation through continuous build and testing.


An obvious starting point is the selection of a public cloud provider, and it appears that Amazon is currently winning that race, though Microsoft, HP and Google are in contention, creating a 'big four' up front with a multitude of SME cloud providers bringing up the rear. Before selecting a public cloud vendor there are a number of important aspects (based on your requirements) to consider and decisions to be made around things like value for money, network and/or VM speed (and configuration), datacentre storage etc.

Perhaps a simple pay-as-you-go model will suffice, or alternatively there may be benefits to be had from reserving infrastructure resources up front. Since the public cloud offers scaling, some sort of inherent and easily invoked auto-scaling facility should also be provided, as should the option to deploy a load-balancer, for example. Even if it initially appears that the big players offer all of the services required, the final choice of provider is still not all plain sailing, since other factors can come into play.

For example, whilst Amazon is a clear market leader and an understandable vendor choice, if conforming to technology standards is a requirement this could pose a problem, since large vendors can and do impose their own standards. On top of that, SLAs can be unnecessarily complicated, difficult to interpret and unwieldy. Not surprisingly, to counter the trend of large consortium vendors, there has been substantial growth in open source cloud environments such as OpenStack, CloudStack and Eucalyptus. OpenStack, for example, describes itself as "a global collaboration of developers and cloud computing technologists producing the ubiquitous open source cloud computing platform for public and private clouds" [1].

By its very nature, IaaS implies that many VMs exist in a networked vLAN and that there is an innate ability to share and clone VM configurations very quickly. This implies the need for some sort of API which supports creating VMs and sharing them (as whole environments) via REST-based web services. This point retraces its way back to my remark in Part 2, where I mentioned that new infrastructures should be built with automation in mind. This approach would utilise the customisable APIs that vendors generally provide and would normally support automatic provisioning, source control, archive and audit operations.
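To make the idea concrete, a provisioning call to such a REST API might look like the sketch below. The endpoint, payload fields and auth scheme are all hypothetical; every vendor (AWS, OpenStack, etc.) defines its own, so treat this as the shape of the interaction rather than a real specification.

```python
import json
import urllib.request

# Hypothetical IaaS endpoint -- substitute your vendor's real API base URL.
API_BASE = "https://cloud.example.com/v1"

def build_vm_request(name, image, flavour, count=1):
    """Build the JSON body for a 'create VMs' call to a REST API."""
    return {
        "name": name,
        "image": image,      # base OS image to clone
        "flavour": flavour,  # CPU/RAM sizing profile
        "count": count,      # clone the same configuration N times
    }

def provision(vm_request, token):
    """POST the request; one call can stand up a whole environment."""
    req = urllib.request.Request(
        f"{API_BASE}/servers",
        data=json.dumps(vm_request).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
        method="POST",
    )
    return urllib.request.urlopen(req)  # returns the vendor's response

# Four identical web VMs for a throwaway test environment
payload = build_vm_request("test-env-web", image="ubuntu-14.04",
                           flavour="m1.small", count=4)
```

Because the request is just data, the same payload can be kept under source control and replayed, which is exactly what makes automatic provisioning, archive and audit operations practical.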

Having settled upon a public cloud provider, the private cloud is likely to be created using whatever means are available; Windows or Ubuntu Server, for example, could serve as a basis for creating the infrastructure, though other tools such as VirtualBox or VMware may be required. In an ideal world the technology stack in the private cloud should be the same as that in the public cloud, so examining the in-house technology stack could shape the decision about the choice of public vendor.

'Integrate at least daily' has become one of the mantras of the proponents of new agile methodologies, and like cloud vendors there is a wealth of continuous integration and delivery (CI/CD) tools on the market. It isn't easy to choose between them, and whilst some general considerations should be taken into account, the online advice seems to be to 'dive in' and see what works and what doesn't.

A lot of the tools are free, so the main cost is the time taken for setup and benefit realisation; however, the advantages of any CI/CD system that works properly will almost always outweigh the drawbacks, whatever the technology. Jenkins and Hudson appear to be market leaders, but there are a number of others to consider, and quite often they will include additional components to configure for continuous delivery.

Test automation is clearly fundamental to moving to a CI/CD approach and is key to accelerating software quality. Assuming that development is test-driven, enterprises implementing the hybrid cloud architecture can expect to produce higher quality software faster by eliminating traditional barriers between QA, developers, and ops personnel. In instances where there is substantial code development, several test environments may be required in order to profit from the expandable nature of the public cloud by running several regression test suites in parallel.
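The parallel regression runs mentioned above can be sketched with a simple worker pool. The suite names are invented for the example; in practice each entry would trigger a real run (a Jenkins job, a Selenium grid session, etc.) against its own freshly provisioned public-cloud test environment.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical suite names for illustration only.
SUITES = ["smoke", "regression-core", "regression-ui", "regression-api"]

def run_suite(name):
    """Stand-in for 'spin up an environment, run one suite, tear it down'."""
    # ... provision environment, execute tests, destroy environment ...
    return (name, "passed")

# Because environments scale out in the public cloud, the suites run
# side by side instead of queuing for one shared on-premise test rig.
with ThreadPoolExecutor(max_workers=len(SUITES)) as pool:
    results = dict(pool.map(run_suite, SUITES))
```

The elapsed time then approaches that of the slowest suite rather than the sum of all of them, which is where the expandable nature of the public cloud pays off.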

Again, there is a large number of tools (or frameworks) for test automation available on the market. Selenium WebDriver, Watir and TFS (coded UI tests) are three of the more widely used. For testing APIs there are SoapUI and WebAPI, and for load testing, JMeter. The frameworks and associated tools selected will likely complement available team skills and the current technology stack. Whatever the choice, there is still the significant challenge of integrating and automating tools and frameworks effectively before the benefits of automation can be properly realised.

As well as a fairly large set of development, source control, build, release and test automation tools, a typical agile team will also require some sort of project management tool, which should ideally provide a way to track and monitor defects as well as plan and control sprints during the lifecycle of the application. Tools such as Rally or Jira are suitable for this and offer varying levels of complexity based on project requirements and available budget.

Clearly, there is a lot to consider when making the move to cloud development, and this is likely to be one of the reasons why more businesses have not embraced cloud technologies for anything other than storage. My advice would be to think big but start small, taking it one step at a time; understanding and integrating each new element of technology along the way is key to the final setup. Ultimately, the end goal should be well worth it and may shape your business for years to come. The cloud technology curve is here and here to stay; the question is, are you on it?

Moving to the Cloud – Part 2 of 3

Part Two – Hybrid Cloud Benefits

In Part 1, I presented a brief definition of the hybrid cloud and hinted at why it could be a useful instrument for enterprises wishing to move their agile Dev and Test environments to a public cloud but still retain their Prod systems in a local, private cloud. In Part 2, I will consider a number of key areas where substantial benefit can be leveraged using current cloud technologies and why this should be considered a serious move towards a more efficient and secure development strategy. That said, like any IT initiative, cloud computing is not without risks, and they too will be considered, leaving the reader to weigh up the options.


It is useful to bear in mind from Part 1 that we are primarily considering cloud providers that offer IaaS solutions; consequently entire environments can be provisioned and tested (via automation) in minutes rather than hours or days, and that in itself is a massive boon. This concept alludes to the 'end goal' of this type of cloud-based setup, i.e. the design of infrastructures with automation in mind, not just the introduction of automation techniques to current processes, but that's a topic for another discussion.

There are obvious economic benefits to be had from using public clouds, since Dev, and especially Test, environments in the cloud do not necessarily need to be provisioned and available 24/7 as they normally are on-premise. From a testing point of view, many enterprises have a monthly release cycle, for example, where the Test environment is in much greater demand than at other times of the month. In this case it is possible to envisage a scenario where the Test environment is only instantiated when required and lies dormant at other times.
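A back-of-the-envelope calculation shows the scale of the saving. The rates and hours below are invented for the example, not vendor prices; plug in your own provider's figures.

```python
# Illustrative assumptions only -- not real vendor pricing.
HOURLY_RATE = 0.10       # cost per VM-hour for a test-sized instance
VMS_IN_TEST_ENV = 6      # VMs making up the whole Test environment

always_on_hours = 24 * 30    # provisioned 24/7 for a month, on-premise style
on_demand_hours = 10 * 8     # ten working days around the monthly release

always_on_cost = HOURLY_RATE * VMS_IN_TEST_ENV * always_on_hours
on_demand_cost = HOURLY_RATE * VMS_IN_TEST_ENV * on_demand_hours

# Fraction of the monthly bill avoided by letting the environment lie dormant
saving = 1 - on_demand_cost / always_on_cost
```

With these assumed figures the environment runs for 80 of 720 hours in the month, so roughly nine-tenths of the always-on cost is avoided, which is the economic argument in a nutshell.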

The phrase 'business agility' has been applied to the way that a hybrid cloud can offer the controls of a private cloud whilst at the same time providing scalability via the public cloud, and this is also a prime benefit. A relatively new term in this arena is 'cloud bursting'. Offered by public clouds, this refers to short but highly intensive peaks of activity that are representative of cyclical trends in businesses that see periodic rises and falls in demand for their services. For those businesses that anticipate this type and intensity of activity, this kind of service can be invaluable.

For the troops on the ground, an HP white paper describes clear benefits to developers and testers; “Cloud models are well suited to addressing developer and tester requirements today. They allow for quick and inexpensive stand-up and teardown of complex development and testing environments. They put hardware resources to work for the development and testing phases that can be repurposed once a project is complete”. [1]

Once properly provisioned and integrated, cloud infrastructures will usually offer faster time-to-market and increased productivity through continuous delivery and test automation, however these particular benefits may take a little time to manifest themselves since implementing full-scale Dev and Test environments with associated IDE and build integrations, and an automated test facility, is a relatively complex exercise requiring a range of skills from code development to domain admin, to QA and release automation.

Clearly, to achieve and deliver this kind of flexibility a substantial tool set is required. Additionally, developers need to work harmoniously with operations (admin) in a partnership that has become known as DevOps, and this is what I meant in Part 1 by stating that a new mindset is required. The ultimate goal of adopting cloud-based Dev and Test environments is continuous delivery through application release automation. This kind of agile approach is seen as a pipe dream by many enterprises, and I believe the current perception is that too many barriers, both physical and cerebral, exist to adopting the hybrid cloud model for effective product delivery.

These barriers include the obvious candidates, such as security and privacy in the cloud leading to a potential increase in vulnerability. This can be addressed by commissioning a private cloud for Prod systems and ensuring that any data and code in public clouds is not confidential and does not compromise the company in any way. Another drawback that is often raised is vendor 'lock-in', and this simply relates to the terms and conditions of the cloud provider. With so many companies now offering cloud services, I personally think that 'shopping around' can mitigate this risk completely and can actually be seen as a positive factor instead. Switching between cloud providers is becoming less and less of a problem, and this in turn offers a competitive advantage to the cloud consumer as they move their business to take advantage of lower costs.

I do accept that technical difficulties and associated downtime could form a barrier, but this can be said of any new, large tech venture; a large tool set is required, and there will certainly be a lead time for the newly created DevOps team to get up to speed with continuous integration, test and release automation. Since applications are running in remote VMs (public cloud), there is an argument that businesses have less control over their environments. This may be true in some cases, but again proper research should lead to a partnership where effective control can be established by the cloud consumer using appropriate tools that effectively leverage what the vendor has on offer.

I would like to think that in Part 2 of this three-part blog article I have managed to convey that in most cases the benefits of migrating Dev and Test to the cloud outweigh the drawbacks. In Part 3, I will look at how Dev and Test could be implemented at a fairly high level. There is a plethora of tools to choose from: free, open source, bespoke, bleeding edge; whatever route you choose, there is almost certainly a tool for the purpose. Integrating them could prove challenging, but that's part of the fun, right?

Moving to the Cloud – Part 1 of 3

Part One – Defining the Hybrid Cloud

Earlier this year when I blogged 'ten trends to influence IT in the next five years', one of the trends I mentioned has been written about on quite a few occasions in the last few months in various web articles and white papers. That particular trend is the use of the 'Hybrid Cloud', and it seems to be increasingly catching the attention of the tech evangelists, who are keen to spread the word and radicalise the non-believers, as I discovered in a recent CloudShare webinar.


A little more research on the topic led me to discover that there is a certain reluctance to adopt cloud (development) initiatives in general. Like most people, I had just assumed that cloud-based architectures were creating a new technology storm and that in a few years almost everything would be built, developed, hosted and run in a multitude of geographical locations by thousands of VM instances, created and destroyed in seconds. It appears this may not be the case, and I find that seven months later (that's a long time in IT) the transition has simply not happened, or to be more precise, not at the 'rate' expected by cloud aficionados, who have been talking about a grandiose move of technology in that direction for the last few years.

My gut feeling is that, in general, cloud computing is still not a well understood concept in the tech community, at least from a 'development tool' point of view, and this has logically hindered the move away from traditional development to cloud-centric infrastructures and environments. I have spent some time reading about the pros and cons of moving development to a cloud-based solution, and whilst I am an avid supporter of the concept, the best approach for an organisation isn't always straightforward and will almost certainly involve one of the toughest challenges that can be faced in IT: a cultural change in the workplace and a paradigm shift in the mindset of the developers.

To make a move to the cloud for development and test purposes, people have to think about doing things in a different way. There are other potential barriers, but this is likely to be the one that poses the greatest threat to starting out on the road to eventual development, deployment and testing in the cloud. Gartner defines a hybrid cloud service as a cloud computing service that is composed of some combination of private, public and community cloud services, from different service providers [1]. Whilst many of the large public cloud service providers also provide a private cloud facility, I expect that many organisations still prefer to provide their own private cloud implementation, since this appears to give a higher degree of security, or at the very least facilitates the storage of data in a local datacentre.

There are quite a few benefits of a hybrid cloud, but the obvious one is that it enables the owner to take advantage of the larger resources that a public cloud might offer, while still storing and maintaining private data in a manner where it should be safe from malicious attack and/or theft. Of course, there are some organisations whose entire business could exist in a public cloud, but based on my experience this is still not a concept that businesses are truly happy with, and certainly within a lot of enterprise or government organisations there is a preference to at least have the production system hosted in a private cloud.

In summary, my concept of a hybrid cloud is one where an organisation has developed its own private cloud for its production (Prod) system and is happy to use the services of a public cloud to develop, host and run its development (Dev) and test (Test) environments. Really, what I am talking about here is moving each of these infrastructures to a cloud environment, and that will form the basis of Part 3 of this blog. Part 2, coming up next, will further elaborate on the widely accepted benefits and introduce some of the negative aspects perceived with cloud computing.

Doing it the SaaS Way – Coupa

SaaS- Old Dogs and New Tricks

They say you can't teach an old dog new tricks, but I'd beg to differ. I have been involved with IT in one way or another pretty much all of my working life, from initially learning Fortran during my first degree to utilising that skill during my career in research: building, maintaining and modifying helicopter and tilt-rotor aircraft mathematical models. As a Software/Aerospace Engineer I progressed to VB6 and VB.NET, modelling airborne missiles and their flyout trajectories, before moving to consultancy, the betting industry, more .NET, and agile methodologies and principles. I would consider myself a bit of a technology evangelist and lover of gadgets, so when the opportunity arose to get involved with a new area of our business and its associated tech, I was happy to say yes. The business is eProcurement and the technology is Coupa, as briefly mentioned in my previous post.

So what is it, and what does it do? Well, in a nutshell, it is a spend management tool that "manages indirect purchases and expenses in real-time" [1]. Coupa is a multi-tenant, Software as a Service (SaaS) tool and therefore comes with all of the benefits that this architecture has to offer, such as:
1. Comparatively fast implementation – Coupa requires only configuration, no development is required.
2. Increased collaboration – all relevant departments within an organisation using Coupa have access to the same data.
3. Cost – fairly obvious one, but important. No need for hardware, licensed development software and developers etc., just a Coupa subscription fee.
4. Potential for increased agility – Coupa enables a company to have the information it needs to respond more quickly to a changing market situation.
5. Reporting – Coupa has the ability to instantly report across all areas of the procurement business as well as permit a company’s own metrics to be compared to the world-wide average.
6. Immediate upgrades and updates – Companies benefit from bug-fixes and new features straight away without having to wait for a new install from the IT department.  Coupa provides four upgrades per year and users have to subscribe to at least every other one.
7. Security – SaaS is renowned for being extremely secure and Coupa is no different.
8. Low maintenance – There is little or no maintenance required for a SaaS solution, and again this applies to Coupa. There will be support tasks to ensure that all relevant base data is kept up to date, but that is outside of the platform itself.

It's easy to see that SaaS offers many benefits for the 'right' sort of business, and during my first experience assisting with a Coupa implementation that was certainly the case. Scalability is another of the big benefits that a lot of companies may not see straight away, but if necessary Coupa has the ability to scale up and down with the business's needs and requirements, which in turn respond to market trends. Coupa has a number of different packages that can be purchased on top of the vanilla platform, including eProcurement (eProc), Invoicing, Contracts, Expenses and Spend Optimizer, so it offers plenty in the way of options, and with the new Release 8 just out the door, a whole new set of features is ready for the end user.

The basic process of enabling Coupa is relatively straightforward and involves capturing information essential to configuring the system, and then uploading other relevant data via CSV files. More complex situations arise out of identifying and integrating other related systems to ensure that the purchase-to-pay (P2P) process remains intact within the business. For those of you relatively new to the P2P arena, the ever-faithful Wikipedia provides the following short description: "Purchase-to-pay systems automate the full purchase-to-payment process, connecting procurement and invoicing operations through an intertwined business flow that automates the process from identification of a need, planning and budgeting, through to procurement and payment". [2]
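Preparing that CSV data is mundane but easy to script. The supplier records and column headers below are hypothetical; Coupa's real flat-file loaders define their own templates, so check the documentation for the release you are implementing before building files like this.

```python
import csv

# Hypothetical supplier records and column names -- illustrative only.
suppliers = [
    {"Supplier Name": "Acme Ltd", "Status": "active", "Payment Terms": "Net 30"},
    {"Supplier Name": "Globex",   "Status": "active", "Payment Terms": "Net 60"},
]

# Write a header row plus one row per supplier, ready for upload.
with open("suppliers.csv", "w", newline="") as f:
    writer = csv.DictWriter(
        f, fieldnames=["Supplier Name", "Status", "Payment Terms"])
    writer.writeheader()
    writer.writerows(suppliers)
```

Generating the files from the source system rather than by hand keeps the base data repeatable and auditable, which matters once the same upload has to be rerun during later iterations of the enablement.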

One of the cool aspects of the implementation process itself is that it is run in an agile manner, so the company being 'enabled' is involved from the outset, taking part in workshops and 'train-the-trainer' sessions, with each iteration enabling a new part of the business and/or refining those parts already using the tool. I found this approach particularly refreshing, and after a few one-day workshops in which key business decisions and configuration options were decided upon, the remaining part of the implementation focused on working alongside the business, on-site or remotely, to ensure that the system administrators understood the system and were competent in configuring it going forward.

So, what can I say? I am generally a big fan of the current SaaS model and the way it can effectively deliver tools to manage a wide variety of business needs and requirements. It's not actually a new concept, and the origins of 'hosted solutions' go back to the sixties, so it's not surprising to see that the UK's most popular SaaS providers are growing in popularity.

Windows Azure – First High-level Taster


With all the hype about Windows Azure, and having Visual Studio 2010 at my disposal, I decided to exercise a little of my inquisitive nature and work through a demo project in some of my spare time. I was curious to see how it all fitted together and what new project options might be available in Visual Studio. If you're thinking of taking your very first step in the Azure world, this post could be for you, but you will need some knowledge of Visual Studio, since I do not explain many of the basic steps, such as creating projects within a Visual Studio solution.

To get started with Azure, the first thing I did was sign up for an account; whilst this required a credit card, there were no initial charges and I was able to create and deploy an application for free for a limited period. The free trial package contained 750 small compute hours per month, a 1GB Web Edition SQL Azure relational database, 20GB of storage with 1,000,000 storage transactions, a 20GB limit on outbound bandwidth and unlimited inbound traffic. This seemed like a reasonable offer for getting started and having at least some time to examine the features. I did notice that after one month Microsoft asked me to upgrade the account if I wished to continue using my demo application, offering a number of pre-paid or pay-as-you-go options, although the free period actually runs for 90 days. Since my application was demo only, I declined; however, I would seriously consider it if I had a purposeful application to deploy.

Azure Management Portal


Before following an example Azure project in Visual Studio, I had to install the Windows Azure SDK for .NET, which included tools for Visual Studio and some client libraries for .NET. There were no problems with the executable download and subsequent installation. It's worth noting that an installation of SQL Server (or Express) is also useful for working with your databases, although not (to my knowledge) strictly necessary, since Azure provides the capability to create, manage and delete databases in the cloud via the Azure Management Portal and to interact with those databases via the SQL Azure Management Portal.

Azure Project in Visual Studio Solution Explorer


The next section of the exercise involved creating a SQL Azure database, i.e. a cloud-hosted database, using the Azure Management Portal. This was very simple to do; the portal itself was easy to navigate and provided options for creating hosted services, storage accounts, database servers and networks. I found that configuring the required options by following the demo project instructions was a cinch, and it was satisfying to see that it actually worked. Once the database was created, I logged into the SQL Azure Management Portal, which is specifically used to work with the databases created via the Azure Management Portal. Through a combination of the SQL Azure Management Portal and my local installation of SQL Server 2008 Express I was able to create and deploy my first cloud-based database, containing a simple list of fake employee details, in minutes.

From the Visual Studio side, I selected the Windows Azure Project from the Cloud template and then selected the ASP.NET Web Role, adding that to the solution. This generated a default ASP.NET project and a default Azure project in the solution. After that it was a simple matter of connecting to the database already in the cloud and adding a GridView control to the ASP.NET page. My SQL Azure database (in the cloud) was then defined as the data source object for the GridView control. Again following the demo, I entered a simple SQL statement to select the first and last names from the database when configuring the data source object, and tested this to ensure it returned the correct values. Anyone familiar with Visual Studio and the GridView control will know that this is 'meat and potatoes' stuff.

The GridView and Attached SQL Azure Database Object


The penultimate section in the Azure example I followed gave an overview of how to connect to the SQL Azure database via a number of different methods (ADO.NET, ODBC, OLE DB and LINQ), as well as other technologies such as Java and PHP. The point of this exercise was to illustrate that connecting to a SQL Azure database is largely the same as connecting to a standard SQL Server database, with the exception that the connection type and connection strings are different. Microsoft did seem to be pushing the idea with this section that it is a fairly trivial matter to connect to a SQL Azure database and that it is open to a variety of well-known technologies.
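To illustrate just how small the difference is, compare the two connection strings below. The server, database and user names are placeholders, and the exact keywords vary slightly by provider (ADO.NET shown here), so treat this as a sketch rather than a copy-paste value.

```text
# Local SQL Server (on-premise)
Server=localhost\SQLEXPRESS;Database=MyDb;Integrated Security=True;

# SQL Azure -- same shape, but a fully qualified TCP endpoint,
# SQL authentication and encryption are used instead
Server=tcp:myserver.database.windows.net,1433;Database=MyDb;
User ID=myuser@myserver;Password=...;Encrypt=True;
```

Everything else in the data-access code (commands, readers, LINQ queries) stays the same, which is exactly the point the exercise was making.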

When it comes to publishing an Azure application to the cloud using Visual Studio, there are a number of different options, but one of the easiest has to be the 'Publish' option, available by right-clicking on the Azure project and selecting Publish. This does take several minutes, but when it completes, a link to the website is available and the site is effectively ready for use, provided the user has selected that it is a 'Production' deployment (there is a 'Staging' option as well).

So there we have it, a short, sweet and very high level overview of my first experience with Azure and I have to say, it was quite pleasant.