Category Archives: Enterprise

Blockchain for Dummies

There has been quite a lot of chatter on the net in the last 12 to 18 months about blockchain. I’ll attempt to demystify some of the concepts in this blog as well as outline some potential applications. Blockchain for Dummies does what it says on the tin and presents an introductory, lightweight guide, hopefully whetting your appetite for an often misunderstood technology.


What is Blockchain?
Wikipedia’s definition of a blockchain is given as… “A blockchain, originally, block chain is a distributed database that maintains a continuously-growing list of data records secured from tampering and revision. It consists of data structure blocks—which hold exclusively data in initial blockchain implementations and both data and programs in some more recent implementations—with each block holding batches of individual transactions and the results of any blockchain executables. Each block contains a timestamp and a link to a previous block.” [1]

So, what does this mean in ‘dummies’ terms? Basically (and this is my definition, in as much of a nutshell as I can make it): a blockchain is a mechanism that allows businesses separated across a network to complete and verify transactions almost instantaneously, without having to refer to a central overseeing authority. It really doesn’t sound like a big deal, but as more and more applications built on blockchain technology emerge, it appears set to revolutionise the internet (again).

Delving into the weeds a little more, so that we can get a grasp of the underlying technology, we can say that blockchain is a data structure which enables a digital record of commercial accounts (a ledger) to be created and shared across a number of computers, usually located some distance apart and connected via a network. In semi-tech parlance this is known as distributed computing. So, basically, we are saying that a ledger is created and shared simultaneously with a number of partners across a network. This is more or less the basic concept. Remember, this is a ‘Blockchain for Dummies’ guide, and whilst there are various flavours of blockchain implementation, they won’t be covered here. One question remains, then, in this brief introduction: if it is such a simple concept, and clearly devoid of rocket science, what exactly is all the noise about?

The key word of course is ‘distributed’, or to put it another way, decentralised. Looking at the vast majority of technology-enabled businesses these days, the status quo suggests that many platforms rely on databases that are centralised, with a single point of failure. Yes, of course we have many measures in place to prevent the loss or theft of data, but each database, regardless of whether it is an original, a copy, a backup or a cloud-based replica, is in itself a centralised container which is potentially vulnerable to failure, tampering, theft and so on.

The big advantage that blockchain technology brings to the table is a means of ensuring freedom from third parties and complete control over who has access to our data. The effect of decentralisation is a powerful one: reducing the use of intermediaries in record keeping increases security and control. To extend the blockchain, each block must be signed and verified by multiple verification agents, who must also agree upon the transaction timestamp, which is indelible. The odds stacked against forgery are gargantuan, since the sheer amount of data being processed simultaneously creates an obstacle that is nigh on impossible to overcome.
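To make the tamper-resistance idea concrete, here is a minimal Python sketch of the hash-linked structure described above. It is illustrative only (real implementations add signatures, consensus and much more), and the function names are my own:

```python
import hashlib
import json
import time

def block_hash(block):
    """Hash a block's contents (which include the previous block's hash)."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def new_block(transactions, prev_hash):
    """Create a block holding a batch of transactions, a timestamp and
    a link (hash) to the previous block."""
    return {
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": prev_hash,
    }

# Build a tiny chain: a genesis block followed by one more block.
genesis = new_block(["alice pays bob 5"], prev_hash="0" * 64)
second = new_block(["bob pays carol 2"], prev_hash=block_hash(genesis))

# Tampering with an earlier block breaks the link: the prev_hash stored
# in 'second' no longer matches the recomputed hash of 'genesis'.
genesis["transactions"][0] = "alice pays bob 500"
print(block_hash(genesis) == second["prev_hash"])  # False -> tamper detected
```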

Who Invented it?
From what I can tell, the inventor(s) of blockchain, and of its most famous (to date) product, bitcoin, have never been credibly identified. Some say this is because the far-reaching consequences that blockchain could deliver might endanger the life of the inventor(s). It is generally recognised that a person, or group of people, known as Satoshi Nakamoto was the first to publish a paper describing bitcoin. [2] Whatever the case, most media articles seem to agree that it represents a substantial opportunity to change the way we do business across the internet.

How is it used Today?
Almost every day now, we see newly emerging ideas and applications of the blockchain model. At the time of writing, a very recent article in the media from a government source states: “blockchain technology is going to become more important if the UK is to be fully automated in the future, including delivery fulfillment and increased proliferation of the internet of things”. [3]

It’s difficult to see just how far-reaching blockchain will be, but for sure it will, at least initially, be inextricably tied up with a number of financial, contractual and payments-related sectors, including the obvious one: currency (bitcoin and others). Blockchain can be used to ensure that data is verifiable. Take a look at ‘Proof of Existence’, for example, to see how this simple application works.
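As an illustration of the idea behind Proof of Existence (not their actual code), a document can be reduced to a fingerprint with an ordinary cryptographic hash, and it is that fingerprint, rather than the document itself, which gets anchored in the blockchain. The file name below is a placeholder:

```python
import hashlib

def document_fingerprint(path):
    """Compute a SHA-256 digest of a file. Recording such a digest in a
    blockchain proves the document existed at that point in time
    without revealing its contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Any change to the file, however small, yields a completely different
# fingerprint, so the recorded hash is verifiable evidence.
print(document_fingerprint("contract.pdf"))  # hypothetical file
```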

Onename is a web app built on blockchain that allows a unique and verifiable identity to be registered for purposes such as digitally signing documents and safely and securely signing into websites and apps. Here is mine. Other sectors, such as real estate, are relatively new to blockchain but will soon leverage some of its unique selling points, such as smart contracts.

The financial sector is the one making the most noise, since it may be set to reap the initial rewards. The Financial Times recently reported: “A group of seven banks including Santander, CIBC and UniCredit is claiming a breakthrough, ranking among the first financial institutions in the world to move real money across borders using blockchain-based technology.” [4] Forbes are posing the question “Will Blockchain Become The Internet Of Finance?” [5] and have suggested that as much as $1 billion has already been invested in the technology since its inception.

How will it be used Tomorrow?
Looking to the future, a number of other areas have been identified as possible applications. Indeed, startups have already begun to exploit opportunities in car rental, home internet-ready appliances, cyber-risk reduction, social welfare, stock market prediction, salary administration and others. The CEO and founder of Everledger was quoted in ‘Wired’ as saying “We can apply this technology to solve very big problems: ivory poaching, blood diamonds, all these big ’blood problems’ that are helping cartels, terrorists and criminals”. [6] This is amazing if there really are real-world applications that not only disrupt industries but change lives at the granular level.

What’s clear is that many of the applications are under-developed; some are just ideas, others have attracted millions in start-up funding. The next few years will really see the technology develop through experimentation. Blockchain is a game changer and it’s here to stay, and because of its very nature, ‘certainty-as-a-service’, it has to be a power for good. How will it affect me personally? I am not yet sure, but if it provides guarantees, increases transparency and evolves security along the way, it’s definitely worth investigating further.

Moving to the Cloud – Part 3 of 3

Part 3 – Implementing the Hybrid Cloud for Dev and Test

In Part 2, I presented an overview of the main benefits and drawbacks of using a hybrid cloud infrastructure for Dev and Test environments whilst Part 1 defined my interpretation of a hybrid cloud in modern day parlance. In the third and final part, I will talk about the processes involved when implementing Dev and Test cloud-based environments and how they can be integrated to achieve application release automation through continuous build and testing.


An obvious starting point is the selection of a public cloud provider, and it appears that Amazon is currently winning that race, though Microsoft, HP and Google are in contention, creating a ‘big four’ up front, with a multitude of SME cloud providers bringing up the rear. Before selecting a public cloud vendor there are a number of important aspects (based on your requirements) to consider and decisions to be made around things like value for money, network and/or VM speed (and configuration), datacentre storage and so on.

Perhaps a simple pay-as-you-go model will suffice, or alternatively there may be benefits to be had from reserving infrastructure resources up front. Since the public cloud offers scaling, some sort of inherent and easily invoked auto-scaling facility should be provided, as should the option to deploy a load balancer, for example. Even if it initially appears that the big players offer all of the services required, the final choice of provider is still not all plain sailing, since other factors can come into play.

For example, whilst Amazon is a clear market leader and an understandable vendor choice, if conforming to technology standards is a requirement this could pose a problem, since large vendors can and do impose their own standards. On top of that, SLAs can be unnecessarily complicated, difficult to interpret and unwieldy. Not surprisingly, to counter the trend of large consortium vendors, there has been substantial growth in open source cloud environments such as OpenStack, CloudStack and Eucalyptus. OpenStack, for example, describes itself as “a global collaboration of developers and cloud computing technologists producing the ubiquitous open source cloud computing platform for public and private clouds”. [1]

By its very nature, IaaS implies that many VMs exist in a networked vLAN and that there is an innate ability to share and clone VM configurations very quickly. This in turn implies the need for some sort of API which supports creating VMs and sharing them (as whole environments) via REST-based web services. This point retraces its way back to my remark in Part 2, where I mentioned that new infrastructures should be built with automation in mind. This approach would utilise the customisable APIs that vendors generally provide and would normally support automatic provisioning, source control, archive and audit operations.
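As a rough illustration, here is what scripted provisioning can look like against Amazon’s API using their boto3 Python SDK; the image ID, key pair, region and tag are placeholders of my own, and other vendors expose similar REST-backed SDKs:

```python
import boto3  # AWS's Python SDK; other vendors offer comparable APIs

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Provision a small VM for a disposable Test environment. The AMI ID
# and key name below are placeholders for your own images and keys.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical machine image
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="dev-test-key",            # hypothetical key pair
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Environment", "Value": "Test"}],
    }],
)
print(response["Instances"][0]["InstanceId"])
```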

Having settled upon a public cloud provider, the private cloud is likely to be created using whatever means are available; Windows or Ubuntu Server, for example, could serve as a basis for creating the infrastructure, though other tools such as VirtualBox or VMware may be required. In an ideal world the technology stack in the private cloud should be the same as that in the public cloud, so examining the in-house technology stack could shape the decision about the choice of public vendor.

‘Integrate at least daily’ has become one of the mantras of the proponents of new agile methodologies, and, as with cloud vendors, there is a wealth of continuous integration and delivery (CI/CD) tools on the market. It isn’t easy to choose between them and, whilst some general considerations should be taken into account, the online advice seems to be to ‘dive in’ and see what works and what doesn’t.

A lot of the tools are free, so the main cost is the time taken for setup and benefit realisation; however, the advantages of any CI/CD system that works properly will almost always outweigh the drawbacks, whatever the technology. Jenkins and Hudson appear to be the market leaders, but there are a number of others to consider, and quite often they will include additional components to configure for continuous delivery.
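To give a flavour of how these tools lend themselves to automation, Jenkins, for instance, can queue builds through a simple REST endpoint (provided remote triggering is enabled on the job). A rough Python sketch, with placeholder server, job name and credentials:

```python
import requests  # third-party HTTP library

JENKINS_URL = "https://jenkins.example.com"   # placeholder server
JOB = "nightly-regression"                    # placeholder job name

# Jenkins exposes a REST endpoint for queueing builds; the user name
# and API token here are placeholders for real credentials.
resp = requests.post(
    f"{JENKINS_URL}/job/{JOB}/build",
    auth=("ci-user", "api-token"),
)
resp.raise_for_status()
print("Build queued:", resp.status_code)  # 201 on success
```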

Test automation is clearly fundamental to moving to a CI/CD approach and is key to accelerating software quality. Assuming that development is test-driven, enterprises implementing the hybrid cloud architecture can expect to produce higher-quality software faster by eliminating traditional barriers between QA, developers and ops personnel. Where there is substantial code development, several test environments may be required in order to profit from the expandable nature of the public cloud by running several regression test suites in parallel.

Again, there is a large number of tools (or frameworks) for test automation available on the market. Selenium WebDriver, Watir and TFS (coded UI tests) are three of the more widely used. For testing APIs there are SoapUI and WebAPI, and for load testing, JMeter. The frameworks and associated tools selected will likely complement the team’s available skills and current technology stack. Whatever the choice, there is still the significant challenge of integrating and automating tools and frameworks effectively before the benefits of automation are properly realised.
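By way of example, a basic Selenium WebDriver check in Python looks something like the following; the URL and element IDs are placeholders for a real application under test:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Launch a browser, exercise a login form and assert the result.
driver = webdriver.Chrome()
try:
    driver.get("https://app.example.com/login")   # hypothetical app
    driver.find_element(By.ID, "username").send_keys("testuser")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()
    assert "Dashboard" in driver.title            # hypothetical landing page
finally:
    driver.quit()  # always release the browser, even if the test fails
```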

As well as a fairly large set of development, source control, build, release and test automation tools, a typical agile team will also require some sort of project management tool, which should ideally provide a way to track and monitor defects as well as plan and control sprints during the lifecycle of the application. Tools such as Rally or Jira are suitable for this and offer varying levels of complexity based on project requirements and available budget.
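Usefully, these tools also expose REST APIs, so defect creation can be wired into the automation itself. As a rough sketch against Jira’s REST API (the site, credentials and project key are placeholders):

```python
import requests

JIRA_URL = "https://yourcompany.atlassian.net"  # placeholder site
auth = ("user@example.com", "api-token")        # placeholder credentials

# Raise a defect via Jira's issue-creation endpoint, so failed
# automated test runs can be tracked alongside sprint work.
issue = {
    "fields": {
        "project": {"key": "DEV"},              # hypothetical project key
        "summary": "Regression suite failure on nightly build",
        "issuetype": {"name": "Bug"},
    }
}
resp = requests.post(f"{JIRA_URL}/rest/api/2/issue", json=issue, auth=auth)
resp.raise_for_status()
print("Created:", resp.json()["key"])
```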

Clearly, there is a lot to consider when making the move to cloud development, and this is likely to be one of the reasons why more businesses have not embraced cloud technologies for anything other than storage. My advice would be to think big but start small, taking it one step at a time; understanding and integrating each new element of technology along the way is key to the final setup. Ultimately, the end goal should be well worth it and may shape your business for years to come. The cloud technology curve is here to stay; the question is, are you on it?

Moving to the Cloud – Part 2 of 3

Part Two – Hybrid Cloud Benefits

In Part 1, I presented a brief definition of the hybrid cloud and hinted at why it could be a useful instrument for enterprises wishing to move their agile Dev and Test environments to a public cloud, but still retain their Prod systems in a local, private cloud. In Part 2, I will consider a number of key areas where substantial benefit can be leveraged using current cloud technologies and why this should be considered as a serious move towards a more efficient and secure development strategy. That said, like any IT initiative, cloud computing is not without risks, and they too will be considered, leaving the reader to weigh up the options.


It is useful to bear in mind from Part 1 that we are primarily considering cloud providers that offer IaaS solutions; consequently, entire environments can be provisioned and tested (via automation) in minutes rather than hours or days, and that in itself is a massive boon. This concept alludes to the ‘end goal’ of this type of cloud-based setup, i.e. the design of infrastructures with automation in mind, not just the introduction of automation techniques to current processes, but that’s a topic for another discussion.

There are obvious economic benefits to be had from using public clouds, since Dev, and especially Test, environments in the cloud do not necessarily need to be provisioned and available 24/7 as they normally are on-premise. From a testing point of view, many enterprises have a monthly release cycle, for example, where the Test environment is in much higher demand than at other times of the month. In this case it is possible to envisage a scenario where the Test environment is only instantiated when required and lies dormant at other times.
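As a hedged sketch of that scenario using Amazon’s boto3 Python SDK, instances tagged as belonging to the Test environment could be started for the release window and stopped afterwards; the tag, region and function name are my own choices:

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

def set_test_environment(running):
    """Start or stop every instance tagged Environment=Test, so the
    environment only incurs cost during the release window."""
    reservations = ec2.describe_instances(
        Filters=[{"Name": "tag:Environment", "Values": ["Test"]}]
    )["Reservations"]
    ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
    if not ids:
        return
    if running:
        ec2.start_instances(InstanceIds=ids)
    else:
        ec2.stop_instances(InstanceIds=ids)

set_test_environment(running=True)   # e.g. at the start of the release cycle
```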

The phrase ‘business agility’ has been applied to the way that a hybrid cloud can offer the controls of a private cloud whilst at the same time providing scalability via the public cloud, and this is also a prime benefit. A relatively new term in this arena is ‘cloud bursting’. Offered by public clouds, this refers to short but highly intensive peaks of activity that are representative of cyclical trends in businesses that see periodic rises and falls in demand for their services. For those businesses that anticipate this type and intensity of activity, this kind of service can be invaluable.
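To illustrate (again using boto3 as an assumed example, with a hypothetical group name), bursting can be as simple as temporarily raising the desired capacity of an auto-scaling group during the peak and letting it fall back afterwards:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-west-1")

# Raise the desired capacity of a (placeholder) auto-scaling group to
# absorb a short, intense peak in demand.
autoscaling.set_desired_capacity(
    AutoScalingGroupName="web-tier-asg",  # hypothetical group name
    DesiredCapacity=10,
    HonorCooldown=False,
)
```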

For the troops on the ground, an HP white paper describes clear benefits to developers and testers: “Cloud models are well suited to addressing developer and tester requirements today. They allow for quick and inexpensive stand-up and teardown of complex development and testing environments. They put hardware resources to work for the development and testing phases that can be repurposed once a project is complete”. [1]

Once properly provisioned and integrated, cloud infrastructures will usually offer faster time-to-market and increased productivity through continuous delivery and test automation. However, these particular benefits may take a little time to manifest themselves, since implementing full-scale Dev and Test environments, with associated IDE and build integrations and an automated test facility, is a relatively complex exercise requiring a range of skills, from code development to domain admin to QA and release automation.

Clearly, to achieve and deliver this kind of flexibility, a substantial tool set is required. Additionally, developers need to work harmoniously with operations (admin) in a partnership that has become known as DevOps, and this is what I meant when I stated in Part 1 that a new mindset was required. The ultimate goal of adopting cloud-based Dev and Test environments is continuous delivery through application release automation. This kind of agile approach is seen as a pipe dream by many enterprises, and I believe the current perception is that too many barriers, both physical and cerebral, exist to adopting the hybrid cloud model for effective product delivery.

These barriers include the obvious candidates, such as security and privacy in the cloud leading to a potential increase in vulnerability. This can be addressed by commissioning a private cloud for Prod systems and ensuring that any data and code in public clouds is not confidential and does not compromise the company in any way. Another drawback that is often raised is vendor ‘lock-in’, which simply relates to the terms and conditions of the cloud provider. With so many companies now offering cloud services, I personally think that ‘shopping around’ can mitigate this risk completely and can actually be seen as a positive factor instead. Switching between cloud providers is becoming less and less of a problem, and this in turn offers a competitive advantage to the cloud consumer as they move their business to take advantage of lower costs.

I do accept that technical difficulties and associated downtime could form a barrier, but this can be said of any large new tech venture: a large tool set is required, and there will certainly be a lead time for the newly created DevOps team to get up to speed with continuous integration, test and release automation. Since applications are running in remote VMs (public cloud), there is an argument that businesses have less control over their environments. This may be true in some cases, but again, proper research should lead to a partnership where effective control can be established by the cloud consumer using appropriate tools that effectively leverage what the vendor has on offer.

I would like to think that in Part 2 of this three-part blog article I have managed to convey that in most cases the benefits of migrating Dev and Test to the cloud outweigh the drawbacks. In Part 3, I will look at how Dev and Test could be implemented, at a fairly high level. There is a plethora of tools to choose from: free, open source, bespoke, bleeding edge; whatever route you choose, there is almost certainly a tool for the purpose. Integrating them could prove challenging, but that’s part of the fun, right?