Category Archives: Technology

Blockchain For Dummies


There has been quite a lot of chatter on the net in the last 12 to 18 months about blockchain. I’ll attempt to demystify some of the concepts in this post as well as outline some potential applications. Blockchain for Dummies does what it says on the tin and presents an introductory, lightweight guide, hopefully whetting your appetite for an often misunderstood technology.


What is Blockchain?
Wikipedia’s definition of a blockchain is given as… “A blockchain, originally, block chain is a distributed database that maintains a continuously-growing list of data records secured from tampering and revision. It consists of data structure blocks—which hold exclusively data in initial blockchain implementations and both data and programs in some more recent implementations—with each block holding batches of individual transactions and the results of any blockchain executables. Each block contains a timestamp and a link to a previous block.” [1]

So, what does this mean in ‘dummies’ terms? Basically (and this is my definition, in as much of a nutshell as I can make it): a blockchain is a mechanism that allows businesses separated across a network to complete and verify transactions instantaneously, without having to refer to a central overseeing authority. It really doesn’t sound like a big deal, but as more and more applications built on blockchain technology emerge, it looks set to revolutionise the internet (again).

Delving into the weeds a little more, so that we can get a grasp of the underlying technology, we can say that a blockchain is a data structure which enables a digital record of commercial accounts (a ledger) to be created and shared across a number of computers, usually located some distance apart and connected via a network. In semi-tech parlance this is known as distributed computing. So, basically, we are saying that a ledger is created and shared simultaneously with a number of partners across a network. This is more or less the basic concept. Remember, this is a ‘Blockchain for Dummies’ guide, and whilst there are various flavours of blockchain implementation, they won’t be covered here. One question remains, then, in this brief introduction: if it is such a simple concept, and clearly devoid of rocket science, what exactly is all the noise about?

The key word of course is ‘distributed’, or to put it another way, decentralised. Looking at the vast majority of technology-enabled businesses these days, the status quo suggests that many platforms rely on databases that are centralised, with a single point of failure. Yes, of course we have many measures in place to prevent the loss or theft of data, but each database, regardless of whether it is an original, a copy, a backup or a cloud-based replica, is in itself a centralised container which is potentially vulnerable to failure, tampering, theft and so on.

The big advantage that blockchain technology brings to the table is a means of ensuring freedom from third parties and complete control over who has access to our data. The effect of decentralisation is a powerful one: reducing the use of intermediaries in record keeping increases security and control. In order to continue the blockchain, each block must be signed and verified by multiple verification agents, who must also agree upon the transaction timestamp, which is indelible. The odds stacked against forgery are gargantuan, since the sheer amount of data being processed simultaneously creates an obstacle that is nigh on impossible to overcome.
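The linking-and-verification idea described above can be sketched as a simple hash chain. This is a minimal illustration only: a real blockchain adds digital signatures and distributed consensus among many verification agents, which are omitted here.

```python
import hashlib
import json
import time

def block_hash(block):
    # The hash covers the data, the timestamp and the link to the previous block.
    payload = {k: block[k] for k in ("timestamp", "data", "previous_hash")}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(data, previous_hash):
    block = {"timestamp": time.time(), "data": data, "previous_hash": previous_hash}
    block["hash"] = block_hash(block)
    return block

def chain_is_valid(chain):
    """Re-derive every hash and check every back-link; editing any historical
    block invalidates it and every block after it, making tampering evident."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False
        if i > 0 and block["previous_hash"] != chain[i - 1]["hash"]:
            return False
    return True

# Build a tiny chain of transactions.
chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("Alice pays Bob 5", chain[-1]["hash"]))
chain.append(make_block("Bob pays Carol 2", chain[-1]["hash"]))
assert chain_is_valid(chain)

# Tamper with history: the chain immediately fails verification.
chain[1]["data"] = "Alice pays Bob 5000"
assert not chain_is_valid(chain)
```

Because each block's hash is an input to the next block's hash, forging one record means recomputing every subsequent block, which is the obstacle referred to above.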

Who Invented it?
From what I can tell, the true identity of the inventor(s) of blockchain and its most famous (to date) product, bitcoin, has never been confirmed. Some say this is because of the far-reaching consequences that blockchain could deliver, which might endanger the life of the inventor(s). It is generally recognised that a person or group of people known as Satoshi Nakamoto were the first to publish a paper describing bitcoin. [2] Whatever the case, most media articles seem to agree that it represents a substantial opportunity to change the way we do business across the internet.

How is it used Today?
Almost every day now, we see newly emerging ideas and applications of the blockchain model. At the time of writing, a very recent article in the media from a government source states – “blockchain technology is going to become more important if the UK is to be fully automated in the future, including delivery fulfillment and increased proliferation of the internet of things”. [3]

It’s difficult to see just how far-reaching blockchain will be, but for sure, it will at least initially be inextricably tied up with a number of financial, contractual and payments-related sectors, including the obvious one, currency (bitcoin and others). Blockchain can be used to ensure that data is verifiable. Take a look at ‘Proof of Existence’ to see how this simple application works, for example.
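I have no insight into Proof of Existence's actual code, but services of this kind rest on recording a cryptographic fingerprint of a document on the blockchain, so that its existence at a point in time can later be proven. A minimal sketch of the fingerprinting step (the document text is invented):

```python
import hashlib

def fingerprint(document: bytes) -> str:
    """A SHA-256 digest identifies the exact bytes of a document, for all
    practical purposes uniquely, without revealing its contents."""
    return hashlib.sha256(document).hexdigest()

digest = fingerprint(b"Last will and testament, v1")

# Anyone holding the original can later re-derive and compare the digest...
assert fingerprint(b"Last will and testament, v1") == digest
# ...while even a one-character change yields a completely different digest.
assert fingerprint(b"Last will and testament, v2") != digest
```

Publishing only the digest, timestamped in a block, proves the document existed in that exact form without disclosing it.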

Onename is a web app built on blockchain that allows a unique and verifiable identity to be registered for purposes such as digitally signing documents, safely and securely signing into websites and apps etc. Here is mine. Others such as real estate are relatively new to blockchain, but will soon leverage some of its unique application selling points such as smart contracts.

The financial sector is the one making the most noise since it may be set to reap the initial rewards. The Financial Times recently reported – “A group of seven banks including Santander, CIBC and UniCredit is claiming a breakthrough, ranking among the first financial institutions in the world to move real money across borders using blockchain-based technology.” [4] Forbes are posing the question “Will Blockchain Become The Internet Of Finance?” [5]  and have suggested that as much as $1 billion has already been invested in the technology since its inception.

How will it be used Tomorrow?
Looking to the future, a number of other areas have been identified as possible applications. Indeed, startups have already begun to exploit opportunities in car rental, home internet-ready appliances, reduction in cyber risk, social welfare, stock market prediction, salary administration, and others. The CEO and founder of Everledger was quoted in ‘Wired’ as saying “We can apply this technology to solve very big problems: ivory poaching, blood diamonds, all these big ’blood problems’ that are helping cartels, terrorists and criminals”. [6] It is amazing to think there may be real-world applications that not only disrupt industries, but change lives at the granular level.

What’s clear is that many of the applications are under-developed. Some are just ideas; others have attracted millions in start-up funding. The next few years will really see the technology develop through experimentation. Blockchain is a game changer and it’s here to stay. Because of its very nature, ‘certainty-as-a-service’, it has to be a power for good. How will it affect me personally? I am not yet sure, but if it provides guarantees, increases transparency and advances security along the way, it’s definitely worth investigating further.

Rise of the Drones


In one of my recent LinkedIn posts, I dropped a link to another post about drones. The suggestion in that article was that in a very short space of time we have moved (in non-military applications) from drones as expensive toys, to drones with some serious capability and the prospect of having some innovative applications to modern day problems.


I’m certainly one of the people who believe that we are currently standing on the brink of a major technical revolution in this area. The world of drones has the potential to open up innumerable possibilities and real-world applications. Media coverage would suggest startups are taking full advantage, since drones are now easily accessible, affordable, relatively simple to fly and capable of carrying meaningful payloads thanks to advancements in tech miniaturisation. There are now even books available on the subject.

The FAA estimates that more than one million people received drones as Xmas presents this year, and one can only assume that this figure is set to rise next Xmas as drone popularity increases and hobbyists as well as entrepreneurs start investigating the technologies required to power, fly and guide drones. Additionally, the FAA now requires that all drone owners register their aircraft before flying them in US airspace. For sure, this approach will be adopted here in the UK as more UAV flyers take to the air and risk colliding with other aircraft in restricted airspace.

So, how did it all come about, why drones and why now? I have alluded to it above (tech miniaturisation). Simply, several incremental advancements in technology, mostly focused on cost reduction, weight reduction (often both), or increased access to a particular component, have resulted in the ability to produce decent-performing drones at a price that won’t melt the credit card.

A great example of this kind of research can be seen at The University of Glasgow where an ex-colleague of mine (Dr David Anderson) who supervises the Micro Air Systems Technology (MAST) laboratory has been using 3-D printing to design and build miniature UAVs for “research and investigation of small-scale autonomous vehicles and their associated technologies”.

It’s also no surprise that smartphone tech has played its part. Low-cost accelerometers and gyroscopes have been available for years, both of which are necessary for stabilisation, attitude and referencing systems. Satellite technology, too, has improved. The Russian-made Global Navigation Satellite System (GLONASS) is a system that works alongside the Global Positioning System (GPS) to provide position information to compatible devices.

With an additional 24 satellites to utilise, GLONASS-compatible receivers can acquire satellites up to 20% faster than devices that rely on GPS alone. It is no coincidence that the world’s market leader in low-cost civilian drones, DJI, has GLONASS capability in its most popular models.

What are the applications?
You don’t have to look too far to see that there is simply a plethora of potential applications. National Geographic has a great article on 5 surprising uses of drones: hurricane hunting, 3-D mapping, wildlife protection, farming and search and rescue.

Of course Amazon have been talking about drone delivery for a while now and in Mumbai, Francesco’s Pizzeria has successfully delivered a pizza using a drone. Techworld lists a further 16 uses in our day-to-day lives from mail delivery, through oil platform inspection to construction, media and government. It’s difficult to see where drones couldn’t play a useful part to some degree in our lives.

What’s next?
An exponential rise in the production and adoption of drones and associated technology is, I think, highly probable. A whole new infrastructure will have to be put in place to facilitate drone usage in industry. Drone ports could become commonplace. Houses may have drone landing pads and/or capture-and-secure systems built onto their rooftops. Parents might hire drone firms to keep watch on their kids (or spouses) from a safe and invisible distance. Undoubtedly UK air law will have to set regulations, and the CAA already offers a UAV licence which is currently required before drones can be used commercially.

Conferences such as Interdrone are springing up all over the place. There is literally a buzz in the air in tech communities and for the first time in a few years there is again something to be super excited about in civilian aerospace.

Forums and sub-reddits already have thousands of enthusiastic contributors. Instead of build-your-own PC, it’s now build-your-own drone. Geeks are digging into flight controllers and rotor configurations, and writing apps for real-world vehicles and real-life applications. Drone Meetups are showcasing new models and awesome flying skills. Hopefully a new generation of post-millennial tech-kids will grow up with aspirations of flight and aerospace engineering. It would be great to see UK businesses at the forefront of this new mini-revolution.

The rise of the drones is upon us. Most likely they won’t make the world any safer or more dangerous; but they might just change the rules of the game.

The Internet Life (and Death) of Me


I recently came across an article about a fairly new gizmo on the market, the Amazon ‘Dash’ button. It is, unsurprisingly, a button-shaped device which can be attached to virtually any household surface and which can magically order up goodies direct from Amazon, quite literally at the touch of a button. How neat, right?


Well, kind of. One of the upsides, obviously, is that a Dash (or several Dashes) could be placed wherever is convenient for ordering your fast-moving consumable goods, so you don’t have to remember to add them to the shopping list. Instead, one press of the Dash automatically orders an appropriate consignment of your favourite brand. Another advantage, albeit a rather more subversive one, is that the Dash ordering system conveniently circumvents the (oh so tedious and lengthy – sarcasm) process of actually logging on and buying an item in the usual manner. I did wonder, however, given that Amazon One-Click buying is already impossibly easy, what the reason could be for bypassing such a simple step.

Well, it turns out that by ensuring you don’t have to log on to Amazon, this rather sneaky little ploy draws customers away from the conscious thought of purchasing. The jury is probably still out on this one. Personally I like the idea, but I’m not keen on mind games, and quite frankly no one likes to be taken for a mug, though in the battle of wits I tend to think the industry marketing experts win hands-down. I’m sure we would all prefer a bit more awareness of our purchasing habits, rather than blindly running around the homestead manically pressing buttons, all in aid of completing the weekly shop.

One thing that is clear: usage of the button is a bit like wearable technology, in that you are freely giving away scads of personal information each time you thumb the Dash. Instead of a wearable device strapped to your wrist, chest or wherever, the Dash is attached to cupboards or rooms in your house or garage. Clearly, usage of the Dash contributes to Amazon’s already growing ‘picture’ of you as a consumer.

As a digital shopper you’re telling the online retail behemoth (who can sell you almost anything) everything about the products you use (and, maybe more importantly, which ones you don’t). Information about how much you use, your brand preferences, shopping habits and so on is captured with each touch of a button. It could be argued that this micro-segmentation can genuinely be a good thing. Increasingly, targeted advertising, cross-selling and the like are seen as benefits to the customer, and they are effectively the end-user consequences of big data consumption and analysis.

Generally, I am fascinated with the whole concept of the internet of things (IoT) and have blogged a bit about it before, however I’m aiming to delve a little deeper into my perceived future of what it means to have gadgets like the Amazon-Dash about the house, especially in terms of what it says about you, the consumer, and the message that is picked up by industry giants like Amazon.

The thing is, the more you interact with the internet of things, the more you create a virtual ‘image’ of yourself, ‘the internet of me’: a digital persona which in today’s world is under construction, but in tomorrow’s world will no doubt be stark reality. With many agents across the internet, from online banks to online retailers, email and ISP providers, health and social websites, it’s all there: metadata capable of capturing your music and TV preferences, your favourite countries to travel to, or even your five most-ordered pizza toppings. It is all being generated by you, and it all ends up somewhere in the cloud. It is increasing in size on a monumental scale and is being cleansed, structured and analysed at a rate never before seen in the history of data processing.

It doesn’t take a giant leap of the imagination to envisage a time when all of the metadata that describes you is mashed together. It’s easy to see a world where an entirely virtual representation of you (or at least a virtual representation of your likes and dislikes, habits, cash flow, location, health and so on) becomes one of the main drivers of our everyday lives. Consider just a few of the online apps and facilities we currently use on a daily basis, and some of the data associated with them: Finance (banking, investing, pensions, insurance); Media (film, TV, books, music); Food (restaurants, tastes); Health (fitness, illnesses, lifestyle); Travel (destinations, hotels); Work (LinkedIn, blog); Social (Facebook, Instagram, Pinterest); and so on.
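Mechanically, that mashing-together is just a merge of per-service records keyed on a single identity. A toy sketch, in which every record and service name is invented purely for illustration:

```python
# Hypothetical per-service records held about the same person.
sources = {
    "finance": {"bank": "Example Bank", "has_pension": True},
    "media": {"top_genres": ["sci-fi", "documentary"]},
    "food": {"favourite_toppings": ["mushroom", "olive", "anchovy"]},
    "travel": {"favourite_countries": ["Italy", "Japan"]},
}

def build_persona(sources):
    """Flatten category -> attribute maps into one namespaced digital persona."""
    persona = {}
    for category, attributes in sources.items():
        for key, value in attributes.items():
            persona[f"{category}.{key}"] = value
    return persona

persona = build_persona(sources)
# persona["food.favourite_toppings"] now sits alongside persona["finance.bank"]:
# one flat record describing a person across every domain of their life.
```

The alarming part is not any single record, but how trivially they combine once one organisation holds them all.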

If a particular organisation were capable of assembling all of the above information for a single person, I think we can agree that it would be a pretty good starting point for a virtual description of you. All online, all digital and, in the future, undoubtedly all available, either at the ‘right’ price or to the ‘right’ organisations. The irony is that we freely give up all of this info now, much of it simply to socialise and interact with our friends and families online. And this is likely only one aspect of what is actually being recorded to build up profiles of our digital doppelgängers.

Many websites including Facebook capture information such as how long you linger on a photo of your ex, or to whose events you RSVP “attending”. The New York Times has dubbed this effect the online echo chamber, where it states that the “Internet creates personalized e-comfort zones for each one of us”.

Search results from Google are tailored to your location initially, but over time (as I’m sure most of you have spotted) these tailored search results (and ads) become uncannily accurate, as if they really are reading your thoughts. I really don’t mind having tailored ads, since they could reduce my search time and lead me to better deals, for example. However, when it gets to the stage where a single organisation, be it Amazon, Facebook or another, has literally and virtually ALL of your personal information, that may be a bit too much for comfort.

Can I expect to receive a digital dream notification while I sleep, offering an umbrella and porridge delivered first thing in the morning (since it will be raining and I usually skip breakfast) for my journey to work? It sounds good, and maybe a little scary, especially if my porridge arrives hot and steaming, accompanied by a note from the local undertaker asking if I would be interested in an eco-friendly coffin. It seems the same online data has indicated an imminent departure from mother earth, swept away by a vulgar little tumour. They were right after all: ignorance is no longer bliss.

The digital frontier is upon us, where will it take you?

Darwinism & Digital Transformation – Adapt or Die

Darwinism and the Digital Transformation Age

The Theory
The so-called ‘theory’ of evolution, as we all know, is not a theory in the colloquial sense at all, but rather a scientific explanation of our entire existence, supported by a wealth of evidence solidly rooted in experimental rigour and provided by a wide variety of disciplines, including paleontology, geology, genetics and developmental biology.

We now unambiguously understand that; “natural selection is the gradual process by which biological traits become either more or less common in a population as a function of the effect of inherited traits on the differential reproductive success of organisms interacting with their environment. It is a key mechanism of evolution.” [1]

Recently, I was thinking about the whole natural selection thing and how this manifests itself in a multitude of adaptations, evolved over eons. Given time and the right conditions, almost any feature could become advantageous and ultimately prevalent in an ecosystem or society. The thing with natural selection is that it happens imperceptibly slowly. In evolutionary terms, time is not just of the essence, it is the essence or medium through which change travels, rarely in a direct or pre-determined fashion and not always for the good. It just is.


Arguably, at the heart of it all is the fact that all organisms exhibit some sort of variation, possibly created through a random mutation or maybe through an adaptation to change. This individual trait could then be subject to environmental conditions which in turn could enhance or diminish the advantage leading to an increase or decrease in the natural populace depending on their ability to exploit that advantage. It’s interesting to note that mutations can even be deliberately induced in order to adapt to a rapidly changing environment.

You may have observed some important corollaries of the previous paragraph:
1. There is a chance of a random mutation,
2. Mutations can be forced,
3. Mutations can occur as a result of interaction with the environment.
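The mechanism behind these points (random variation plus differential reproductive success) can be caricatured in a short simulation: over generations, an advantageous trait becomes more common in the population. Every parameter below is invented purely for illustration.

```python
import random

random.seed(1)

# A population where each individual carries one numeric trait; in this toy
# environment, higher trait values mean better survival odds.
population = [random.uniform(0.0, 1.0) for _ in range(100)]

for generation in range(50):
    # Random mutation: occasionally an individual's trait shifts slightly.
    population = [
        t + random.gauss(0.0, 0.05) if random.random() < 0.1 else t
        for t in population
    ]
    # Selection: the fitter half survive and reproduce; the rest do not.
    survivors = sorted(population, reverse=True)[: len(population) // 2]
    population = survivors + [random.choice(survivors) for _ in survivors]

mean_trait = sum(population) / len(population)
# Starting from a mean of roughly 0.5, selection has pushed the trait
# to dominate the population.
assert mean_trait > 0.8
```

Nothing in the loop "knows" which trait is good; prevalence emerges purely from which individuals reproduce, which is the essence of natural selection.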

I have deliberately reiterated this point to help develop the next piece of narrative, i.e. how can we link a well understood area of scientific study (Darwinism) with a business’s desire and ability to digitally transform.

OK, let’s try this on for size.

1. Take the example where an individual, say some bright spark, takes up a position of responsibility within an equally bright-spark-type company. A chance encounter, one might speculate. I am stating that this is analogous to a chance mutation in nature, at least from the point of view of the business.

2. Take another example where the company is well aware of its current operating environment and has a desire to modify its behaviour to its advantage. This second analogy is aligned with the forced mutation concept. For whatever reason, some internal driving force is pushing for change and ultimately a better position in the food chain, or business sector.

3. The last example is where change is forced upon an entity by external, non-controllable factors. In nature this could be climate change, or an unnaturally high increase in predation. Whatever the cause, the only solution is to adapt, improvise and overcome, or face disaster.

Sound familiar? Well, maybe. I guess you could also say it sounds a little far-fetched, but I’m still pretty certain I can build this out into something viable. How so? Well, there are steps involved in digital transformation, just as there are steps to evolution. Equally, there is typically a set of actors and processes involved in digital transformation, all of which are pretty obvious when you think about it.

The Team
1. The visionaries (actors), i.e. the ones with the idea behind the change and often the instigators of change. We can see the visionary as coming from within the organisation, or someone new joining the organisation to effect change. The response is the same in that the organisation is on the receiving end of a forced mutation.

2. The ‘not so visionaries’ (reactors), i.e. the ones who react (usually too late) to some change forced upon them by external factors, such as market, employment or economic forces. From a digital transformation point-of-view these people are generally ineffective at best and can often be obstructers to change.

3. The changers, i.e. the ones who implement the change: the adapters, the improvisers and the people who overcome difficulty in the face of the forced transformation, regardless of whether it has been decreed by a visionary or forced by circumstance. This is the change team, and we will hear more about them later; essentially they are the people involved with change, and they usually adopt some method, i.e. a process by which change will come about to the ultimate benefit of the organisation. In Darwinian terms this is something akin to natural selection.

The Process
Generally in digital transformation there is the benefit of being able to predetermine a roadmap with ideas, processes, tools and data. This helps guide and smooth the change management process, or natural selection in Darwinian terms. Fortunately, such changes are not left completely to the laws of nature, but are subject to a degree of intelligence and planning. It could be argued, of course, that natural selection is also guided by intelligent forces, but that’s another story. Hopefully the analogy is somewhat clear by now.

Change comes in many forms and is usually a force for the better although it is rarely welcomed. Perhaps it is time for you to think about where you are on the road to digital transformation and how this has come about. Are you truly ready for the digital age and have you got the vision and process to see it through?

Prototyping and the SDLC

The Prototyping Model Applied to the SDLC

Embarking on any development project in a new supplier/customer relationship can be a daunting experience for all parties involved. There is a lot of trust to be built in what is usually a fairly short time and it is sensible to select an approach that improves the chances of the project startup succeeding and progressing as planned to the next phase.


In my experience, there is no single ‘correct’ method, though clear dialogue and experience with project management methodologies can help immensely. Depending on the school of thought, project type and customer requirements, any one of a number of project management methods can be employed, and it usually falls to the project manager to select an approach that also best suits the needs of the business case.

One such approach that has worked well for me in the past is the ‘prototyping model’ approach to the software development lifecycle (SDLC). Software prototyping itself, of course, isn’t a new concept, but it is becoming more popular and should be seriously considered when starting out on a new project where other risk factors are recognised, such as fresh partnership agreements, subjective designs and/or new technologies.

An obvious reason prototyping is becoming more popular is its relatively risk-averse nature. In a short space of time, a customer has an opportunity to perform a comprehensive skills assessment of the supplier before deciding to move forward with (or withdraw from) the main project. This substantially reduces cost and quality risks at the outset.

In turn, a supplier can ascertain whether the customer has a decent grasp of their product vision and an ability to specify a clear set of requirements, so the prototype partnership is usually a mutually beneficial one. If the conclusion is that prototyping has been a positive experience for both parties, then there is good reason to remain confident in the project partnership going forward.

There are a number of benefits to prototyping which can suit either party, but one that is of particular benefit to the customer is using prototyping as a vehicle to choose between a number of suppliers who are all bidding for the same project. Again there is less risk, certainly to the customer and potentially to the supplier as well, since neither party would wish to continue with a project that has failed in its first initiative.

So really, what I am saying here is that prototyping is a cheap, powerful assessment tool, and depending on the approach could form the foundation of the main project. Code developed in the prototype phase could be reused, so the time taken to complete the prototype is not lost in the overall project timescale.

Additionally, prototyping is a tool for building successful working relationships quickly, and it can prove invaluable as a yardstick of supplier capability. Generally speaking, a prototype SDLC model has an overriding advantage over other SDLC models in that it doesn’t rely on what is supposed to happen, i.e. what has been written in technical design documentation. Instead it canvasses the users directly and asks them what they would really like to see from the product. Gradually, the product is developed through a number of iterations and directly addresses the needs of the users in that phase of the project.

The Prototyping SDLC Model

The prototyping model starts out with an initial phase of requirements gathering, pretty much like any other software development process; however, it quickly moves to development after an initial, simple design is produced. A first iteration is released and given to the customer for review, and the resulting feedback may elicit more requirements as well as alter the design and function of the application.

This process continues until the customer accepts the prototype as complete and the project moves to the next phase of development. At this point the prototype can become redundant, or it can continue to be used as a tool for examining various design options and/or functional changes, or it can be incorporated into the main project as it is.
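The iterate-until-accepted loop described above can be sketched as follows. The callbacks (`gather_requirements`, `build_iteration`, `customer_review`) are hypothetical stand-ins for real project activities, not any established framework's API.

```python
def run_prototype_phase(gather_requirements, build_iteration, customer_review,
                        max_iterations=10):
    """Build an iteration, show it to the customer, fold their feedback back
    into the requirements, and repeat until accepted or the time box expires."""
    requirements = gather_requirements()
    for _ in range(max_iterations):
        prototype = build_iteration(requirements)
        accepted, new_requirements = customer_review(prototype)
        if accepted:
            return prototype
        requirements = requirements + new_requirements
    raise RuntimeError("Prototype not accepted within the time box")

# Simulated project: the customer asks for one extra feature, then accepts.
reviews = iter([(False, ["export to PDF"]), (True, [])])
prototype = run_prototype_phase(
    gather_requirements=lambda: ["login", "dashboard"],
    build_iteration=lambda reqs: {"implements": list(reqs)},
    customer_review=lambda proto: next(reviews),
)
assert prototype["implements"] == ["login", "dashboard", "export to PDF"]
```

The `max_iterations` bound is the time box: it is what keeps an over-enthusiastic feedback loop from consuming the whole project schedule.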

Since the prototype is developed in what is largely an agile process, there is no reason the main application cannot be developed in the same way. Purists may argue that the prototyping model is inherently a waterfall approach, but I would argue that any purist approach to the SDLC can cause issues; practically speaking, one should adopt an approach that suits the customer and the project, i.e. flexibility is key.

Prototyping – Pros and Cons

Prototyping offers a number of advantages:

1. Time-boxed development ensures a minimum viable product
2. Users are encouraged to participate in the design process
3. Changes can be accommodated quickly and easily
4. The application evolves through a series of iterations of corrective feedback loops. Generally, this leads to a much more widely accepted product
5. Defects can usually be detected and fixed early on, possibly saving resources later on
6. Areas of functionality that are missing and/or confusing to users can be easily identified and remedied

There are however a few disadvantages as follows:

1. Most prototypes are never fully completed and may exhibit technical debt which is never fully addressed. This is especially important if the prototype is literally used as the basis for the real application
2. Documentation, if required, can be a bit of a nightmare since changes are usually frequent and can be substantial
3. There can be a reluctance among developers to change from the initial prototype design. Instilling a new design mindset can be difficult
4. If integration is required and the prototype is unstable, this can cause other issues
5. Over-enthusiasm in the iterative stages can result in too much time being invested in this phase

Will it work for you?

Before you can answer that question, it will probably be a useful exercise to ask yourself what you need to get out of the prototype and what you intend to do with it afterwards.

The prototype model can be exercised in a few different ways, and this can have a substantial impact on the project as a whole. More often than not, prototypes fall into one of four categories:

1. Prototypes that are meant to be thrown away after the prototype phase
2. The evolutionary prototype where iterations continue and the prototype evolves into bigger and more functional prototypes and ultimately, the final application
3. The incremental prototype which is a more modular approach. Each subsystem is prototyped and then all of the sub-systems are integrated to build the final application
4. A web-based technique known as extreme prototyping where only the web pages are developed initially and then the functional aspects are added in the final stage. An intermediate stage is used to simulate the data processing as the prototype evolves.

The technology used to develop the application could also have an impact on how development proceeds, for example with mobile apps, most IDEs have built-in simulators to help with rapid design and demonstration of the app, so prototyping is almost implicit in the overall build approach.

Whichever SDLC approach you choose, prototyping should be considered as a useful tool in application development. It can save time and cost, and be an invaluable indicator of the future success of your project. Priceless!

Moving to the Cloud – Part 3 of 3

Part 3 – Implementing the Hybrid Cloud for Dev and Test

In Part 2, I presented an overview of the main benefits and drawbacks of using a hybrid cloud infrastructure for Dev and Test environments whilst Part 1 defined my interpretation of a hybrid cloud in modern day parlance. In the third and final part, I will talk about the processes involved when implementing Dev and Test cloud-based environments and how they can be integrated to achieve application release automation through continuous build and testing.

Read More

An obvious starting point is the selection of a public cloud provider and it appears that Amazon is currently winning that race, though Microsoft, HP and Google are in contention, creating the ‘big four’ up front, with a multitude of SME cloud providers bringing up the rear. Before selecting a public cloud vendor there are a number of important aspects (based on your requirements) to consider and decisions to be made around things like value for money, network and/or VM speed (and configuration), datacentre storage, etc.

Perhaps a simple pay-as-you-go model will suffice, or alternatively there may be benefits to be had from reserving infrastructure resources up front. Since the public cloud offers scaling, some sort of inherent and easily invoked auto-scaling facility should also be provided, as should the option to deploy a load-balancer, for example. Even if it initially appears that the big players offer all of the services required, the final choice of provider is still not all plain sailing, since other factors can come into play.

For example, whilst Amazon is a clear market leader and an understandable vendor choice, if conforming to technology standards is a requirement this could pose a problem, since large vendors can and do impose their own standards. On top of that, SLAs can be unnecessarily complicated, difficult to interpret and unwieldy. Not surprisingly, to counter the trend of large consortium vendors, there has been substantial growth in open source cloud environments such as OpenStack, CloudStack and Eucalyptus. OpenStack, for example, describes itself as “a global collaboration of developers and cloud computing technologists producing the ubiquitous open source cloud computing platform for public and private clouds” [1].

By its very nature, IaaS implies that many VMs exist in a networked vLAN and there is an innate ability to share and clone VM configurations very quickly. This implies a need for some sort of API which supports the requirement to create VMs and share them (as whole environments) via REST-based web services. This point retraces its way back to my remark in Part 2 where I mentioned that new infrastructures should be built with automation in mind. This approach would utilise the customisable APIs that vendors generally provide and would normally support automatic provisioning, source control, archive and audit operations.
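To make that a little more concrete, here is a minimal Python sketch of the sort of REST call involved when provisioning a VM. The endpoint, base URL and payload fields are invented for illustration; every real vendor (AWS, OpenStack and so on) has its own API and SDK.

```python
import json
import urllib.request

# Hypothetical IaaS endpoint -- a placeholder, not a real provider's API.
API_BASE = "https://cloud.example.com/api/v1"

def build_vm_request(name, image, flavor, vlan):
    """Construct (but do not send) a REST request to create a VM."""
    payload = {"name": name, "image": image, "flavor": flavor, "vlan": vlan}
    return urllib.request.Request(
        url=f"{API_BASE}/vms",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_vm_request("test-env-01", "ubuntu-22.04", "m1.small", "dev-vlan")
print(req.full_url)      # https://cloud.example.com/api/v1/vms
print(req.get_method())  # POST
```

In practice you would wrap calls like this in scripts that provision, share and tear down whole environments, which is precisely where the automation benefit comes from.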

Having settled upon a public cloud provider, the private cloud is likely to be created using whatever means are available; Windows or Ubuntu Server, for example, could serve as a basis for creating the infrastructure, though other tools such as VirtualBox or VMware may be required. In an ideal world the technology stack in the private cloud should be the same as that in the public cloud, so examining the in-house technology stack could shape the decision about the choice of public vendor.

‘Integrate at least daily’ has become one of the mantras of the proponents of new agile methodologies, and like cloud vendors there is a wealth of continuous integration and delivery (CI/CD) tools on the market. It isn’t easy to choose between them and whilst some general considerations should be taken into account, the online advice seems to be to ‘dive-in’, see what works and what doesn’t.

A lot of the tools are free so the main cost is time taken for setup and benefit realisation, however the advantages of any CI/CD system that works properly will almost always outweigh the drawbacks, whatever the technology. Jenkins and Hudson appear to be market leaders but there are a number of others to consider and quite often they will include additional components to configure for continuous delivery.
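As a flavour of how scriptable these CI/CD tools are, Jenkins can be configured to accept remote build triggers at a URL of the form `/job/<name>/build` (optionally with a security token). The small sketch below only constructs that URL; the server address, job name and token are placeholders for your own installation.

```python
from urllib.parse import quote, urlencode

def jenkins_build_url(base, job, token=None):
    """Build the Jenkins remote-trigger URL; POSTing to it starts a build."""
    url = f"{base}/job/{quote(job)}/build"
    if token:
        url += "?" + urlencode({"token": token})
    return url

print(jenkins_build_url("https://ci.example.com", "nightly-regression", "s3cret"))
# https://ci.example.com/job/nightly-regression/build?token=s3cret
```

Hooks like this are what let a commit to source control kick off a build and test run without any human intervention.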

Test automation is clearly fundamental to moving to a CI/CD approach and is key to accelerating software quality. Assuming that development is test-driven, enterprises implementing the hybrid cloud architecture can expect to produce higher quality software faster by eliminating traditional barriers between QA, developers, and ops personnel. In instances where there is substantial code development, several test environments may be required in order to profit from the expandable nature of the public cloud by running several regression test suites in parallel.
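The parallel regression idea can be sketched in a few lines of Python. The suites below are stand-ins for real test runs; in a hybrid cloud setup each one might drive a separate, freshly provisioned test environment.

```python
from concurrent.futures import ThreadPoolExecutor

def run_suite(name):
    # Stand-in for executing a real regression suite against its own
    # cloud environment; returns the suite name and a pass/fail flag.
    return name, True

suites = ["smoke", "api", "ui", "load"]
with ThreadPoolExecutor(max_workers=len(suites)) as pool:
    results = dict(pool.map(run_suite, suites))

print(results)  # {'smoke': True, 'api': True, 'ui': True, 'load': True}
```

Because public cloud capacity is elastic, adding another suite is largely a matter of adding another worker rather than buying another server.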

Again, there is a large number of test automation tools (and frameworks) available on the market. Selenium WebDriver, Watir and TFS (coded UI tests) are three of the more widely used. For testing APIs there are SOAP UI and WebAPI, and for load testing, JMeter. The frameworks and associated tools selected will likely complement available team skills and the current technology stack. Whatever the choice, there is still the significant challenge of integrating and automating tools and frameworks effectively before the benefits of automation will be properly realised.

As well as a fairly large set of development, source control, build, release and test automation tools, a typical agile team will also require some sort of project management tool, which should ideally have a method to track and monitor defects as well as plan and control sprints during the lifecycle of the application. Tools such as Rally or Jira are suitable for this and offer varying levels of complexity based on project requirements and available budget.

Clearly, there is a lot to consider when making the move to cloud development and this is likely to be one of the reasons why more businesses have not embraced cloud technologies for anything other than storage. My advice would be to think big, but start small and take it one step at a time; understanding and integrating each new element of technology along the way is key to the final setup. Ultimately, the end goal should be well worth it and it may shape your business for years to come. The cloud technology curve is here and here to stay; the question is, are you on it?

Moving to the Cloud – Part 2 of 3

Part Two – Hybrid Cloud Benefits

In Part 1, I presented a brief definition of the hybrid cloud and hinted at why it could be a useful instrument for enterprises wishing to move their agile Dev and Test environments to a public cloud, but still retain their Prod systems in a local, private cloud. In Part 2, I will consider a number of key areas where substantial benefit can be leveraged using current cloud technologies and why this should be considered as a serious move towards a more efficient and secure development strategy. That said, like any IT initiative, cloud computing is not without risks and they too will be considered, leaving the reader to weigh up the options.

Read More

It is useful to bear in mind from Part 1 that we are primarily considering cloud providers that offer IaaS solutions; consequently entire environments can be provisioned and tested (via automation) in minutes rather than days or hours, and that in itself is a massive boon. This concept alludes to the ‘end goal’ of this type of cloud-based setup, i.e. the design of infrastructures with automation in mind and not just the introduction of automation techniques to current processes, but that’s a topic for another discussion.

There are obvious economic benefits to be had from using public clouds, since Dev, and especially Test, environments in the cloud do not necessarily need to be provisioned and available 24/7 as they normally are with on-premise environments. From a testing point of view, many enterprises have a monthly release cycle, for example, where the Test environment is in much greater demand than at other times of the month. In this case it is possible to envisage a scenario where the Test environment is only instantiated when required and can lie dormant at other times.
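That on-demand scenario can be modelled very simply. The class below is only a toy illustration of the lifecycle; in a real setup the provision and teardown steps would call your cloud provider's API.

```python
class TestEnvironment:
    """Toy model of an on-demand environment: provisioned for the release
    window, torn down afterwards so no cost accrues while dormant."""

    def __init__(self, name):
        self.name = name
        self.running = False

    def provision(self):
        # Real version: ask the cloud provider to spin up the VMs.
        self.running = True

    def teardown(self):
        # Real version: release the VMs so billing stops.
        self.running = False

env = TestEnvironment("monthly-release-test")
env.provision()     # start of the release cycle
print(env.running)  # True
env.teardown()      # dormant for the rest of the month
print(env.running)  # False
```

The economic argument is simply that you pay for the days the environment is running, not for the month.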

The phrase ‘business agility’ has been applied to the way that a hybrid cloud can offer the controls of a private cloud whilst at the same time providing scalability via the public cloud, and this is also a prime benefit. A relatively new term in this arena is ‘cloud bursting’. Offered by public clouds, this refers to short but highly intensive peaks of activity that are representative of cyclical trends in businesses that see periodic rises and falls in demand for their services. For those businesses that anticipate this type and intensity of activity, this kind of service can be invaluable.

For the troops on the ground, an HP white paper describes clear benefits to developers and testers; “Cloud models are well suited to addressing developer and tester requirements today. They allow for quick and inexpensive stand-up and teardown of complex development and testing environments. They put hardware resources to work for the development and testing phases that can be repurposed once a project is complete”. [1]

Once properly provisioned and integrated, cloud infrastructures will usually offer faster time-to-market and increased productivity through continuous delivery and test automation, however these particular benefits may take a little time to manifest themselves since implementing full-scale Dev and Test environments with associated IDE and build integrations, and an automated test facility, is a relatively complex exercise requiring a range of skills from code development to domain admin, to QA and release automation.

Clearly, to achieve and deliver this kind of flexibility a substantial tool set is required. Additionally, developers need to work harmoniously with operations (admin) in a partnership that has become known as DevOps, and this is what I meant by stating in Part 1 that a new mindset was required. The ultimate goal of adopting cloud-based Dev and Test environments is continuous delivery through application release automation. This kind of agile approach is seen as a pipe dream by many enterprises and I believe the current perception is that too many barriers, both physical and cerebral, exist to adopting the hybrid cloud model for effective product delivery.

These barriers include the obvious candidates, such as security and privacy in the cloud leading to a potential increase in vulnerability. This can be addressed by commissioning a private cloud for Prod systems and ensuring that any data and code in public clouds is not confidential and does not compromise the company in any way. Another drawback that is often raised is vendor ‘lock-in’, and this simply relates to the terms and conditions of the cloud provider. With so many companies now offering cloud services, I personally think that ‘shopping around’ can mitigate this risk completely and can actually be seen as a positive factor instead. Switching between cloud providers is becoming less and less of a problem and this in turn offers up a competitive advantage to the cloud consumer as they move their business to take advantage of lower costs.

I do accept that technical difficulties and associated downtime could form a barrier, but this can be said of any new, large tech venture: a large tool set is required, and there will certainly be a lead time for the newly created DevOps team to get up to speed with continuous integration, test and release automation. Since applications are running in remote VMs (public cloud), there is an argument that businesses have less control over their environments. This may be true in some cases, but again proper research should lead to a partnership where effective control can be established by the cloud consumer using appropriate tools that effectively leverage what the vendor has on offer.

I would like to think that in Part 2 of this three-part blog article I have managed to convey that in most cases the benefits of migrating Dev and Test to the cloud outweigh the drawbacks. In Part 3, I will look at how Dev and Test could be implemented at a fairly high level. There is a plethora of tools to choose from: free, open source, bespoke, bleeding edge; whatever route you choose, there is almost certainly a tool for the purpose. Integrating them could prove challenging, but that’s part of the fun, right?

Moving to the Cloud – Part 1 of 3

Part One – Defining the Hybrid Cloud

Earlier this year when I blogged ‘ten trends to influence IT in the next five years‘, one of the trends I mentioned has been written about on quite a few occasions in the last few months in various web articles and white papers. That particular trend is the use of the ‘Hybrid Cloud’ and it seems to be increasingly catching the attention of the tech evangelists who are keen to spread the word and radicalise the non-believers as I discovered in a recent Cloudshare webinar.

Read More

A little more research on the topic led me to discover that there is a sort of reluctance to adopt cloud (development) initiatives in general. Like most people I had just assumed that cloud-based architectures were creating a new technology storm and in a few years almost everything would be built, developed, hosted and run in a multitude of geographical locations by thousands of VM instances, created and destroyed in seconds. It appears this may not be the case, and I find that seven months later (that’s a long time in IT), the transition has simply not happened, or to be more precise, not at the ‘rate’ expected by cloud aficionados who have been talking about a grandiose move of technology in that direction for the last few years.

My gut feeling is that, in general, cloud computing in the tech community is still not a well understood concept, at least from a ‘development tool’ point of view, and this has logically hindered the move away from traditional development to cloud-centric infrastructures and environments. I have spent some time reading about the pros and cons of moving development to a cloud-based solution and whilst I am an avid supporter of the concept, the best approach for an organisation isn’t always straightforward and will almost certainly involve one of the toughest challenges that can be faced in IT: a cultural change in the workplace and a paradigm shift in the mindset of the developers.

To make a move to the cloud for development and test purposes, people have to think about doing things in a different way. There are other potential barriers, but this is likely to be the one that poses the greatest threat to starting out on the road to eventual development, deployment and testing in the cloud. In general Gartner defines a hybrid cloud service as a cloud computing service that is composed of some combination of private, public and community cloud services, from different service providers [1]. Whilst many of the large public cloud service providers also provide a private cloud facility, I expect that many organisations still prefer to provide their own private cloud implementation since this appears to give a higher degree of security, or at the very least facilitates the storage of data in a local datacentre.

There are quite a few benefits of a hybrid cloud, but the obvious one is that it enables the owner to take advantage of the larger resources that a public cloud might offer, but still store and maintain private data in a manner where it should be safe from malicious attack and/or theft. Of course there are some organisations whose entire business could exist in a public cloud, but based on my experience this is still not a concept that businesses are truly happy with, and certainly within a lot of enterprise or government organisations there is a preference to at least have their production system hosted in a private cloud.

In summary, my concept of a hybrid cloud is one where an organisation has developed its own private cloud for its production (Prod) system and is happy to use the services of a public cloud to develop, host and run its development (Dev) and test (Test) environments. Really, what I am talking about here is moving each of these infrastructures to a cloud environment, and that will form the basis of Part 3 of this blog. Part 2, coming up next, will further elaborate on the widely accepted benefits and introduce some of the negative aspects perceived with cloud computing.

PhraseExpress – Save Keystrokes


If you find yourself keying in the same phrase or word combination every day, there is a neat little tool called PhraseExpress. It’s a time saver and can be used to capture anything from single words to complex phrases or sentences, or other commonly typed pieces of text such as names, addresses, telephone numbers, etc. PhraseExpress is triggered via a certain set of keystrokes or manually via hotkey combinations and is incredibly easy to use.

Having researched other methods of text auto-completion, this appeared to be one of the best free tools out there on the web. It can be configured to start automatically with Windows, or it can be started manually; either way the footprint is very small, so it places very little drain on resources.
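For the curious, the core autotext idea is simple enough to sketch in a few lines of Python. The abbreviations below are invented examples, not PhraseExpress's actual syntax, and a real tool of course hooks into the keyboard rather than processing a string.

```python
# A toy version of the autotext idea: short abbreviations expand to
# full phrases. These snippets are made-up placeholders.
SNIPPETS = {
    "addr": "42 High Street, Springfield",
    "em": "alex@example.com",
}

def expand(text, snippets=SNIPPETS):
    """Replace whole-word abbreviations in typed text with their phrases."""
    return " ".join(snippets.get(word, word) for word in text.split())

print(expand("my address is addr"))
# my address is 42 High Street, Springfield
```

The real product adds hotkeys, per-application rules and a phrase manager on top, but the dictionary lookup above is the heart of it.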

Since I consider it good practice to log out of websites when I am finished, and I prefer not to have my browser store any information, on any given day when I go to log into several websites it’s really useful to have email-address usernames that can be called up with a hotkey combination. Rather cynically (or sensibly) I considered that it may be some sort of spyware or malware, but the PhraseExpress website has this to say about their product;

“Of course, PhraseExpress needs to have keyboard access in order to provide the desired functionality (hotkey support, autotext feature). A helper utility must be authorized to help.

Unfortunately, a few “AntiVirus” and “Security” programs may generally claim any program as potentially dangerous which access the keyboard, regardless if it is a malicious keylogger or a harmless keyboard utility.

Please be assured that never send any personal data over the internet. We are a registered company and surely have no interest to ruin the reputation of our 14 years history or to mess with one of the strictest privacy laws world wide. We also do not hide our place of business.”

This is still no guarantee, but it does appear to be legitimate and provided that the text stored is not of a sensitive nature, passwords and the like, I believe this is a nifty product that can save thousands of unnecessary keystrokes when used effectively.

A free download can be found here.

Medicine Apps – The Future?

Medicine Apps

I used to be a big fan of BBC’s Horizon programme. Back in the day it was at times controversial, ground-breaking and one of the few programmes on air that appealed to science-heads, engineers, futurists and like-minded people. Recently though, I think it has gone downhill a bit, becoming less high tech and ultimately attempting to gain more traction with a less tech-savvy audience. However, one episode I was impressed by really hit the nail on the head, striking a great balance between the interest of the masses (smart phones), science (medicine) and the future (healing yourself). The idea of a GP-less future is appealing; I for one am not a big fan of visiting the local GP and generally rely on a self-diagnosed blend of common sense and Paracetamol to overcome any and all ailments or sicknesses that blight my otherwise healthy-ish lifestyle.

I am aware of smart phone apps that can measure heart rate using the camera and its flash, or the step counters that have been around for a while that tell you just how lazy you have been in only achieving 3,500 steps of your recommended daily 10,000. Additionally, doctors actually use apps like Epocrates or Medscape to help them prescribe drugs, and of course there are loads of reference-material-type apps. But this isn’t what I’m talking about, well not completely; this is only the start, and when we introduce near-field communication technology the potential for apps and medical applications skyrockets.
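As a flavour of the arithmetic behind those camera-based heart-rate apps: the camera detects a brightness pulse in the fingertip, and once the beat times are known, the rate is simply 60 divided by the average inter-beat interval. The beat times below are made-up sample data.

```python
# Made-up beat timestamps in seconds, as a heart-rate app might detect
# from fingertip brightness changes (one beat every 0.8 s here).
beat_times = [0.0, 0.8, 1.6, 2.4, 3.2]

# Inter-beat intervals, then beats per minute from their average.
intervals = [b - a for a, b in zip(beat_times, beat_times[1:])]
bpm = 60 / (sum(intervals) / len(intervals))
print(round(bpm))  # 75
```

The clever part in a real app is detecting the beats reliably from a noisy video signal; the conversion to a readout is the easy bit.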

Horizon presented the case of one particular doctor who had a whole host of apps and associated gadgets that allow for direct, quick and accurate monitoring of various body variables (for lack of a better term). A smart phone with metal connectors on the back was a case in point, where the user, upon pressing their thumbs on the contacts, was presented with a real-time electrocardiogram readout of their heart rate and associated functions. In the case where the user is a sufferer of diabetes, a near-field device containing a tiny needle could be used to monitor blood sugar levels and present this on the screen of the smart phone in an app that looked familiar and was intuitive to use. The important thing to recognise here is that users are already very familiar with their apps, and so introducing an app for medical/self-diagnosis purposes, even if it also requires the use of a near-field technology device, shouldn’t be an overwhelming technology experience. The point is, people are already well accustomed to using their smart phone for a wide variety of things, and as near-field communication becomes increasingly popular people may turn to using their smart phones for self-diagnosis and potentially alerting medical services in the event of an emergency.

From a fitness point of view, some of my friends have tried the ‘couch to 5k’ app, or as it is now known ‘C25K’, and it has actually worked, providing a scheduled plan of running workouts that ensures new runners do not push too hard and injure themselves, while guiding them to their goal of running 5k without stopping. There are calorie counters and diet plans, kettlebell workouts and yoga instructors, all available to give advice on how to get healthy, stay fit, or just inform you that you can avoid an array of common illnesses brought about by unhealthy lifestyles; in other words, preventative medicine. With knowledge comes power and with power comes the chance to change one’s life, so maybe it’s time we all tried to ‘get physical’ with an app.