The Internet Life (and Death) of Me

I recently came across an article about a fairly new gizmo on the market, the Amazon ‘Dash’ button. It is, unsurprisingly, a button-shaped device which can be attached to virtually any household surface and which can magically order up goodies direct from Amazon, quite literally at the touch of a button. How neat, right?

Well, kind of. One of the upsides, obviously, is that the Dash (or Dashes?) can be placed in a location convenient for ordering your fast-moving consumable goods, so you don’t have to remind yourself to add them to the shopping list. Instead, one press of the Dash automatically orders an appropriate consignment of your favourite brand. Another advantage, albeit a rather more subversive one, is that the Dash ordering system conveniently circumvents the (oh so tedious and lengthy – sarcasm) process of actually logging on and buying an item in the usual manner. I did wonder, however, given that Amazon One-Click buying is already impossibly easy, what the reason could be for bypassing even that simple step.

Well, it turns out that by ensuring you don’t have to log on to Amazon, this rather sneaky little ploy draws customers away from the conscious thought of purchasing. The jury is probably still out on this one. Personally I like the idea, but I’m not keen on mind games, and quite frankly no one likes to be taken for a mug, though in the battle of wits I tend to think the industry marketing experts win hands-down. I’m sure we would all like a bit more awareness of our purchasing habits, rather than blindly running around the homestead, manically pressing buttons in aid of completing the weekly shop.

One thing that is clear: using the button is a bit like wearable technology, in that you are freely giving away scads of personal information each time you thumb the Dash. Instead of a device strapped to your wrist, chest, or wherever, the Dash is attached to cupboards or rooms in your house or garage. Clearly, usage of the Dash contributes to Amazon’s already growing ‘picture’ of you as a consumer.

As a digital shopper you’re telling the online retail behemoth (which can sell you almost anything) everything about the products you use (and, maybe more importantly, which ones you don’t). Information about how much you use, your brand preferences, your shopping habits and so on is captured with each touch of a button. It could be argued that this micro-segmentation can genuinely be a good thing. Increasingly, targeted advertising, cross-selling and the like are seen as benefits to the customer and are effectively the end-user consequences of big data consumption and analysis.

Generally, I am fascinated with the whole concept of the internet of things (IoT) and have blogged a bit about it before; however, here I’m aiming to delve a little deeper into my perceived future of what it means to have gadgets like the Amazon Dash about the house, especially in terms of what it says about you, the consumer, and the message that is picked up by industry giants like Amazon.

The thing is, the more you interact with the internet of things, the more you create a virtual ‘image’ of yourself, ‘the internet of me’: a digital persona which in today’s world is still under construction, but in tomorrow’s world will no doubt be stark reality. With so many agents across the internet, from online banks to online retailers, email and ISP providers, and health and social websites, it’s all there: metadata capable of capturing your music and TV preferences, your favourite countries to travel to, or even your five most-ordered pizza toppings. It is all being generated by you and it all ends up somewhere in the cloud, growing on a monumental scale and being cleansed, structured and analysed at a rate never before seen in the history of data processing.

It doesn’t take a giant leap of the imagination to envisage a time when all of the metadata that describes you is mashed together. It’s easy to see a world in which an entirely virtual representation (or at least a virtual representation of your likes and dislikes, habits, cash flow, location, health and so on) becomes one of the main drivers of our everyday lives. Consider just a few of the online apps and facilities that we currently use on a daily basis and some of the data associated with them: Finance (Banking, Investing, Pension, Insurance); Media (Film, TV, Books, Music); Food (Restaurants, Tastes); Health (Fitness, Illnesses, Lifestyle); Travel (Destinations, Hotels); Work (LinkedIn, Blog); Social (Facebook, Instagram, Pinterest); and so on.

If a particular organisation were capable of assembling all of the above information for any single person, I think we can agree that it would be a pretty good starting point for a virtual description of you. All online, all digital and, in the future, undoubtedly all available either at the ‘right’ price or to the ‘right’ organisations. The irony is that we freely give up all of this information now, much of it simply to socialise and interact with our friends and families online, and this is likely only one aspect of what is actually being recorded to build up profiles of our digital doppelgängers.

Many websites, including Facebook, capture information such as how long you linger on a photo of your ex, or to whose events you RSVP “attending”. The New York Times has dubbed this effect the online echo chamber, stating that the “Internet creates personalized e-comfort zones for each one of us”.

Search results that you get back from Google are tailored to your location initially, but over time (as I’m sure most of you have spotted) these tailored search results (and ads) become uncannily accurate, as if they really are reading your thoughts. I really don’t mind having tailored ads, since they could reduce my search time and lead me to better deals, for example. However, when it gets to the stage where a single organisation, be it Amazon, Facebook or another, has literally and virtually ALL of your personal information, that may be a bit too much for comfort.

Can I expect to receive a digital dream notification while I sleep, offering an umbrella and porridge to be delivered first thing in the morning (since it will be raining and I usually skip breakfast) for my journey to work? It sounds good, and maybe a little scary, especially if my porridge arrives hot and steaming, accompanied by a note from the local undertaker asking if I would be interested in an eco-friendly coffin. It seems the same online data has indicated an imminent departure from mother earth, swept away by a vulgar little tumour. They were right after all: ignorance is no longer bliss.

The digital frontier is upon us; where will it take you?

Darwinism & Digital Transformation – Adapt or Die

Darwinism and the Digital Transformation Age

The Theory
The so-called ‘theory’ of evolution, as we all know, is not a theory at all, but rather a scientific explanation of our entire existence, supported by a wealth of evidence solidly rooted in experimental rigour and provided by a wide variety of disciplines, including paleontology, geology, genetics and developmental biology.

We now unambiguously understand that “natural selection is the gradual process by which biological traits become either more or less common in a population as a function of the effect of inherited traits on the differential reproductive success of organisms interacting with their environment. It is a key mechanism of evolution.” [1]

Recently, I was thinking about the whole natural selection thing and how this manifests itself in a multitude of adaptations, evolved over eons. Given time and the right conditions, almost any feature could become advantageous and ultimately prevalent in an ecosystem or society. The thing with natural selection is that it happens imperceptibly slowly. In evolutionary terms, time is not just of the essence, it is the essence or medium through which change travels, rarely in a direct or pre-determined fashion and not always for the good. It just is.

Arguably, at the heart of it all is the fact that all organisms exhibit some sort of variation, possibly created through a random mutation or perhaps through an adaptation to change. An individual trait may then be subject to environmental conditions which enhance or diminish its advantage, leading to an increase or decrease in the population depending on its ability to exploit that advantage. It’s interesting to note that mutations can even be deliberately induced in order to adapt to a rapidly changing environment.

You may have observed that some important corollaries of the previous paragraph could be:
1. There is a chance of a random mutation,
2. Mutations can be forced,
3. Mutations can occur as a result of interaction with the environment.

I have deliberately reiterated this point to help develop the next piece of the narrative, i.e. how we can link a well-understood area of scientific study (Darwinism) with a business’s desire and ability to digitally transform.

OK, let’s try this on for size.

1. Take the example where an individual, say some bright spark, takes up a position of responsibility within an equally bright-spark-type company. A chance encounter, one might speculate. I would argue this is analogous to a chance mutation in nature, at least from the point of view of the business.

2. Take another example where the company is well aware of its current operating environment and has a desire to modify its behaviour to its advantage. This second analogy aligns with the forced mutation concept: for whatever reason, some internal driving force is pushing for change and ultimately a better position in the food chain, or business sector.

3. The last example is where change is forced upon an entity by external, uncontrollable factors. In nature this could be climate change, or an unnaturally high increase in predation. Whatever the cause, the only solution is to adapt, improvise and overcome; or face disaster.

Sound familiar? Well, maybe. I guess you could also say it sounds a little far-fetched, but I’m still pretty certain I can build this out into something viable. How so? Well, there are steps involved in digital transformation, just as there are steps to evolve. Equally, there is typically a set of actors and processes involved in digital transformation, all of which are pretty obvious when you think about it.

The Team
1. The visionaries (actors), i.e. the ones with the idea behind the change and often the instigators of change. The visionary may come from within the organisation, or be someone new joining the organisation to effect change. The response is the same in that the organisation is on the receiving end of a forced mutation.

2. The ‘not so visionaries’ (reactors), i.e. the ones who react (usually too late) to some change forced upon them by external factors, such as market, employment or economic forces. From a digital transformation point of view these people are generally ineffective at best and can often be obstructers of change.

3. The changers, i.e. the ones who will implement the change: the adapters, the improvisers and the people who overcome difficulty in the face of the forced transformation, regardless of whether it has been decreed by a visionary or forced by circumstance. This is the change team, and we will hear more about them later; essentially they are the people involved with change, and they usually adopt some method, i.e. a process by which change will come about to the ultimate benefit of the organisation. In Darwinian terms this is something akin to natural selection.

The Process
Generally in digital transformation there is the benefit of being able to predetermine a roadmap with ideas, processes, tools and data. This helps to guide and smooth the change management process, or natural selection in Darwinian terms. Fortunately, such changes are not left completely to the laws of nature, but are subject to a degree of intelligence and planning. It could be argued, of course, that natural selection is also guided by intelligent forces, but that’s another story. Hopefully the analogy is somewhat clear by now.

Change comes in many forms and is usually a force for the better, although it is rarely welcomed. Perhaps it is time for you to think about where you are on the road to digital transformation and how you got there. Are you truly ready for the digital age, and have you got the vision and process to see it through?

Remote Working: Office Not Required

Remote Working in Modern Times

One of the workplace topics that I have seen hotly debated on occasion is the issue of remote working. Can it truly work? Are employees genuinely motivated? How can quality working relationships be established and maintained? And so on. I have been involved with remote working and managing remote teams for a number of years, but I was still intrigued when I received a copy of ‘Remote: Office Not Required‘, written by 37 signals co-founders Jason Fried and David Heinemeier Hansson. I knew of the company and their products, and that they run a very successful business entirely with remote workers.

I have long since finished the book and have only just now gotten around to blogging about the concept of the remote worker. It is not my intention to review the book here; suffice to say it is an exceptionally easy read and has a fluidity of prose rarely associated with tech-related books.

It currently has four stars on Amazon, and what could have been a potentially mundane subject to write about is actually a well-crafted, enlightening and sometimes amusing read.

So what’s the big deal? Why does the concept of remote working elicit such widely differing responses, and why do some people say they could never work remotely whilst others are highly successful mini-managers? The reason is really simple: it’s all a matter of trust. You have to trust your colleagues and trust yourself to fulfil your end of the deal.

What do I mean? Simply, you have to trust that you have the drive, motivation, desire and ability to work away from the rest of your team: not on your own, but not co-located either. I really believe it’s that simple. Colleagues should be assured that you are the type of person who, given a lull in the general day-to-day activities, will look for something to do, whether that’s improving a process, finishing off the operations manual from two projects ago, or simply chasing down customer responses to emails.

Still, there is one area where I think remote working is lacking. Ironically enough, it’s the tech needed for a typical remote team operation. It’s definitely not up to scratch in certain areas, though not to the point where the experience is unpleasant; it’s just clear to me that it can be improved upon. Although tools such as Skype and TeamViewer have improved by leaps and bounds over the last few years, I still find that they are easily affected by bandwidth/signal/wi-fi issues and in some cases lack a clear, intuitive interface.

With 4G availability and coverage ever expanding, and devices offering ever more facilities through powerful, rich apps, this should be the impetus telecoms companies need to provide a true internet society with always-on, always-connected and always-secure technology. Fast, secure, intuitive apps are key components of a happy remote workforce.

37 Signals’ secret to success is pretty evident from the manner in which they openly recruited remote developers from all over the globe. Because they ensured that each member of the team had appropriate tools and a solid service provider, there are few, if any, complaints in the book that a lack of tools and/or a suitable internet connection hindered progress in any way. It’s also clear from the book that their interview process is rigorous and aimed at weeding out candidates who don’t suit the remote worker profile they have studiously crafted over the lifetime of their business.

And the secret for you as a remote worker? This excellent article by Zapier outlines a number of traits that remote workers need to possess in order to be successful and happy. They include the more obvious things, like being trustworthy and being able to communicate effectively via the written word, but also, and I think just as importantly, having a local support system. This means having a life outside of work where interaction with people occurs on a level different from Skype and TeamViewer conversations. Clearly, all digital work does indeed make Jack a dull boy.

Proverbs aside, it is a vital point and one not to be taken lightly if you are thinking about entering the remote working arena. Having a good network of friends and family is really necessary to help fill the void potentially created by the remote working environment.

I have found that working remotely can be, and is, just as rewarding as going to the office. Having successfully managed a number of remote teams I can say with a sense of achievement and satisfaction that being co-located isn’t that important, but having the right attitude is.

Prototyping and the SDLC

The Prototyping Model Applied to the SDLC

Embarking on any development project in a new supplier/customer relationship can be a daunting experience for all parties involved. There is a lot of trust to be built in what is usually a fairly short time and it is sensible to select an approach that improves the chances of the project startup succeeding and progressing as planned to the next phase.

In my experience there is no single ‘correct’ method for doing this, though clear dialogue and experience with project management methodologies can help immensely. Depending on the school of thought, project type and customer requirements, any one of a number of project management methods can be employed, and it usually falls to the project manager to select the approach that best suits the needs of the business case.

One such approach that has worked well for me in the past is the ‘prototyping model’ approach to the software development lifecycle (SDLC). Software prototyping itself, of course, isn’t a new concept, but it is becoming more popular and should be seriously considered when starting out on a new project where it is recognised that other risk factors are involved, such as fresh partnership agreements, subjective designs and/or new technologies.

An obvious reason prototyping is becoming more popular is its relatively risk-averse nature. In a short space of time, a customer has an opportunity to perform a comprehensive skills assessment of the supplier before deciding to move forward with (or withdraw from) the main project. This substantially reduces cost and quality risks at the outset.

In turn, a supplier can ascertain whether the customer has a decent grasp of their product vision and the ability to specify a clear set of requirements, so the prototype partnership is usually a mutually beneficial one. If the conclusion is that prototyping has been a positive experience for both parties, then there is good reason to remain confident in the project partnership going forward.

There are a number of benefits to prototyping which can suit either party, but one that is of particular benefit to the customer is using prototyping as a vehicle to choose between a number of suppliers who are all bidding for the same project. Again there is less risk, certainly to the customer and potentially to the supplier as well, since neither party would wish to continue with a project that has failed in its first initiative.

So really, what I am saying is that prototyping is a cheap, powerful assessment tool and, depending on the approach, could form the foundation of the main project. Code developed in the prototype phase can be reused, so the time taken to complete the prototype is not lost in the overall project timescale.

Additionally, prototyping is a tool for building successful working relationships quickly, and it can prove invaluable as a yardstick of supplier capability. Generally speaking, a prototyping SDLC model has an overriding advantage over other SDLC models in that it doesn’t rely on what is supposed to happen, i.e. what has been written in technical design documentation. Instead it canvasses the users directly and asks them what they would really like to see from the product. Gradually, the product is developed through a number of iterations and addresses the needs of the users directly in that phase of the project.

The Prototyping SDLC Model

The prototyping model starts out with an initial phase of requirements gathering, much like any other software development process; however, it quickly moves to development after an initial, simple design is produced. A first iteration is released and given to the customer for review and feedback, and this in turn may elicit more requirements as well as alter the design and function of the application.

This process continues until the customer accepts the prototype as complete and the project moves to the next phase of development. At this point the prototype can become redundant, it can continue to be used as a tool for examining various design options and/or functional changes, or it can be incorporated into the main project as it is.

Since the prototype is developed in what is largely an agile process, there is no reason the main application cannot be developed in the same way. Purists may argue that this is an inherently waterfall approach, but I would argue that any purist approach to the SDLC can cause issues; practically speaking, one should adopt an approach that suits the customer and project, i.e. flexibility is key.

Prototyping – Pros and Cons

Prototyping offers a number of advantages:

1. Time-boxed development ensures a minimum viable product
2. Users are encouraged to participate in the design process
3. Changes can be accommodated quickly and easily
4. The application evolves as a consequence of a series of iterations and corrective feedback loops. Generally, this leads to a much more widely accepted product
5. Defects can usually be detected and fixed early on, possibly saving resources later on
6. Areas of functionality that are missing and/or confusing to users can be easily identified and remedied

There are however a few disadvantages as follows:

1. Most prototypes are never fully completed and may exhibit technical debt which is never fully addressed. This is especially important if the prototype is used as the basis for the real application
2. Documentation, if required, can be a bit of a nightmare since changes are usually frequent and can be substantial
3. There can be a reluctance among developers to change from the initial prototype design. Instilling a new design mindset can be difficult
4. If integration is required and the prototype is unstable, this can cause other issues
5. Over-enthusiasm in the iterative stages can result in too much time being invested in this phase

Will it work for you?

Before you can answer that question, it is probably a useful exercise to ask yourself what you need to get out of the prototype and what you intend to do with it afterwards.

The prototype model can be exercised in a few different ways, and this can have a substantial impact on the project as a whole. More often than not, prototypes fall into one of four categories:

1. Prototypes that are meant to be thrown away after the prototype phase
2. The evolutionary prototype where iterations continue and the prototype evolves into bigger and more functional prototypes and ultimately, the final application
3. The incremental prototype which is a more modular approach. Each subsystem is prototyped and then all of the sub-systems are integrated to build the final application
4. A web-based technique known as extreme prototyping where only the web pages are developed initially and then the functional aspects are added in the final stage. An intermediate stage is used to simulate the data processing as the prototype evolves.

The technology used to develop the application can also have an impact on how development proceeds. For example, with mobile apps most IDEs have built-in simulators to help with rapid design and demonstration of the app, so prototyping is almost implicit in the overall build approach.

Whichever SDLC approach you choose, prototyping should be considered as a useful tool in application development. It can save time and cost, and be an invaluable indicator of the future success of your project. Priceless!

Moving to the Cloud – Part 3 of 3

Part 3 – Implementing the Hybrid Cloud for Dev and Test

In Part 2, I presented an overview of the main benefits and drawbacks of using a hybrid cloud infrastructure for Dev and Test environments, whilst Part 1 defined my interpretation of a hybrid cloud in modern-day parlance. In this third and final part, I will talk about the processes involved when implementing Dev and Test cloud-based environments and how they can be integrated to achieve application release automation through continuous build and testing.

An obvious starting point is the selection of a public cloud provider, and it appears that Amazon is currently winning that race, though Microsoft, HP and Google are in contention, creating a ‘big four’ up front, with a multitude of SME cloud providers bringing up the rear. Before selecting a public cloud vendor there are a number of important aspects (based on your requirements) to consider and decisions to be made around things like value for money, network and/or VM speed (and configuration), datacentre storage and so on.

Perhaps a simple pay-as-you-go model will suffice, or alternatively there may be benefits to be had from reserving infrastructure resources up front. Since the public cloud offers scaling, some sort of inherent and easily invoked auto-scaling facility should also be provided, as should the option to deploy a load balancer, for example. Even if it initially appears that the big players offer all of the services required, the final choice of provider is still not all plain sailing, since other factors can come into play.

For example, whilst Amazon is a clear market leader and an understandable vendor choice, if conforming to technology standards is a requirement this could pose a problem, since large vendors can and do impose their own standards. On top of that, SLAs can be unnecessarily complicated, difficult to interpret and unwieldy. Not surprisingly, to counter the trend of large consortium vendors, there has been substantial growth in open source cloud environments such as OpenStack, CloudStack and Eucalyptus. OpenStack, for example, describes itself as “a global collaboration of developers and cloud computing technologists producing the ubiquitous open source cloud computing platform for public and private clouds” [1].

By its very nature, IaaS implies that many VMs exist in a networked vLAN and that there is an innate ability to share and clone VM configurations very quickly. This implies a need for some sort of API which supports creating VMs and sharing them (as whole environments) via REST-based web services. This point retraces its way back to my remark in Part 2, where I mentioned that new infrastructures should be built with automation in mind. This approach would utilise the customisable APIs that vendors generally provide and would normally support automatic provisioning, source control, archive and audit operations.
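
To give a flavour of that kind of API-driven provisioning, here is a minimal sketch using Amazon’s boto3 SDK for Python. It is illustrative only: the AMI ID, key pair, security group and region are placeholders you would replace with values from your own account.

```python
# Minimal provisioning sketch using the AWS SDK for Python (boto3).
# The AMI ID, key pair, security group and region are placeholders.
import boto3

ec2 = boto3.resource("ec2", region_name="eu-west-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI for your base build
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="dev-keypair",             # placeholder key pair name
    SecurityGroups=["dev-test-sg"],    # placeholder security group
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Environment", "Value": "Test"}],
    }],
)

for instance in instances:
    instance.wait_until_running()
    instance.reload()                  # refresh attributes such as the DNS name
    print(instance.id, instance.public_dns_name)
```

The same few lines can be wrapped in a script or pipeline step, which is really what ‘built with automation in mind’ means in practice.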

Having settled upon a public cloud provider, the private cloud is likely to be created using whatever means are available; Windows or Ubuntu Server, for example, could serve as a basis for creating the infrastructure, though other tools such as VirtualBox or VMware may be required. In an ideal world the technology stack in the private cloud should be the same as that in the public cloud, so examining the in-house technology stack could shape the decision about the choice of public vendor.

‘Integrate at least daily’ has become one of the mantras of the proponents of new agile methodologies, and as with cloud vendors there is a wealth of continuous integration and delivery (CI/CD) tools on the market. It isn’t easy to choose between them and, whilst some general considerations should be taken into account, the online advice seems to be to ‘dive in’ and see what works and what doesn’t.

A lot of the tools are free, so the main cost is the time taken for setup and benefit realisation; however, the advantages of any CI/CD system that works properly will almost always outweigh the drawbacks, whatever the technology. Jenkins and Hudson appear to be market leaders, but there are a number of others to consider, and quite often they will include additional components to configure for continuous delivery.
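
As a small illustration of how such tools can be driven from scripts, the hedged sketch below queues a Jenkins job through its remote REST API and polls for the result. The server URL, job name and credentials are placeholders, and depending on your Jenkins configuration a CSRF crumb may also be required.

```python
# Hypothetical sketch: trigger a Jenkins job via its remote API and poll the result.
# The URL, job name, user and API token are placeholders for your own setup.
import time
import requests

JENKINS_URL = "https://jenkins.example.com"
JOB_NAME = "nightly-build"
AUTH = ("ci-user", "api-token")  # placeholder credentials

# Queue a new build of the job.
resp = requests.post(f"{JENKINS_URL}/job/{JOB_NAME}/build", auth=AUTH)
resp.raise_for_status()

# Poll the last build until Jenkins reports a result (SUCCESS, FAILURE, ...).
while True:
    build = requests.get(
        f"{JENKINS_URL}/job/{JOB_NAME}/lastBuild/api/json", auth=AUTH
    ).json()
    if build.get("result"):
        print("Build", build["number"], "finished with", build["result"])
        break
    time.sleep(30)
```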

Test automation is clearly fundamental to moving to a CI/CD approach and is key to accelerating software quality. Assuming that development is test-driven, enterprises implementing the hybrid cloud architecture can expect to produce higher quality software faster by eliminating traditional barriers between QA, developers, and ops personnel. In instances where there is substantial code development, several test environments may be required in order to profit from the expandable nature of the public cloud by running several regression test suites in parallel.

Again, there is a large number of tools (or frameworks) for test automation available on the market. Selenium WebDriver, Watir and TFS (coded UI tests) are three of the more widely used. For testing APIs there are SoapUI and WebAPI, and for load testing, JMeter. The frameworks and associated tools selected will likely complement the available team skills and current technology stack. Whatever the choice, there is still the significant challenge of integrating and automating tools and frameworks effectively before the benefits of automation are properly realised.
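
To show what one of these automated checks might look like, here is a short Selenium WebDriver sketch using the Python bindings. The URL and element locators are invented for the example and would differ for any real application.

```python
# Simple Selenium WebDriver smoke test (Python bindings, Selenium 4 style).
# The URL and locators are invented for illustration only.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
try:
    driver.get("https://app.example.com/login")            # placeholder URL
    driver.find_element(By.ID, "username").send_keys("test.user")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()
    # A very basic check that the login landed on the dashboard.
    assert "Dashboard" in driver.title
finally:
    driver.quit()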

As well as a fairly large set of development, source control, build, release and test automation tools, a typical agile team will also require some sort of project management tool, which should ideally provide a way to track and monitor defects as well as plan and control sprints during the lifecycle of the application. Tools such as Rally or Jira are suitable for this and offer varying levels of complexity based on project requirements and available budget.
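
Defect tracking can be wired into the same automation too. As a hedged example, the sketch below raises a bug in Jira through its REST API; the server URL, project key, credentials and issue text are all placeholders.

```python
# Hypothetical sketch: raise a defect in Jira via its REST API.
# Server URL, project key, credentials and issue text are placeholders.
import requests

JIRA_URL = "https://jira.example.com"
AUTH = ("automation-user", "api-token")

issue = {
    "fields": {
        "project": {"key": "PROJ"},
        "summary": "Regression suite failure: login page",
        "description": "Automated nightly run failed; see build log for details.",
        "issuetype": {"name": "Bug"},
    }
}

resp = requests.post(f"{JIRA_URL}/rest/api/2/issue", json=issue, auth=AUTH)
resp.raise_for_status()
print("Created", resp.json()["key"])
```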

Clearly, there is a lot to consider when making the move to cloud development, and this is likely to be one of the reasons why more businesses have not embraced cloud technologies for anything other than storage. My advice would be to think big but start small and take it one step at a time; understanding and integrating each new element of technology along the way is key to the final setup. Ultimately, the end goal should be well worth it and may shape your business for years to come. The cloud technology curve is here and here to stay; the question is, are you on it?

Moving to the Cloud – Part 2 of 3

Part Two – Hybrid Cloud Benefits

In Part 1, I presented a brief definition of the hybrid cloud and hinted at why it could be a useful instrument for enterprises wishing to move their agile Dev and Test environments to a public cloud while still retaining their Prod systems in a local, private cloud. In Part 2, I will consider a number of key areas where substantial benefit can be leveraged using current cloud technologies and why this should be considered a serious move towards a more efficient and secure development strategy. That said, like any IT initiative, cloud computing is not without risks, and they too will be considered, leaving the reader to weigh up the options.

It is useful to bear in mind from Part 1 that we are primarily considering cloud providers that offer IaaS solutions; consequently, entire environments can be provisioned and tested (via automation) in minutes rather than hours or days, and that in itself is a massive boon. This concept alludes to the ‘end goal’ of this type of cloud-based setup, i.e. the design of infrastructures with automation in mind and not just the introduction of automation techniques to current processes, but that’s a topic for another discussion.

There are obvious economic benefits to be had from using public clouds, since Dev, and especially Test, environments in the cloud do not necessarily need to be provisioned and available 24/7 as they normally are on-premise. From a testing point of view, many enterprises have a monthly release cycle, for example, where the Test environment is in much greater demand than at other times of the month. In this case it is possible to envisage a scenario where the Test environment is only instantiated when required and lies dormant at other times.
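
A minimal sketch of that on-demand pattern, assuming the Test environment is simply a set of AWS instances tagged Environment=Test, might look like the following; the tag name and region are assumptions made for illustration.

```python
# Minimal sketch: wake up / put to sleep a tagged Test environment on demand.
# Assumes AWS via boto3; the tag name and region are assumptions for illustration.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

def test_environment_instance_ids():
    """Find all instances tagged as belonging to the Test environment."""
    reservations = ec2.describe_instances(
        Filters=[{"Name": "tag:Environment", "Values": ["Test"]}]
    )["Reservations"]
    return [i["InstanceId"] for r in reservations for i in r["Instances"]]

def start_test_environment():
    ec2.start_instances(InstanceIds=test_environment_instance_ids())

def stop_test_environment():
    ec2.stop_instances(InstanceIds=test_environment_instance_ids())

# e.g. start the environment at the beginning of the monthly release cycle...
start_test_environment()
# ...and stop it again once regression testing is complete.
# stop_test_environment()
```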

The phrase ‘business agility’ has been applied to the way a hybrid cloud can offer the controls of a private cloud whilst at the same time providing scalability via the public cloud, and this is also a prime benefit. A relatively new term in this arena is ‘cloud bursting’. Offered by public clouds, this refers to short but highly intensive peaks of activity that are representative of cyclical trends in businesses that see periodic rises and falls in demand for their services. For businesses that anticipate this type and intensity of activity, this kind of service can be invaluable.

For the troops on the ground, an HP white paper describes clear benefits to developers and testers: “Cloud models are well suited to addressing developer and tester requirements today. They allow for quick and inexpensive stand-up and teardown of complex development and testing environments. They put hardware resources to work for the development and testing phases that can be repurposed once a project is complete”. [1]

Once properly provisioned and integrated, cloud infrastructures will usually offer faster time-to-market and increased productivity through continuous delivery and test automation. However, these particular benefits may take a little time to manifest themselves, since implementing full-scale Dev and Test environments, with associated IDE and build integrations and an automated test facility, is a relatively complex exercise requiring a range of skills, from code development to domain administration, QA and release automation.

Clearly, to achieve and deliver this kind of flexibility a substantial tool set is required. Additionally, developers need to work harmoniously with operations (admin) in a partnership that has become known as DevOps, and this is what I meant in Part 1 when I said that a new mindset was required. The ultimate goal of adopting cloud-based Dev and Test environments is continuous delivery through application release automation. This kind of agile approach is seen as a pipe dream by many enterprises, and I believe the current perception is that too many barriers, both physical and cerebral, exist to adopting the hybrid cloud model for effective product delivery.

These barriers include the obvious candidates, such as security and privacy in the cloud leading to a potential increase in vulnerability. This can be addressed by commissioning a private cloud for Prod systems and ensuring that any data and code in public clouds is not confidential and does not compromise the company in any way. Another drawback that is often raised is vendor ‘lock-in’, which simply relates to the terms and conditions of the cloud provider. With so many companies now offering cloud services, I personally think that shopping around can mitigate this risk completely and can actually be seen as a positive factor instead. Switching between cloud providers is becoming less and less of a problem, and this in turn offers a competitive advantage to the cloud consumer as they move their business to take advantage of lower costs.

I do accept that technical difficulties and associated downtime could form a barrier, but this can be said of any new, large tech venture: a large tool set is required and there will certainly be a lead time for the newly created DevOps team to get up to speed with continuous integration, test and release automation. Since applications are running in remote VMs (the public cloud), there is an argument that businesses have less control over their environments. This may be true in some cases, but again proper research should lead to a partnership where effective control can be established by the cloud consumer using appropriate tools that effectively leverage what the vendor has on offer.

I would like to think that in Part 2 of this three-part article I have managed to convey that, in most cases, the benefits of migrating Dev and Test to the cloud outweigh the drawbacks. In Part 3, I will look at how Dev and Test could be implemented, at a fairly high level. There is a plethora of tools to choose from: free, open source, bespoke, bleeding edge; whatever route you choose, there is almost certainly a tool for the purpose. Integrating them could prove challenging, but that’s part of the fun, right?

Moving to the Cloud – Part 1 of 3

Part One – Defining the Hybrid Cloud
Earlier this year, when I blogged ‘ten trends to influence IT in the next five years‘, one of the trends I mentioned has since been written about on quite a few occasions in various web articles and white papers. That particular trend is the use of the ‘hybrid cloud’, and it seems to be increasingly catching the attention of the tech evangelists who are keen to spread the word and radicalise the non-believers, as I discovered in a recent Cloudshare webinar.

A little more research on the topic led me to discover that there is a certain reluctance to adopt cloud (development) initiatives in general. Like most people, I had assumed that cloud-based architectures were creating a new technology storm and that in a few years almost everything would be built, developed, hosted and run in a multitude of geographical locations by thousands of VM instances, created and destroyed in seconds. It appears this may not be the case; I find that seven months later (a long time in IT), the transition has simply not happened, or to be more precise, not at the rate expected by the cloud aficionados who have been talking about a grandiose move of technology in that direction for the last few years.

My gut feeling is that cloud computing is still not a well-understood concept in the tech community, at least from a ‘development tool’ point of view, and this has understandably hindered the move away from traditional development to cloud-centric infrastructures and environments. I have spent some time reading about the pros and cons of moving development to a cloud-based solution and, whilst I am an avid supporter of the concept, the best approach for an organisation isn’t always straightforward and will almost certainly involve one of the toughest challenges that can be faced in IT: a cultural change in the workplace and a paradigm shift in the mindset of the developers.

To make a move to the cloud for development and test purposes, people have to think about doing things in a different way. There are other potential barriers, but this is likely to be the one that poses the greatest threat to starting out on the road to eventual development, deployment and testing in the cloud. In general, Gartner defines a hybrid cloud service as a cloud computing service that is composed of some combination of private, public and community cloud services, from different service providers [1]. Whilst many of the large public cloud service providers also provide a private cloud facility, I expect that many organisations still prefer to provide their own private cloud implementation, since this appears to give a higher degree of security, or at the very least facilitates the storage of data in a local datacentre.

There are quite a few benefits to a hybrid cloud, but the obvious one is that it enables the owner to take advantage of the larger resources that a public cloud might offer, while still storing and maintaining private data in a manner where it should be safe from malicious attack and/or theft. Of course, there are some organisations whose entire business could exist in a public cloud, but based on my experience this is still not a concept that businesses are truly happy with, and certainly within a lot of enterprise or government organisations there is a preference to at least have the production system hosted in a private cloud.

In summary, my concept of a hybrid cloud is one where an organisation has developed its own private cloud for its production (Prod) system and is happy to use the services of a public cloud to develop, host and run its development (Dev) and test (Test) environments. Really, what I am talking about here is moving each of these infrastructures to a cloud environment, and that will form the basis of Part 3 of this blog. Part 2, coming up next, will further elaborate on the widely accepted benefits and introduce some of the negative aspects perceived with cloud computing.

PhraseExpress – Save Keystrokes


If you find yourself keying in the same phrase or word combination every day, there is a neat little tool called PhraseExpress. It’s a time saver and can be used to capture anything from single words to complex phrases or sentences, or other commonly typed pieces of text such as names, addresses, telephone numbers etc. PhraseExpress is triggered via a certain set of keystrokes or manually via hotkey combinations and is incredibly easy to use.

Having researched other methods of text auto-completion, this appeared to be one of the best free tools out there on the web. It can be configured to start automatically with Windows, or it can be started manually; either way, the footprint is very small, so the drain on resources is of little consequence.

Since I consider it good practice to log out of websites when I am finished, and I prefer not to have my browser store any information, on any given day when I go to log in to several websites it’s really useful to have email-address usernames that can be called up with a hotkey combination. Rather cynically (or sensibly), I considered that it might be some sort of spyware or malware, but the PhraseExpress website has this to say about their product:

“Of course, PhraseExpress needs to have keyboard access in order to provide the desired functionality (hotkey support, autotext feature). A helper utility must be authorized to help.

Unfortunately, a few “AntiVirus” and “Security” programs may generally claim any program as potentially dangerous which access the keyboard, regardless if it is a malicious keylogger or a harmless keyboard utility.

Please be assured that never send any personal data over the internet. We are a registered company and surely have no interest to ruin the reputation of our 14 years history or to mess with one of the strictest privacy laws world wide. We also do not hide our place of business.”

This is still no guarantee, but it does appear to be legitimate, and provided that the text stored is not of a sensitive nature (passwords and the like), I believe this is a nifty product that can save thousands of unnecessary keystrokes when used effectively.

A free download can be found here.

Medicine Apps – The Future?

Medicine Apps

I used to be a big fan of the BBC’s Horizon programme. Back in the day it was at times controversial, ground-breaking and one of the few programmes on air that appealed to science-heads, engineers, futurists and like-minded people. Recently, though, I think it has gone downhill a bit, becoming less high-tech and ultimately attempting to gain more traction with a less tech-savvy audience. However, one episode I was impressed by really hit the nail on the head, striking a great balance between the interest of the masses (smartphones), science (medicine) and the future (healing yourself). The idea of a GP-less future is appealing; I for one am not a big fan of visiting the local GP and generally rely on a self-diagnosed blend of common sense and Paracetamol to overcome any and all ailments or sicknesses that blight my otherwise healthy-ish lifestyle.

I am aware of smartphone apps that can measure heart rate using the camera and its flash, and of the step counters that have been around for a while, telling you just how lazy you have been in only achieving 3,500 steps of your recommended daily 10,000. Additionally, doctors actually use apps like Epocrates or Medscape to help them prescribe drugs, and of course there are loads of reference-material-type apps. But this isn’t what I’m talking about, at least not completely; this is only the start, and once we introduce near-field communication technology the potential for apps and medical applications skyrockets.

Horizon presented the case of one particular doctor who had a whole host of apps and associated gadgets that allow direct, quick and accurate monitoring of various body variables (for lack of a better term). A case in point was a smartphone with metal connectors on the back: the user pressed their thumbs onto the contacts and was presented with a real-time electrocardiogram readout of their heart rate and associated functions. Where the user is a sufferer of diabetes, a near-field device containing a tiny needle could be used to monitor blood sugar levels and present this on the screen of the smartphone in an app that looked familiar and was intuitive to use.

The important thing to recognise here is that users are already very familiar with their apps, so introducing an app for medical or self-diagnosis purposes, even if it also requires a near-field technology device, shouldn’t be an overwhelming technology experience. The point is, people are already well accustomed to using their smartphones for a wide variety of things, and as near-field communication becomes increasingly popular, people may turn to using their smartphones for self-diagnosis and potentially for alerting medical services in the event of an emergency.

From a fitness point-of-view, some of my friends have tried the ‘couch to 5k’ app, or ‘C25K’ as it is now known, and it has actually worked, providing a scheduled plan of running workouts that ensures new runners do not push too hard and injure themselves, while guiding them to their goal of running 5k without stopping. There are calorie counters and diet plans, kettlebell workouts and yoga instructors all available to give advice on how to get healthy, stay fit, or simply show you how to avoid an array of common illnesses brought about by unhealthy lifestyles; in other words, preventative medicine. With knowledge comes power, and with power comes the chance to change one’s life, so maybe it’s time we all tried to ‘get physical’ with an app.

Unwanted Ads – Protect Your Browser

Unwanted Ads

Probably everyone has suffered from this at some point, i.e. unwanted ads appearing in the browser when you navigate to another site. It’s extremely annoying and also potentially dangerous. Ads can be a symptom of malware, or ‘malicious software’, which is designed to infiltrate your PC and perform all kinds of nasty things, including gathering sensitive data. Obviously, having your AV up to date can help to diagnose and remove them, but this isn’t always enough and you may need to take further action. I’m using Firefox, so this tip is really only aimed at that browser, but there are similar fixes for IE and Chrome.

The first thing I did to remove the malware and associated files, registry keys etc. was to run an adware cleaner tool, found here. This tool quickly found the guilty files and promptly removed them. It was easy to use and presented a log of all problems found. After deleting all of the adware-associated files and restarting, the problem disappeared immediately. I then ran a full AV scan for a few hours just to see if anything else was picked up; since the AV hadn’t detected the problem originally I didn’t expect to see anything, so this was more for peace of mind. Feeling pretty confident that I had successfully removed the culprit, I then decided to implement some positive, preventative, anti-adware action and discovered a couple of neat little add-ons in the process.

1. Adblock Plus
Adblock Plus allows you to regain control of the internet and view the web the way you want to. The add-on is supported by over forty filter subscriptions in dozens of languages which automatically configure it for purposes ranging from removing online advertising to blocking all known malware domains. Adblock Plus also allows you to customize your filters with the assistance of a variety of useful features, including a context option for images, a block tab for Flash and Java objects, and a list of blockable items to remove scripts and stylesheets.

2. Ghostery
Ghostery sees the “invisible” web, detecting trackers, web bugs, pixels, and beacons placed on web pages by Facebook, Google Analytics, and over 1,000 other ad networks, behavioural data providers, web publishers – all companies interested in your activity.

By installing each of these add-ons, I now appear to be ad-free, and I see small messages appearing every time ads are blocked or trackers are found on a web page.