Remote Working: Office Not Required

Remote Working in Modern Times

One of the workplace topics I have seen hotly debated on occasion is the issue of remote working. Can it truly work? Are employees genuinely motivated? How can quality working relationships be established and maintained? And so on. I have been involved with remote working and managing remote teams for a number of years, but I was still intrigued when I received a copy of ‘Remote: Office Not Required’, written by 37signals co-founders Jason Fried and David Heinemeier Hansson. I knew of the company and their products, and that they have a very successful business run entirely by remote workers.


I have long since finished the book and have only just now gotten around to blogging about the concept of the remote worker. It is not my intention to review the book here; suffice it to say it is an exceptionally easy read and has a fluidity of prose rarely associated with tech-related books.

It currently has four stars on Amazon, and what could have been a mundane subject to write about is actually a well-crafted, enlightening and sometimes amusing read.

So what’s the big deal? Why does the concept of remote working elicit such widely differing responses, and why do some people say they could never work remotely whilst others are highly successful mini-managers? The reason is really simple: it’s all a matter of trust. You have to trust your colleagues and trust yourself to fulfil your end of the deal.

What do I mean? Simply that you have to trust that you have the drive, motivation, desire and ability to work away from the rest of your team: not on your own, but not co-located either. I really believe it’s that simple. Colleagues should be assured that you are the type of person who, given a lull in the general day-to-day activities, will look for something to do, be it an improvement in process, finishing off the operations manual from two projects ago, or simply chasing down customer responses to emails.

Still, there is one area where I think remote working is lacking. Ironically enough, it’s the tech needed for a typical remote team operation. It’s definitely not up to scratch in certain areas, though not to the point where the experience is unpleasant. It’s just clear to me that it can be improved upon. Although tools such as Skype and TeamViewer have improved by leaps and bounds over the last few years, I still find that they are easily affected by bandwidth/signal/wi-fi issues and in some cases lack a clear, intuitive interface.

With 4G availability and coverage ever expanding, and devices offering ever more facilities through powerful, rich apps this should be the impetus telecoms companies need to provide a true internet society with always on, always connected and always secure technology. Fast, secure, intuitive apps are key components to happy, remote workers.

37signals’ secret to success is pretty evident from the manner in which they openly recruited remote developers from all over the globe. Because each member of the team was given appropriate tools and a solid service provider, there are few, if any, complaints in the book that a lack of tools or a suitable internet connection hindered their progress in any way. It’s also clear from the book that the interview process is rigorous and aimed at weeding out candidates who don’t suit the remote worker profile they have studiously crafted over the lifetime of their business.

And the secret for you as a remote worker? This excellent article by Zapier outlines a number of traits that remote workers need to possess in order to be successful and happy. They include the more obvious things, like being trustworthy and having an ability to communicate effectively via the written word, but also, and I think just as importantly, having a local support system. This means, of course, having a life outside of work where interaction with people occurs on a level different to Skype and TeamViewer conversations. Clearly, all digital work does indeed make Jack a dull boy.

Proverbs aside, it is a vital point and one not to be taken lightly if you are thinking about entering the remote working arena. Having a good network of friends and family is really necessary to help fill the void potentially created by the remote working environment.

I have found that working remotely can be, and is, just as rewarding as going to the office. Having successfully managed a number of remote teams I can say with a sense of achievement and satisfaction that being co-located isn’t that important, but having the right attitude is.

Prototyping and the SDLC

The Prototyping Model Applied to the SDLC

Embarking on any development project in a new supplier/customer relationship can be a daunting experience for all parties involved. There is a lot of trust to be built in what is usually a fairly short time and it is sensible to select an approach that improves the chances of the project startup succeeding and progressing as planned to the next phase.


In my experience, there is no single ‘correct’ method to do this, though clear dialogue and experience with project management methodologies can help immensely. Depending on the school of thought, project type and customer requirements, any one of a number of project management methods can be employed, and it usually falls to the project manager to select an approach that also best suits the needs of the business case.

One such approach that has worked well for me in the past is the ‘prototyping model’ approach to the software development lifecycle (SDLC). Software prototyping itself, of course, isn’t a new concept, but it is becoming more popular and should be seriously considered when starting out on a new project where it is recognised that there are other risk factors involved, such as fresh partnership agreements, subjective designs and/or new technologies.

An obvious reason prototyping is becoming more popular is its relatively risk-averse nature. In a short space of time, a customer has an opportunity to perform a comprehensive skills assessment on the supplier before deciding to move forward with (or withdraw from) the main project. This substantially reduces cost and quality risks at the outset.

In turn, a supplier can ascertain whether the customer has a decent grasp of their product vision and an ability to specify a clear set of requirements, so the prototype partnership is usually a mutually beneficial one. If the conclusion is that prototyping has been a positive experience for both parties, then there is good reason to remain confident in the project partnership going forward.

There are a number of benefits to prototyping which can suit either party, but one that is of particular benefit to the customer is using prototyping as a vehicle to choose between a number of suppliers who are all bidding for the same project. Again there is less risk, certainly to the customer and potentially to the supplier as well, since neither party would wish to continue with a project that has failed in its first initiative.

So really, what I am saying here is that prototyping is a cheap, powerful assessment tool, and depending on the approach could form the foundation of the main project. Code developed in the prototype phase could be reused, so the time taken to complete the prototype is not lost in the overall project timescale.

Additionally, prototyping is a tool for building successful working relationships quickly, and it can prove invaluable as a supplier capability yardstick. Generally speaking, a prototype SDLC model has an overriding advantage over other SDLC models since it doesn’t rely on what is supposed to happen, i.e. what has been written in technical design documentation. Instead it canvasses the users directly and asks them what they would really like to see from the product. Gradually, the product is developed through a number of iterations and addresses the needs of the users directly in that phase of the project.

The Prototyping SDLC Model

The prototyping model starts out with an initial phase of requirements gathering, pretty much like any other software development process; however, it quickly moves to development after an initial, simple design is produced. A first iteration is released and given to the customer for review and feedback, and this in turn may elicit more requirements as well as alter the design and function of the application.

This process continues until the customer accepts the prototype as complete and the project moves to the next phase of development. At this point the prototype can become redundant; it can continue to be used as a tool for examining various design options and/or functional changes; or it can be incorporated into the main project as-is.
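As a rough illustration, the build-review-refine cycle described above can be modelled as a simple loop. The function and field names below are invented purely for this sketch and are not taken from any particular methodology:

```python
def run_prototype_phase(requirements, get_feedback, max_iterations=10):
    """Toy model of the prototyping cycle: build, review, refine."""
    prototype = {"features": list(requirements), "revision": 0}
    for _ in range(max_iterations):
        feedback = get_feedback(prototype)  # the customer reviews this iteration
        if feedback["accepted"]:
            return prototype                # accepted: move to the next phase
        # Feedback may elicit new requirements that alter the design
        prototype["features"].extend(feedback.get("new_requirements", []))
        prototype["revision"] += 1          # another corrective iteration
    raise RuntimeError("Prototype not accepted within the time-box")
```

The `max_iterations` guard models the time-box: without it, over-enthusiastic iteration (disadvantage 5 below) could continue indefinitely.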

Since the prototype is developed in what is largely an agile process, there is no reason the main application cannot be developed in the same way. Purists may argue that a prototype phase followed by a main build is an inherently waterfall approach, but I would argue that any purist approach to the SDLC can cause issues; practically speaking, one should adopt an approach that suits the customer and the project. Flexibility is key.

Prototyping – Pros and Cons

Prototyping offers a number of advantages:

1. Time-boxed development ensures a minimum viable product
2. Users are encouraged to participate in the design process
3. Changes can be accommodated quickly and easily
4. The application evolves through a series of iterations of corrective feedback loops. Generally, this leads to a much more widely accepted product
5. Defects can usually be detected and fixed early on, possibly saving resources later on
6. Areas of functionality that are missing and/or confusing to users can be easily identified and remedied

There are however a few disadvantages as follows:

1. Most prototypes are never fully completed and may exhibit technical debt which is never fully addressed. This is especially important if the prototype is used as the basis for the real application
2. Documentation, if required, can be a bit of a nightmare since changes are usually frequent and can be substantial
3. There can be a reluctance among developers to change from the initial prototype design. Instilling a new design mindset can be difficult
4. If integration is required and the prototype is unstable, this can cause other issues
5. Over-enthusiasm in the iterative stages can result in too much time being invested in this phase

Will it work for you?

Before you can answer that question, it will probably be a useful exercise to ask yourself what you need to get out of the prototype and what you intend to do with it afterwards.

The prototype model can be exercised in a few different ways, and this can have a substantial impact on the project as a whole. More often than not, prototypes fall into one of four categories:

1. Prototypes that are meant to be thrown away after the prototype phase
2. The evolutionary prototype where iterations continue and the prototype evolves into bigger and more functional prototypes and ultimately, the final application
3. The incremental prototype which is a more modular approach. Each subsystem is prototyped and then all of the sub-systems are integrated to build the final application
4. A web-based technique known as extreme prototyping where only the web pages are developed initially and then the functional aspects are added in the final stage. An intermediate stage is used to simulate the data processing as the prototype evolves.

The technology used to develop the application could also have an impact on how development proceeds, for example with mobile apps, most IDEs have built-in simulators to help with rapid design and demonstration of the app, so prototyping is almost implicit in the overall build approach.

Whichever SDLC approach you choose, prototyping should be considered as a useful tool in application development. It can save time and cost, and be an invaluable indicator of the future success of your project. Priceless!

Moving to the Cloud – Part 3 of 3

Part 3 – Implementing the Hybrid Cloud for Dev and Test

In Part 2, I presented an overview of the main benefits and drawbacks of using a hybrid cloud infrastructure for Dev and Test environments whilst Part 1 defined my interpretation of a hybrid cloud in modern day parlance. In the third and final part, I will talk about the processes involved when implementing Dev and Test cloud-based environments and how they can be integrated to achieve application release automation through continuous build and testing.

An obvious starting point is the selection of a public cloud provider, and it appears that Amazon is currently winning that race, though Microsoft, HP and Google are in contention, creating the ‘big four’ up front, with a multitude of SME cloud providers bringing up the rear. Before selecting a public cloud vendor there are a number of important aspects (based on your requirements) to consider and decisions to be made around things such as value for money, network and/or VM speed (and configuration), datacentre storage, etc.

Perhaps a simple pay-as-you-go model will suffice, or alternatively there may be benefits to be had from reserving infrastructure resources up front. Since the public cloud offers scaling, some sort of inherent and easily invoked auto-scaling facility should also be provided, as should the option to deploy a load-balancer, for example. Even if it initially appears that the big players offer all of the services required, the final choice of provider is still not all plain sailing, since other factors can come into play.
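To make the auto-scaling point concrete, here is a minimal sketch of the kind of threshold rule a provider lets you configure. The thresholds and parameter names are invented for illustration; real services express the same idea through their own policy configuration:

```python
def scale_decision(cpu_percent, instance_count, min_instances=2, max_instances=10,
                   scale_up_at=75.0, scale_down_at=25.0):
    """Toy auto-scaling rule: add an instance under heavy load,
    remove one when the fleet is mostly idle, within fixed bounds."""
    if cpu_percent > scale_up_at and instance_count < max_instances:
        return instance_count + 1   # scale out
    if cpu_percent < scale_down_at and instance_count > min_instances:
        return instance_count - 1   # scale in, saving cost
    return instance_count           # steady state
```

The `min_instances` floor keeps a load-balanced pair alive even when demand drops to nothing, which is typically how such policies are configured in practice.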

For example, whilst Amazon is the clear market leader and an understandable vendor choice, if conforming to technology standards is a requirement this could pose a problem, since large vendors can and do impose their own standards. On top of that, SLAs can be unnecessarily complicated, difficult to interpret and unwieldy. Not surprisingly, to counter the trend of large consortium vendors, there has been substantial growth in open source cloud environments such as OpenStack, CloudStack and Eucalyptus. OpenStack, for example, describes itself as “a global collaboration of developers and cloud computing technologists producing the ubiquitous open source cloud computing platform for public and private clouds” [1].

By its very nature, IaaS implies that many VMs exist in a networked vLAN and that there is an innate ability to share and clone VM configurations very quickly. This implies a need for some sort of API which supports creating VMs and sharing them (as whole environments) via REST-based web services. This point retraces its way back to my remark in Part 2, where I mentioned that new infrastructures should be built with automation in mind. This approach would utilise the customisable APIs that vendors generally provide and would normally support automatic provisioning, source control, archive and audit operations.
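To illustrate the idea, here is a hedged sketch of composing the request body for such a REST call. The endpoint, field names and values are entirely hypothetical; a real vendor API (AWS EC2, OpenStack Nova, and so on) defines its own schema and authentication:

```python
import json

def build_vm_request(name, image, size, vlan, tags=None):
    """Compose the JSON body for a hypothetical POST /api/v1/vms call.
    Every field name here is invented for illustration only."""
    body = {
        "name": name,      # VM hostname
        "image": image,    # base image to clone, e.g. an Ubuntu server build
        "size": size,      # instance flavour (CPU/RAM tier)
        "vlan": vlan,      # the networked vLAN the VM joins
        "tags": tags or {},
    }
    return json.dumps(body)

# One scripted call per VM lets a whole Test environment be provisioned,
# source-controlled and audited rather than built by hand.
payload = build_vm_request("test-web-01", "ubuntu-server", "small", "dev-vlan",
                           tags={"env": "test", "owner": "qa"})
```

Because the request is plain data, the same definition can be checked into source control and replayed, which is what 'building infrastructure with automation in mind' means in practice.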

Having settled upon a public cloud provider, the private cloud is likely to be created using whatever means are available; Windows or Ubuntu Server, for example, could serve as a basis for creating the infrastructure, though other tools such as VirtualBox or VMware may be required. In an ideal world the technology stack in the private cloud should be the same as that in the public cloud, so examining the in-house technology stack could shape the decision about the choice of public vendor.

‘Integrate at least daily’ has become one of the mantras of the proponents of new agile methodologies, and, as with cloud vendors, there is a wealth of continuous integration and delivery (CI/CD) tools on the market. It isn’t easy to choose between them, and whilst some general considerations should be taken into account, the online advice seems to be to ‘dive in’, see what works and what doesn’t.

A lot of the tools are free, so the main cost is the time taken for setup and benefit realisation; however, the advantages of any CI/CD system that works properly will almost always outweigh the drawbacks, whatever the technology. Jenkins and Hudson appear to be market leaders, but there are a number of others to consider, and quite often they will include additional components to configure for continuous delivery.
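The fail-fast behaviour at the heart of these tools can be sketched in a few lines. This is a toy model of a pipeline, not how Jenkins or Hudson is implemented; the stage names are invented:

```python
def run_pipeline(stages):
    """Minimal model of a CI pipeline: run each stage in order and
    stop at the first failure, as CI servers do by default.
    Each stage is a (name, callable-returning-bool) pair."""
    results = []
    for name, step in stages:
        ok = step()
        results.append((name, ok))
        if not ok:
            break   # fail fast: never deploy a build whose tests failed
    return results

# Toy stages standing in for real build/test/deploy jobs
results = run_pipeline([
    ("build", lambda: True),
    ("test", lambda: False),   # a failing test run
    ("deploy", lambda: True),  # never reached
])
```

The value of the pattern is exactly this short-circuit: a broken build is reported within one cycle instead of surfacing days later in a shared Test environment.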

Test automation is clearly fundamental to moving to a CI/CD approach and is key to accelerating software quality. Assuming that development is test-driven, enterprises implementing the hybrid cloud architecture can expect to produce higher quality software faster by eliminating traditional barriers between QA, developers, and ops personnel. In instances where there is substantial code development, several test environments may be required in order to profit from the expandable nature of the public cloud by running several regression test suites in parallel.

Again, there is a large number of test automation tools (or frameworks) available on the market. Selenium WebDriver, Watir and TFS (coded UI tests) are three of the more widely used. For testing APIs there are SoapUI and WebAPI, and for load testing, JMeter. The frameworks and associated tools selected will likely complement available team skills and the current technology stack. Whatever the choice, there is still the significant challenge of integrating and automating tools and frameworks effectively before the benefits of automation will be properly realised.
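The parallel regression idea mentioned above can be sketched with Python’s standard thread pool. The dummy suites below are stand-ins for real Selenium or JMeter runs, each pointed at its own cloud test environment:

```python
from concurrent.futures import ThreadPoolExecutor

def run_suites_in_parallel(suites):
    """Run independent regression suites concurrently, as one might run
    them against several provisioned cloud environments at once.
    Each suite is a (name, callable) pair returning True on a pass."""
    with ThreadPoolExecutor(max_workers=len(suites)) as pool:
        futures = {name: pool.submit(fn) for name, fn in suites}
        # Gather every verdict; overall wall-clock time approaches
        # that of the slowest suite rather than the sum of all of them.
        return {name: fut.result() for name, fut in futures.items()}

outcomes = run_suites_in_parallel([
    ("smoke", lambda: True),
    ("regression", lambda: False),
])
```

This only pays off because the public cloud makes spinning up one environment per suite cheap; on fixed on-premise hardware the suites would usually have to queue.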

As well as a fairly large set of development, source control, build, release and test automation tools, a typical agile team will also require some sort of project management tool, which should ideally have a method to track and monitor defects as well as plan and control sprints during the lifecycle of the application. Tools such as Rally or Jira are suitable for this and offer varying levels of complexity based on project requirements and available budget.

Clearly, there is a lot to consider when making the move to cloud development, and this is likely to be one of the reasons why more businesses have not embraced cloud technologies for anything other than storage. My advice would be to think big but start small, taking it one step at a time; understanding and integrating each new element of technology along the way is key to the final setup. Ultimately, the end goal should be well worth it and may shape your business for years to come. The cloud technology curve is here and here to stay; the question is, are you on it?

Moving to the Cloud – Part 2 of 3

Part Two – Hybrid Cloud Benefits

In Part 1, I presented a brief definition of the hybrid cloud and hinted at why it could be a useful instrument for enterprises wishing to move their agile Dev and Test environments to a public cloud, but still retain their Prod systems in a local, private cloud. In Part 2, I will consider a number of key areas where substantial benefit can be leveraged using current cloud technologies, and why this should be considered as a serious move towards a more efficient and secure development strategy. That said, like any IT initiative, cloud computing is not without risks, and they too will be considered, leaving the reader to weigh up the options.

It is useful to bear in mind from Part 1 that we are primarily considering cloud providers that offer IaaS solutions; consequently, entire environments can be provisioned and tested (via automation) in minutes rather than days or hours, and that in itself is a massive boon. This concept alludes to the ‘end goal’ of this type of cloud-based setup, i.e. the design of infrastructures with automation in mind, and not just the introduction of automation techniques to current processes, but that’s a topic for another discussion.

There are obvious economic benefits to be had from using public clouds, since Dev, and especially Test, environments in the cloud do not necessarily need to be provisioned and available 24/7 as they normally are with on-premise environments. From a testing point of view, many enterprises have a monthly release cycle, for example, where the Test environment is in much greater demand than at other times of the month. In this case it is possible to envisage a scenario where the Test environment is only instantiated when required and can lie dormant at other times.
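A back-of-the-envelope calculation shows why this matters; the hourly rate and usage figures below are invented purely for illustration:

```python
def monthly_env_cost(hourly_rate, hours_in_use, hours_in_month=720):
    """Compare an always-on environment with one instantiated on demand.
    A 30-day month is 720 hours; on demand, you pay only for hours used."""
    always_on = hourly_rate * hours_in_month
    on_demand = hourly_rate * hours_in_use
    return always_on, on_demand

# A Test environment busy for roughly 80 hours around a monthly release,
# at a made-up rate of 0.50 per hour:
always_on, on_demand = monthly_env_cost(hourly_rate=0.50, hours_in_use=80)
# 0.50 * 720 = 360.0 always-on versus 0.50 * 80 = 40.0 on demand
```

Even with invented numbers, the shape of the result holds: the less of the month an environment is genuinely needed, the larger the saving from letting it lie dormant.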

The phrase ‘business agility’ has been applied to the way that a hybrid cloud can offer the controls of a private cloud whilst at the same time providing scalability via the public cloud, and this is also a prime benefit. A relatively new term in this arena is ‘cloud bursting’. Offered by public clouds, this refers to short but highly intensive peaks of activity that are representative of cyclical trends in businesses that see periodic rises and falls in demand for their services. For those businesses that anticipate this type and intensity of activity, this kind of service can be invaluable.

For the troops on the ground, an HP white paper describes clear benefits to developers and testers; “Cloud models are well suited to addressing developer and tester requirements today. They allow for quick and inexpensive stand-up and teardown of complex development and testing environments. They put hardware resources to work for the development and testing phases that can be repurposed once a project is complete”. [1]

Once properly provisioned and integrated, cloud infrastructures will usually offer faster time-to-market and increased productivity through continuous delivery and test automation. However, these particular benefits may take a little time to manifest themselves, since implementing full-scale Dev and Test environments, with associated IDE and build integrations and an automated test facility, is a relatively complex exercise requiring a range of skills from code development to domain admin, to QA and release automation.

Clearly, to achieve and deliver this kind of flexibility, a substantial tool set is required. Additionally, developers need to work harmoniously with operations (admin) in a partnership that has become known as DevOps, and this is what I meant in Part 1 when I said that a new mindset was required. The ultimate goal of adopting cloud-based Dev and Test environments is continuous delivery through application release automation. This kind of agile approach is seen as a pipe dream by many enterprises, and I believe the current perception is that too many barriers, both physical and cerebral, exist to adopting the hybrid cloud model for effective product delivery.

These barriers include the obvious candidates, such as security and privacy in the cloud leading to a potential increase in vulnerability. This can be addressed by commissioning a private cloud for Prod systems and ensuring that any data and code in public clouds is not confidential and does not compromise the company in any way. Another drawback that is often raised is vendor ‘lock-in’, and this simply relates to the terms and conditions of the cloud provider. With so many companies now offering cloud services, I personally think that ‘shopping around’ can mitigate this risk completely and can actually be seen as a positive factor instead. Switching between cloud providers is becoming less and less of a problem, and this in turn offers a competitive advantage to the cloud consumer as they move their business to take advantage of lower costs.

I do accept that technical difficulties and associated downtime could form a barrier, but this can be said about any new, large tech venture: a large tool set is required, and there will certainly be a lead time for the newly created DevOps team to get up to speed with continuous integration, test and release automation. Since applications are running in remote VMs (public cloud), there is an argument that businesses have less control over their environments. This may be true in some cases, but again, proper research should lead to a partnership where effective control can be established by the cloud consumer using appropriate tools that effectively leverage what the vendor has on offer.

I would like to think that in Part 2 of this three-part blog article I have managed to convey that in most cases the benefits of migrating Dev and Test to the cloud outweigh the drawbacks. In Part 3, I will look at how Dev and Test could be implemented at a fairly high level. There is a plethora of tools to choose from: free, open source, bespoke, bleeding edge; whatever route you choose, there is almost certainly a tool for the purpose. Integrating them could prove challenging, but that’s part of the fun, right?

Moving to the Cloud – Part 1 of 3

Part One – Defining the Hybrid Cloud
Earlier this year when I blogged ‘ten trends to influence IT in the next five years’, one of the trends I mentioned has been written about on quite a few occasions in the last few months in various web articles and white papers. That particular trend is the use of the ‘Hybrid Cloud’, and it seems to be increasingly catching the attention of the tech evangelists who are keen to spread the word and radicalise the non-believers, as I discovered in a recent CloudShare webinar.

A little more research on the topic led me to discover that there is a sort of reluctance to adopt cloud (development) initiatives in general. Like most people, I had just assumed that cloud-based architectures were creating a new technology storm and that in a few years almost everything would be built, developed, hosted and run in a multitude of geographical locations by thousands of VM instances, created and destroyed in seconds. It appears this may not be the case, and I find that seven months later (that’s a long time in IT) the transition has simply not happened, or to be more precise, not at the ‘rate’ expected by cloud aficionados who have been talking about a grandiose move of technology in that direction for the last few years.

My gut feeling is that, in general, cloud computing in the tech community is still not a well understood concept, at least from a ‘development tool’ point-of-view, and this has logically hindered the move away from traditional development to cloud-centric infrastructures and environments. I have spent some time reading about the pros and cons of moving development to a cloud-based solution, and whilst I am an avid supporter of the concept, the best approach for an organisation isn’t always straightforward and will almost certainly involve one of the toughest challenges that can be faced in IT: a cultural change in the workplace and a paradigm shift in the mindset of the developers.

To make a move to the cloud for development and test purposes, people have to think about doing things in a different way. There are other potential barriers, but this is likely to be the one that poses the greatest threat to starting out on the road to eventual development, deployment and testing in the cloud. Gartner defines a hybrid cloud service as a cloud computing service that is composed of some combination of private, public and community cloud services, from different service providers [1]. Whilst many of the large public cloud service providers also provide a private cloud facility, I expect that many organisations still prefer to provide their own private cloud implementation, since this appears to give a higher degree of security, or at the very least facilitates the storage of data in a local datacentre.

There are quite a few benefits to a hybrid cloud, but the obvious one is that it enables the owner to take advantage of the larger resources that a public cloud might offer, whilst still storing and maintaining private data in a manner where it should be safe from malicious attack and/or theft. Of course, there are some organisations whose entire business could exist in a public cloud, but based on my experience this is still not a concept that businesses are truly happy with, and certainly within a lot of enterprise or government organisations there is a preference to at least have their production system hosted in a private cloud.

In summary, my concept of a hybrid cloud is one where an organisation has developed its own private cloud for its production (Prod) system and is happy to use the services of a public cloud to develop, host and run its development (Dev) and test (Test) environments. Really, what I am talking about here is moving each of these infrastructures to a cloud environment, and that will form the basis of Part 3 of this blog. Part 2, coming up next, will further elaborate on the widely accepted benefits and introduce some of the negative aspects perceived with cloud computing.

PhraseExpress – Save Keystrokes


If you find yourself keying in the same phrase or word combination every day, there is a neat little tool called PhraseExpress. It’s a time saver and can be used to capture anything from single words to complex phrases or sentences, or other commonly typed pieces of text such as names, addresses, telephone numbers, etc. PhraseExpress is triggered via a certain set of keystrokes, or manually via hotkey combinations, and is incredibly easy to use.

Having researched other methods of text auto-completion, this appeared to be one of the best free tools out there on the web. It can be configured to start automatically with Windows, or it can be started manually; either way, its footprint is very small, so the drain on resources is of little consequence.
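The core idea is simple enough to sketch in a few lines of Python. This toy expander is purely illustrative and bears no relation to how PhraseExpress is actually implemented; the abbreviations are made up:

```python
# Hypothetical abbreviation table mapping short triggers to stored phrases
ABBREVIATIONS = {
    "addr1": "1 Example Street, Example Town",
    "em1": "me@example.com",
}

def expand(text, table=ABBREVIATIONS):
    """Replace whole-word abbreviations with their stored phrases,
    a toy version of what an autotext tool does when a trigger fires."""
    return " ".join(table.get(word, word) for word in text.split(" "))
```

A real tool hooks the keyboard so expansion happens in any application as you type; the lookup itself, though, is little more than this.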

Since I consider it good practice to log out of websites when I am finished, and I prefer not to have my browser store any information, on any given day when I go to log into several websites it’s really useful to have email-address usernames that can be called upon with a hotkey combination. Rather cynically (or sensibly), I considered that it may be some sort of spyware or malware, but the PhraseExpress website has this to say about their product:

“Of course, PhraseExpress needs to have keyboard access in order to provide the desired functionality (hotkey support, autotext feature). A helper utility must be authorized to help.

Unfortunately, a few “AntiVirus” and “Security” programs may generally claim any program as potentially dangerous which access the keyboard, regardless if it is a malicious keylogger or a harmless keyboard utility.

Please be assured that we never send any personal data over the internet. We are a registered company and surely have no interest to ruin the reputation of our 14 years history or to mess with one of the strictest privacy laws world wide. We also do not hide our place of business.”

This is still no guarantee, but it does appear to be legitimate, and provided that the text stored is not of a sensitive nature (passwords and the like), I believe this is a nifty product that can save thousands of unnecessary keystrokes when used effectively.

A free download can be found here.

Medicine Apps – The Future?

Medicine Apps

I used to be a big fan of BBC’s Horizon programme. Back in the day it was at times controversial, ground-breaking and one of the few programmes on air that appealed to science-heads, engineers, futurists and like-minded people. Recently though, I think it has gone downhill a bit, becoming less high-tech and ultimately attempting to gain more traction with a less tech-savvy audience. However, one episode I was impressed by really hit the nail on the head, striking a great balance between the interest of the masses (smart phones), science (medicine) and the future (healing yourself). The idea of a GP-less future is appealing; I for one am not a big fan of visiting the local GP and generally rely on a self-diagnosed blend of common sense and Paracetamol to overcome any and all ailments or sicknesses that blight my otherwise healthy-ish lifestyle.

I am aware of smart phone apps that can measure heart rate using the camera and its flash, and of the step counters that have been around for a while that tell you just how lazy you have been in only achieving 3,500 steps of your recommended daily 10,000. Additionally, doctors actually use apps like Epocrates or Medscape to help them prescribe drugs, and of course there are loads of reference-material-type apps. But this isn’t what I’m talking about, well, not completely; this is only the start, and when we introduce near-field communication technology, the potential for apps and medical applications sky-rockets.

Horizon presented the case of one particular doctor who had a whole host of apps and associated gadgets that allow for direct, quick and accurate monitoring of various body variables (for lack of a better term). A smart phone with metal connectors on the back was a case in point, where the user, upon pressing their thumbs on the contacts, was presented with a real-time electrocardiogram readout of their heart rate and associated functions. In the case where the user is a sufferer of diabetes, a near-field device containing a tiny needle could be used to monitor blood sugar levels and present this on the screen of the smart phone in an app that looked familiar and was intuitive to use. The important thing to recognise here is that users are already very familiar with their apps, so introducing an app for medical/self-diagnosis purposes, even if it also requires the use of a near-field technology device, shouldn’t be an overwhelming technology experience. The point is, people are already well accustomed to using their smart phone for a wide variety of things, and as near-field communication becomes increasingly popular people may turn to using their smart phones for self-diagnosis and potentially alerting medical services in the event of an emergency.

From a fitness point-of-view, some of my friends have tried the ‘couch to 5k’ app, or as it is now known ‘C25K’, and it has actually worked, providing a scheduled plan of running workouts that ensures new runners do not push too hard and injure themselves, whilst also guiding them to their goal of running 5k without stopping. There are calorie counters and diet plans, kettlebell workouts and yoga instructors all available to give advice on how to get healthy, stay fit, or simply show you how to avoid an array of common illnesses brought about by unhealthy lifestyles; in other words, preventative medicine. With knowledge comes power, and with power comes the chance to change one’s life, so maybe it’s time we all tried to ‘get physical’ with an app.
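Part of the appeal of C25K-style apps is how simple the underlying schedule is: alternate short run and walk segments, lengthening the runs week by week. A rough sketch of such an interval generator (the segment lengths below are illustrative, not the official C25K plan):

```python
def interval_plan(run_s, walk_s, total_s):
    """Alternate run/walk segments (in seconds) until the workout is filled."""
    plan, elapsed = [], 0
    while elapsed < total_s:
        for label, length in (("run", run_s), ("walk", walk_s)):
            if elapsed >= total_s:
                break
            length = min(length, total_s - elapsed)  # trim the final segment
            plan.append((label, length))
            elapsed += length
    return plan
```

An early-week workout might then be `interval_plan(60, 90, 300)`: a five-minute block of 60-second runs separated by 90-second walks.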

Unwanted Ads – Protect Your Browser

Unwanted Ads

Probably everyone has suffered from this at some point: unwanted ads appearing in the browser when you navigate to another site. It’s extremely annoying and also potentially dangerous. Ads can be a symptom of malware, or ‘malicious software’, which is designed to infiltrate your PC and perform all kinds of nasty things, including gathering sensitive data. Obviously having your AV up-to-date can help to diagnose and remove them, but this isn’t always enough and you may need to take further action. I’m using Firefox, so this tip is really only aimed at that browser, but there are similar fixes for IE and Chrome.

The first thing I did to remove the malware and associated files, registry keys etc. was to run an adware cleaner tool, found here. This tool quickly found the guilty files and promptly removed them. It was easy to use and presented a log with all problems found. After deleting all of the adware-associated files and restarting, the problem had disappeared immediately. I ran a full AV scan for a few hours just to see if anything else was picked up; since the AV didn’t originally catch the malware I didn’t expect to see anything, so this was more for peace of mind. Feeling pretty confident that I had successfully removed the culprit, I then decided to implement some positive, preventative, anti-adware action and discovered a couple of neat little add-ons in the process.

1. Adblock Plus
Adblock Plus allows you to regain control of the internet and view the web the way you want to. The add-on is supported by over forty filter subscriptions in dozens of languages which automatically configure it for purposes ranging from removing online advertising to blocking all known malware domains. Adblock Plus also allows you to customize your filters with the assistance of a variety of useful features, including a context option for images, a block tab for Flash and Java objects, and a list of blockable items to remove scripts and stylesheets.
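Under the hood, a filter subscription is essentially a long list of patterns that each outgoing request is checked against before it is allowed to load. A much-simplified sketch of domain-based blocking (real Adblock Plus filter syntax is far richer, and the domains here are made up):

```python
from urllib.parse import urlparse

# Stand-in filter list; real subscriptions contain tens of thousands of rules.
BLOCKED_DOMAINS = {"ads.example.com", "tracker.example.net"}

def is_blocked(url):
    """Block a request if its host, or any parent domain, is on the list."""
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    # Check a.b.c, then b.c, then c against the blocklist.
    return any(".".join(parts[i:]) in BLOCKED_DOMAINS for i in range(len(parts)))
```

Checking parent domains as well as the exact host means a rule for one ad domain also catches requests to any of its subdomains.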

2. Ghostery
Ghostery sees the “invisible” web, detecting trackers, web bugs, pixels, and beacons placed on web pages by Facebook, Google Analytics, and over 1,000 other ad networks, behavioural data providers and web publishers – all companies interested in your activity.

With both add-ons installed, I now appear to be ad-free, and I see small messages appearing every time ads are blocked or trackers are found on a web page.

“Good Morning Vietnam!” – A Viable Outsourcing Model?


The line made famous by the movie of the same title, starring Robin Williams, is one that may be familiar to many and was made in an era when post-Vietnam war flicks were popular. The country itself was still suffering badly from the effects of a twenty-year war, and whilst the western world basked in the glory of an economic boom, Vietnam lacked the consumables and creature comforts that a lot of us took for granted. Basic facilities like decent housing, proper medicines and consumer goods were scarce and available only to those with money, probably gained from illicit wartime dealings. Since the countryside was ravaged by explosives and poisonous chemicals, agriculture was slow to recover, and this only compounded the problems the country was experiencing. General management was corrupt, inept, or both, and the cost of the military occupation of Cambodia was consuming the national purse at an alarming rate. In 1987, the year that ‘Good Morning Vietnam’ was shot, the standard of living was unstable, with manual workers, civil servants, armed forces personnel and labourers all experiencing serious economic difficulties in their everyday lives. Food and fuel rationing was reinstated in many parts of the country as the economy really struggled to get to its feet. Things were looking very gloomy indeed.

Vietnam did have an incredibly strong work ethic, with countless generations having spent 10 to 15 hours a day in the rice fields, and whilst that drive to succeed still exists, the younger generation, not wanting to follow literally in their parents’ footsteps, flocked to the cities in search of a new life. It’s a story seen the world over: the new generation were eager to possess the mobile phones, TVs, computers and game stations which were starting to flood the country. Of course, when these goods appeared on the scene, so too did industries to service them. This is where our story becomes a little more interesting, at least from a tech point-of-view. With the inevitable modernisation came an increase in demand for higher-end goods and tech gadgets, and many ‘modern’ businesses started to establish themselves within the new and fast-expanding economy. The country had a burning desire to join the rest of the developed world, to educate its children and promote business start-ups and economic trade.

On a recent visit to Vietnam, and reading various websites on my return, I think that things have moved on considerably and rapidly. Both Hanoi and Ho Chi Minh City are enjoying a mini-explosion in the tech industry, and students are leaving university with degrees in computing science and subsequently taking up jobs in air-conditioned offices in the city. The IT industry in general is posting year-on-year growth rates of 25-35%, some 3 to 5 times higher than GDP growth, and has done so since the early part of the last decade. But what’s really interesting is the fact that Vietnam is vying to become a substantial player in the offshore outsourcing market. It’s well known that India has been incredibly successful in this arena for some time now, but as costs increase, it is becoming a less lucrative prospect for technical outsourcing.

IBM, for example, have already jumped into Vietnam with both feet and currently have their biggest offshore delivery centre located there. Vietsoftware, Hanoi’s second-largest outsourcing company, is enjoying an extremely successful run; its founders, having studied and worked in Australia and Europe, are well aware of the kind of outsourcing model that works well for the western world. There are few companies with more than 1,000 employees, but one that has 1,200, TMA, is among the most successful in the country and says it earned more than $22 million in 2012. The husband of the founder – yes, that’s right, this company was started by a woman – said that their ambition for TMA was “to be one of the top offshore developers and help put Vietnam on the world map of offshore development by exemplary quality and customer focus”. Yes, we have heard this kind of talk before, but with plenty of failed Indian outsourcing examples to learn from, one can’t help but think that this is a success story that will last and lead by example.

Outsourcing startups in Vietnam are not without challenges though, and India still remains a strong competitor on two fronts. Obviously the experience of having succeeded in the outsourcing market is prevalent in areas such as Bangalore, the tech hub of India. Additionally, India has strong English skills and in fact far exceeds Vietnam in this area, even though English is on the curriculum of most decent schools. It is still difficult to find IT talent in Vietnam, and this has been one of the limiting factors on progress, especially when there are one or two big companies that tend to hoover up all the available talent. This is starting to be offset, though, by the sheer number of students taking computing science degrees; since 2006 the number of students in this faculty has increased by something like 70%.

Many people have said that the current economic bubble that Vietnam is experiencing will burst; however, the response from the outsourcing industry is that most of its revenue comes from foreign countries, and it is those countries upon which it is reliant, not the continual generation of new, internal business. Whatever the case, Vietnam is an exciting prospect for outsourcing, and a number of the main players have opened as many as six offices worldwide with plans to expand still further. I can only welcome the newly emerging businesses into the world of consulting, where getting the next customer is always a challenge and relying on repeat business is becoming ever more risky.

Gartner Webinars – Ten Trends and Technologies to Impact IT Over the Next Five Years

Gartner Webinars

The web is full of great ways to learn. All the information you could ever need is out there and it is continually accumulating. In its current state, with around 1.5 billion web pages, it would take many lifetimes to read all of what has been posted online to date. It’s an incredible wealth of data, and sifting out what is useful is becoming increasingly taxing on one’s filtering skills. So how best to gather relevant, useful and interesting information? Well, one way I have found that is particularly useful to grab snippets of information, distilled and presented by people who should know what they are talking about, is to register for and watch webinars. Webinars are essentially ‘seminars on the web’, presentations if you like, given by ‘experts’ in the field to an audience of listeners who can ask questions and interact in the usual way. In theory this should be time better spent than trawling the web attempting to collate the same information, some of which may be incorrect or outdated.

Recently, I logged into Gartner [1] and watched a webinar about anticipated trends that would change technology over the next five years. Gartner described the webinar in the following summary paragraph: “Strategic planners have long realized that efficient planning must be accomplished by looking from the outside in. Internal trends, market trends and societal trends are rapidly converging, and many of these will have dramatic effects on infrastructure and operations planning. This presentation will highlight the most crucial trends to watch over the next five years.”

The pace of change of technology never ceases to amaze me. In the mobile device era, for example, it is customer demand that is driving a lot of that change, and this demand has inevitably made its presence felt in the workplace. However, the method by which IT (in general) moves forward in time isn’t just about technology; it’s also about market forces, social trends and even climate change. There are many factors to consider, and from the bottom up people should continually look for ways to broaden their understanding of the multitude of influencing factors. It has been shown that the more desirable/useful IT staff have a broad-ranging skill set, and whilst they may have cut their teeth in development, database management, or networking, having the ability to look across verticals, organise people, and ultimately know where to look for problems is potentially more important to a business. In doing so one must also consider the future: the technologies, the demands and the trends. Where are they likely to come from, and how can you, as a business, best position yourself to reap maximum reward? Here, I put forward my spin on David Cappuccio’s excellent webinar and present my thoughts in response to the topics discussed on the day.

1. Organisational Entrenchment and Disruptions
This is clearly a two-point problem. On the one hand this is about an organisation’s ability to respond positively to disruptive technology and use it to good effect. Of course this also means a certain amount of risk-taking, perhaps going out on a limb to embrace new tech, train staff and develop new business with interested customers. It’s a big ask, but one I feel is worth it, since the alternative is not pretty, i.e. to remain rooted in old technology, potentially lose custom and nourish a culture of nonchalance in the workplace. Cultural changes are definitely required for success in a world where technology is the product; however, things can go wrong and move backwards. For example, Cappuccio quoted that “By 2014, 30% of organizations using SaaS Operations Management tools will switch to OnPremise due to poor service levels.” And this is predicted at a time when we really should be seeing continual growth in this area.

2. Software Networks
The first technology point in the series talks about SDNs, or software-defined networks, which abstract away elements of the network. This means entire networks can be built on-the-fly without having to provision them manually, or node-by-node. Parameters for monitoring and controlling information flow can be effected via a centrally located software program, and there are a number of advantages to having the control logic removed from the actual network. Another example of change driven by customer demand, the SDN offers shorter provisioning times, better up-time performance, infrastructure savings etc., so definitely one to look out for in the near future.
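The key idea is that the control logic lives in one central program rather than in each switch. A toy sketch of that separation (class and method names are my own invention, not any real SDN framework):

```python
class ToyController:
    """Toy SDN controller: forwarding decisions are made centrally,
    and switches only look up the rules they were given."""

    def __init__(self):
        self.switches = {}  # switch id -> flow table (destination -> out port)

    def add_switch(self, sw_id):
        self.switches[sw_id] = {}

    def install_flow(self, sw_id, dst, out_port):
        # The controller pushes a forwarding rule down into a switch's table.
        self.switches[sw_id][dst] = out_port

    def forward(self, sw_id, dst):
        # No matching rule means the switch drops (or would ask the controller).
        return self.switches[sw_id].get(dst)

ctrl = ToyController()
ctrl.add_switch("s1")
ctrl.install_flow("s1", "10.0.0.2", out_port=3)
```

Because all the rules originate in one place, re-provisioning an entire network becomes a software operation rather than a box-by-box exercise.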

3. Bigger Data and Storage
Big data has been around for a while, but what does this really mean for us? Well, from the perspective of a business, data continues to grow regardless of budget and is effectively never-ending. From a user perspective, as more people move to the internet and mobile device usage, the increase in demand will in turn generate an increase in data. What does all this mean? The answer is big data, i.e. data “so large and complex that it becomes difficult to process using on-hand database management tools or traditional data processing applications” []. It’s pretty obvious that this brings its own problems: auditing, back-up and of course analysis. Big data is not industry-specific and spans many verticals including defence, academia, banking and other private sector industries. Big data will change how data is managed and stored, but it should also offer up many advantages. Bigger is better, right?
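The standard answer to data too large for one machine is to split the processing: map a function over chunks independently, then reduce the partial results together. A toy word-count sketch of that map-reduce pattern (in-memory here purely for illustration; real systems such as Hadoop distribute the chunks across a cluster):

```python
from collections import Counter
from functools import reduce

def map_chunk(chunk):
    # Map step: count words within one chunk, independently of the others.
    return Counter(chunk.split())

def reduce_counts(left, right):
    # Reduce step: merge two partial counts into one.
    left.update(right)
    return left

chunks = ["big data big", "data everywhere"]  # stand-ins for huge partitions
totals = reduce(reduce_counts, map(map_chunk, chunks), Counter())
```

Because each map step touches only its own chunk, the work parallelises naturally, which is exactly why the pattern scales to datasets no single database could handle.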

4. Hybrid Cloud Services
It is anticipated that private clouds will dominate in the next five years, but there will still be a requirement for public clouds, and this combination of private-public (and/or community), cloud-based service availability from vendors, each tailored to individual organisations, is known as the ‘hybrid cloud’. The general advantages are pretty much the same as any cloud offering, but there are some specific ones too: the private cloud will be more versatile, responsive and secure. For example, organisations who couldn’t previously leverage cloud services at all due to regulatory or compliance issues should be able to utilise the private cloud and still comply with regulation, whilst at the same time making use of the public cloud for data not subject to compliance requirements.

5. Client & Server Architectures
The development of both client and server architectures will continue, and the variation will be celebrated. It is accepted that one size does not fit all, and there is a need for specialised clients (and servers), and the OS that runs on them. One approach for servers is to make them more modular so that individual components can be swapped out for new versions without having to upgrade the whole machine. A driving force will also be environmental considerations: exceptionally low-power machines will be in demand, as will the development of specialist tools to monitor and report on energy usage. With BYOD also coming more and more into play, the client/server partnership has never been more varied, and this should be extremely beneficial to both business and consumer.

6. The Internet of Things
What does this mean? Well, simply, it means that in the future many ‘things’ will be connected to the internet via smart objects, monitoring devices, radio transmission, near-field devices etc. At the moment, within the sporting community, many athletes regularly collect, monitor and upload data and compare it with other athletes in the same sport, for example. Imagine the same principle applied to numerous other household devices: the fridge that orders food automatically, the heating system that is controlled from the mobile phone, the car that emails you when it is due for a service etc. Note the feedback loop to big data and potentially the hybrid cloud; it goes without saying that many of the points in this list are interdependent. This particular point is the one that the consumer will be most aware of, the one that truly disrupts their lives and delivers a society that is ‘always on’.
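At its simplest, the ‘always on’ monitoring described above boils down to comparing device readings against allowed ranges and raising alerts on the outliers. A hypothetical sketch (the device names and limits are invented for illustration):

```python
def check_readings(readings, limits):
    """Return (device, value) pairs whose reading is outside its allowed range."""
    alerts = []
    for device, value in readings.items():
        low, high = limits[device]
        if not (low <= value <= high):
            alerts.append((device, value))
    return alerts
```

A home hub could run this loop over fridge temperatures, boiler pressures and so on, feeding the alerts to a phone app or, in an emergency, to a monitoring service.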

7. IT/OT and Appliance Madness
This point refers to the sheer multitude of appliances that are currently used in the industry and the trend that has seen that number explode in fairly recent times. From consumer-based PCs, Macs, laptops, tablets and mobile devices, to business-focused backend machines like standard servers and blade servers, the growth has been phenomenal. It also includes devices that can be virtualised by using software from the ever-growing number of vendors; essentially, if it can be built, it can be simulated. This growth is set to continue, and it is again driven by consumer demand. It is not without its concerns, however, since it is estimated that “Through 2014, employee-owned devices will be compromised by malware at more than double the rate of corporate-owned devices.” Clearly there are new challenges to be met, but knowing that this explosive trend in appliance diversification is set to continue will no doubt encourage new and innovative ways to offset these problems.

8. Virtual Data Centres
This is really the next logical step in virtualisation and the advantages it offers. With virtualised data centres, workloads could be moved from one site to another, literally anywhere on the globe, in response to demand. Virtual storage is combined with virtual servers and networking to generate an entire data centre that can be accessed through a single portal, and parameters such as capacity and pooling of resources can all be changed in real-time. This is a powerful resource and will surely be at the forefront of virtualisation trends in the next few years.

9. Operational Complexity
Points 1 through 8 have all contributed to operational complexity in one way or another, and according to Glass’ Law (applied to IT), “for every 25% increase in functionality in a system there is a 100% increase in the complexity of that system” []. I don’t find this statement too surprising, but it does raise a conundrum: just how complex can systems get and still be usable? It’s an interesting point, and one I think could be defended by NASA during their operation of the space shuttles, cited by many as the single most complicated system ever built. Nevertheless, complexity is par for the course during periods of rapid development, and it should be recognised that the IT industry is no exception.
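Taken at face value, Glass’ Law compounds alarmingly: each 25% functionality step doubles complexity, so after n steps functionality grows as 1.25^n while complexity grows as 2^n. A quick illustration of the arithmetic:

```python
def complexity_after(steps):
    """Glass' Law (applied to IT): each 25% functionality increase
    doubles system complexity. Returns relative (functionality, complexity)."""
    functionality = 1.25 ** steps
    complexity = 2.0 ** steps
    return functionality, complexity
```

After just three such steps, functionality is up roughly 95% while complexity has increased eight-fold, which is why the usability conundrum bites so quickly.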

10. IT Demand
A really simple one to finish with, and I will summarise with Cappuccio’s bulleted list of web stats:
Over 1.5 billion Web pages (and growing)
450,000 iPhone apps
Over 200,000 Android apps
10,500 radio stations
5,500 magazines
Over 300 TV networks

This is a trend that even the most dispassionate of internet futurists couldn’t fail to see, the question is; how do we respond?