The future of the L&P data centre (part 2 of 2)

In my previous post I started thinking about the influences which will change data centres in Life and Pensions organisations.  There are a few more topics I believe will have an impact.  The first is DevOps.

Introduction of DevOps

It’s more than just the latest buzzword: DevOps is the sort of practice which IBM and other forward-thinking companies are adopting for continuous delivery of software. The idea is to respond better to business requirements, to react far more quickly to market opportunities, and to balance speed, quality and cost.

Although often described as a new approach to traditional application lifecycle management, I tend to think of it more as the next evolution. We broaden software development and delivery – using agile and lean principles – to all stakeholders in an organisation who develop, operate or benefit from the business’s systems. I think the “benefit” piece of that sentence is hugely important, and I have been chatting to others about “BusDevOps” because it’s crucial we include the business and correctly assess the importance and relevance of their perceived needs.

DevOps provides greater integration between line of business and IT dev and delivery organisations.  It should address the “Business-IT” gap as well as the “IT-IT” gap.

DevOps is used extensively for Systems of Engagement, expanding customer outreach and enabling an increasingly mobile workplace. This is of particular importance to L&P organisations aiming to benefit from customer insight, and to increase customer engagement and customer transactions.

(You can read IBM’s perspective of DevOps here: http://www.ibm.com/ibm/devops/us/en/)

Next on my list of influences is…

Automation of IT service design and delivery

In this new DevOps environment the importance and relevance of automated provisioning also increases. Many existing L&P organisations now boast virtualised infrastructure, which provides a solid foundation on which automation can be built; that automation should be applied to workflows, provisioning (including dynamic allocation of workload), deployment, discovery of infrastructure elements and their configuration, event resolution, metering and billing. The latter two in particular are vital in demonstrating the value of technology to L&P businesses, which increasingly see IT as a cost centre rather than a value centre; applying analytics to the finances of delivering IT brings better insight.

Recording and automating expert knowledge within IT must be considered to support geographically-dispersed organisations, as must better-integrated, policy-based service management. As staff are given the capability to work on the move from mobile devices, consideration must be given to ensuring that service and IT management is also possible from those devices. And although self-service portals already exist to let end users make technology requests, these should be extended to allow technologists to request the provision of development environments, as sketched below.
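As an illustration only – the names and workflow here are my own sketch, not a reference to any particular product – a developer-facing request to such an extended self-service portal might look something like this:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class EnvironmentRequest:
    """A developer's self-service request for a development environment."""
    requester: str
    platform: str          # e.g. a standardised catalogue entry such as "x86-64-medium"
    purpose: str
    approved: bool = False
    requested_at: datetime = field(default_factory=datetime.utcnow)

def submit(request: EnvironmentRequest, queue: list) -> None:
    """Queue the request for policy checks and (optionally) human approval."""
    queue.append(request)

# A technologist raises a request just as an end user would raise a
# service request today; provisioning itself would then be automated.
pending: list[EnvironmentRequest] = []
submit(EnvironmentRequest("a.developer", "x86-64-medium", "BPM prototype"), pending)
print(f"{len(pending)} request(s) awaiting approval")
```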

And of course, next is…

Adopting Cloud

For existing L&P organisations I would not argue with a decision to provision Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) capabilities privately, on premise.

Traditional IT management is centred on the hardware, with a somewhat bottom-up approach. There are many silos – server, storage, network, homogeneous compute – and often extensive manual process intervention. But the IT world is moving to new software-defined management which is workload-aware, with a top-down approach and integrated server, storage and network. Compute is heterogeneous and federated, it is managed by programmed automation, and both software and infrastructure have integrated workload patterns. This is the goal which these organisations should be aiming to achieve.

To begin this journey, any internal IaaS and PaaS offerings should be based on a foundation of open standards and interoperable technologies. A common open-standards-based architecture will provide flexibility and choice in how to build and deliver cloud services, and we’ll also have greater confidence that technical processes will work together across heterogeneous environments. The service should probably be ITIL-compliant, and it should be a fully managed and hosted IaaS and PaaS solution with committed SLAs and management of virtual instances above the hypervisor layer.

To get the cloud strategy right, we have to simplify the delivery of environments, standardising across platforms and removing the plethora of choice from developers. For example, IaaS offerings for x86 could be simplified as per the following table.

| x86 options | 32-bit: Small | 32-bit: Medium | 32-bit: Large | 64-bit: Small | 64-bit: Medium | 64-bit: Large | 64-bit: Extra large |
|---|---|---|---|---|---|---|---|
| Virtual CPUs | 1 | 2 | 4 | 1 | 2 | 4 | 8 |
| Virtual memory (gigabytes) | 1 | 2 | 4 | 2 | 4 | 8 | 16 |
| Instance storage (gigabytes) | 64 | 128 | 192 | 64 | 128 | 192 | 384 |
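One way to make such a catalogue consumable by automation is to express it as data. A minimal sketch in Python follows; the structure and names are illustrative assumptions, only the sizing values come from the table above.

```python
# The standardised x86 catalogue from the table above, expressed as data
# that an automated provisioning workflow could consume directly.
# Values: (virtual CPUs, virtual memory in GB, instance storage in GB).
X86_CATALOGUE = {
    "32-bit": {
        "small":  (1, 1, 64),
        "medium": (2, 2, 128),
        "large":  (4, 4, 192),
    },
    "64-bit": {
        "small":       (1, 2, 64),
        "medium":      (2, 4, 128),
        "large":       (4, 8, 192),
        "extra-large": (8, 16, 384),
    },
}

cpus, memory_gb, storage_gb = X86_CATALOGUE["64-bit"]["medium"]
print(f"{cpus} vCPUs, {memory_gb} GB memory, {storage_gb} GB storage")
```

Because the catalogue is data rather than documentation, the provisioning workflow and the self-service portal can share a single definition, and any change to the standard offerings happens in one place.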

Where relevant, mainframe and other enterprise compute should also be included within the cloud environment; this is not just an Intel capability.

Regarding PaaS, each IT delivery organisation should determine the best infrastructure on which each platform should be deployed, again removing choice from the developers and placing power in the hands of those provisioning and operating the environments. For example, should a business require a business process management environment to develop upon, it should be provisioned based upon the workload characteristics rather than on an individual’s personal technology preference. It is desirable that this provisioning is automated as far as possible, with human intervention only at approval stages.
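To make that concrete, here is a sketch of workload-driven placement; the tiers, thresholds and workload characteristics are purely illustrative assumptions, not recommendations.

```python
# A sketch of workload-driven placement: the platform team maps declared
# workload characteristics to an infrastructure tier, so the developer
# never chooses the underlying technology. Tiers and thresholds are
# illustrative assumptions only.
def place_workload(transactions_per_sec: int, data_gb: int,
                   availability: str) -> str:
    if availability == "continuous" or transactions_per_sec > 10_000:
        return "enterprise-compute"        # e.g. mainframe or high-end UNIX
    if data_gb > 500:
        return "x86-64-extra-large"
    return "x86-64-medium"

tier = place_workload(transactions_per_sec=250, data_gb=40,
                      availability="business-hours")
print(f"BPM environment placed on: {tier}")
# Provisioning would then proceed automatically once an approver signs off.
```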

Without the constraints and legacy of existing technology any new entrants to L&P may choose to entirely bypass any internal hosting – despite regulatory demands – and to build purely upon cloud, using a combination of IaaS, PaaS and SaaS, perhaps even “Business Process as a Service”.

For any organisation – L&P or otherwise – the success of implementing a cloud-like environment depends upon accurate and mature monitoring, metering and billing to the businesses.  According to the IBM Cloud Computing Reference Architecture, usage can be measured in a number of ways:

Allocation usage: The result of a consumer occupying a resource such that other consumers cannot use it. For example, the period for which an IT infrastructure topology (e.g. servers, CPUs, memory, storage, network, database, WebSphere cluster) has been allocated to a particular cloud service. This is most suitable as a service usage metric.

Activity usage: The result of activity performed by the consumer, e.g. CPU seconds consumed or bytes transferred. This is most suitable as a cost usage metric.

Action usage: Actions initiated by the consumer that the provider may wish to charge for, or track costs against, such as backing up or restoring a server, or changing a virtual server’s configuration. These actions may or may not involve manual steps.
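To illustrate how these three usage types might roll up into a single showback figure, here is a minimal Python sketch; the rates and example values are assumptions for illustration, not figures from the reference architecture.

```python
# A minimal sketch of turning the three usage types into a showback
# figure. The rates below are illustrative assumptions.
ALLOCATION_RATE_PER_HOUR = 0.12    # charged while a resource is held
ACTIVITY_RATE_PER_CPU_SEC = 0.0004
ACTION_RATES = {"backup": 5.00, "reconfigure": 2.50}

def showback(alloc_hours: float, cpu_secs: float, actions: list[str]) -> float:
    allocation = alloc_hours * ALLOCATION_RATE_PER_HOUR      # allocation usage
    activity = cpu_secs * ACTIVITY_RATE_PER_CPU_SEC          # activity usage
    action = sum(ACTION_RATES.get(a, 0.0) for a in actions)  # action usage
    return round(allocation + activity + action, 2)

# e.g. an environment held for a month, with one backup and one reconfiguration
print(showback(alloc_hours=720, cpu_secs=1_500_000, actions=["backup", "reconfigure"]))
```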

Software as a Service will become increasingly attractive for certain business requirements, but should be monitored carefully, with a short-term target of consolidating any SaaS contracts already in place, as these are likely to have been agreed without any real governance. In parallel, organisations must develop a proactive strategy for engaging with their business counterparts to influence and govern any SaaS purchases. Whilst it may continue to be preferable for some organisations to host their internally developed applications and systems of record on premise (especially given the train of thought that these embody an organisation’s ability to differentiate, and its intellectual property), SaaS will be increasingly relevant for commoditised functions such as CRM, email, and even analytics. Note that a SaaS exit strategy is as important as a strategy for SaaS adoption.

In the new API economy, SaaS may also be offered externally by an L&P organisation in a future state.

Security must be built into the cloud strategy and implementation; no longer can we afford for it to be an add-on, an afterthought. Real-time monitoring of data and proactive evaluation of applications must also be implemented.
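One way to “build security in” is to make every provisioning request pass a policy gate before anything is created. A minimal sketch follows, with purely illustrative controls and names.

```python
# A sketch of building security in rather than bolting it on: every
# provisioning request must carry the mandatory controls before any
# resource is created. The controls listed are illustrative assumptions.
REQUIRED_CONTROLS = {"disk_encryption", "network_isolation", "audit_logging"}

def policy_gate(requested_controls: set[str]) -> bool:
    """Reject any request that does not carry the mandatory controls."""
    missing = REQUIRED_CONTROLS - requested_controls
    if missing:
        print(f"Rejected: missing controls {sorted(missing)}")
        return False
    return True

policy_gate({"disk_encryption", "audit_logging"})   # rejected
policy_gate(REQUIRED_CONTROLS)                      # allowed
```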

Conclusion (so far)

The future for the L&P data centre is not just about cloud technology. It’s about streamlining business processes to make the different organisations and people more strategic, more responsive to change and more oriented to service delivery.

At least five steps are now necessary: define a value-driven cloud computing strategy; transition to private cloud quickly and affordably; accelerate software delivery with DevOps; optimise any virtualised infrastructure; and secure the cloud across its lifecycle and domains.

And I stand by my introduction in my previous post: “Regulatory challenges, internal operational strategy, and intellectual property all influence the future of the data centre in the Life and Pensions sector”…  “Whilst regulatory requirements continue to be immature in comparison with the IT industry’s experience of delivering technology, those responsible for risk will be driven by caution regarding placement of customer data.  Within those constraints, additional questions will continue to be asked regarding placement of homegrown, organisation-specific applications which in their way represent a company’s intellectual property.”

 
