In the last fortnight IBM’s Wimbledon in a Box tour has been in Scotland. To jazz things up a little we’ve created our own version of Pharrell’s “Happy”. It’s ridiculous and hilarious.
In my previous post I started thinking about the influences which will change data centres in Life and Pensions organisations. There are a few more topics I believe will have an impact. The first is DevOps.
Introduction of DevOps
More than just one of the latest buzzwords, DevOps is the sort of practice which IBM and the most forward-thinking companies are adopting for continuous delivery of software. The idea is to respond better to business requirements, to react much more quickly to market opportunities, and to balance speed, quality and cost.
Although often described as a new approach to traditional application lifecycle management, I tend to think of it more as the next evolution. We broaden software development and delivery, using agile and lean principles, to all stakeholders in an organisation who develop, operate or benefit from the business's systems. I think the "benefit" piece of that sentence is hugely important, and I have been chatting to others about "BusDevOps" because it's crucial we include the business and correctly assess the importance and relevance of their perceived needs.
DevOps provides greater integration between line of business and IT dev and delivery organisations. It should address the “Business-IT” gap as well as the “IT-IT” gap.
DevOps is used extensively for Systems of Engagement for expanding customer outreach and enabling an increasingly mobile workplace. This has particular importance for L&P organisations aiming to benefit from customer insight, and to increase customer engagement and customer transactions.
(You can read IBM’s perspective of DevOps here: http://www.ibm.com/ibm/devops/us/en/)
Next on my list of influences is….
Automation of IT service design and delivery
In this new DevOps environment the importance and relevance of automated provisioning also increases. Many existing L&P organisations now boast virtualised infrastructure, which provides a solid foundation on which automation can be built, and that automation should be applied to workflows, provisioning (including dynamic allocation of workload), deployment, discovery of infrastructure elements and their configuration, event resolution, metering and billing. These last two in particular are vital in demonstrating the value of technology to L&P businesses, which increasingly see IT as a cost centre rather than a value centre; applying analytics to the finances of delivering IT brings better insight.
Recording and automation of expert knowledge within IT must be considered to support geographically-dispersed organisations, as well as better integrated and policy-based service management. As staff are given the capability to work on the move from mobile devices, consideration must be given to ensuring that service and IT management via these devices is also possible. Although self-service portals already exist to allow end users to make technology requests, these should be extended to allow technologists to request the provision of development environments.
And of course, next is…
Cloud
For existing L&P organisations I would not argue with a decision that Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) capabilities should be provisioned privately, on premise.
Traditional IT management is centred on the hardware, with a somewhat bottom-up approach. There are many silos (server, storage, network and homogeneous compute) and there is often extensive manual intervention in processes. But the IT world is moving to a new, software-defined management which is workload-aware, with a top-down approach and integrated server, storage and network. There is heterogeneous compute federation, managed by programmed automation, and integrated workload patterns exist for both software and infrastructure. This is the goal these organisations should be aiming to achieve.
To begin this journey, any internal IaaS and PaaS offerings should be based on a foundation of open standards and interoperable technologies. A common open-standards based architecture will provide flexibility and choice in how to build and deliver cloud services and we’ll also have greater confidence that technical processes will work together across heterogeneous environments. This should probably be ITIL compliant, and it should be a fully managed and hosted IaaS and PaaS solution with committed SLAs and management of virtual instances above the hypervisor layer.
To get the cloud strategy right, we have to simplify the delivery of environments, standardising across platforms, and removing the plethora of choice from developers. For example, IaaS offerings for x86 could be simplified as per the following table.
| x86 options | 32-bit configurations | 64-bit configurations |
| --- | --- | --- |
| Virtual memory (gigabytes) | 1, 2, 4 | 2, 4, 8, 16 |
| Instance storage (gigabytes) | 64, 128, 192 | 64, 128, 192, 384 |
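As a concrete illustration of how such a standardised catalogue removes choice from developers, here is a minimal sketch (all names are my own invention, not any real provisioning API, and the figures simply mirror the simplified catalogue above): the requester states an architecture and a memory need, and the catalogue, not the individual, dictates the instance.

```python
# Hypothetical sketch: a standard instance catalogue drives selection.
# (memory GB, storage GB) pairs, smallest first, per architecture.
CATALOGUE = {
    "32-bit": [(1, 64), (2, 128), (4, 192)],
    "64-bit": [(2, 64), (4, 128), (8, 192), (16, 384)],
}

def select_instance(arch, memory_gb):
    """Return the smallest standard (memory, storage) configuration that
    satisfies the requested memory; refuse anything off-catalogue."""
    for memory, storage in CATALOGUE[arch]:
        if memory >= memory_gb:
            return memory, storage
    raise ValueError(f"No standard {arch} instance offers {memory_gb} GB")

print(select_instance("64-bit", 6))  # -> (8, 192)
```

A request for 6 GB on 64-bit lands on the 8 GB / 192 GB standard instance; there is no bespoke 6 GB option to argue for.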
Where relevant, mainframe and other enterprise compute should also be included within the cloud environment; this is not just an Intel capability.
Regarding PaaS, each IT delivery organisation should determine the best infrastructure on which each platform should be deployed, again removing choice for the developers, placing power in the hands of those provisioning and operating the environments. For example, should a business require a business process management environment to develop upon this should be provisioned based upon the workload characteristics, rather than on an individual’s personal technology preference. It is desirable that this provisioning is automated as far as possible, with human intervention only at approval stages.
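The shape of that provisioning flow can be sketched as follows. This is purely illustrative (the workload types, target platforms and function names are all my own assumptions): placement is derived from workload characteristics, and the only human step is the approval.

```python
# Hypothetical sketch of approval-gated, characteristics-driven provisioning.
# Workload characteristic -> target platform; the requester has no say here.
PLACEMENT = {
    "high-transaction": "mainframe",
    "bursty-web": "x86-virtualised",
    "batch-analytics": "x86-virtualised",
}

def request_environment(workload_type, approved_by=None):
    """Choose the platform from the workload, then hold for approval:
    the single point of human intervention in the workflow."""
    platform = PLACEMENT[workload_type]
    if approved_by is None:
        return {"status": "pending-approval", "platform": platform}
    return {"status": "provisioned", "platform": platform,
            "approved_by": approved_by}

print(request_environment("bursty-web"))
print(request_environment("high-transaction", approved_by="ops-manager"))
```

The point of the design is that a developer's personal preference never enters the placement decision; it is encoded once, by those operating the environments.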
Without the constraints and legacy of existing technology any new entrants to L&P may choose to entirely bypass any internal hosting – despite regulatory demands – and to build purely upon cloud, using a combination of IaaS, PaaS and SaaS, perhaps even “Business Process as a Service”.
For any organisation – L&P or otherwise – the success of implementing a cloud-like environment depends upon accurate and mature monitoring, metering and billing to the businesses. According to the IBM Cloud Computing Reference Architecture, usage can be measured in a number of ways:
Allocation usage: the result of a consumer occupying a resource such that other consumers cannot use it. For example: the time period for which an IT infrastructure topology (e.g. servers, CPUs, memory, storage, network, database, WebSphere cluster) has been allocated to a particular cloud service. This is most suitable as a service usage metric.
Activity usage: the result of activity performed by the consumer, e.g. CPU-seconds consumed, bytes transferred, etc. This is most suitable as a cost usage metric.
Action usage: actions initiated by the consumer that the provider may wish to charge for or track costs against, such as backing up or restoring a server, or changing a virtual server configuration. Such an action may or may not involve manual steps.
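To make the three usage types concrete, here is a toy metering roll-up. The rates, record shapes and figures are entirely invented for illustration; the point is only that each type meters something different (time held, work done, actions taken) yet all feed one bill per consumer.

```python
# Illustrative rates per usage type (invented figures).
RATES = {
    "allocation": 0.05,  # per hour a resource is held, usable by no one else
    "activity":   0.01,  # per unit of activity, e.g. per GB transferred
    "action":     2.50,  # flat fee per chargeable action, e.g. a restore
}

def bill(records):
    """records: iterable of (consumer, usage_type, quantity) tuples.
    Returns total charge per consumer."""
    totals = {}
    for consumer, usage_type, quantity in records:
        totals[consumer] = totals.get(consumer, 0.0) + RATES[usage_type] * quantity
    return totals

usage = [
    ("pensions-dev", "allocation", 720),  # a server held for 720 hours
    ("pensions-dev", "activity", 150),    # 150 GB transferred
    ("life-ops", "action", 2),            # two restore requests
]
print(bill(usage))  # totals per consumer, e.g. pensions-dev comes to 37.5
```

Mature versions of exactly this roll-up are what let IT show the business a cost per service rather than an undifferentiated overhead.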
Software as a Service will become increasingly attractive for certain business requirements, but should be monitored carefully, with a short-term target of consolidating any SaaS contracts already in place, as these are likely to have been agreed without any real governance. In parallel, organisations must develop a proactive strategy for engaging with their business counterparts to influence and govern any SaaS purchase. Whilst it may continue to be preferable for some organisations to host their internally developed applications and systems of record on premise (especially considering the train of thought that these embody an organisation's ability to differentiate, and its intellectual property), SaaS will be increasingly relevant for commoditised functions such as CRM, email and even analytics. Note that it is as important to have a SaaS exit strategy as it is to have one for SaaS adoption.
In the new API economy, SaaS may also, in a future state, be offered externally by an L&P organisation.
Security must be built in to the cloud strategy and implementation; no longer can we afford for it to be an add-on, an afterthought. Real-time monitoring of data and proactive evaluation of applications must also be implemented.
Conclusion (so far)
The future for the L&P data centre is not just about cloud technology. It’s about streamlining business processes to make the different organisations and people more strategic, more responsive to change and more oriented to service delivery.
At least five steps are now necessary: define a value-driven cloud computing strategy, transition to private cloud quickly and affordably, accelerate software delivery with DevOps, optimise any virtualised infrastructure and secure your cloud across lifecycle and domains.
And I stand by my introduction in my previous post: “Regulatory challenges, internal operational strategy, and intellectual property all influence the future of the data centre in the Life and Pensions sector”… “Whilst regulatory requirements continue to be immature in comparison with the IT industry’s experience of delivering technology, those responsible for risk will be driven by caution regarding placement of customer data. Within those constraints, additional questions will continue to be asked regarding placement of homegrown, organisation-specific applications which in their way represent a company’s intellectual property.”
In response to conversations with some Life and Pensions organisations recently I have been pondering the future of the data centre in that sector.
Regulatory challenges, internal operational strategy, and intellectual property all influence the future of the data centre in the Life and Pensions sector. Although the provision of information technology is not the key business of an L&P organisation, it is understandable that these influences will result in a desire for a number of these organisations to deliver at least some technology services internally.
Whilst regulatory requirements continue to be immature in comparison with the IT industry’s experience of delivering technology, those responsible for risk will be driven by caution regarding placement of customer data. Within those constraints, additional questions will continue to be asked regarding placement of homegrown, organisation-specific applications which in their way represent a company’s intellectual property.
The advantages and disadvantages can be debated; in reality, the future for some financial services institutions will be to outsource their entire IT estate, for others to maintain their own IT function but use only cloud, and for others still a hybrid cloud model will be the preference.
In all honesty, I expect the same considerations and changes will have an impact on all – or most – financial institutions, no matter the specialism, although my expertise across the Financial Services sector varies.
Systems of Engagement are a Driving Force upon the Provision of IT
IT is moving from Systems of Record, focused on transactions, to Systems of Engagement, focused on interactions. To quote Martin Gale, IBM Client Technical Architect, “Systems of Engagement support consumers and knowledge workers in the achievement of their objectives. Systems of Engagement optimise the effectiveness of the user by providing the required responsiveness and flexibility to deal with the fluidity of everyday life.” Martin explains that although Systems of Record will continue to have a key role because of their efficiency and robustness in quality of service, they have limitations, as they usually enable only a subset of the process needed to achieve the real outcome desired, and are constructed from a provider’s point of view rather than the consumer’s.
The Harvard Business Review describes nine traits of Systems of Engagement.
These Systems of Engagement will have new workload characteristics such as an integrated lifecycle (through DevOps); rapidly changing, bursty workloads; eventual consistency and continuous availability. They are enabled by the proliferation of mobile devices, the increasing use of social tools, analytics and big data capabilities, and cloud computing as a delivery model.
As traditional L&P organisations move to a more customer-focused model and increasingly embrace mobile and social technologies, the underlying platforms must be enabled as systems of engagement. New entrants to the market will have the advantage, with greater flexibility in developing such systems.
Increased Industrialisation of IT
As the architecture management discipline matures further it will be possible to enable standardised and integrated application and infrastructure landscapes underpinned by automation of IT service design and delivery. An L&P business should make these demands of its IT provider, whether an outsourcer or an internal organisation.
IT processes must be made efficient by running on top of an optimised application portfolio and a scalable IT infrastructure.
With increasing pressure on the cost of resources, the application delivery model and the IT service operations, support and management model should be optimised to balance resources. Skills should be aligned with business requirements; the cheapest skills do not always lead to the cheapest models.
Although more centralised IT functions have arisen in industry to provide greater control and standardisation, the weight of influence on IT decisions is increasingly coming from lines of business, increasing the importance of alignment of business and IT strategies. Governance is ever more key to ensure effective and targeted IT service provision, especially where growth means that technology delivery moves to a global model.
Next Time… DevOps, Automation and Cloud…
These days everyone wants to access the functions they need to do their job in the shape of apps for their smartphones and tablets, don’t they? More and more app stores are available: iTunes, Windows Store, Google Play, IBM PureSystems Centre, and so on.
A conversation I had recently got me thinking more about how we consume IT and the changes IT delivery organisations will experience. I also wonder about the hype cycle and where we may be on it these days.
Right now an organisation may find that consumers of its IT want access to just one or two functions via an app, and perhaps 10 apps will be created for 10 different functions. (By function I do mean one type of interaction with some back end technology, whatever that may be, perhaps searching for client information. I don’t mean the “Sales” function or other such organisation.) And that sort of thing is proliferating, so perhaps we’re somewhere between the “Technology Trigger” and “Peak of Inflated Expectations” with lots of these new apps being developed.
But where do we draw the line? That is, is it realistic, or simply unmanageable, to have to navigate between 10 or 20 apps to do one’s job? As we move more and more to this new model driven by the consumerisation of IT, I think we will hit that “Trough of Disillusionment” when it starts to get hard. As it is I have well over 100 apps on my smartphone, and while very few of them are for doing my job, I expect that to change increasingly. Management of that is going to be very hard.
So, I’m thinking about what is next. Will we ditch our smartphones, tablets and their apps, and go back to a desktop/laptop world to access enterprise applications? Back to green-screens anyone? I seriously doubt it. There will clearly continue to be a place for both.
We’ll mature into a world with more feature-rich apps to allow us to do more from our smart devices in a sensible manner, and with better models for identifying which instruments are best for which tasks.
So, what changes? Our architects must be able to design technology which has the flexibility to support a number of interaction models, and a variety of performance models, and our requirements gatherers (business analysts, system analysts, etc.) must understand our companies’ business models to better define who needs access to what function and information in what form. The “business” must get closer to “IT”, DevOps must become BusDevOps, and whilst a technical person may think their business counterparts need to listen more perhaps technical leaders must learn to become trusted advisors.
Analytics will become increasingly important to allow us to understand the non-functional characteristics and apply that knowledge to future developments. Is it possible for security to be any more important than it already is? Perhaps not, but we may see more organisations begin to take it more seriously, and we certainly need to adapt our security measures more quickly.
And as these are just my initial thoughts they’ll evolve and mature themselves!
… to get it wrong.
That was another conclusion from a Smarter City workshop and I think it’s so important.
Not every good idea will work in every area in every city. But not every idea that fails in one area will fail in another too. So don’t be afraid to trial things.
After all, it can be expensive and hard to implement a solution city-wide, especially when so many solutions in the name of sustainability come with results that can be hard to quantify in advance. So, try them out in a couple of areas; a Proof of Concept is not a bad thing.
Then understand why an idea was successful, or why it was not. And keep a record.
Of course, wouldn’t it be nice if you *could* predict whether something will work? And that’s where predictive analytics comes in. Wikipedia’s definition is “Predictive analytics encompasses a variety of techniques from statistics, modeling, machine learning, and data mining that analyze current and historical facts to make predictions about future events”.
So, a local authority can use a variety of data (e.g. the demographics of where a solution is to be applied, asset management in the area, historical data about similar solutions in this city and others) to model the implementation of the solution and the likelihood of its success across the city. A small investment up front in the analytic solution can mean resources are better applied to sustainability, whatever shape those resources come in (funding, people, tools, etc.). Spend wisely to spend even more wisely.
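At its simplest, the idea can be sketched as below: score a proposed roll-out against the outcomes of past trials in comparable areas. This is a toy, not real predictive analytics tooling, and the trial data and factor names are entirely invented; a genuine solution would draw on far richer statistical and machine-learning techniques.

```python
# Invented history of past trials: (density, income_band, succeeded).
past_trials = [
    ("high", "low", True), ("high", "low", True), ("high", "mid", False),
    ("low", "mid", True), ("low", "low", False), ("high", "low", False),
]

def success_rate(density, income_band):
    """Estimate the chance of success from comparable past trials;
    None means no comparable history: trial first, then record the result."""
    matches = [ok for d, i, ok in past_trials if d == density and i == income_band]
    if not matches:
        return None
    return sum(matches) / len(matches)

print(success_rate("high", "low"))  # 2 of 3 comparable trials succeeded
```

Even this crude frequency count captures the "keep a record" point above: the model is only as useful as the history of trials, successful and failed, that feeds it.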
I’ve been thinking about this ever since I sat down in a workshop with Sustainable Glasgow to discuss the future of the city centre in Glasgow and what changes are required, with limited resources, to cater for future needs.
The climate is changing, and it’s likely to get wetter and warmer. Anna Beswick, of Adaptation Scotland, presented on the subject, and some solutions which assist. Take a look at their website to find out more.
What jumped out at me was the need to implement solutions that can address more than one problem, thus maximising any investment. For example, green walls and roofs assist with CO2 challenges, but also provide a level of insulation which reduces fuel consumption, and with it both the costs to domestic households and businesses and the carbon emissions of the energy providers. Of course, this would not be appropriate for every property, but where applicable more than one challenge is being (in part) addressed by one solution.
I know less about these non-technical solutions than the ones provided by technology, but I believe the principle applies to technology also. One of the benefits of a system such as IBM’s Intelligent Operations Centre is that it is a platform which allows reuse of technologies that have been applied to one requirement of a city (and of the learnings from that technology) for additional requirements of the city. For example, it can be used to integrate asset management of roads with demographic data (typically data held by different functions in a local authority) so that it is possible to work out which roads and pavements should be gritted first in winter, based upon the people that use them. The next step could then be to integrate with CCTV provided by organisations external to the local authority to monitor traffic on the roads, and enhance the gritting plans based upon that. (Ordinarily this example would be appropriate for this time of year, but perhaps I need to change it to dealing with flooding and floodwater instead.)
Two (or more) for the price of one is always an attractive proposition.
The challenge now is assisting cities with how to allocate cost internally when one solution helps more than one department…
In 2011 I had the privilege of being part of an IBM Smarter Cities Challenge team deployed to Glasgow to tackle the issue that is Fuel Poverty. Making our goal “affordable warmth”, we recommended a long list of actions for Glasgow; not all of them were based on technology, with many about collaboration and sharing experiences.
I’m going to be talking about this at an event for BCSWomen in Scotland on 6th December, in the early evening. So, save the date in your diary, and I’ll add a post here when the sign-up page for the event is ready. In the meantime take a look here bcswomenscotland.wordpress.com.