How Taming IT Environment Complexity Can Spur Innovation and Propel Digitization Initiatives

IT environments continue to grow more complex as the old blends with the new and legacy rubs shoulders with the modern. Consider this:

  1. Traditional network, compute and storage infrastructure is becoming software-defined, even as the considerable investment made in static IT infrastructure over the past decade remains functional.
  2. Workload mobility and ungoverned virtual machines are producing “VM sprawl,” which raises challenges of management, mobility and cost control.
  3. On-premises and public cloud deployments are creating hybrid IT architectures that can be quite complex to manage and secure.
  4. Containers and orchestration software need to coexist with monolithic applications while application modernization initiatives are underway.
  5. Hyperconvergence is on the upswing, as is serverless computing, which requires rightsizing workloads alongside more traditional IT stacks.
  6. Edge clouds are growing larger with IoT, shifting the balance of power relative to traditional cloud computing architectures.
  7. Artificial intelligence (AI) and machine learning are challenging human interfaces and workflows, while blurring the boundaries between the artificial and the real.

Amidst all this, the pace of change continues. Today a Tesla probably receives as many over-the-air software updates as an iPhone does.

Environment complexity coupled with the pace of change creates the illusion of fast-paced innovation. In reality, most IT organizations, particularly in the Fortune 1000 and the Global 100, stumble while dealing with it. Automation is no doubt helping to alleviate the hurdles, but even that is being applied piecemeal rather than as an end-to-end approach. The whole is still not greater than the sum of the parts.

The situation is not that different across other enterprises or across industry verticals.

Because everything is so software-dependent, Dev/Test teams have a huge role to play in taking accountability for absorbing this complexity and delivering code that is functional, stable, secure and scalable. However, this is not an easy task to accomplish.

Part of the challenge is that Dev/Test teams in larger, distributed organizations function in silos. While there are various ways to collaborate and share code, it is not easy for them to envision end-state environments holistically. This can lead them to overlook environment dependencies that have a bearing on application performance, security and more. The fragmented variety of toolsets, whether open source, commercial or custom, does not help either. The ramifications show up as security breaches, degraded application performance and, at times, availability issues resulting in service outages, compromised customer experiences and negative business outcomes.

This is not altogether an easy problem to solve. In a 2017 survey conducted by Quali, 74 percent of IT and DevOps respondents reported waiting in queue for a target environment to be set up with the right infrastructure, with 24 percent waiting for more than a month. This not only hurts productivity but also impedes digitization in a fast-paced organization.

So, what is the cure?

One credible approach is to prototype IT production environments and make them accessible to Dev/Test teams early on. This can be an authentic reproduction of the entire environment, or of just the subset that is undergoing change.

The more authentically an environment is reproduced at the dev/test stage, with the right dependencies, tools, data sets and infrastructure components, the higher the resulting quality of the code, and the lower the risk of rework and compromise.

To do this properly, the following are important:

  1. Ability to model the different components and resources in the environment. This lets resources inherit the relevant attributes, automation properties and dependencies (a brief sketch of such a model follows this list).
  2. Ability to dynamically develop shareable blueprints that can drive standardization across the organization. These can be designed by IT and business architects with contextual and business understanding of the environment.
  3. Ability to enable a shared self-service model. This implies a self-service catalog that allows distributed teams to access these blueprints and modify them as appropriate, without depending on IT teams or a ticket-based process every time.
  4. Ability to select and reserve these environments, including physical resources, with complete automation and orchestration to reproduce prototypes that are authentic to production. This improves productivity and enables organizations to cope with the pace of change.
  5. Ability to manage resources efficiently: allowing better resource sharing, eliminating VM sprawl, minimizing physical resource hogging, and providing a better handle on usage efficiency and cost. This should include tearing environments down as soon as a particular activity is complete.
  6. A complete business insights and analytics view that captures usage and cost, and supports baseline and predictive modeling.
  7. Open REST APIs that allow every function to be invoked programmatically if required, along with open sourcing of how resources are modeled so that a community can be fostered.
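
To make item 1 concrete, here is a minimal sketch in Python of how environment components and their dependencies might be modeled. All class and field names here are hypothetical; real platforms each define their own schema:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Resource:
    """A single infrastructure component (hypothetical schema)."""
    name: str                                   # e.g. "app-db"
    kind: str                                   # e.g. "vm", "container", "switch"
    attributes: Dict[str, str] = field(default_factory=dict)
    depends_on: List[str] = field(default_factory=list)  # names of other resources

@dataclass
class Blueprint:
    """A shareable, versioned description of a full environment."""
    name: str
    owner: str
    resources: List[Resource] = field(default_factory=list)

    def validate(self) -> None:
        """Fail fast if a resource depends on something not in the blueprint."""
        names = {r.name for r in self.resources}
        for r in self.resources:
            missing = set(r.depends_on) - names
            if missing:
                raise ValueError(f"{r.name} has unresolved dependencies: {missing}")

# A small staging environment: a web tier that depends on a database VM.
staging = Blueprint(
    name="web-staging",
    owner="platform-team",
    resources=[
        Resource("app-db", "vm", {"cpu": "4", "ram_gb": "16"}),
        Resource("web-tier", "container", {"image": "web:1.2"}, depends_on=["app-db"]),
    ],
)
staging.validate()
```

A blueprint like this can be versioned and shared, which is what drives the standardization described in item 2.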

Together, these capabilities cover the complete lifecycle of environment design, creation, management and termination, all in an automated manner. It is a holistic platform approach.
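
In practice, that lifecycle tends to be driven through the kind of open REST APIs mentioned in item 7. The sketch below shows what a reserve-use-teardown flow could look like; the endpoint, paths and JSON fields are hypothetical, not those of any specific product:

```python
import requests

BASE = "https://envs.example.com/api/v1"   # hypothetical platform endpoint
HEADERS = {"Authorization": "Bearer <token>"}

# 1. Browse the self-service catalog for published blueprints.
catalog = requests.get(f"{BASE}/blueprints", headers=HEADERS).json()

# 2. Reserve an environment from a blueprint for a fixed window;
#    the platform orchestrates the underlying resources.
reservation = requests.post(
    f"{BASE}/reservations",
    headers=HEADERS,
    json={"blueprint": "web-staging", "duration_hours": 8},
).json()
env_id = reservation["id"]

# 3. ... run tests against the provisioned environment ...

# 4. Tear the environment down when the activity is complete,
#    releasing resources and curbing sprawl and cost.
requests.delete(f"{BASE}/reservations/{env_id}", headers=HEADERS)
```

Because the whole flow is programmable, it can be embedded in CI pipelines so that environments are created and destroyed with each run rather than hoarded.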

For organizations that depend on software and have hundreds of engineers working on their code, such a platform for managing environment complexity can be a boon, delivering productivity, efficiency, collaboration and cost control.

Allowing role-based access to the platform means that IT can coexist and work in harmony with Dev/Test and Ops teams, helping to blur some of the silos we see in organizations today. A positive side effect would be the elimination of shadow-IT practices through governance mechanisms that do not compromise innovation velocity.
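
As a toy illustration of that role-based model, access could be expressed as a mapping from roles to permitted platform actions. The roles and action names below are invented for the example:

```python
# Hypothetical role-to-permission mapping for the environment platform.
PERMISSIONS = {
    "architect": {"design_blueprint", "publish_blueprint", "reserve", "teardown"},
    "developer": {"reserve", "modify_own", "teardown"},
    "ops":       {"reserve", "teardown", "view_analytics"},
}

def allowed(role: str, action: str) -> bool:
    """Check whether a role may perform an action; deny by default."""
    return action in PERMISSIONS.get(role, set())

assert allowed("developer", "reserve")
assert not allowed("developer", "publish_blueprint")
```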

In summary, taming the beast of environment complexity can go a long way in propelling the digitization efforts of CIOs, CEOs and the business at large.

