How Taming IT Environment Complexity Can Spur Innovation and Propel Digitization Initiatives

IT environments continue to grow more complex as the old blends with the new and legacy rubs shoulders with the modern. Consider this:

  1. Traditional network, compute and storage infrastructure is becoming more software-defined, even as the considerable investment made in static IT infrastructure over the past decade remains functional
  2. Workload mobility and ungoverned virtual machines are creating “VM sprawl,” which poses challenges of management, mobility and cost control
  3. On-premise and public cloud deployments are creating hybrid IT architectures that can be quite complex to manage and secure
  4. Containers and orchestration software need to co-exist with monolithic applications as application modernization initiatives get underway
  5. Hyperconvergence is on the upswing, as is serverless computing, which requires rightsizing workloads alongside more traditional IT stacks
  6. Edge clouds are growing with IoT, shifting the balance of power with traditional cloud computing architectures
  7. Artificial intelligence (AI) and machine learning are challenging human interfaces and workflows, while blurring the boundaries between the artificial and the real.

Amidst all this, the pace of change continues. Today a Tesla probably gets as many remote software updates as an iPhone does.

The combination of environment complexity and the pace of change creates an illusion of fast-paced innovation. In reality, most IT organizations, particularly in the Fortune 1000 and the Global 100, stumble while dealing with it. No doubt automation is helping alleviate the hurdles, but even that is being done piecemeal rather than as an end-to-end approach. The whole is still not greater than the sum of the parts.

The situation is not that different across other enterprises or across industry verticals.

Because everything is so software dependent, Dev/Test teams have a huge role to play in taking accountability for taming this complexity and delivering code that is functional, stable, secure and scalable. However, this is not an easy task to accomplish.

Part of the challenge is that Dev/Test teams in larger, distributed organizations function in silos; while there are various ways to collaborate and share code, it is not easy for them to envision the end-state environments holistically. This can lead them to overlook environment dependencies that have a bearing on application performance, security and more. The varied, fragmented toolsets – open source, commercial or custom – don’t help either. The ramifications show up as security breaches, degraded application performance and, at times, availability issues that result in service outages, compromised customer experiences and negative business outcomes.

This is not altogether an easy problem to solve. In a 2017 survey conducted by Quali, 74 percent of IT and DevOps respondents reported waiting in queue for a target environment to be set up with the right infrastructure, with 24 percent of those waiting for more than a month. This not only creates productivity issues but also impedes digitization in a fast-paced organization.

So, what is the cure?

One credible approach is to prototype IT production environments and make them accessible to the Dev/Test teams early on. This can be an authentic reproduction of the entire environment, or of the subset that is being changed.

The more authentically an environment is reproduced at the dev/test stage – with the right dependencies, tools, data sets and infrastructure components – the higher the resulting code quality, and the lower the risk of rework and compromise.

To do this properly, the following are important:

  1. The ability to model the different components and resources in the environment. This lets those resources inherit the relevant attributes, automation properties and dependencies.
  2. Dynamically developed, shareable blueprints that can drive standardization across the organization. These can be designed by IT and business architects with contextual and business understanding of the environment.
  3. A shared self-service model. This implies a self-service catalog that allows distributed teams to access these blueprints and modify them as appropriate, without depending on IT teams or a ticket-based process every time.
  4. The ability to select and reserve these environments, including physical resources, with complete automation and orchestration to reproduce prototypes that are authentic to production. This improves productivity and enables organizations to cope with the pace of change.
  5. The ability to manage resources efficiently: better resource sharing, no VM sprawl, minimal hogging of physical resources, and a better handle on usage efficiency and cost. This should include tearing environments down as and when a particular activity is complete.
  6. A complete business insights and analytics view that captures usage and cost, and supports baseline and predictive modeling.
  7. Open REST APIs that allow every function to be invoked via non-human interfaces if required, along with open-sourcing of how resources are modeled so that a community can be fostered.

Together, the above cover the complete lifecycle of environment design, creation, management and termination – all in an automated manner. It is a holistic platform approach.
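As a rough illustration, that lifecycle – model resources, publish a blueprint, reserve an environment, tear it down when done – might be sketched as below. This is a generic, hypothetical model written for this article, not any specific vendor’s API; all class, field and method names are illustrative assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Dict, List

@dataclass
class Resource:
    """A modeled environment component (VM, network, data set) whose
    attributes and dependencies can be inherited by blueprints."""
    name: str
    kind: str                                        # e.g. "vm", "network", "dataset"
    attributes: Dict[str, str] = field(default_factory=dict)
    depends_on: List[str] = field(default_factory=list)

@dataclass
class Blueprint:
    """A shareable description of a target environment, designed once
    and standardized across the organization."""
    name: str
    resources: List[Resource]

class EnvironmentCatalog:
    """A self-service catalog: teams browse blueprints, reserve
    environments for a fixed window, and tear them down when done."""

    def __init__(self) -> None:
        self.blueprints: Dict[str, Blueprint] = {}
        self.active: Dict[str, dict] = {}

    def publish(self, bp: Blueprint) -> None:
        self.blueprints[bp.name] = bp

    def reserve(self, bp_name: str, owner: str, hours: int) -> str:
        """Reserve an environment built from a blueprint; returns its id."""
        bp = self.blueprints[bp_name]
        env_id = f"{bp_name}-{len(self.active) + 1}"
        self.active[env_id] = {
            "owner": owner,
            "resources": [r.name for r in bp.resources],
            "expires": datetime.now() + timedelta(hours=hours),
        }
        return env_id

    def teardown(self, env_id: str) -> None:
        """Release the environment's resources (this is what prevents
        VM sprawl and physical resource hogging)."""
        self.active.pop(env_id, None)
```

In practice a team might publish a blueprint once, then reserve and tear down environments from it many times, e.g. `env = catalog.reserve("payments-staging", owner="dev-team-a", hours=8)` followed by `catalog.teardown(env)` when testing completes. The open REST APIs from point 7 would expose these same operations over HTTP.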

For organizations that depend on software, and have hundreds of engineers working on their code, such a platform that manages environment complexity can be a boon that delivers on productivity, efficiency, collaboration and cost control.

Allowing role-based access to the platform means that IT can co-exist and work in harmony with the needs of Dev/Test and Ops teams, helping blur some of the silos we see in organizations today. A positive side effect would be the elimination of shadow-IT practices, replaced by governance mechanisms that don’t compromise innovation velocity.
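A minimal sketch of what such role-based access might look like, assuming a simple role-to-permission mapping; the role names and permissions here are invented for illustration and are not drawn from any particular product:

```python
from typing import Set

# Hypothetical mapping of platform roles to allowed actions.
ROLE_PERMISSIONS: dict[str, Set[str]] = {
    "it-admin":  {"publish", "reserve", "teardown", "view-analytics"},
    "developer": {"reserve", "teardown"},
    "ops":       {"reserve", "teardown", "view-analytics"},
}

def is_allowed(role: str, action: str) -> bool:
    """Return True if the given role may perform the given action;
    unknown roles are denied everything."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

Under a model like this, developers can self-serve environments without a ticket, while blueprint publishing and cost analytics stay governed, which is how role-based access curbs shadow IT without slowing teams down.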

In summary, taming the beast of environment complexity can go a long way in propelling the digitization efforts of CIOs, CEOs and the business at large.
