How Taming IT Environment Complexity Can Spur Innovation and Propel Digitization Initiatives

IT environments are continuing to become more complex even as the old blends in with the new, and as legacy rubs shoulders with the modern. Consider this:

  1. Traditional network, compute and storage infrastructure is becoming software-defined, even as the considerable investment made in static IT infrastructure over the past decade remains functional.
  2. Workload mobility and ungoverned virtual machines are producing “VM sprawl,” creating challenges in management, mobility and cost control.
  3. On-premises and public cloud deployments are creating hybrid IT architectures that can be quite complex to manage and secure.
  4. Containers and orchestration software need to co-exist with monolithic applications while application modernization initiatives are underway.
  5. Hyperconvergence is on the upswing, as is serverless computing, which requires rightsizing workloads alongside more traditional IT stacks.
  6. Edge clouds are growing with IoT, shifting power away from traditional cloud computing architectures.
  7. Artificial intelligence (AI) and machine learning are challenging human interfaces and workflows, while blurring the boundaries between the artificial and the real.

Amidst all this, the pace of change continues. Today a Tesla probably receives as many remote software updates as an iPhone.

Environment complexity coupled with the pace of change creates the illusion of fast-paced innovation. In reality, most IT organizations, particularly in the Fortune 1000 and the Global 100, stumble while dealing with both. No doubt automation helps alleviate the hurdles, but even that is done piecemeal rather than as an end-to-end approach. The whole is still not greater than the sum of the parts.

The situation is not that different across other enterprises or across industry verticals.

Because everything is so software-dependent, Dev/Test teams have a huge role to play: they must take accountability for absorbing this complexity and delivering code that is functional, stable, secure and scalable. However, this is not an easy task to accomplish.

Part of the challenge is that Dev/Test teams in larger, distributed organizations function in silos; while there are various ways to collaborate and share code, it is not easy for them to envision the end-state environments holistically. This can lead them to overlook environment dependencies that have a bearing on application performance, security and more. Varied and fragmented toolsets, whether open source, commercial or custom, don’t help either. The ramifications show up as security breaches, degraded application performance and, at times, availability issues that result in service outages, compromised customer experiences and negative business outcomes.

This is not altogether an easy problem to solve. In a 2017 survey conducted by Quali, 74 percent of IT and DevOps respondents reported waiting in a queue for a target environment to be set up with the right infrastructure, and 24 percent waited for more than a month. This not only hurts productivity but also impedes digitization in a fast-paced organization.

So, what is the cure?

One credible approach is to prototype IT production environments and make them accessible to Dev/Test teams early on. This can be an authentic reproduction of the entire environment, or of just the subset undergoing change.

The more authentically an environment is reproduced at the Dev/Test stage, with the right dependencies, tools, data sets and infrastructure components, the higher the resulting quality of the code and the lower the risk of rework and compromise.

To do this properly, the following are important:

  1. The ability to model the different components and resources in an environment, so that these resources inherit the relevant attributes, automation properties and dependencies.
  2. The ability to dynamically develop shareable blueprints that drive standardization across the organization. These can be designed by IT and business architects with contextual and business understanding of the environment.
  3. A shared self-service model. This implies a self-service catalog that allows distributed teams to access these blueprints and modify them as appropriate, without depending on IT teams or a ticket-based process every time.
  4. The ability to select and reserve these environments, including physical resources, with complete automation and orchestration that reproduces prototypes quite authentic to production. This improves productivity and enables organizations to cope with the pace of change.
  5. The ability to manage resources efficiently: enabling better resource sharing, eliminating VM sprawl, minimizing physical resource hogging, and gaining a better handle on usage efficiency and cost. This should include tearing environments down as soon as a particular activity is complete.
  6. A complete business insights and analytics view that captures usage and cost, and adopts baseline and predictive modeling.
  7. Open REST APIs that allow all functions to be invoked via non-human interfaces if required, along with open-sourced resource models so that a community can be fostered.

Together, these capabilities cover the complete lifecycle of environment design, creation, management and termination, all in an automated manner. It is a holistic platform approach, and the sketch below shows what that lifecycle might look like in practice.
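To make this concrete, here is a minimal sketch of driving the lifecycle through open REST APIs (point 7 above). The base URL, endpoints, payload fields and blueprint name are hypothetical illustrations of the pattern, not any specific vendor's API:

    import time
    import requests

    BASE = "https://sandbox-platform.example.com/api/v2"  # hypothetical endpoint
    HEADERS = {"Authorization": "Bearer <token>"}         # role-based access token

    # 1. Pick a shareable blueprint from the self-service catalog.
    blueprints = requests.get(f"{BASE}/blueprints", headers=HEADERS).json()
    blueprint_id = next(
        b["id"] for b in blueprints if b["name"] == "prod-replica-web-tier"
    )

    # 2. Reserve an environment from that blueprint, with an end time so the
    #    platform can tear it down automatically and free shared resources.
    reservation = requests.post(
        f"{BASE}/reservations",
        headers=HEADERS,
        json={"blueprint_id": blueprint_id, "duration_hours": 8},
    ).json()
    env_id = reservation["environment_id"]

    # 3. Poll until orchestration has reproduced the environment, then hand
    #    the connection details to the Dev/Test pipeline.
    while requests.get(
        f"{BASE}/environments/{env_id}", headers=HEADERS
    ).json()["status"] != "ready":
        time.sleep(30)

    # 4. Explicit teardown once testing completes (rather than waiting for
    #    expiry), eliminating VM sprawl and physical resource hogging.
    requests.delete(f"{BASE}/environments/{env_id}", headers=HEADERS)

Because every step is a plain REST call, the same lifecycle can be triggered by a person from a catalog UI or invoked entirely by a CI/CD pipeline with no human in the loop.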

For organizations that depend on software and have hundreds of engineers working on their code, such a platform that manages environment complexity can be a boon, delivering productivity, efficiency, collaboration and cost control.

Allowing role-based access to the platform means that IT can co-exist and work in harmony with Dev/Test and Ops teams, helping blur some of the silos we see in organizations today. A positive side effect would be the elimination of shadow-IT practices through governance mechanisms that don’t compromise innovation velocity.

In summary, taming the beast of environment complexity can go a long way in propelling the digitization efforts of CIOs, CEOs and the business at large.

