How Taming IT Environment Complexity Can Spur Innovation and Propel Digitization Initiatives

IT environments continue to grow more complex as the old blends with the new and legacy systems rub shoulders with modern ones. Consider this:

  1. Traditional network, compute and storage infrastructure is becoming more software-defined, even though considerable investment has gone into static IT infrastructure over the past decade that is still functional
  2. Workload mobility and ungoverned virtual machines are creating “VM sprawl,” which brings challenges of management, mobility and cost control
  3. On-premises and public cloud deployments are creating hybrid IT architectures that can be quite complex to manage and secure
  4. Containers and orchestration software need to co-exist with monolithic applications as application modernization initiatives get underway
  5. Hyperconvergence is on the upswing, as is serverless computing, which requires rightsizing workloads against more traditional IT stacks
  6. Edge clouds are growing larger with IoT, shifting power away from traditional centralized cloud computing architectures
  7. Artificial intelligence (AI) and machine learning are challenging human interfaces and workflows, while blurring the boundaries between the artificial and the real.

Amidst all this, the pace of change continues. Today a Tesla probably gets as many remote software updates as an iPhone does.

The combination of environment complexity and the pace of change creates the illusion of fast-paced innovation. In reality, most IT organizations, particularly in the Fortune 1000 and the Global 100, stumble while dealing with both. No doubt automation is helping alleviate the hurdles, but even that is being done piecemeal rather than as an end-to-end approach. The whole is still not greater than the sum of the parts.

The situation is not that different across other enterprises or across industry verticals.

Because everything is so software dependent, Dev/Test teams have a huge role to play in taking accountability for resolving this complexity and delivering code that is functional, stable, secure and scalable. However, this is not an easy task to accomplish.

Part of the challenge is that Dev/Test teams in larger, distributed organizations function in silos, and while there are various methods to collaborate and share code, it is not easy for them to envision the end-state environments holistically. This can lead them to overlook environment dependencies that have a bearing on application performance, security and more. Varied and fragmented toolsets – open source, commercial or custom – don’t help either. The ramifications can be seen in security breaches, in degraded application performance and, at times, in availability issues that result in service outages, compromised customer experiences and negative business outcomes.

This is not altogether an easy problem to solve. In a 2017 survey conducted by Quali, 74 percent of IT and DevOps respondents reported waiting in queue for a target environment to be set up with the right infrastructure, with 24 percent of those waiting for more than a month. This not only creates productivity issues but also impedes digitization in a fast-paced organization.

So, what is the cure?

One credible approach is to prototype IT production environments and make them accessible to the Dev/Test teams early on. This can be an authentic reproduction of the entire environment, or of the subset that is actively changing.

The more authentically an environment is reproduced – with the right dependencies, tools, data sets and infrastructure components – at the dev/test stage, the higher the resulting quality of the code, and the lower the risk of rework and compromise.

To do this properly, the following are important:

  1. The ability to model the different components and resources in the environment. This lets those resources inherit the relevant attributes, automation properties and dependencies.
  2. The ability to dynamically develop shareable blueprints that can drive standardization across the organization. These can be designed by IT and business architects with contextual and business understanding of the environment.
  3. A shared self-service model. This implies a self-service catalog that allows distributed teams to access these blueprints and modify them as appropriate, without depending on IT teams or a ticket-based process every time.
  4. The ability to select and reserve these environments, including physical resources, with complete automation and orchestration to reproduce prototypes that are authentic to production. This improves productivity and enables organizations to cope with the pace of change.
  5. The ability to manage resources efficiently: better resource sharing, eliminating VM sprawl, minimizing physical resource hogging, and gaining a better handle on usage efficiency and cost. This should include tearing environments down as soon as a particular activity is complete.
  6. A complete business insights and analytics view that captures usage and costing, and adopts baseline and predictive modeling.
  7. Open REST APIs that allow all functions to be invoked via non-human interfaces if required, and that open-source how resources are modeled so a community can be fostered.
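To make the list above concrete, here is a minimal sketch of what such a platform's core objects could look like. All names here (`Blueprint`, `EnvironmentCatalog`, `reserve`, `teardown`) are illustrative assumptions, not any specific vendor's API; a real platform would back these calls with orchestration against actual infrastructure.

```python
from dataclasses import dataclass, field
from itertools import count

@dataclass
class Blueprint:
    """A shareable model of a target environment (points 1 and 2)."""
    name: str
    resources: list                                    # e.g. ["k8s-cluster", "postgres-13"]
    dependencies: dict = field(default_factory=dict)   # resource -> what it depends on

class EnvironmentCatalog:
    """Self-service catalog: publish blueprints, then reserve and tear down
    environments without a ticket-based process (points 3-5)."""

    def __init__(self):
        self._blueprints = {}
        self._active = {}        # env_id -> (blueprint name, owner)
        self._ids = count(1)

    def publish(self, bp: Blueprint) -> None:
        self._blueprints[bp.name] = bp

    def reserve(self, blueprint_name: str, owner: str) -> int:
        """Reserve an environment from a published blueprint.
        In a real platform, automated orchestration would run here."""
        if blueprint_name not in self._blueprints:
            raise KeyError(f"no such blueprint: {blueprint_name}")
        env_id = next(self._ids)
        self._active[env_id] = (blueprint_name, owner)
        return env_id

    def teardown(self, env_id: int) -> None:
        """Release the environment's resources; prompt teardown is what
        keeps sprawl and resource hogging in check."""
        self._active.pop(env_id)

    def active_environments(self) -> list:
        """A minimal usage view (point 6); a real platform would add
        costing and predictive analytics on top of this data."""
        return list(self._active.items())
```

A team would then publish a blueprint once, reserve an environment on demand, and tear it down when the activity completes – exposing the same operations over REST APIs (point 7) would let CI pipelines drive the whole lifecycle without human intervention.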

Together, these capabilities cover the complete lifecycle of environment design, creation, management and termination – all in an automated manner. It is a holistic platform approach.

For organizations that depend on software, and have hundreds of engineers working on their code, such a platform that manages environment complexity can be a boon that delivers on productivity, efficiency, collaboration and cost control.

Allowing role-based access to the platform means that IT can co-exist and work in harmony with the needs of Dev/Test and Ops teams, helping to blur some of the silos we see in organizations today. A positive side effect would be the elimination of shadow-IT practices through governance mechanisms that don’t compromise innovation velocity.

In summary, taming the beast of environment complexity can go a long way in propelling the digitization efforts of CIOs, CEOs and the business at large.

