December 1, 2022
IT tool consolidation is the ongoing, combined effort of everyone in an IT organization to ensure that employees use only IT hardware, software, and services that create and demonstrate explicit value for business stakeholders. The best metaphor for tool consolidation is my kitchen, where common sense principles around value creation provide useful guidelines for any consolidation process.
I love to cook and prepare food with an umami (savory) taste, and I have many kitchen tools to help me. Most of my kitchen tools are fit for purpose, available when I need them, and good value for money, but some violate these principles. I hardly use my automatic soup maker, for example. It does what it promises and is always available in my cupboard. But when I make soup, I reach for alternatives because I can do the job with less fuss. The soup maker's value never materializes. Should I consolidate, and if so, how? If I do nothing, the value I generate from my soup maker remains low, unless I start using it. If I give it away or sell it, I create value: cupboard space frees up, I might recover some cost, and someone else is happy with a useful new tool.
Companies can either act on the need to consolidate or do nothing. As in the kitchen analogy, assessing and unifying a business's IT equipment and services is important work that is often left undone. Action requires you to take stock of what is in the cupboard, review working processes and practices, and ensure that outcomes are goal driven, with a taste of umami, or whatever tastes best to you.
Common sense economic principles used in ITIL methodologies prescribe how an IT organization creates stakeholder value. ITIL boils value down to three conditions: utility, warranty, and value for money. Violate any one of them, and stakeholder value is lost or not maximized. In a worst-case scenario, value is irrecoverably destroyed.
Tool fragmentation occurs when teams use different IT tools to work together. It happens for many legitimate reasons (e.g., new hires bringing tools they find more productive, M&A activity, and more). Tool fragmentation is not an issue as long as IT equipment and services deliver and demonstrate stakeholder value.
Tool fragmentation becomes an issue when it morphs into tool sprawl: a company using too many monitoring or security tools, for example, to address a single use case. At that point, IT tool usage no longer maximizes stakeholder value because it fails to deliver utility, warranty, or value for money.
Tool sprawl, in turn, leads to data sprawl, which occurs when tools generate data that lacks utility, warranty, or value for money. When teams collect the same logs in various monitoring and security tools for different use cases, they pay twice (or more) to ingest the same log data.
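To make the duplication cost concrete, here is a minimal back-of-the-envelope sketch; the daily volume, per-GB rate, and tool count are hypothetical figures, not actual pricing:

```python
# Back-of-the-envelope cost of duplicate log ingestion.
# All figures below are hypothetical illustrations, not real pricing.

DAILY_LOG_VOLUME_GB = 500   # the same application logs, generated once
COST_PER_GB = 0.50          # assumed ingest cost per GB
INGESTING_TOOLS = 3         # e.g., monitoring, SIEM, and APM each keep a copy

single_ingest_per_year = DAILY_LOG_VOLUME_GB * COST_PER_GB * 365
duplicated_per_year = single_ingest_per_year * INGESTING_TOOLS

print(f"One ingest pipeline per year:  ${single_ingest_per_year:,.0f}")
print(f"Three overlapping pipelines:   ${duplicated_per_year:,.0f}")
print(f"Spent on duplicate copies:     ${duplicated_per_year - single_ingest_per_year:,.0f}")
```

Even with modest assumptions, the duplicated copies dominate the bill, which is the economic case for consolidating ingestion onto one platform.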
Corrective actions imply you have a plan in place to measure and meet goals. Making good plans and setting motivating goals is a complicated process. It requires taking stock of your current strategy and analyzing internal and external factors to formulate new plans and goals. Here are best practices to keep your IT (and kitchen) tools in order.
Hear my colleague Drew Horn discuss best practices and what you can do today to fight sprawl.
Take stock of where you stand by using the three value creation conditions (utility, warranty, and value for money) to determine whether tools and data create enough business value. Ask employees whether tools are fit for purpose (utility) and fit for use (warranty), and assess to what extent they provide value for money. Violate any one of the conditions, and stakeholder value is not maximized.
IT consolidation is a long-term effort for most organizations because it requires significant architectural changes in how developers build, operate, and secure modern applications. Traditional monolithic architecture fundamentally differs from modern, cloud-native architecture based on microservices, containers, and more. As the tech stack changed, IT professionals adopted different tooling to get their jobs done, leading to tool and data sprawl. Using multiple vendors to collect the same logs for different use cases means you pay twice to ingest the same log data. Unifying your log data on a single platform for all use cases is an opportunity for cost savings.
Driven by the need to reduce complexity, leverage commonalities, and minimize management overhead, many IT organizations plan for vendor consolidation, especially in the current economic climate. This process requires IT management to make difficult choices. The 2022 Voice of the Customer report by 451 Research finds:
- 90% of IT buyers want to work with fewer vendors
- 40% say there are too many pricing models
- 72% say pricing is too complex
Moreover, management often cannot properly budget for changing demands on infrastructure capacity, new technology, integrations and maintenance. Financial data on who is using what is often not available or transparent.
Seek to work with vendors who offer flexible and transparent license conditions, so you can maximize the value of using your IT equipment and services.
Plans and goals must satisfy many internal and external stakeholders at different levels, and they must embed common sense principles around shared values to inspire people to act in unison. Agreeing on what drives value requires an approach that is simple but not simplistic. Utility, warranty, and value for money meet these demands, so formulate plans and goals in this spirit.
Automated systems help ensure a reliable and secure digital customer experience by keeping up with data sets from cloud-native apps and digital services. In essence, automation “takes the robot out of the human” by removing repetitive, replicable and routine tasks. Automated workflows mimic activities carried out by humans and learn to do them even better. Automation promises drastically improved efficiency, better worker performance, reduction of operational risks and enhanced response times and customer journey experiences.
All major CI/CD vendors support declaring, via configuration files, what flows through your pipelines. Everything can be defined as code, and so can your CI/CD pipeline. The same observability concepts that apply to cloud-native apps apply directly to your pipeline, so think of your pipeline as an application in its own right. And although you may have multiple pipelines, you still need to observe and secure the process.
In the same way you treat infrastructure as code, you can put your CI/CD pipelines through the same code review processes. This helps bridge the gap between what goes wrong in a production workload and the insights generated by your DevOps engineers. It also improves your knowledge base while identifying which tools and processes provide value and which do not.
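As a sketch of the idea, here is what a pipeline-as-code definition with reviewable policy checks might look like; the stage names, fields, and validation rules are hypothetical and not tied to any particular CI/CD product:

```python
"""A hypothetical pipeline-as-code definition that lives in version
control and goes through the same pull-request review as app code."""

from dataclasses import dataclass, field


@dataclass
class Stage:
    name: str
    commands: list[str]
    timeout_minutes: int = 15


@dataclass
class Pipeline:
    name: str
    stages: list[Stage] = field(default_factory=list)

    def validate(self) -> list[str]:
        """Reviewable policy checks, e.g. every pipeline must run tests."""
        problems = []
        stage_names = {s.name for s in self.stages}
        if "test" not in stage_names:
            problems.append("pipeline has no 'test' stage")
        for s in self.stages:
            if s.timeout_minutes > 60:
                problems.append(f"stage '{s.name}' timeout exceeds 60 minutes")
        return problems


build_and_deploy = Pipeline(
    name="web-service",
    stages=[
        Stage("build", ["make build"]),
        Stage("test", ["make test"]),
        Stage("deploy", ["make deploy"], timeout_minutes=30),
    ],
)

if __name__ == "__main__":
    problems = build_and_deploy.validate()
    if problems:
        for p in problems:
            print(f"policy violation: {p}")
    else:
        print("pipeline definition passes review checks")
```

Because the definition and its policy checks are plain source files, a reviewer can reject a pipeline change in a pull request just as they would reject a risky application change.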
Not everything is gravy. Things go wrong. There is a lot you can do to gain better control. Collect the data and observe the CI/CD process so you can monitor, troubleshoot, diagnose, and tackle sprawl in your pipeline. It all comes back to collecting the logs, metrics, and traces from your pipelines and monitoring your builds, deployments, pipeline runs, pipeline stage executions, application tickets, and alerts across all your pipelines in all the tools you use.
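One lightweight way to start is to emit a structured event for every pipeline stage so that any log collector can pick them up. The sketch below assumes a JSON-lines-to-stdout convention; the event fields are illustrative, not a required schema:

```python
import json
import sys
import time
from contextlib import contextmanager


@contextmanager
def pipeline_stage(pipeline: str, run_id: str, stage: str):
    """Emit one structured JSON event per stage; any log collector
    tailing stdout can forward these for monitoring and alerting."""
    start = time.time()
    status = "success"
    try:
        yield
    except Exception:
        status = "failure"
        raise
    finally:
        event = {
            "type": "pipeline_stage",
            "pipeline": pipeline,
            "run_id": run_id,
            "stage": stage,
            "status": status,
            "duration_s": round(time.time() - start, 2),
        }
        print(json.dumps(event), file=sys.stdout, flush=True)


# Usage inside a build script (the sleep stands in for a real command):
with pipeline_stage("web-service", "run-1234", "test"):
    time.sleep(0.1)
```

Once every tool emits events in one shape, you can monitor runs, stage durations, and failure rates across all your pipelines from a single place.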
Improve your management of the process by tracking essential cycle times that indicate your ability to define changes and build them in small batches. Important KPIs include 'Active Development Time' (time from first commit to ready for peer review) and 'Review and Merge' (time from ready for review to merged into the main repository).
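As a minimal sketch, both KPIs can be computed from pull-request timestamps; the records below are hypothetical, and in practice you would pull these timestamps from your repository host's API:

```python
from datetime import datetime
from statistics import median

# Hypothetical pull-request records; real timestamps would come
# from your repository host's API.
pull_requests = [
    {"first_commit": "2022-11-01T09:00", "ready_for_review": "2022-11-01T15:00", "merged": "2022-11-02T10:00"},
    {"first_commit": "2022-11-03T08:30", "ready_for_review": "2022-11-04T11:00", "merged": "2022-11-04T16:30"},
]

FMT = "%Y-%m-%dT%H:%M"


def hours_between(start: str, end: str) -> float:
    return (datetime.strptime(end, FMT) - datetime.strptime(start, FMT)).total_seconds() / 3600


active_dev = [hours_between(pr["first_commit"], pr["ready_for_review"]) for pr in pull_requests]
review_merge = [hours_between(pr["ready_for_review"], pr["merged"]) for pr in pull_requests]

print(f"Median Active Development Time: {median(active_dev):.1f} h")
print(f"Median Review and Merge time:   {median(review_merge):.1f} h")
```

Tracking the two numbers separately matters: a long Active Development Time suggests batches are too big, while a long Review and Merge time points at the review process itself.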
The DORA metrics measure four key elements of software delivery performance (recent State of DevOps reports add reliability as a fifth). The metrics indicate how well engineering teams perform and how successful a company is at delivering software. Learn how to use SLOs to define, manage, monitor, and track service reliability and security posture via open source collectors, so you can set SLOs suited to your domain, your product, and your customers' use cases.
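For illustration, two of those metrics, deployment frequency and change failure rate, can be computed directly from deployment events; the records below are hypothetical stand-ins for real pipeline telemetry:

```python
from datetime import date

# Hypothetical deployment log; real data would come from pipeline events.
deployments = [
    {"day": date(2022, 11, 1), "caused_incident": False},
    {"day": date(2022, 11, 2), "caused_incident": True},
    {"day": date(2022, 11, 4), "caused_incident": False},
    {"day": date(2022, 11, 7), "caused_incident": False},
]

observed_days = (max(d["day"] for d in deployments) - min(d["day"] for d in deployments)).days + 1
deployment_frequency = len(deployments) / observed_days  # deploys per day
change_failure_rate = sum(d["caused_incident"] for d in deployments) / len(deployments)

print(f"Deployment frequency: {deployment_frequency:.2f} deploys/day")
print(f"Change failure rate:  {change_failure_rate:.0%}")
```

The remaining metrics (lead time for changes and time to restore service) follow the same pattern: collect timestamped events from your pipelines and incident tooling, then aggregate.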
Hear my colleague Michael Baldani talk about how to optimize your software delivery cycle using DORA metrics.
Building and running high-performing software requires a team culture that wants to succeed. From a software development and operational management perspective, it is vital to ensure engineers have autonomy when choosing their tools, because it comes down to them to get the kitchen orders out to the dining room.
What historically doesn't work is top-down decision-making that restricts developers and prescribes which tools they should use. A mobile developer might pick up a tool like fastlane to automatically build and deploy their service, while a back-end developer may use completely different tools. It is a careful balancing act, but you must allow your teams to use the tools they need while ensuring they comply with best practices that add value.
Companies need a workforce that can absorb and unleash the value of more productive ways of working. Still, not all companies are equally ready to internalize rapid technological change, so lead your team by improving your proficiency. This requires training and certification. Make sure whichever tool vendors you work with offer comprehensive training and certification at a reasonable cost, so you can lead from the front when you 'run the pass.'
Regularly repeat best practice one and establish the foundation of becoming your own success story by building the three value conditions (utility, warranty, value for money) into your reporting processes. Perform regular checks for better control of your plans and goals. Remember, it is not a one-off exercise! If you are not maximizing value, you must take corrective action.
When teams collect the same logs in various monitoring and security tools for different use cases, they pay twice for ingesting the same log data. Data tiering, combined with flexible credit-based licensing, lets you economically analyze the rapid growth of your machine data and maximize its value.
As discussed, IT tool consolidation is a strategy, not a project. All businesses - large or small - are software businesses. Digital transformation is an unstoppable force. You must constantly vet whether IT products and services are up to snuff or whether you should use alternatives.
Moreover, in today’s harsh macroeconomic climate, all businesses need to adapt strategy. Goals, action plans, and budgets are changing. These changes will trickle down into the IT organization and have consequences for the use of IT tooling.
An effective IT consolidation strategy guides engineers to use solutions that create value by being useful, practical, and justified at acceptable costs. It shows what to do when value conditions break, how to get the value creation process back on track, and how to deliver a perfect-tasting meal every time.
Under pressure to speed up innovation, DevOps and security teams rush to adopt the latest technologies and tools. The unintended result? Tool and data sprawl increase, challenging everyone across the DevSecOps lifecycle to deliver reliable and secure applications while controlling costs.
Our multi-use SaaS analytics platform helps DevOps and security teams innovate together - even when resources are tight. Much like Marie Kondo, we'll give you a proven approach to plan, prepare, and execute your tool consolidation strategy, so that your entire "kitchen" of tools sparks joy.
Learn more about how Sumo Logic can help consolidate your tools and save you time and money.