Over the past several years, systems architecture has evolved from monolithic approaches to applications and platforms that leverage containers, schedulers, serverless functions, and more across heterogeneous infrastructures. Cloudera Data Platform (CDP) is no different: it’s a hybrid data platform that meets organizations’ need to get to grips with complex data anywhere and turn it into actionable insight quickly and easily.
In the old world, questions about data quality or system performance were answered by monitoring a few logs and metrics. In a distributed landscape like a hybrid data platform, it’s not that straightforward: there are many logs and metrics, and they are scattered all over the place.
Monitoring alone will tell you when something’s not as it should be, but it won’t answer the question of “why?” That’s where observability comes in.
Pointing to “something” that could be an issue in the previous paragraph was intentional. Different user roles ask different “why?” questions as they use CDP. While a business analyst may wonder why the values in their customer satisfaction dashboard have not changed since yesterday, a DBA may want to know why one of today’s queries took so long, and a system administrator needs to find out why data storage is skewed to a few nodes in the cluster. Different types of observability for different aspects of CDP provide them with the answers: data, workload, and software observability, as part and parcel of the platform.
For a platform so concerned with data and the insight it brings, knowing whether the star player—data—is up to scratch is crucial. As Barr Moses outlined in her original article, data downtime is directly related to data systems complexity and immediately impacts insight and decision making. Luke Roquet recently drilled into the topic of data observability with Mark Ramsey of Ramsey International (RI) to also cover the five pillars (freshness, distribution, volume, schema, and lineage) that describe the quality and reliability of data.
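To make two of those pillars concrete, here is a minimal, hypothetical sketch of what freshness and volume checks might look like against a table’s catalog metadata. The table name, field names, and thresholds are illustrative assumptions, not CDP’s or SDX’s actual API:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical metadata snapshot for one table, as a data catalog might expose it.
table_meta = {
    "name": "customer_satisfaction",
    "last_updated": datetime.now(timezone.utc) - timedelta(hours=30),
    "row_count": 1_250_000,
    "expected_daily_rows": (1_000_000, 2_000_000),
}

def check_freshness(meta, max_age_hours=24):
    """Freshness pillar: has the table been updated recently enough?"""
    age = datetime.now(timezone.utc) - meta["last_updated"]
    return age <= timedelta(hours=max_age_hours)

def check_volume(meta):
    """Volume pillar: is the row count within its expected daily range?"""
    lo, hi = meta["expected_daily_rows"]
    return lo <= meta["row_count"] <= hi

print(check_freshness(table_meta))  # 30 hours old against a 24-hour SLA -> False
print(check_volume(table_meta))     # row count is inside the expected range -> True
```

A stale table (the freshness check failing) is exactly the kind of signal that would answer the business analyst’s “why hasn’t my dashboard changed since yesterday?” question above.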
These pillars and the metrics they provide are closely linked to the data governance capability CDP’s Shared Data Experience (SDX) delivers, and are surfaced in the data catalog. SDX continually captures and manages both the active and passive metadata for data assets and the processes that work on them. And, crucial for a hybrid data platform, it does so across hybrid cloud. With CDP, and SDX in particular, Barr’s concern that data governance is hard to achieve is directly addressed. Especially when implemented as a unified data fabric, CDP ensures proactive data governance and, with that, the basis for good data observability, reduced data downtime, and trusted data for better decision making.
CDP’s key role for organizations is to turn data into insight and value at scale. To do so, the platform provides a range of analytics across the complete data life cycle. Data services and workloads cover ingesting data, enriching it, making it available for analysis in (operational) dashboards, or using it to build AI and machine learning models. Each of these analytics can be deployed to different infrastructures and may, on occasion, behave differently than expected. Although data downtime may be one cause of missed SLAs and SLOs, the implementation itself should be equally observed.
Observability always works from the same basis of metrics, traces, and logs, and workload observability is no exception. Just as with data observability, workload metrics and health tests help identify and troubleshoot both actual and potential issues, while prescriptive guidance and recommendations address and optimize uncovered problems. Specifically for performance, the main workload criterion, baselines and historical analysis not only identify and address performance problems but also create the basis for cost prediction and reduction (an area of growing importance as financial governance matures). Within CDP, Workload Manager provides workload observability to ensure optimal performance, reduced downtime, and improved resource utilization.
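As a simple illustration of baselining (a generic sketch, not how Workload Manager is implemented), a recurring query’s historical runtimes can define a statistical baseline, and new runs that deviate too far from it get flagged. The runtime values and the three-sigma threshold are illustrative assumptions:

```python
import statistics

# Hypothetical history of a recurring query's runtimes, in seconds.
history = [42.0, 38.5, 41.2, 40.1, 39.8, 43.0, 37.9]

def is_anomalous(runtime, history, n_sigma=3.0):
    """Flag a runtime that deviates more than n_sigma from the historical baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(runtime - mean) > n_sigma * stdev

print(is_anomalous(120.0, history))  # roughly 3x the baseline -> True
print(is_anomalous(41.0, history))   # within normal variation -> False
```

The same baseline answers the DBA’s question above about why today’s query took so long: it didn’t just feel slow, it measurably broke from its own history.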
And all of this data and these workloads are deployed somewhere: on infrastructures ranging from bare-metal data centers to public and private clouds, and across hybrid cloud. Each has its own stacked layers of enabling technologies, from operating systems to containers to resources. Historically, this is where observability made its initial entry into the IT world.
For Cloudera as an organization too, software observability has been applied extensively in the area of support. Building on over 14 years of experience, Cloudera’s support organization draws software observability insights from over 1.3 million nodes under subscription and has created sophisticated diagnostics tools that include predictive alerting based on diagnostic data. This allows Cloudera’s customers to receive advance warning of hundreds of different known issues and security vulnerabilities, helping them avoid downtime, improve reliability, and reduce risk.
Observability will continue to evolve and has proven to deliver tremendous benefits. Baked right into the platform, CDP already provides the observability tools and insights for the full stack, all the way from the infrastructure to the end user. SDX’s data catalog provides data observability that highlights trusted data for better decision making across the business and helps reduce data downtime. Workload Manager adds workload observability for optimized processes and resource utilization.
As observability evolves, so will CDP. Cloudera is already hard at work bottling the software observability the support organization uses, to bring its benefits and insights closer to our customers. And being the open platform it is, we’re also looking at sharing CDP’s observability data with other tools, and vice versa.
Observability is an exciting area that answers the questions that crop up as organizations deploy increasingly complex hybrid cloud environments. Get in touch now to learn more about CDP’s current and future observability capabilities.