Every decade or so a new computing platform takes the industry by storm. It brings with it “shiny new physics”: power and economics so seductive that enterprises are compelled to consider a leap onto this alluring new S-curve. But the new platform, while powerful, is still quite raw. It lacks many critical capabilities that enterprises have come to rely on, and will demand, in order to truly capitalize on its promise. Like the fabled Emperor, this new would-be ruler is naked and in dire need of clothes! Some customers will make the leap anyway, and fend for themselves with home-grown tooling and operational workarounds. But most will watch and wait for the missing functionality to be developed for the new platform.
The Caesar of modern computing platforms is the IBM mainframe. This one had it all: systems management, application management, monitoring, workload management, visibility, performance optimization, security, resource management, even virtualization. Over time, successor platforms would be born of the inexorable march of semiconductor geometries, storage densities, network bandwidth and systems architecture. Mini-computers, Client-Server, Broadband Web, Virtual Infrastructure, Distributed Computing Fabrics… And with each turn of the wheel, the new platform would be held up against its predecessor and judged rather harshly for its lack of management, visibility and optimization capabilities.
While this judgment posed a stumbling block, it also highlighted an opportunity for new companies that could supply the missing capabilities. These firms did not create the new platform, but they made it consumable for the enterprise, which in commercial terms is every bit as significant.
We see this cycle playing out today around Distributed Computing Fabrics—and specifically around Hadoop. Enterprises of every stripe are showing tremendous interest in this new platform. Quite a few have already taken the plunge, and even more are experimenting while waiting for Hadoop and their own operational skills to mature.
This is where Pepperdata comes in. It allows all kinds of companies, be they mainstream Fortune 50 players or avant-garde web-scale giants, to get more out of Hadoop and rely on it for more of their critical computing needs. Pepperdata has already garnered an impressive list of customers, including both early Hadoop adopters and mainstream enterprises. These customers have turned to Pepperdata for capabilities such as performance optimization, workload management and visibility: a combination that enables scalability, resource efficiency and multi-tenancy (running multiple jobs of differing types and priorities side by side) in their Hadoop infrastructures. Pepperdata’s highly granular monitoring and real-time enforcement allow them to elevate Hadoop to the status of trusted workhorse in their production environments. Which is, of course, exactly where the new platform itself aspires to be!
In past platform cycles, this kind of value has led to the creation of enduring, strategically important and extremely valuable companies. Historical examples include CA, BMC and IBM / Tivoli. More modern archetypes range from AppDynamics, New Relic and Wily (acquired by CA) to Splunk and ServiceNow. There are important differences in the functionality offered by all these companies and in their respective places in the IT value chain. But what they have in common is their primacy in expanding the consumability of a new platform. This is a necessary and highly strategic component in how markets are created. We see Pepperdata providing that component in the Hadoop market today—and in the broader world of next-generation Distributed Computing Fabrics tomorrow.
We look forward to working with Pepperdata’s founders, Sean Suchter and Chad Carson, and the rest of their phenomenal team. This is a group with incredible depth in both the development and operation of Distributed Computing Fabrics, dating back to their seminal work at Inktomi in the 1990s. Emperor Hadoop will soon be richly clothed!
--Peter Wagner