Blackstone's Ken Allen Reacts to DG's Acquisition of Semantic Technology Firm, Peer39
04/24/2012
Yesterday TV and online ad platform DG announced the acquisition of semantic technology firm Peer39. AdExchanger asked Ken Allen, Director, Head of Digital Media and Internet Advisory with Blackstone Advisory Partners, LP, his reaction to the deal and the opportunity in digital ad targeting data. The article can be found here.

AdExchanger: What's your take on DG's acquisition of Peer39?
KEN ALLEN: It's an interesting move for DG. This deal is emblematic of a central theme we see in the sector around cross-channel, real-time data analytics and targeting. One needs many pieces to fulfill this vision -- analytics capabilities across multiple, disparate pools of data, real-time attribution and targeting, RTB across display, mobile, social, video, search, etc. Some of these pieces will be "owned" and others will be "rented". Arguably, no one has the full suite of capabilities yet, but many players are moving in that direction, assembling the various components through both partnerships and acquisition. This deal is a prime example of a trend towards building comprehensive, unified marketing platforms and I expect to see more like it.

What makes for a successful data company these days? Any attributes that define success?
Third-party data is becoming increasingly commoditized and there is less and less differentiation among industry participants that are simply functioning as data aggregators / distributors.
The most successful models have evolved beyond aggregation to real-time management of large, complex first+third party data sets. The natural evolution on the enterprise side is to layer on custom analytics and real-time media buying. There is a strong logic to having many of these pieces as part of a unified platform (e.g. a cross-channel CMO dashboard) and I believe that is where we are headed with many of the leading products. There will be an interesting place for aggregators/suppliers of unified first/third party data sets; however, undifferentiated third party data aggregators will need to evolve their offerings in my view.

How do you see strategy playing out for data companies in the future?
I think one can draw an interesting analogy in the data value chain with manufacturing. The raw material, data, in a variety of formats and from multiple channels, needs first to be collected, aggregated, and refined into a useable format. This raw material then must be delivered to a "plant" -- e.g., an analytics and decisioning engine that leverages the data to make marketing decisions. I see different firms taking distinct strategies -- one is to gain scale as either a (data) raw materials provider or as a (data) manufacturing facility, but not both. Think data exchanges / DMPs and the many buying platforms in the market. Another strategy is to vertically integrate; to own multiple pieces of the data supply chain, analytics, and campaign management. This is a powerful model but one that is also difficult from an operational perspective -- it is very difficult to maintain best-of-breed capabilities across all segments of the value chain, and there are potential conflicts with this approach as well. We will see successful examples of each strategy going forward, with certain key determinants of success, including some or all of the following: (1) access to proprietary cross-channel data sets, (2) sophisticated data management capabilities, (3) superior analytics and decisioning platforms, and/or (4) scale.

Online Video: The Media Industry’s New Frontier

The following interview was conducted with Ken Allen, a Director in Blackstone’s Advisory Practice, who leads Blackstone’s coverage of the Digital Media sector.


What are some of the macro trends you are seeing in the advertising industry today?


The advertising landscape is undergoing a dramatic transformation that began over a decade ago and continues today.  We have gone from a world in which advertising was broadcast to large, homogeneous audiences to one in which highly tailored messages are targeted to specific individuals through multiple digital channels, often simultaneously.


The chart below shows the broad transformation that has taken place in the U.S. market.  As the online marketing channel has become more prevalent, increasing from only 5% of the total market in 2000 to 16% today, print has experienced a correspondingly dramatic decline, falling from a 38% share in 2000 to 22% today.  Underpinning these share shifts is the rise of online advertising technologies, in particular Paid Search and Display advertising.


Importantly, as the previous data shows and contrary to the intuition of many, TV advertising’s share has actually increased in the past decade, implying that online has far from cannibalized TV as an advertising medium.  Television advertising’s resilience is primarily due to the fact that TV advertising, at least historically, has been fundamentally different from online advertising.  Online advertising is static in nature -- Paid Search ads appear in the sidebar of our search results; banner ads appear as we read articles and browse web pages.  The Internet as we know it, to a large degree, has been a substitute for print media, such as the Yellow Pages, newspapers and magazines.  But it has not been a replacement for TV.  The advertising share shift has followed accordingly, with online advertising substituting for print advertising.  This dynamic is starting to change, however, as Online Video technologies continue to proliferate and as Internet Video becomes an increasingly viable substitute for television.

Another major trend we are seeing is that advertising is becoming more and more data-driven, in turn enabling a much higher degree of targeting.  A tremendous amount of data about our tastes and preferences is being generated through each interaction we have with our digital world, including web browsing, using our mobile phones, and using social media platforms.  This data is increasingly being analyzed in new ways to target each of us with the ad that has the highest probability of achieving a response from us at a particular point in time.  That ad could be shown as part of our search results, in the banner ads we are all familiar with, in the sidebar ads we see in Facebook, and, increasingly, through Online Video.


Why do you believe that Online Video has such tremendous potential as an advertising medium?


Despite the significant expansion of the online advertising channel over the past 10 years, the majority of online advertising today continues to be related to searching for specific products or services.  In other words, advertising online is typically targeted based on our intent; we search for something online, and we see advertisements that are related to what we were looking for.  This intent-based advertising is very different from what we typically experience when we see television commercials.  TV advertising, by contrast, is generally more focused on the branding of a product or service.  Such advertising is meant to connect with viewers on an emotional level and help customers – both existing and targeted – form a connection with a brand.  By its very nature, television advertising is also less targeted and more focused on reaching a wide audience.


The promise of Online Video is that advertisers can have the best of both worlds: high-impact branding that can be micro-targeted to a specific audience segment.  Until only recently, however, bandwidth limitations prevented streaming video content and advertisements from reaching our desktops, mobile phones, or TVs.  With the massive investment that has been made in IP infrastructure over the past decade, bandwidth has finally reached the point where high-definition video is viewable through the various ‘endpoints’ we use to connect to the Internet.  This, in turn, has led to the relatively recent explosion of Online Video services, including Netflix, Hulu, YouTube, Apple TV, and Amazon.

Another key driver here is the effectiveness of Online Video advertising, which has been increasingly documented, whether you are looking at recall rates, viewing time, likability, etc.  This performance has led to CPMs that are on-par with, or greater than, what we see in the offline world with TV.  Due to these underlying trends, we believe branding dollars will shift away from television and to the Online Video channel over time, much in the same way that advertising dollars shifted away from print and into online channels over the past decade.


Can you quantify the potential opportunity in Online Video?


Most forecasts for Online Video advertising assume a roughly $2 billion market today growing at nearly 40% per year to over $7 billion by 2015.  I think this forecast, which is essentially derived by taking today’s market and growing it at a constant rate into the future, potentially understates the long-run market size by a significant amount.  The chart below shows the untapped potential we see in this market.  As the chart on the left shows, 82% of offline advertising dollars in the U.S. are directed at branding efforts (defined as TV, radio, print, and outdoor channels), versus only 18% at direct response channels (defined as direct mail and directories).  This ratio is roughly inverted in the online world, with only 39% of online dollars targeted at branding (Display and Video).  In other words, the ‘branding gap’ in the online world, based on offline ratios, is approximately $13 billion in the U.S. alone.  Video is particularly well-suited to target this gap, in our view.  When combined with the $2 billion being spent on Online Video advertising today, this implies a $15 billion potentially addressable market.
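One way to see how these figures fit together is the short sketch below, which reconstructs the ‘branding gap’ arithmetic under stated assumptions: the 82%/39% branding shares and the roughly $2 billion Online Video figure come from the passage above, while the total online spend figure is an illustrative assumption chosen so the result lands near the cited ~$13 billion gap.

```python
# Illustrative reconstruction of the "branding gap" arithmetic described above.
# Only the 82% / 39% branding shares and the ~$2B Online Video figure come from
# the text; the total online spend figure is an assumption chosen for illustration.

offline_branding_share = 0.82    # share of offline ad dollars aimed at branding (from the text)
online_branding_share = 0.39     # share of online ad dollars aimed at branding (from the text)
online_ad_spend = 30e9           # assumed total U.S. online ad spend (illustrative)
online_video_spend = 2e9         # current Online Video ad spend (from the text)

# Branding dollars spent online today vs. what the offline ratio would imply
online_branding_today = online_branding_share * online_ad_spend
online_branding_at_offline_ratio = offline_branding_share * online_ad_spend

branding_gap = online_branding_at_offline_ratio - online_branding_today
addressable = branding_gap + online_video_spend

print(f"Implied branding gap: ~${branding_gap / 1e9:.0f}B")           # ~$13B with these assumptions
print(f"Potentially addressable market: ~${addressable / 1e9:.0f}B")  # ~$15B
```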


Additional support for this untapped market potential is provided by the chart on the right, which compares the number of Online Video impressions carrying advertisements to those without.  Today, only 12% of total videos online have ads.  Monetizing just 15% of the ‘unfilled’ inventory, at current average CPM rates, again implies a nearly $15 billion potential market by 2015.  Clearly, the current low ad penetration is partially due to the high proportion of user-generated content, for which we would not expect to see high ad penetration rates.  But the low penetration is also a function of the nascence of this market.  Over time, as the quality of content improves and monetization techniques such as video ad networks and ad insertion technologies proliferate, we would expect to see much higher ad penetration rates and monetization of this largely untapped opportunity.
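The ‘unfilled inventory’ scenario can be made concrete the same way. In the sketch below, the impression volume and average CPM are purely illustrative assumptions (the passage does not give them); only the 12% ad-penetration figure and the 15% monetization scenario come from the text.

```python
# Illustrative sketch of the "unfilled inventory" scenario described above.
# Impression volume and CPM are assumptions for illustration; 12% penetration
# and the 15% monetization scenario come from the text.

annual_video_impressions = 4.5e12   # assumed annual Online Video impressions (illustrative)
avg_cpm = 25.0                      # assumed average CPM in dollars (illustrative)
ad_penetration = 0.12               # share of videos that carry ads today (from the text)
newly_monetized_share = 0.15        # share of unfilled inventory monetized in the scenario (from the text)

unfilled = annual_video_impressions * (1 - ad_penetration)
incremental_revenue = unfilled * newly_monetized_share * (avg_cpm / 1000)

print(f"Implied incremental market: ~${incremental_revenue / 1e9:.0f}B per year")  # ~$15B with these assumptions
```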


What are some of the implications you see regarding the advertising landscape?  Who are the winners and who are the losers?


I believe that the Online Video industry has the potential to impact the TV industry in the same way that the Internet impacted the print industry.  We see many of the same precursors: dramatically improved bandwidth, new and disruptive technologies (YouTube, Hulu, etc.), the fragmentation of content, and new ways of accessing content that go ‘over-the-top’ of traditional, entrenched distribution channels.  Innovation is occurring at a dizzying pace right now and I believe it is only a matter of time before we see an over-the-top video platform become successfully integrated with the television.


Given these trends, there is a war going on right now among players in the ecosystem to try to lock up two of the most strategic elements of the video value chain: the end-point (TV, set-top-box, mobile handset, etc.) and the content that is distributed to that end-point.  Much of the partnership and acquisition activity we are observing among the major players in the ecosystem is aimed at maximizing share of content by solidifying rights to use that content.  Further, many of the distributors of Online Video are beginning to create proprietary content of their own.  Examples include Google, which is reportedly spending hundreds of millions of dollars to create original content on YouTube, and Netflix, which is doing the same through its streaming video service.  The end-point is a similarly prized battleground, where the various service providers are fighting to gain a large strategic footprint.


In addition to the larger players, we also believe a new crop of technology providers is showing tremendous potential: video ad networks, ad insertion players, and online video platforms (OVPs) are increasingly enabling the ecosystem with new and innovative technologies.


The losers will be incumbents who elect not to embrace these trends.  We have seen such a phenomenon before with other types of media:  first with print, then with books and music.  Video, I believe, will be next.

The interview below was posted on AdExchanger.com on Monday, November 29, 2010
AdExchanger.com: First, can you give a quick background on yourself, Blackstone, and your responsibilities today?


KA: Blackstone is one of the world’s leading advisory and investment firms. The firm was originally founded in 1985 as a boutique M&A advisory firm by Steve Schwarzman and Pete Peterson and advisory has remained core to the firm’s activities for over 25 years. Our firm has advised on some of the most important and complex transactions worldwide, including advising AIG and Ford in their restructuring during the recent financial crisis, Reuters in its sale to Thomson Corporation, Microsoft in its search negotiations and partnership with Yahoo!, and Xerox in its purchase of Affiliated Computer Services.


I have led the advisory group’s efforts in the Digital Media and Internet sectors since coming to Blackstone from Deutsche Bank in 2007. I have spent 10 years covering the media and technology sectors and have advised on over $10 billion in transaction value during that period. In addition to the Yahoo! search partnership, Blackstone has been extremely active in the digital space, having advised on numerous recent high-profile deals such as Publicis in its acquisition of Razorfish from Microsoft, Sapient in its acquisition of The Nitro Group, and BuyVIP in its sale to Amazon.com.


Our philosophy for 25+ years has been to provide thoughtful, objective, and realistic advice to companies and entrepreneurs throughout all stages of a company’s lifecycle, whether during the start-up phase or as a large corporation. Within the digital media sector and with respect to high-growth emerging businesses, in particular, we find there to be a serious lack of impartial, trusted advisory services. Many of the small boutiques are simply pushing product to buyers’ corporate development departments and do not have the depth or breadth of relationships with buyers, the long-term strategic perspective, the global reach, or the M&A expertise necessary to provide best-in-class service. Conversely, the larger bulge bracket firms are not able to deliver the same senior-level attention, the objectivity, or the depth of sector-knowledge that we do. Finally, our relationship with the private equity side of the business enables us to have a principal mindset and truly put ourselves in the shoes of the companies we advise with an eye toward creating long-term shareholder value.


What's your view of the digital ad technology landscape today? Are there too many companies?


The digital ad ecosystem is one of the most exciting and dynamic areas within the entire technology sector. At Blackstone, we spend a good deal of time speaking with and advising the large technology acquirers and investors regarding where they should place their chips next. Digital advertising and marketing is front and center in nearly every conversation we have, whether with the holding companies, interactive agencies, media players, or technology infrastructure providers.


A lot has been made about this market being overcrowded. In my view, the number of companies in the sector is a reflection of the massive opportunity for growth that exists. Additionally, one’s view on market fragmentation really needs to be formed on a sector-by-sector basis. Are there too many long-tail networks that are simply aggregating low-value inventory? Probably. However, one could easily argue that, in certain sub-sectors, such as mobile and video, there is a scarcity of companies and a need for new businesses to solve entirely new problems.
I also do not believe the debate over whether there are too many or too few companies in this ecosystem is particularly productive. As with any major market shift, and we are experiencing one here, new innovators will arise to meet market demand for new products and services. There will be winners and losers, but that is not necessarily a function of the number of players; rather, it depends on whether a company has an offering unique enough that it cannot be provided by others, and the skills and vision to bring that offering to market. Among the companies I work with, each has a unique angle on creating value for its customers that, I believe, will “win” regardless of how many players there are in the market today and whether or not the market is deemed by some to be overcrowded.


How do you expect this market to play out in the coming year? More M&A? Attrition?


I expect a robust M&A market within the digital advertising space over the next several years. Among the prospective buyers I speak with and advise, nearly all are closely examining this sector and figuring out what their next move will be. Most realize they have some of the elements of what they ultimately need, but are missing certain others. On the buy-side, there are a number of key strategic assets and capabilities that need to be combined, including: analytics, optimization, data, creative, business intelligence / insight, and services. These all need to be brought together in a multi-channel format, at scale, and in real-time. That is a really hard problem to solve, and no one has done it successfully yet, in my view. On the sell-side, publishers need to figure out how to maximize the value of their premium inventory and monetize their audiences in new ways. In the middle, the exchanges are commoditizing low-value inventory, ramping quickly, and expanding their analytic tool sets. And the networks need to evolve, as we all know, by rapidly gaining scale and/or optimizing targeting and media performance. All of these trends translate to change and opportunity, which historically has always been a great catalyst for M&A.


Any thoughts on macro-economic trends affecting digital media?


Regarding the macroeconomic environment, while we are out of the woods and, I believe, the risks of a double-dip are low, we are still not where we should be by historical standards at this point in a recovery. Much of the world today is taking somewhat of a wait-and-see approach to investments, whether in people (hiring), R&D, capital equipment, or M&A. Given the multiple factors I have noted that tend to drive M&A, it is rather remarkable that activity has not in fact been more robust. Technology and Media companies are sitting on all-time high levels of cash. Debt is cheap and abundant by historical standards. Valuations are reasonable. And yet, technology M&A spending globally in Q3 was down nearly 7% vs. Q3 2009, which was near the cycle low (or so we thought). In October of this year, the picture looked even worse, with y/y volume down 22%. This somewhat anemic rebound in M&A is due to the significant amount of uncertainty – around the political and regulatory climate, tax policy, and the pace of economic recovery – in the market today. Until this uncertainty is lifted, M&A will continue to be muted.


Among the companies I advise on the sell-side, I am often asked whether now is the right time to pursue a sale process. The answer, of course, is entirely situation- and company-dependent. However, I will say that, if you are trying to maximize value over the long term and you have no need for liquidity today, it is not necessarily the best time to pull the trigger. There is a lot of supply in the market right now, and demand for M&A is still modest. Among our clients, the most successful exits over the past year have resulted from in-bound buyer inquiries, which have in turn catalyzed a process. In certain situations, we are advising clients (if they have the luxury of doing so) to continue to build business momentum and wait until valuations improve and buyers are more open with their checkbooks before approaching the market proactively.


Much has been made about today's stock exchanges and how they relate to advertising exchanges. Do you think the financial market analogy is appropriate in media? Why or why not?


I agree that this is a rather easy and obvious comparison to make (they are both called exchanges, after all!). The question is not whether inventory will be bought and sold in an exchange format (I believe a majority of it will, someday), but when and how. With all of the hype around the migration to exchange-based buying, some have extended the analogy and assumed that in the next few years the vast majority of inventory will be transacted directly by brands through trading desks linked to a small number of exchanges (i.e., that there will be virtually no difference in the way stocks and ads are traded). I think this oversimplifies the complexity of how digital ad inventory will be bought and sold in the future; I also believe adoption will be slower than some suggest due to a number of important differences between the financial and ad exchanges, including a lack of (high-quality) inventory, regulation, unit standardization, information asymmetry, governance, transparency, and adoption by the buy-side. I will go into each of these differences and their implications in a separate post, but the headline is that I believe such differences will limit full-scale adoption, particularly for the highest-value inventory. The exchanges are going to make a significant impact, for certain, but I do think this impact will happen a bit more slowly than many predict.


What are your predictions for investment in the sector going forward, including how the VC/Angel landscape will evolve, if at all?


Venture Capital, as an asset class, has not performed well over the past decade and we continue to have too much money chasing too few deals. We haven’t seen a major shakeout (yet) as many funds have 10+ year lives and as such, the wind-down cycle will take time. I do, however, think there will be plenty of examples of firms that are not able to raise their next fund (or need to significantly downsize) in part due to bad bets they have made in this sector.
Some have argued that there has been too much capital poured into this space on the part of the VC/Angel community. While that may be true with respect to the majority of companies that won’t have successful exits, let’s not forget the significant aggregate value that has been created in the sector as a direct result of the investments made. There have been some tremendous exits; by my math, since 2006 there has been over $18 billion in value realized across 20+ “scale” exits such as AdMob, Quattro, Omniture, Interwoven, BlueLithium, Right Media, aQuantive, and DoubleClick. Most VCs I know would be pretty happy with the >7x return (based on $2.5 billion invested) that this market has generated. Of course, the returns have been generated across only a small number of firms. But in aggregate, the sector has created, and will create, significant value relative to the capital invested.
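As a quick sanity check on the return math cited above (both inputs come directly from the text):

```python
# Back-of-the-envelope check of the aggregate return figure cited above.
value_realized = 18e9      # aggregate value realized across 20+ "scale" exits since 2006 (from the text)
capital_invested = 2.5e9   # aggregate capital invested (from the text)

multiple = value_realized / capital_invested
print(f"Aggregate return multiple: {multiple:.1f}x")   # ~7.2x, consistent with the ">7x" cited
```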


I think there continue to be very attractive opportunities for investors in this sector. The overall global advertising market stands at over $200 billion in aggregate value, versus only ~$45 billion allocated to digital. Not only will core online segments, such as search and display, continue to show robust growth in the future, but over time an increasing percentage of media spend will become addressable (TV among the most notable examples) and therefore present new opportunities. Buyers, for their part, will become increasingly comfortable with the online channel, further driving digital marketing spend.


Among the VCs I spend time with, key investment themes tend to be around mobile, video, and multi-channel (particularly online/offline) analytics. In the exchange/DSP space, I am seeing a lot of interest in platforms that not only facilitate RTB transactions, but also technologies that optimize RTB campaigns by leveraging both third-party and primary CRM data. Given the abundance of data, new tools are needed to add an “intelligence layer” and to optimize audience buys based on that intelligence. And as I have mentioned, there is a void that needs to be filled by a new breed of technology-enabled marketing services company that can act as a trusted advisor in media planning and campaign management, and that has the proprietary tools to optimize campaign performance across channels. I believe that these themes, as well as others we have not yet thought of, will continue to create significant opportunities both for investors and strategic acquirers for some time.



Exit Alternatives

I recently represented Blackstone on a panel discussion at the most recent OnDemand conference in Palo Alto. The discussion, titled “Is Cloud Computing the Next Bubble?”, debated whether the massive amount of VC investment currently pouring into the sector to fund companies building the next generation of IT infrastructure foreshadows a future bubble, similar to others we have recently experienced, such as those preceding the Internet and credit collapses.
In reflecting on the question, both during and after the discussion, I believe it is possible that early signs of a bubble could be developing, but that it is not likely. Rather, I believe we are in the very early innings of a trend that will shape the IT industry for the next 20 years, and that there is little evidence to suggest that a bubble is here, or forming, today. In reaching a conclusion on this topic, there are, in my view, three fundamental questions that need to be addressed first:
Has the market hit an inflection point yet?
Are assets overvalued?
Has the buyer / investor universe “capitulated”?
I believe that the answer to these three questions is, decidedly, no.
Has the market hit an inflection point yet?
With respect to whether the market has yet hit an inflection point, I find that hard to believe. Admittedly, it is true that we are currently at the top of Gartner’s famed “hype cycle”, a designation typically achieved only once an IT term has become a buzzword, with market hype in excess of market reality. The term “Cloud Computing” certainly falls into this category. It is also true that nearly every company in the infrastructure software market, at least, is now a “cloud” company, regardless of whether its business model has the first thing to do with Cloud Computing. These ominous signs might suggest that we are nearing a bubble.
Make no mistake, however: the cloud is real, and it will happen. It is simply a question of when. Unlike previous investment “bubbles”, such as RFID, grid computing, and Open Source software, which were embodied by interesting technologies or business models in search of a problem, the trends driving cloud computing – ease-of-use, lower cost, efficiency, collaboration – have been central tenets of the IT industry since the industry’s first days. In other words, cloud computing is a solution to entrenched, difficult-to-solve problems, not simply a clever solution looking for a market.
Viewed through this prism, cloud computing is just beginning to get its legs. We are in the very early innings of a very important trend. The Internet of 1994, not 2001, if you will; the O/S wars of 1983, not 1995. These cycles take time to develop and play themselves out; even if the hype is ahead of the reality today, the reality will come, and until that inflection point is reached, it is a bit early to call a market top.
Are assets overvalued?
To answer this question, we need to look at observable prices paid in the market for Cloud Computing assets, both in terms of publicly-traded multiples and prices paid in precedent transactions. With respect to the former, public multiples, I don’t believe assets in this sector are grossly over- or under-valued in the market today. While it is true that many of the “cloud” companies, whether you consider a Salesforce.com or a VMware, sell at premiums to the market averages, this is predominantly a function of, and justified by, one thing: growth. Publicly-traded software companies trade on their future growth prospects, with a correlation of over 80%. In the case of the aforementioned companies, the reason they trade at such high multiples is their long-term growth prospects, which are well above 25% on an annual basis. Investors are willing to pay higher multiples of cash flow because that cash flow is growing so rapidly, and as such the present value of those cash flows is higher.
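The link between growth and multiples can be illustrated with a simple present-value sketch. The discount rate, horizon, and growth rates below are assumptions chosen purely for illustration; the point is only that, holding everything else constant, faster-growing cash flows are worth a higher multiple of today’s cash flow.

```python
# Illustrative sketch: why higher growth supports a higher multiple of current cash flow.
# All inputs are assumptions for illustration only.

def value_multiple(growth, discount_rate=0.10, years=10, terminal_growth=0.03):
    """Present value of future cash flows, expressed as a multiple of today's cash flow."""
    cf = 1.0  # normalize current cash flow to 1
    pv = 0.0
    for year in range(1, years + 1):
        cf *= (1 + growth)
        pv += cf / (1 + discount_rate) ** year
    # Terminal value (Gordon growth), discounted back to today
    terminal = cf * (1 + terminal_growth) / (discount_rate - terminal_growth)
    pv += terminal / (1 + discount_rate) ** years
    return pv

for g in (0.05, 0.15, 0.25):
    print(f"{g:.0%} annual growth -> ~{value_multiple(g):.0f}x current cash flow")
```

With these assumptions, the 25%-growth company commands a multiple several times that of the 5%-growth company, which is the dynamic described above.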
Considering multiples paid in precedent transactions, it also does not appear that we are in a bubble. First off, there simply have not been many pure-play “cloud computing” transactions we can observe. One could argue recent deals such as VMware / SpringSource or Salesforce.com / Jigsaw fall within this category. While it is true that these deals fetched healthy multiples (8x and 14x trailing sales, respectively), it is also true that these multiples may well be justified from a corporate finance point of view. In the case of VMware / SpringSource, VMware was willing to pay a strategic multiple to acquire a strategic piece of the cloud computing stack it did not own: a leading development platform. In so doing, VMware will be able to begin controlling and managing cloud-based applications at the point they are created, something, arguably, no one other than Microsoft can do in the market today. In its quest to build a next-generation “Cloud Operating System”, VMware was willing to make a big bet that Infrastructure-as-a-Service and Platform-as-a-Service will, in the long run, be inextricably linked.
Has the buyer / investor universe “capitulated” yet?
In my view, we are far from the “irrational exuberance” that has characterized previous bubbles, with respect not only to the “frenzy” around strategic assets but also to the prices paid for those assets. In prior bubbles, investors bid up assets and bid down risk premiums to levels unjustified by underlying fundamentals. From the public equity perspective, investors during these times are willing to own assets at nearly any price, not due to rational analysis based on business fundamentals, but simply because they believe the price of assets will continue to move higher. From the M&A standpoint, these periods frequently witness a frenzy around scarce assets and a perception on the part of buyers that they “need to own” certain businesses at any price, either because they are playing catch-up in a market or to block a competitor from gaining too big a lead. Despite the relatively high valuations of the deals mentioned above, I would argue we are far from witnessing the feeding frenzy and associated sky-high valuations that are characteristic of other bubbles.
The moderator of the OnDemand panel began by referencing the Google / YouTube deal from 2006, in which Google paid roughly $1.65 billion for a company with essentially no revenue, citing it as an example of the pending “bubble” in online video and asking whether we have seen such a deal yet in the Cloud Computing market. Given the three criteria I have outlined, I actually think it is debatable whether Google’s purchase was indeed evidence of a bubble. In the case of that transaction, Google paid a strategic premium to own a leader in the space, and may have gotten a bargain given the potential synergies to be realized from the deal. The acquisition was also de-risked by the fact that the price paid represented only approximately 1% of Google’s market capitalization at the time. Time will tell whether the purchase was in fact justified from a corporate valuation standpoint, but it is highly possible, and perhaps likely, that this purchase was, in fact, a “rational” one.
We will not know for a number of years whether we are at the front edge of a Cloud Computing bubble. Perhaps we will look back someday and view VMware / SpringSource as the catalyst that led to future irrational exuberance in this market, followed by a precipitous decline in asset values. But I suspect not. My view, rather, is that we are not there yet, and may not be for many years to come.

The Cloud: Hype or Reality (or a Bit of Both)

Cloud Computing represents a fundamental change in the way that compute resources are managed, utilized, consumed, and delivered. The common view among industry followers today is that Cloud Computing has the potential, as a paradigm, to be as disruptive as other major technological shifts over the past century. Indeed, the current Cloud "foundation" being constructed is, in many ways, analogous to the construction of the public electric grid or telecommunications networks, as noted by Nicholas Carr in The Big Switch. In the Cloud Computing paradigm, compute power is transferred from the edge to the core, challenging the very fabric of the distributed computing architectures that have evolved over the past 25 years. Doubtless, this powerful trend will have major implications for how we work and live for many years to come.


As with many disruptive trends in technology, however, hype follows; and the Cloud is no exception. The hype, in many ways, hearkens back to the turn of the millennium, when we were led to believe that the "new economy" would re-write the fundamental laws of economics; when companies were valued based on a multiple of eyeballs, impressions served, or long-term forecasts of profits they might hope to receive someday. Whether reflected in the widespread misuse of Cloud terminology, misconceptions about where we are on the adoption curve, the fact that nearly every company today claims to be a play on Cloud Computing, or the increasingly common view that Cloud Computing will, overnight, drive computing costs to zero and computing efficiency to infinity, the hype of Cloud Computing has in many ways taken on an over-inflated life of its own.


I do not mean to discount the ability of the Cloud to transform our lives; quite the contrary, I am a big believer in its potential and have focused a great deal of time studying cloud architectures and ecosystems. In this blog, however, I will attempt to offer a balanced perspective on the Cloud based on my research and discussions with an exciting new crop of start-ups I see emerging. In short, I will attempt to offer the perspective of a realist, albeit a hopeful one. In the process, I aim to offer some insight into what I believe to be a truly disruptive phase within the infrastructure computing industry.

Whose Fault is it?

One of the intriguing debates I continually hear surrounds the following question: "When will the Cloud be ready for prime-time?" In truth, and despite all the hype surrounding Cloud computing, the Cloud has so far been largely confined to test environments, with few tangible examples of live, mission-critical deployments. Of course, this is changing rapidly, and I believe we are quickly moving along a roadmap in which we will see the adoption of virtualization and the application of virtual infrastructure to mission-critical "private cloud" deployments.


However, there are very real roadblocks to this vision becoming a reality. Chief among them is making sure one's systems are running reliably and consistently -- always. The requirement of "5-nines" uptime (99.999% availability) must apply not only to physical environments, but also to virtual ones. In the physical world we have dealt with system uptime through a number of means, including such technology innovations as clustering and fault-tolerant hardware. With the uptake of virtualization and the strong demand for running live workloads in VMs, we are beginning to see such innovations migrate into software as well.
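To put the "5-nines" requirement in concrete terms, the short sketch below converts availability targets into the downtime they permit; at 99.999%, the budget is only about five minutes per year.

```python
# Downtime allowed per year at various availability ("nines") levels.
MINUTES_PER_YEAR = 365 * 24 * 60

for label, availability in [("three nines", 0.999), ("four nines", 0.9999), ("five nines", 0.99999)]:
    downtime_minutes = (1 - availability) * MINUTES_PER_YEAR
    print(f"{label} ({availability:.3%}): ~{downtime_minutes:.1f} minutes of downtime per year")
```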


The latest releases of the major virtualization platforms indicate how fault tolerance is drawing increased focus. VMware, which has offered a High Availability ("HA") feature for some time, has also embedded a Fault Tolerance feature in vSphere, the latest incarnation of the company's "Datacenter Operating System". Marathon Technologies, a Boston-based start-up, has also been an early innovator in software-based Fault Tolerance with its everRun VM product. Marathon has formed strategic alliances with both Microsoft and Citrix and provides the underlying technology of the FT solutions for both virtualization platforms.


I expect to see these solutions and others increase in prevalence as cloud computing continues to evolve and increasingly supports mission-critical workloads, particularly as we move beyond private deployments into public cloud infrastructures, where SLAs supporting near 100% availability will be increasingly commonplace (and critical).

Ceci n'est Pas un Nuage (This is not a Cloud)

There seems to be widespread acceptance within the industry that the Cloud, to borrow a phrase from famed author Michael Lewis, is the "New New Thing" in Enterprise Computing. I generally subscribe to this view, although I think the term "Thing" in Lewis' phrase is particularly apt in this instance. The irony, as I see it, is that we have not yet truly defined what the Cloud means, and, as such, Cloud Computing continues to be one of the most misused terms in IT today.


I believe we are still very much in a phase where basic definitions are being written. I take as evidence a recent survey by one of the major research houses that asked nearly 2,000 companies which public cloud computing providers they currently use. The number one company that respondents cited was IBM. IBM, incidentally, did not have a public cloud computing platform at the time of the survey (putting aside Blue Cloud, which was and still is predominantly used as a test bed and managed hosting infrastructure for IBM's enterprise clients). Nor did Microsoft, which was similarly cited in the top 5. Azure, Microsoft's Cloud Computing platform, had not yet been released at the time of the survey, so I question how one could have "used" it. The answer is that one couldn't.


This same survey asked companies how they are currently using public cloud platforms. The number one response to this question was Internet Application Hosting. The second was Databases, followed by Disaster Recovery and Remote Storage. I would argue that, with the possible exception of application hosting, none of these are truly leveraging public cloud infrastructure.


Contrary to popular belief, the Cloud is not the Internet, nor is it SaaS. It is not Utility or Grid Computing. And it is most certainly not Managed Hosting or Co-location. It is much broader than these concepts and far more powerful. Cloud Computing in its purest form encapsulates the notion that compute power, the network, and storage are abstracted away from the underlying hardware and delivered as-a-service, on-demand. Simply hosting an internal e-mail server or a database, whether on-premise or in a hosted datacenter, is not Cloud Computing, but rather simply an application hosting and delivery framework. Remote storage is simply that; granted, it may leverage a distributed computing architecture, just not necessarily a Cloud Computing one.


In my view, in order for a computing architecture to be a "Cloud", it needs to have certain characteristics, outlined below (a brief illustrative sketch follows the list):
  • It must be virtualized. The rise of Cloud Computing did not coincide with the rapid adoption of virtualization technology by accident. Quite the contrary, virtualization enables and catalyzes the adoption of cloud computing. Only by abstracting the hardware from the software can compute, storage and networking resources be pooled and delivered as a service. Virtualization enables this to happen and, as such, is a precursor to true Cloud architectures and services.
  • It must be automated. A clear advantage of Cloud computing is that it facilitates the pooling of hardware resources and execution of workloads on-demand. One does not need to know the details of how a workload is executed, merely that it is. Under the hood, workloads, executed in Virtual Machines, may be executed on a single server or many, may be running on one SAN or multiple arrays, all the while leveraging a networked infrastructure to optimize computing tasks. A key element here is the notion of workload automation and optimization; in essence, that the underlying infrastructure is tuned and "intelligent" enough to self-correct and automate computing functions, such as VM migration, network optimization, provisioning, configuration, and lifecycle management, without manual intervention.
  • It must be delivered as a service. Central to the Cloud Computing model is the notion of services-orientation. To use the analogy of the electric grid, we do not necessarily need to know how the computing power is generated or even where it comes from, only that we are able to "plug in" to the network and begin using Cloud-enabled applications, platforms, and infrastructure. Part of the thesis here, once again, is that Cloud services are delivered over an IP network, analogous to the electric grid. That network could be inside the firewall, as in the case of Private clouds, or outside of it, in the case of Public ones. Regardless of the deployment model, Cloud-enabled workloads must be executed on-demand in a services framework.
  • It must leverage centralized computing resources. Central to the Cloud Computing model is the idea that the execution of workloads happens centrally, within the datacenter. This is a dramatic departure from the client-server model, whereby workloads are executed locally, on either the desktop or in a branch office. By centralizing workload execution, in theory, you should be able to realize higher utilization rates and manage execution more efficiently. This does not mean that all execution must happen in a centrally-located datacenter. To the contrary, workload execution often happens in a distributed manner, much of it at the edge of the network (e.g., through platforms such as Akamai's EdgePlatform). Nonetheless, centralized computing resources, whether at the edge or the core of the network, must be leveraged for a true Cloud Computing architecture to be in effect.
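To make the four characteristics above concrete, here is a deliberately simplified, hypothetical sketch of what a consumer of a cloud service sees: capacity requested through a service call, with the provider (not the consumer) deciding where the virtualized workload runs within its pooled resources. All class and method names are invented for illustration and do not correspond to any real provider's API.

```python
# Hypothetical, deliberately simplified illustration of the four characteristics above.
# None of these names correspond to a real cloud provider's API.
from dataclasses import dataclass
import random

@dataclass
class WorkloadRequest:
    cpus: int
    memory_gb: int
    image: str          # virtualized: the consumer specifies a VM image, never a physical server

class IllustrativeCloud:
    """A toy 'provider': centralized, pooled resources behind a service interface."""

    def __init__(self, hosts):
        self.hosts = hosts  # the centralized pool of physical hosts, invisible to the consumer

    def run(self, request: WorkloadRequest) -> str:
        # Automated placement: the provider, not the consumer, decides where the VM lands.
        host = random.choice(self.hosts)
        return (f"VM for '{request.image}' placed on {host} "
                f"({request.cpus} vCPU / {request.memory_gb} GB)")

# Delivered as a service, on-demand: the consumer simply asks for capacity and gets it back.
cloud = IllustrativeCloud(hosts=["host-a", "host-b", "host-c"])
print(cloud.run(WorkloadRequest(cpus=4, memory_gb=16, image="web-tier")))
```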
In misusing the term Cloud Computing and misunderstanding its implications, companies run the risk of missing the many significant opportunities the trend presents. I have had a number of conversations with companies recently during which they describe Cloud "roadmaps" encapsulating their vision of Cloud Computing and how their current or prospective products play into it. While sensible from a corporate planning and strategy perspective, more often than not these roadmaps are simply last year's strategic plan with the latest buzzword attached. Five years ago that buzzword may have been SOA or Grid Computing; this year it's the Cloud. The roadmap hasn't changed, only the marketing.


This is dangerous in my view. The issue for companies is that the Cloud truly does fundamentally change computing architectures, and it opens new opportunities both to leverage and to create new Cloud services. While it may be convenient for a company to claim that its products are a play on Cloud Computing simply because they touch a network or help enable a datacenter, it is simply not that easy. This is different from previous infrastructure computing shifts: the transitions to x86, blade servers, grid and utility computing, and the like were largely about packing more compute power into a smaller footprint. Cloud computing, beyond the obvious gains in utilization and improvement in costs, is more about improved management, automation, and optimization, as well as offering a framework for networked, collaborative, Internet-based applications that are far more powerful than those offered in a static client-server world. Companies need to understand how the Cloud will impact fundamental computing architectures before they can truly change their underlying business models to take advantage of this powerful trend.