Thursday, October 28, 2010

Cloud Computing - Part 3 - Life Cycles and Transitions

http://cloudcomputingexpo.com/
I will be attending the 7th International Cloud Computing Expo in Santa Clara, CA next week.
My blog will have some posts reflecting my experiences at the convention.

This is part three of a pre-conference series of posts that serves as a basic introduction to the benefits, challenges, and risks associated with Cloud Computing technologies and business strategies.

I have been a lead user and user-innovator of computer technology for about 20 years. My MIT SDM education has provided me with powerful frameworks by which I can better study the strengths and weaknesses of technologies and their interaction in the greater business landscape.

A core part of the SDM program involved the study of technology development/deployment and business strategy. The next four foundation-level posts are directly based upon a series of four papers that I prepared as part of course 15.965 - Technology Strategy for SDM, during Spring 2009 [1]. These papers focused on a study of the different technological and business opportunities associated with Cloud Computing.

Please note that these blog posts aim to bring the discussion to a level that is more understandable to the general technology-savvy public. A thorough examination of the technology and strategy associated with Cloud Computing technologies would easily qualify for a doctoral thesis.

A Brief Introduction to Technology Evolution of the Cloud Computing "Concept"
The research into the technological concepts behind Cloud Computing can be traced back to the development of the first computer networking infrastructure, the Advanced Research Projects Agency Network (ARPAnet), beginning in 1962. Charles Herzfeld, ARPA Director from 1965 to 1967, described the project's intentions:
“The ARPAnet was not started to create a Command and Control System... Rather, the ARPAnet came out of our frustration that there were only a limited number of large, powerful research computers in the country, and that many research investigators who should have access to them were geographically separated from them.” [3]
[Figures: ARPAnet geographic maps from 1969, 1971, and 1980]
Research, development, and diffusion of prerequisite technologies to Cloud Computing followed: Ethernet standards in the 1970s, TCP/IP protocols in the 1980s, and the advent of widespread high-speed internet access during the 1990s and early 2000s.

Application of the Evolutionary Life-Cycle Framework to Cloud Computing Development

Fernando Suarez, Associate Professor of Management at Boston University, has identified five key phases in the evolutionary life-cycle of a technology [2], as shown in the figure below. Each phase is bounded by seminal events in the technology's development and adoption in a market. While Professor Suarez has standardized these events across technologies, their timing and circumstances are unique to the development and diffusion of each particular technology.
The Key Phases in Technological Evolution [2]
Using Professor Suarez's framework, a brief description of the phases, their seminal events, and how they relate to Cloud Computing follows:

Phase 1 – Research and Development:
This phase, initiated at the very beginning of technological development (To), includes the initial research into a technology before a working prototype (TP) is developed. While computer clustering has been used with supercomputers for many years, To for the current incarnation of cloud computing can be traced back to the early 1990s when Ian Foster, Carl Kesselman, and Steve Tuecke began research into a modern form of distributed computing for science and engineering called "The Grid" [4] [5]. Analogous to the electric utility grid, this technology would allow users to consume metered computing resources.

Early research was focused within academic institutions. The innovation trajectory at this stage concentrated on the coordination of resource sharing among different computers, leading to the establishment of Virtual Organizations (VOs), a new type of architecture that supported problem solving based on computer collaboration. Key metrics of success, embodied in the first working prototypes, included acceptable data transfer rates among network nodes, reliability in the transfer of data, and the ability to dynamically integrate output from computer collaboration onto a central server.
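
To make the resource-sharing idea concrete, here is a minimal, hypothetical Python sketch (not taken from the Grid papers) of a coordinator that hands work units out to participating nodes and dynamically merges their results back at a central point, echoing the success metrics above. The function names and the trivial work units are invented purely for illustration.

    # Hypothetical sketch: a central coordinator farming work units out to
    # participating "nodes" and integrating their results, in the spirit of
    # the Grid/Virtual Organization model described above.
    from concurrent.futures import ThreadPoolExecutor

    def process_work_unit(unit):
        # Stand-in for the computation a remote node would perform.
        return sum(unit)

    def coordinate(work_units, max_nodes=4):
        # Threads stand in for geographically separated machines; results are
        # dynamically integrated back at the central coordinator.
        with ThreadPoolExecutor(max_workers=max_nodes) as pool:
            return list(pool.map(process_work_unit, work_units))

    if __name__ == "__main__":
        units = [[1, 2, 3], [4, 5], [6, 7, 8, 9]]
        print(coordinate(units))  # [6, 9, 30]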

Phase 2 – Technical Feasibility:
Initiated by the development of the first prototype (TP), this phase focuses on refinement of the technology until the first release of a commercial product (TL). The prototypes of the shared computational aspects of Cloud Computing, then called Grid Computing, took the form of academic and volunteer research projects.

The first of these was the Great Internet Mersenne Prime Search (GIMPS), started in January 1996, a distributed search for Mersenne primes: prime numbers that are one less than a power of two, i.e., of the form 2^p - 1. [6]
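
As a small illustration of the kind of work GIMPS volunteers perform, the Lucas-Lehmer test determines whether a Mersenne number is prime. The minimal Python sketch below is for illustration only and bears no resemblance to GIMPS's heavily optimized arithmetic:

    def is_mersenne_prime(p):
        # Lucas-Lehmer test: is M_p = 2**p - 1 prime? (valid for odd prime p)
        m = (1 << p) - 1          # the Mersenne number 2**p - 1
        s = 4
        for _ in range(p - 2):    # p - 2 squarings modulo M_p
            s = (s * s - 2) % m
        return s == 0

    # 2**13 - 1 = 8191 is prime; 2**11 - 1 = 2047 = 23 * 89 is not.
    print([p for p in (3, 5, 7, 11, 13) if is_mersenne_prime(p)])  # [3, 5, 7, 13]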

This free project, and others like it, such as SETI@home in 1999, proved that the use of centralized servers interacting with personal computers over high-speed internet connections was a feasible technology to pursue commercially.

Phase 3 – Creation of a Market for the Technology:
Initiated by the release of the first commercial product (TL), this phase focuses on alignment of the technology to the needs of potential customer segments, ending when an early front runner in the technology appears in the market (TF). The need for centralization and standardization in an increasingly digital, globally networked world is self-evident, to varying degrees, to many consumers. However, the radical architectural innovation that cloud computing represents compared with existing technologies has slowed market adoption of many innovations associated with this new technology.

In order to effectively address different customer segments, Cloud Computing has fragmented into various types of products and services with different development trajectories.
  • Online File Storage - The remote online file storage service, first pioneered during the "dot com boom" of the late 1990s, is currently inferior to internal hard disk drives and local network servers in speed. However, it can offer virtually unlimited storage capacity to its clients. In addition to free storage services that are typically limited to a few gigabytes, these service providers have been offering low rates ($5 to $10 per month) for unlimited backup capacity. These rates make remote storage an economical alternative to local storage of archival files.
  • Social Networking - While preceded by an early dial-up service called "The WELL" in 1985, the mid 1990s saw the dawn of Web-based social networking sites, such as Geocities in 1994, Classmates.com in 1995, LinkedIn in 2003, and Facebook in 2004. These services leverage the advantages of multiple users contributing to a centralized database. Users, after contributing some basic information about themselves to the database, are able to exploit the connections this centralized database provides. Most sites offer limited services for free in order to easily build up a database, but require monthly payments for premium services ($24.95 to $499.95 per month on LinkedIn, depending on level of access).
  • Software as a Service (SAAS) - One of the first outlines of Software as a Service (SAAS) applications was made in the paper "Service-Based Software: The Future for Flexible Software". [8] In this paper, the authors identified three key models for SAAS in 2000:
    • A Rental Model in which the software distributor leases the software license to the user. Distributors must focus on providing additional content or beneficial upgrades in order to retain the contract. An example of the rental model can be seen in anti-virus software, which requires continuous virus-definition updates.
    • A Server Model offers software from a central server on a pay-per-use basis; the software is not downloaded to a local machine. (A minimal metering sketch follows this list.)
    • A Service Package Model has components that enhance the base software package to use the networking capabilities of the internet. Many applications have "value-added features" that require internet access to various central servers.
  • Productivity Suites - An increased need for web-based productivity tools became evident as personal and business consumers grew increasingly dependent on the internet for collaboration and communication in the mid 2000s. Hotmail, launched in 1996, was one of the first webmail services and was sold to Microsoft a year later. Google Gmail, a competing webmail client, was released in 2004. Google later chose to leverage the advantages of the centralized servers of Cloud Computing and offered a now dominant package of interlinked software: Google Talk for voice over internet protocol (VOIP) in 2005, and Google Calendar for scheduling and Google Docs for word processing, spreadsheets, and presentations in 2006.
  • Total Solution Software, Storage, and Inter-connectivity as a Service - A new paradigm of the personal computer may emerge. All the above market solutions assume that the user's machine is the focus of data processing. Microsoft has passed on making a first move in many of these markets. In one of the most ambitious projects to leverage Cloud Computing, Microsoft is planning to offer a new Cloud Computing-based operating system, Microsoft Windows Azure, that not only offers many of the services listed above, but also allows computational tasks to be assigned to centralized servers, turning the home computer into an advanced terminal.
  • Editor's Note - Since the writing of this analysis in early 2009, it has become obvious how several of the above Cloud Computing services have become strong synergistic complementors of mobile technologies, including smartphones, e-readers, and the new paradigm in tablet-based technology, pioneered by the Apple iPad. The continued widespread adoption of this hardware, enabled by mobile storage and software, will help to drive Cloud service expansion and vice versa.
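
As a loose illustration of the server model's pay-per-use idea referenced in the list above, the hypothetical Python sketch below meters calls to a centrally hosted function and totals a per-call charge. The class, rate, and interface are invented for illustration and do not describe any particular vendor's billing scheme.

    class MeteredService:
        # Hypothetical pay-per-use wrapper around a centrally hosted function.
        def __init__(self, rate_per_call=0.01):
            self.rate_per_call = rate_per_call   # assumed price per invocation
            self.calls = {}                      # usage counter per user

        def invoke(self, user, data):
            # Run the hosted computation and record one billable call.
            self.calls[user] = self.calls.get(user, 0) + 1
            return data.upper()                  # stand-in for the hosted software

        def bill(self, user):
            # Total charge for a user at the end of the billing period.
            return self.calls.get(user, 0) * self.rate_per_call

    service = MeteredService()
    service.invoke("alice", "hello")
    service.invoke("alice", "world")
    print(round(service.bill("alice"), 2))  # 0.02
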
Phase 4 – The Decisive Battle for Dominance:
Once an early front runner in the technology field appears on the market (TF), a battle for dominance in the technology begins. This battle ends when a dominant technology emerges (TD). Latecomers will often have difficulty gaining market share unless they expand into new markets or offer a disruptive version of the technology that changes the market dynamics.

While online file storage, individual software-as-a-service offerings, and social networking have each developed dominant architectures in parallel, as described in detail above, the key battle for dominance of the overall Cloud Computing architecture lies in two issues:
  • Will the dominant architecture of Cloud Computing involve separately sourced services as is most often seen today, or a packaged group of interconnected applications used as complementary assets as can be seen in Google Apps and will be seen in Microsoft Windows Azure?
  • If the future Cloud Computing dominant architecture will be an integrated set of applications in a suite, what will be the role of the personal computer? Will the Cloud serve as a source of centralized storage, data, and base applications with the personal computer or mobile device as the focus of activity as Google Apps proposes? Alternatively, will there be a new type of operating system, as proposed by Microsoft Windows Azure, which starts to leverage distributed computing and shifts the focus of the processing into the Cloud?
Phase 5 – Post Dominance Battles:
Proceeding from the establishment of a clear dominant technology architecture (TD), post-dominance battles focus on technology development within the dominant architecture. Major participants typically hold a license to core components of the architecture or have already amassed a significant market share or complementary assets supporting the technology. Online storage and social networking have developed standardized architectures for the present time and are evolving in parallel to meet different social and business needs. However, software as a service and application suites are still heavily contested based on the two key issues described in Phase 4.

Key Player Cloud Computing Adoption and Responses to Technological Innovation
The table below indicates how the key distributors and consumers have responded to different technological innovations in Cloud Computing. As of 2009, there appear to be two overall architectures competing for long-term dominance of Cloud Computing technology – separate specialized services or highly integrated packages:


Specialized Services (à la carte)
  • IBM’s “Blue Cloud” business hardware/software solution packages targeting IT groups.
  • Amazon.com packages addressing a broad spectrum of services for both end-users and service providers.
  • Google Apps (as isolated applications).
  • Miscellaneous Software and Service Providers who offer limited services (not shown).
Integrated Services
  • Google Apps (as an integrated suite of applications – a “Cloud Computing Office”).
  • Microsoft Windows Azure – Cloud Computing based operating system offering storage, SAAS, and distributed computing solutions in an integrated package.
Government legislation may affect the outcome of this battle if it prohibits the all-encompassing consolidation of data and services under one package as presented by Windows Azure. Whatever the result, the architecture that becomes dominant will force services using competing architectures to redefine their services, find a niche, or leave the market.

The discussion of Cloud Computing will be refined and expanded upon in the next part of this series.


References:
[1]    Atencio, Charles, “Demand Opportunity of Cloud Computing with Personal Computers”, 15.965 - Technology and Strategy (Massachusetts Institute of Technology, Spring 2009)
[2]    Suarez, Fernando, "Battles for Technological Dominance: An Integrative Framework", Research Policy 33, pp. 271-286
[3]    “Charles Herzfeld on ARPAnet and Computers”, About.com:Inventors,
http://inventors.about.com/library/inventors/bl_Charles_Herzfeld.htm
[4]    Foster, Ian, et al, “The Anatomy of the Grid”, Intl J. Supercomputer Applications, 2001,
http://www.globus.org/alliance/publications/papers/anatomy.pdf
[5]    Wallis, Paul, “A Brief History of Cloud Computing: Is the Cloud There Yet?”, August 22, 2008, Cloud Computing Journal, http://cloudcomputing.sys-con.com/node/581838
[6]    Great Internet Mersenne Prime Search – GIMPS,
http://www.mersenne.org/
[7]    Atencio, Charles, “Technological Innovation of Cloud Computing with Personal Computers”, 15.965 - Technology and Strategy (Massachusetts Institute of Technology, Spring 2009)
[8]    Bennett, Keith, et al, “Service-Based Software: The Future for Flexible Software”, 2000,
http://www.bds.ie/Pdf/ServiceOriented1.pdf
