Thursday, September 5, 2013

List of waterfalls in India by height



Waterfall | Height | Location | Remarks | Single drop
Vajrai Waterfall | 560 metres (1,840 ft)[1] | Satara district, Maharashtra | |
Kunchikal Falls | 455 metres (1,493 ft)[1][2] | Shimoga district, Karnataka | multi-tiered waterfall |
Barehipani Falls | 399 metres (1,309 ft)[1] | Mayurbhanj district, Orissa | 2-tiered waterfall |
Langshiang Falls | 337 metres (1,106 ft)[1] | West Khasi Hills district, Meghalaya | |
Nohkalikai Falls | 335 metres (1,099 ft)[1] | East Khasi Hills district, Meghalaya | tallest plunge-type waterfall | Yes
Nohsngithiang Falls | 315 metres (1,033 ft)[1] | East Khasi Hills district, Meghalaya | segmented-type waterfall | Yes
Dudhsagar Falls | 310 metres (1,020 ft)[1] | Goa | 4-tiered waterfall |
Kynrem Falls | 305 metres (1,001 ft)[1] | East Khasi Hills district, Meghalaya | 3-tiered waterfall |
Meenmutty Falls | 300 metres (980 ft)[1] | Wayanad district, Kerala | 3-tiered waterfall |
Thalaiyar Falls | 297 metres (974 ft)[1] | Dindigul district, Tamil Nadu | horsetail-type waterfall | Yes
Barkana Falls | 259 metres (850 ft)[1] | Shimoga district, Karnataka | tiered waterfall |
Jog Falls | 253 metres (830 ft)[1] | Shimoga district, Karnataka | segmented waterfall | Yes
Khandadhar Falls | 244 metres (801 ft)[1] | Sundargarh district, Orissa | horsetail-type waterfall | Yes
Vantawng Falls | 229 metres (751 ft)[1] | Serchhip district, Mizoram | 2-tiered waterfall |
Kune Falls | 200 metres (660 ft)[1] | Lonavla, Maharashtra | 3-tiered waterfall |
Soochipara Falls | 200 metres (660 ft)[1] | Wayanad district, Kerala | 3-tiered waterfall |
Magod Falls | 198 metres (650 ft)[1] | Uttara Kannada district, Karnataka | 2-tiered waterfall |
Duduma Falls | 175 metres (574 ft)[1] | Koraput district, Orissa | horsetail-type waterfall | Yes
Hebbe Falls | 168 metres (551 ft)[1] | Chikkamagaluru district, Karnataka | 2-tiered waterfall |
Joranda Falls | 157 metres (515 ft)[1] | Mayurbhanj district, Orissa | plunge-type waterfall | Yes
Palani Falls | 150 metres (490 ft)[1] | Kullu district, Himachal Pradesh | |
Lodh Falls | 143 metres (469 ft)[1] | Latehar district, Jharkhand | 2-tiered waterfall |
Bishop Falls | 135 metres (443 ft)[1] | Shillong, Meghalaya | 3-tiered waterfall |
Chachai Falls | 130 metres (430 ft)[1] | Rewa district, Madhya Pradesh | |
Keoti Falls | 130 metres (430 ft)[1] | Rewa district, Madhya Pradesh | segmented-type waterfall | Yes
Kalhatti Falls | 122 metres (400 ft)[1] | Chikkamagaluru district, Karnataka | |
Beadon Falls | 120 metres (390 ft)[1] | Shillong, Meghalaya | |
Keppa Falls | 116 metres (381 ft)[1] | Uttara Kannada district, Karnataka | fan-type waterfall | Yes
Koosalli Falls | 116 metres (381 ft)[1] | Udupi, Karnataka | 6-tiered waterfall |
Pandavgad Falls | 107 metres (351 ft)[1] | Thane, Maharashtra | |
Rajat Prapat | 107 metres (351 ft)[1] | Hoshangabad district, Madhya Pradesh | horsetail-type waterfall | Yes
Bundla Falls | 100 metres (330 ft)[1] | Kangra district, Himachal Pradesh | |
Shivanasamudra Falls | 98 metres (322 ft)[1] | Mysore, Karnataka | segmented-type waterfall | Yes
Lower Ghaghri Falls | 98 metres (322 ft)[1] | Latehar district, Jharkhand | |
Hundru Falls | 98 metres (322 ft)[1] | Ranchi district, Jharkhand | segmented-type waterfall | Yes
Sweet Falls | 98 metres (322 ft)[1] | Shillong, Meghalaya | horsetail-type waterfall | Yes
Agaya Gangai | 92 metres (302 ft)[3] | Tamil Nadu | single-tiered waterfall |
Gatha Falls | 91 metres (299 ft)[1] | Panna district, Madhya Pradesh | |
Kiliyur Falls | 91 metres (299 ft)[1] | Yercaud, Tamil Nadu | fan-type waterfall | Yes
Kedumari Falls | 91 metres (299 ft)[1] | Udupi district, Karnataka | horsetail-type waterfall | Yes
Muthyala Maduvu Falls | 91 metres (299 ft)[1] | Bangalore, Karnataka | |
Palaruvi Falls | 91 metres (299 ft)[1] | Kollam district, Kerala | horsetail-type waterfall | Yes

Infographic: continuing reinvention at Nokia and Microsoft


HAPPY TEACHER'S DAY


Tuesday, September 3, 2013

Cloud Computing Interview Questions and Answers



This page contains a collection of Cloud Computing interview questions and answers / frequently asked questions (FAQs). The questions have been gathered from various resources such as informative websites, forums, blogs and discussion boards, including MSDN and Wikipedia. They can help you prepare for a Cloud Computing interview or job.
How does cloud computing provide on-demand functionality?
Cloud computing is often used as a metaphor for the Internet. It provides on-demand access to virtualized IT resources that can be shared with others or subscribed to individually, and it makes configurable resources easy to obtain by drawing them from a shared pool. That pool consists of networks, servers, storage, applications and services.
What is the difference between scalability and elasticity?
Scalability is a characteristic of cloud computing by which an increasing workload can be handled by increasing resource capacity in proportion; the architecture can provide resources on demand as traffic grows. Elasticity, by contrast, is the ability to commission and decommission large amounts of resource capacity dynamically. It is measured by how quickly resources become available on demand and how closely the resources in use track actual demand.
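As a rough illustration of elasticity (not tied to any particular provider), think of an autoscaling loop that adds capacity when per-instance load rises and releases it when load falls; the target load and instance limits below are assumed values.

# Minimal autoscaling sketch: elasticity as a feedback loop (illustrative only).
def autoscale(current_instances, load_per_instance, target_load=0.6,
              min_instances=1, max_instances=20):
    """Return the new instance count for the observed per-instance load."""
    desired = round(current_instances * load_per_instance / target_load)
    return max(min_instances, min(max_instances, desired))

# 4 instances each at 90% load -> scale out to 6.
print(autoscale(4, 0.9))
# 10 instances each at 12% load -> scale in to 2.
print(autoscale(10, 0.12))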
What are the different layers of cloud computing?
Cloud computing consists of three layers in the hierarchy, as follows:
1. Infrastructure as a Service (IaaS) provides the cloud infrastructure in terms of hardware such as memory, processor speed, etc.
2. Platform as a Service (PaaS) provides the cloud application platform for developers.
3. Software as a Service (SaaS) provides cloud applications that users work with directly, without installing anything on their own systems; the application is stored and edited in the cloud itself.
What resources are provided by infrastructure as a service?
Infrastructure as a Service provides the physical and virtual resources used to build a cloud. This layer handles the complexities of deploying and maintaining the services it provides. The infrastructure consists of servers, storage and other hardware systems.
How important is platform as a service?
Platform as a Service is an important layer in the cloud architecture. It is built on the infrastructure model, which provides resources such as computers, storage and network. This layer organizes and operates the resources provided by the layer below it. It is also responsible for fully virtualizing the infrastructure layer, making it look like a single server and keeping it hidden from the outside world.
What does software as a service provide?
Software as a Service is another layer of cloud computing which provides cloud applications, in the way that Google provides Google Docs for users to create and save documents in the cloud. It allows applications to be delivered on the fly without adding or installing any extra software components, and it provides built-in software for creating a wide variety of applications and documents and sharing them with other people online.
What are the different deployment models?
Cloud computing supports many deployment models and they are as follows:
- Private Cloud
Organizations choose to build a private cloud to keep strategic, operational and other matters to themselves, and they feel more secure doing so. It is a complete, fully functional platform that can be owned, operated and restricted to a single organization or industry. Many organizations have moved to private clouds because of security concerns. A virtual private cloud, operated by a hosting company, is another option.
- Public Cloud
These are platforms that are public, i.e. open for people to use and deploy applications on; examples include Google and Amazon. They focus on a few layers such as cloud applications, infrastructure provision and platform markets.
- Hybrid Clouds
A hybrid cloud is a combination of public and private clouds. It is the most robust approach to implementing a cloud architecture, as it includes the functionality and features of both worlds. It allows organizations to create their own cloud and also to hand control over parts of it to someone else.
What are the different datacenters deployed for this?
Cloud computing is made up of various datacenters put together in a grid form. It consists of different datacenters like:
- Containerized Datacenters
These are traditional datacenters that allow a high level of customization of servers, mainframes and other resources. They require planning for cooling, networking and power in order to be accessed and operated.
- Low-Density Datacenters
These datacenters are optimized for high performance: the space constraint is removed and density is increased. The drawback is that with higher density, heat becomes an issue. These datacenters are well suited to building out cloud infrastructure.
What is the use of APIs in cloud services?
API stands for Application Programming Interface. APIs are very useful in cloud platforms because they allow a system to use the cloud easily: they remove the need to write full-fledged programs, they provide the instructions for communication between one or more applications, and they make it easy to create applications and link the cloud services with other systems.
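As a sketch of the idea, most cloud service APIs are exposed over HTTP/REST, so an application can query or provision resources with a few calls instead of a full program. The endpoint URL, token and payload below are placeholders, not any real provider's API.

import requests  # third-party HTTP client library

# Hypothetical endpoint and credentials -- substitute your provider's values.
API_URL = "https://api.example-cloud.com/v1/servers"
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}

# List existing virtual servers.
resp = requests.get(API_URL, headers=HEADERS, timeout=10)
resp.raise_for_status()
print(resp.json())

# Request a new server from the shared pool.
resp = requests.post(API_URL, headers=HEADERS, timeout=10,
                     json={"name": "web-01", "size": "small"})
print(resp.status_code)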
What are the different modes of software as a service?
Software as a Service provides a cloud application platform on which users can create applications with the tools provided. The modes of Software as a Service are:
1. Simple multi-tenancy: each user has their own resources, separate from other users. This is an inefficient mode, because the user has to spend more time and money adding infrastructure whenever demand rises at short notice.
2. Fine-grained multi-tenancy: the functionality remains the same, but resources are shared among many users. It is more efficient because the resources are shared while each tenant's data and permissions remain separate within the application.
What are the security aspects provided with the cloud?
Security is one of the major concerns with any application or service a user relies on, and companies and organizations remain particularly concerned about the security provided with the cloud. Several levels of security have to be provided within a cloud environment, such as:
- Identity management: authorizes the application service or hardware components to be used only by authorized users.
- Access control: permissions have to be granted to users so that they can control the access of other users entering the cloud environment.
- Authorization and authentication: provision should be made so that only authorized and authenticated people can access and change the applications and data.
What is the difference between traditional datacenters and cloud?
Cloud computing still uses datacenters, and the cloud datacenter is based on the traditional one, but they differ in the following ways:
- The cost of a traditional datacenter is higher, due to heating and other hardware/software issues; this is not the case with cloud computing infrastructure.
- A traditional datacenter is scaled up only when demand increases, and most of its cost goes into maintenance, whereas a cloud platform requires minimal maintenance and does not need highly specialized staff to handle it.
What are the three cost factors involved in a cloud data center?
A cloud data center does not require experts to operate it, but it does need skilled people to handle maintenance, manage workloads and keep track of traffic. Labor accounts for about 6% of the total cost of operating a cloud data center, and power distribution and cooling for about 20%. Computing makes up the remaining share, roughly 74%, and is the largest factor because that is where most of the resources and installation effort go.
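Using the figures quoted above (labor about 6%, power distribution and cooling about 20%, computing the remainder), the split of a budget can be worked out directly; the 100,000 total below is just an assumed number for illustration.

# Rough cost breakdown for a cloud data center, using the percentages above.
total_monthly_cost = 100_000                              # assumed total, in dollars
labor = 0.06 * total_monthly_cost                         # ~6%  -> 6,000
power_cooling = 0.20 * total_monthly_cost                 # ~20% -> 20,000
computing = total_monthly_cost - labor - power_cooling    # remaining ~74% -> 74,000

print(f"Labor:           {labor:,.0f}")
print(f"Power & cooling: {power_cooling:,.0f}")
print(f"Computing:       {computing:,.0f}")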
How are cloud services measured?
Cloud computing provides services so that organizations can run and install their applications on the cloud. Virtualization is used to deploy cloud computing models, because it provides a hidden layer between the user and the physical layer of the system. Cloud services are measured in terms of usage: you pay for as much as you use, billed by the hour, month or year. Cloud services let users pay only for what they consume, and charges rise and fall with demand.
What are the optimizing strategies used in cloud?
To optimize cost and other resources there is the concept of the three-data-center setup, which provides backups for disaster recovery and keeps all data intact in the event of any failure within the system. System management can be carried out more efficiently by performing pre-emptive tasks on the services and processes that are running a job. Security can be tightened so that only a limited set of users can access the services.
What are the different data types used in cloud computing?
Cloud computing now takes on a very different shape, because it includes many data types such as emails, contracts, images and blogs. The amount of data is increasing day by day, so cloud computing requires new and efficient data types to store it; for example, saving video requires a data type suited to video. Latency requirements are also rising as demand grows, and companies are pushing for lower latency for many applications.
What are the security laws which take care of the data in the cloud?
The security controls implemented to secure data in the cloud are as follows:
- Input validation: controls the input data that is supplied to any system.
- Processing: controls that ensure data is processed correctly and completely in an application.
- File: controls over the data being manipulated in any type of file.
- Output reconciliation: controls that reconcile the data from input to output.
- Backup and recovery: controls over security-breach logs and over problems that occur while creating backups.
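A minimal sketch of the first of these controls, input validation, checking records before they enter the system; the field names and rules below are made up for illustration.

# Simple input-validation sketch: reject malformed records before processing.
import re

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_record(record):
    """Return a list of validation errors for an input record (empty = valid)."""
    errors = []
    if not EMAIL_RE.match(record.get("email", "")):
        errors.append("invalid email address")
    if not isinstance(record.get("amount"), (int, float)) or record["amount"] < 0:
        errors.append("amount must be a non-negative number")
    return errors

print(validate_record({"email": "user@example.com", "amount": 42}))   # []
print(validate_record({"email": "not-an-email", "amount": -1}))       # two errors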
How do you secure your data for transport in the cloud?
Cloud computing gives an organization a very convenient set of features, but at the same time it raises the question of how secure the data is while it is being transported from one place to another in the cloud. To make sure data stays secure as it moves from point A to point B, check that there is no data leak by applying an encryption key to the data you are sending.
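As one common approach (an assumption for illustration, not something the answer above prescribes), the data can be encrypted with a symmetric key before it leaves point A and decrypted only at point B. The sketch uses the third-party cryptography package; key handling is simplified, and in practice the key must itself be exchanged over a secure channel.

# Encrypt data before transport and decrypt it at the destination (sketch only).
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()        # in practice, exchange this over a secure channel
cipher = Fernet(key)

payload = b"sensitive customer record"
token = cipher.encrypt(payload)    # ciphertext safe to send over the network

# ... token travels from point A to point B ...

assert cipher.decrypt(token) == payload
print("round trip OK, ciphertext length:", len(token))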
What do you understand by VPN?
VPN stands for virtual private network; it is a private network overlaid on a public one that protects data while it is transported within the cloud environment. A VPN allows an organization to use a public network as if it were a private network and to transfer files and other resources over it.
What does a VPN consist of?
VPN is known as virtual private network and it consists of two important things:
1. Firewall: acts as a barrier between the public network and any private network. It filters the messages exchanged between the networks and protects against malicious activity on the network.
2. Encryption: protects sensitive data from hackers and other eavesdroppers who are actively trying to obtain it. Each message travels with a key, and only the party holding the matching key can read it.
Name a few platforms which are used for large-scale cloud computing
There are many platforms available for cloud computing, but the ones used to model large-scale distributed computing are as follows:
1. MapReduce: a framework built by Google to support distributed computing. It works on large data sets, utilizes cloud resources and distributes the data across several other computers known as clusters. It can deal with both structured and unstructured data (a minimal word-count sketch of the model follows this list).
2. Apache Hadoop: an open-source distributed computing platform written in Java. It creates a pool of computers, each with a Hadoop file system, clusters the data elements, applies similar hash algorithms and replicates copies of the files that already exist.
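A minimal, single-process sketch of the MapReduce idea, word counting, using only the Python standard library; a real MapReduce or Hadoop job would distribute the map and reduce phases across a cluster.

# Word count expressed in MapReduce style (map -> shuffle/group -> reduce).
from collections import defaultdict

def map_phase(document):
    for word in document.split():
        yield word.lower(), 1            # emit (key, value) pairs

def reduce_phase(word, counts):
    return word, sum(counts)             # combine all values for one key

documents = ["the cloud serves the users", "users trust the cloud"]

# Shuffle/group: collect all values emitted for the same key.
groups = defaultdict(list)
for doc in documents:
    for word, count in map_phase(doc):
        groups[word].append(count)

results = dict(reduce_phase(w, c) for w, c in groups.items())
print(results)   # {'the': 3, 'cloud': 2, 'serves': 1, 'users': 2, 'trust': 1}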
What are some examples of large cloud providers and their databases?
Cloud computing has many providers and is supported on a large scale. Some providers and their databases are as follows:
- Google Bigtable: a distributed storage system in which data is kept in one large table split into rows and columns; MapReduce is used for modifying and generating the data.
- Amazon SimpleDB: a web service used for indexing and querying data. It allows storing, processing and querying data sets within the cloud platform, and it indexes the data automatically.
- Cloud-based SQL: introduced by Microsoft and based on the SQL database. It provides data storage through a relational model in the cloud, and the data can be accessed from the cloud by a client application.
What are some open source cloud computing platform databases?
The cloud computing platform is supported by various databases. The open-source databases developed to support it are as follows:
1. MongoDB: an open-source, schema-free, document-oriented database system. It is written in C++ and provides document collections and large storage capacity (a minimal usage sketch follows this list).
2. CouchDB: an open-source, document-oriented database system from the Apache project, used to store data efficiently.
3. LucidDB: a database built in Java/C++ for data warehousing. It provides the features and functionality needed to maintain a data warehouse.
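A minimal usage sketch for the first of these, MongoDB, using the official pymongo driver; the connection string, database and collection names are assumptions, and a running MongoDB instance is required.

# Minimal MongoDB example: schema-free, document-oriented storage via pymongo.
from pymongo import MongoClient  # pip install pymongo

client = MongoClient("mongodb://localhost:27017")   # assumed local instance
db = client["demo"]

# Documents in the same collection do not need a fixed schema.
db.notes.insert_one({"title": "cloud intro", "tags": ["iaas", "paas"]})
db.notes.insert_one({"title": "billing", "amount": 12.5})

for doc in db.notes.find({"tags": "iaas"}):
    print(doc["title"])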
What essential things should a user know before going for a cloud computing platform?
A user should consider the following parameters before adopting cloud computing services:
1. Data integrity in cloud computing: a measure ensuring that data is accurate, complete and reasonable.
2. Compliance: the user should make sure that proper rules and regulations are followed while implementing the structure.
3. Loss of data: the user should know what provisions exist in case of data loss, so that backup and recovery are possible.
4. Business continuity plans: the user should consider whether the cloud service provides uninterrupted access to data and resources.
5. Uptime: the user should know what uptime the cloud computing platform provides and how that matters for the business.
6. Data storage costs: the user should find out what costs have to be paid before committing to cloud computing.
What are system integrators?
Systems integrators are an important part of the cloud computing platform. They provide the strategy for the complicated process of designing a cloud platform, including a well-defined architecture for identifying the resources and characteristics that have to be included. Integrators plan the user's cloud strategy and its implementation, and their knowledge of data-center creation also allows more accurate private and hybrid cloud creation.
What is the requirement for virtualization platforms in implementing the cloud?
Virtualization is the basis of cloud computing, and many virtualization platforms are available; VMware, for example, is a technology that makes it possible to create a private cloud and provides a bridge to connect an external cloud with the private cloud. Three key elements have to be identified to build a private cloud:
- A cloud operating system.
- Management of the service-level policies.
- Virtualization, which keeps the user-level and back-end concepts separate from each other so that a seamless environment can be created between the two.
What is the use of Eucalyptus in a cloud computing environment?
Eucalyptus stands for Elastic Utility Computing Architecture for Linking Your Programs to Useful Systems, and it provides an open-source software infrastructure for implementing clusters on a cloud computing platform. It is used to build private, public and hybrid clouds. It can also turn your own datacenter into a private cloud and extend that functionality to many other organizations. Eucalyptus provides APIs that can be used with web services to cope with the demand for resources in private clouds.
Explain the different layers which define the cloud architecture
Cloud computing architecture consists of several layers that keep it organized and manageable from one place. The layers are as follows:
1. Cloud Controller (CLC) is the topmost level in the hierarchy; it manages the virtualized resources such as servers, network and storage through the user APIs.
2. Walrus is the storage controller; it manages users' storage demands and maintains a scalable approach to controlling virtual machine images and user data.
3. Cluster Controller (CC) controls the execution of the virtual machines stored on the nodes and manages the virtual networking between virtual machines and external users.
4. Storage Controller (SC) provides block-level storage that can be dynamically attached to virtual machines.
5. Node Controller (NC) sits at the lowest level and provides the functionality of a hypervisor, controlling VM activities including the execution, management and termination of many instances.
How will a user gain from utility computing?
Utility computing allows users to pay per use: they pay only for what they are using. It is a plug-in managed by the organization, which decides what type of services have to be deployed from the cloud. Utility computing lets users design and implement services according to their own needs. Most organizations go for a hybrid strategy that combines internally delivered services with hosted or outsourced services.
Is there any difference between cloud computing and computing for mobiles?
Mobile cloud computing uses the same concept, just with mobile devices added. Cloud computing comes into play when a task or data is kept on the Internet rather than on individual devices, giving users on-demand access to the data they need to retrieve. Applications run on a remote server and are then delivered to the user, who can store and manage them from the mobile platform.

Cloud computing

Cloud computing is a colloquial expression used to describe a variety of different types of computing concepts that involve a large number of computers connected through a real-time communication network (typically the Internet).[1] Cloud computing is a jargon term[citation needed] without a commonly accepted unequivocal scientific or technical definition. In science, cloud computing is a synonym for distributed computing over a network and means the ability to run a program on many connected computers at the same time. The phrase is also, more commonly, used to refer to network-based services which appear to be provided by real server hardware but are in fact served up by virtual hardware simulated by software running on one or more real machines. Such virtual servers do not physically exist and can therefore be moved around and scaled up (or down) on the fly without affecting the end user - arguably, rather like a cloud.
The popularity of the term can be attributed to its use in marketing to sell hosted services in the sense of application service provisioning that run client server software on a remote location.

Cloud computing relies on sharing of resources to achieve coherence and economies of scale similar to a utility (like the electricity grid) over a network.[2] At the foundation of cloud computing is the broader concept of converged infrastructure and shared services.
The cloud also focuses on maximizing the effectiveness of the shared resources. Cloud resources are usually not only shared by multiple users but are also dynamically re-allocated per demand. This can work well for allocating resources to users: for example, a cloud computing facility that serves European users during European business hours with a specific application (e.g. email) can reallocate the same resources to serve North American users during North America's business hours with another application (e.g. a web server). This approach should maximize the use of computing power and thus reduce environmental damage as well, since less power, air conditioning, rackspace, etc. is required for a variety of functions.
The term "moving to cloud" also refers to an organization moving away from a traditional CAPEX model (buy the dedicated hardware and depreciate it over a period of time) to the OPEX model (use a shared cloud infrastructure and pay as you use it).
Proponents claim that cloud computing allows companies to avoid upfront infrastructure costs, and focus on projects that differentiate their businesses instead of infrastructure.[3] Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and enables IT to more rapidly adjust resources to meet fluctuating and unpredictable business demand.[3][4][5]
Hosted services

In marketing, cloud computing is mostly used to sell hosted services in the sense of application service provisioning that run client server software at a remote location. Such services are given popular acronyms like 'SaaS' (Software as a Service), 'PaaS' (Platform as a Service), 'IaaS' (Infrastructure as a Service), 'HaaS' (Hardware as a Service) and finally 'EaaS' (Everything as a Service). End users access cloud-based applications through a web browser, thin client or mobile app while the business software and user's data are stored on servers at a remote location.
History

The 1950s
The underlying concept of cloud computing dates back to the 1950s, when large-scale mainframe computers became available in academia and corporations, accessible via thin clients/terminal computers, often referred to as "dumb terminals", because they were used for communications but had no internal processing capacities. To make more efficient use of costly mainframes, a practice evolved that allowed multiple users to share both the physical access to the computer from multiple terminals as well as to share the CPU time. This eliminated periods of inactivity on the mainframe and allowed for a greater return on the investment. The practice of sharing CPU time on a mainframe became known in the industry as time-sharing.[6]
The 1960s–1990s
John McCarthy opined in the 1960s that "computation may someday be organized as a public utility."[7] Almost all the modern-day characteristics of cloud computing (elastic provision, provided as a utility, online, illusion of infinite supply), the comparison to the electricity industry and the use of public, private, government, and community forms, were thoroughly explored in Douglas Parkhill's 1966 book, The Challenge of the Computer Utility. Other scholars have shown that cloud computing's roots go all the way back to the 1950s when scientist Herb Grosch (the author of Grosch's law) postulated that the entire world would operate on dumb terminals powered by about 15 large data centers.[8] Due to the expense of these powerful computers, many corporations and other entities could avail themselves of computing capability through time sharing and several organizations, such as GE's GEISCO, IBM subsidiary The Service Bureau Corporation (SBC, founded in 1957), Tymshare (founded in 1966), National CSS (founded in 1967 and bought by Dun & Bradstreet in 1979), Dial Data (bought by Tymshare in 1968), and Bolt, Beranek and Newman (BBN) marketed time sharing as a commercial venture.
The 1990s
In the 1990s, telecommunications companies, who previously offered primarily dedicated point-to-point data circuits, began offering virtual private network (VPN) services with comparable quality of service, but at a lower cost. By switching traffic as they saw fit to balance server use, they could use overall network bandwidth more effectively. They began to use the cloud symbol to denote the demarcation point between what the provider was responsible for and what users were responsible for. Cloud computing extends this boundary to cover servers as well as the network infrastructure.[9]
As computers became more prevalent, scientists and technologists explored ways to make large-scale computing power available to more users through time sharing, experimenting with algorithms to provide the optimal use of the infrastructure, platform and applications with prioritized access to the CPU and efficiency for the end users.[10]
Since 2000
After the dot-com bubble, Amazon played a key role in the development of cloud computing by modernizing their data centers, which, like most computer networks, were using as little as 10% of their capacity at any one time, just to leave room for occasional spikes. Having found that the new cloud architecture resulted in significant internal efficiency improvements whereby small, fast-moving "two-pizza teams" (teams small enough to feed with two pizzas) could add new features faster and more easily, Amazon initiated a new product development effort to provide cloud computing to external customers, and launched Amazon Web Services (AWS) on a utility computing basis in 2006.[11][12]
In early 2008, Eucalyptus became the first open-source, AWS API-compatible platform for deploying private clouds. In the same period, OpenNebula, enhanced in the RESERVOIR European Commission-funded project, became the first open-source software for deploying private and hybrid clouds, and for the federation of clouds.[13] In the same year, efforts were focused on providing quality of service guarantees (as required by real-time interactive applications) to cloud-based infrastructures, in the framework of the IRMOS European Commission-funded project, resulting in a real-time cloud environment.[14] By mid-2008, Gartner saw an opportunity for cloud computing "to shape the relationship among consumers of IT services, those who use IT services and those who sell them"[15] and observed that "organizations are switching from company-owned hardware and software assets to per-use service-based models" so that the "projected shift to computing ... will result in dramatic growth in IT products in some areas and significant reductions in other areas."[16]
On March 1, 2011, IBM announced the IBM SmartCloud framework to support Smarter Planet.[17] Among the various components of the Smarter Computing foundation, cloud computing is a critical piece.
Growth and popularity
The development of the Internet from being document-centric via semantic data towards more and more services was described as the "Dynamic Web".[18] This contribution focused in particular on the need for better metadata able to describe not only implementation details but also conceptual details of model-based applications.
The present availability of high-capacity networks, low-cost computers and storage devices as well as the widespread adoption of hardware virtualization, service-oriented architecture, autonomic, and utility computing have led to a growth in cloud computing.[19][20][21]
Financially, cloud vendors are experiencing growth rates of 90% per annum.[22]
Origin of the term
The origin of the term cloud computing is unclear. The expression cloud is commonly used in science to describe a large agglomeration of objects that visually appear from a distance as a cloud and describes any set of things whose details are not inspected further in a given context.
Meteorology: a weather cloud is an agglomeration.
Mathematics: a large number of points in a coordinate system in mathematics is seen as a point cloud;
Astronomy: stars that appear crowded together in the sky are known as nebula (Latin for mist or cloud), e.g. the Milky Way;
Physics: the indeterminate position of electrons around an atomic nucleus appears like a cloud to a distant observer.
In analogy to above usage the word cloud was used as a metaphor for the Internet and a standardized cloud-like shape was used to denote a network on telephony schematics and later to depict the Internet in computer network diagrams. The cloud symbol was used to represent the Internet as early as 1994,[23][24] in which servers were then shown connected to, but external to, the cloud symbol.
References to cloud computing in its modern sense can be found as early as 1996, with the earliest known mention to be found in a Compaq internal document.[25]
Urban legends claim that usage of the expression is directly derived from the practice of using drawings of stylized clouds to denote networks in diagrams of computing and communications systems or that it derived from a marketing term.[citation needed]
The term became popular after Amazon.com introduced the Elastic Compute Cloud in 2006.
Similar systems and concepts

Cloud computing is the result of the evolution and adoption of existing technologies and paradigms. The goal of cloud computing is to allow users to benefit from all of these technologies without needing deep knowledge of, or expertise with, each one of them. The cloud aims to cut costs and to help users focus on their core business instead of being impeded by IT obstacles.[26]
The main enabling technology for cloud computing is virtualization. Virtualization abstracts the physical infrastructure, which is the most rigid component, and makes it available as a soft component that is easy to use and manage. By doing so, virtualization provides the agility required to speed up IT operations, and reduces cost by increasing infrastructure utilization. On the other hand, autonomic computing automates the process through which the user can provision resources on-demand. By minimizing user involvement, automation speeds up the process and reduces the possibility of human errors.[26]
Users face difficult business problems every day. Cloud computing adopts concepts from Service-oriented Architecture (SOA) that can help the user break these problems into services that can be integrated to provide a solution. Cloud computing provides all of its resources as services, and makes use of the well-established standards and best practices gained in the domain of SOA to allow global and easy access to cloud services in a standardized way.
Cloud computing also leverages concepts from utility computing in order to provide metrics for the services used. Such metrics are at the core of the public cloud pay-per-use models. In addition, measured services are an essential part of the feedback loop in autonomic computing, allowing services to scale on-demand and to perform automatic failure recovery.
Cloud computing is a kind of grid computing; it has evolved by addressing the QoS (quality of service) and reliability problems. Cloud computing provides the tools and technologies to build data/compute intensive parallel applications with much more affordable prices compared to traditional parallel computing techniques.[26]
Cloud computing shares characteristics with:
Client–server model — Client–server computing refers broadly to any distributed application that distinguishes between service providers (servers) and service requestors (clients).[27]
Grid computing — "A form of distributed and parallel computing, whereby a 'super and virtual computer' is composed of a cluster of networked, loosely coupled computers acting in concert to perform very large tasks."
Mainframe computer — Powerful computers used mainly by large organizations for critical applications, typically bulk data processing such as: census; industry and consumer statistics; police and secret intelligence services; enterprise resource planning; and financial transaction processing.[28]
Utility computing — The "packaging of computing resources, such as computation and storage, as a metered service similar to a traditional public utility, such as electricity."[29][30]
Peer-to-peer — A distributed architecture without the need for central coordination. Participants are both suppliers and consumers of resources (in contrast to the traditional client–server model).
Cloud gaming — Also known as on-demand gaming, is a way of delivering games to computers. Gaming data is stored in the provider's server, so that gaming is independent of client computers used to play the game.
Characteristics

Cloud computing exhibits the following key characteristics:
Agility improves with users' ability to re-provision technological infrastructure resources.
Application programming interface (API) accessibility to software that enables machines to interact with cloud software in the same way that a traditional user interface (e.g., a computer desktop) facilitates interaction between humans and computers. Cloud computing systems typically use Representational State Transfer (REST)-based APIs.
Cost: cloud providers claim that computing costs are reduced. A public-cloud delivery model converts capital expenditure to operational expenditure.[31] This purportedly lowers barriers to entry, as infrastructure is typically provided by a third party and does not need to be purchased for one-time or infrequent intensive computing tasks. Pricing on a utility computing basis is fine-grained, with usage-based billing options, and fewer IT skills are required for in-house implementation.[32] The e-FISCAL project's state-of-the-art repository[33] contains several articles looking into cost aspects in more detail, most of them concluding that cost savings depend on the type of activities supported and the type of infrastructure available in-house.
Device and location independence[34] enable users to access systems using a web browser regardless of their location or what device they use (e.g., PC, mobile phone). As infrastructure is off-site (typically provided by a third-party) and accessed via the Internet, users can connect from anywhere.[32]
Virtualization technology allows sharing of servers and storage devices and increased utilization. Applications can be easily migrated from one physical server to another.
Multitenancy enables sharing of resources and costs across a large pool of users thus allowing for:
centralization of infrastructure in locations with lower costs (such as real estate, electricity, etc.)
peak-load capacity increases (users need not engineer for highest possible load-levels)
utilisation and efficiency improvements for systems that are often only 10–20% utilised.[11][35]
Reliability improves with the use of multiple redundant sites, which makes well-designed cloud computing suitable for business continuity and disaster recovery.[36]
Scalability and elasticity via dynamic ("on-demand") provisioning of resources on a fine-grained, self-service basis near real-time,[37][38] without users having to engineer for peak loads.[39][40][41]
Performance is monitored, and consistent and loosely coupled architectures are constructed using web services as the system interface.[32][42]
Security can improve due to centralization of data, increased security-focused resources, etc., but concerns can persist about loss of control over certain sensitive data, and the lack of security for stored kernels.[43] Security is often as good as or better than other traditional systems, in part because providers are able to devote resources to solving security issues that many customers cannot afford to tackle.[44] However, the complexity of security is greatly increased when data is distributed over a wider area or over a greater number of devices, as well as in multi-tenant systems shared by unrelated users. In addition, user access to security audit logs may be difficult or impossible. Private cloud installations are in part motivated by users' desire to retain control over the infrastructure and avoid losing control of information security.
Maintenance of cloud computing applications is easier, because they do not need to be installed on each user's computer and can be accessed from different places.
The National Institute of Standards and Technology's definition of cloud computing identifies "five essential characteristics":
On-demand self-service. A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service provider.
Broad network access. Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, tablets, laptops, and workstations).
Resource pooling. The provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. ...
Rapid elasticity. Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand. To the consumer, the capabilities available for provisioning often appear unlimited and can be appropriated in any quantity at any time.
Measured service. Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.
—National Institute of Standards and Technology[2]
On-demand self-service
See also: Self-service provisioning for cloud computing services and Service catalogs for cloud computing services
On-demand self-service allows users to obtain, configure and deploy cloud services themselves using cloud service catalogues, without requiring the assistance of IT.[45][46] This feature is listed by the National Institute of Standards and Technology (NIST) as a characteristic of cloud computing.[2]
The self-service requirement of cloud computing prompts infrastructure vendors to create cloud computing templates, which are obtained from cloud service catalogues. Manufacturers of such templates or blueprints include BMC Software (BMC), with Service Blueprints as part of their cloud management platform;[47] Hewlett-Packard (HP), which names its templates HP Cloud Maps;[48] RightScale;[49] and Red Hat, which names its templates CloudForms.[50]
The templates contain predefined configurations used by consumers to set up cloud services. The templates or blueprints provide the technical information necessary to build ready-to-use clouds.[49] Each template includes specific configuration details for different cloud infrastructures, with information about servers for specific tasks such as hosting applications, databases, websites and so on.[49] The templates also include predefined Web service, the operating system, the database, security configurations and load balancing.[50]
Cloud computing consumers use cloud templates to move applications between clouds through a self-service portal. The predefined blueprints define all that an application requires to run in different environments. For example, a template could define how the same application could be deployed in cloud platforms based on Amazon Web Services, VMware or Red Hat.[51] The user organization benefits from cloud templates because the technical aspects of cloud configurations reside in the templates, letting users deploy cloud services with the push of a button.[52][53] Developers can use cloud templates to create a catalog of cloud services.[54]
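As an illustration of the idea (not any vendor's actual blueprint format), a template can be represented as a simple structured document that a self-service portal reads to provision the stack; all field names and values below are hypothetical.

# Hypothetical cloud template: one declarative description, deployable to several clouds.
web_app_template = {
    "name": "web-app-blueprint",
    "web_tier": {"image": "ubuntu-server", "instance_size": "small", "count": 2},
    "database": {"engine": "mysql", "storage_gb": 50},
    "load_balancer": {"port": 80, "health_check": "/status"},
    "targets": ["aws", "vmware", "redhat"],   # clouds this blueprint can map onto
}

def deploy(template, target):
    """Pretend deployment: a real portal would translate this into provider API calls."""
    assert target in template["targets"], "template does not support this cloud"
    web = template["web_tier"]
    print(f"deploying {template['name']} ({web['count']} x {web['instance_size']}) to {target}")

deploy(web_app_template, "aws")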
Service models

Cloud computing providers offer their services according to several fundamental models:[2][55] infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS) where IaaS is the most basic and each higher model abstracts from the details of the lower models. Other key components in anything as a service (XaaS) are described in a comprehensive taxonomy model published in 2009,[56] such as Strategy-as-a-Service, Collaboration-as-a-Service, Business Process-as-a-Service, Database-as-a-Service, etc. In 2012, network as a service (NaaS) and communication as a service (CaaS) were officially included by ITU (International Telecommunication Union) as part of the basic cloud computing models, recognized service categories of a telecommunication-centric cloud ecosystem.[57]
[Figure: cloud computing service-model layers]
Infrastructure as a service (IaaS)
See also: Category:Cloud infrastructure
In the most basic cloud-service model, providers of IaaS offer computers - physical or (more often) virtual machines - and other resources. (A hypervisor, such as Xen or KVM, runs the virtual machines as guests. Pools of hypervisors within the cloud operational support-system can support large numbers of virtual machines and the ability to scale services up and down according to customers' varying requirements.) IaaS clouds often offer additional resources such as a virtual-machine disk image library, raw (block) and file-based storage, firewalls, load balancers, IP addresses, virtual local area networks (VLANs), and software bundles.[58] IaaS-cloud providers supply these resources on-demand from their large pools installed in data centers. For wide-area connectivity, customers can use either the Internet or carrier clouds (dedicated virtual private networks).
To deploy their applications, cloud users install operating-system images and their application software on the cloud infrastructure. In this model, the cloud user patches and maintains the operating systems and the application software. Cloud providers typically bill IaaS services on a utility computing basis[citation needed]: cost reflects the amount of resources allocated and consumed.
Examples of IaaS providers include: Amazon EC2, Google Compute Engine, HP Cloud, Joyent, Linode, NaviSite, Rackspace, rentVm, Windows Azure, ReadySpace Cloud Services, Terremark, and Internap Agile.
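As a concrete sketch for one of these providers, a virtual machine can be launched on Amazon EC2 with the boto3 SDK; the AMI ID below is a placeholder and valid AWS credentials are assumed to be configured.

# Launch one small virtual machine on Amazon EC2 (IaaS) with boto3.
import boto3  # pip install boto3; AWS credentials assumed to be configured

ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-xxxxxxxx",      # placeholder AMI ID -- substitute a real image
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
print("launched:", instances[0].id)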
Cloud communications and cloud telephony, rather than replacing local computing infrastructure, replace local telecommunications infrastructure with Voice over IP and other off-site Internet services.
Platform as a service (PaaS)
Main article: Platform as a service
See also: Category:Cloud platforms
In the PaaS model, cloud providers deliver a computing platform, typically including an operating system, a programming-language execution environment, a database, and a web server. Application developers can develop and run their software solutions on a cloud platform without the cost and complexity of buying and managing the underlying hardware and software layers. With some PaaS offerings, the underlying computing and storage resources scale automatically to match application demand, so that the cloud user does not have to allocate resources manually. The latter has also been proposed by an architecture aiming to facilitate real-time applications in cloud environments.[59]
Examples of PaaS include: AWS Elastic Beanstalk, Cloud Foundry, Heroku, Force.com, Engine Yard, Mendix, OpenShift, Google App Engine, AppScale, Windows Azure Cloud Services, OrangeScape and Jelastic.
Software as a service (SaaS)
Main article: Software as a service
In the business model using software as a service (SaaS), users are provided access to application software and databases. Cloud providers manage the infrastructure and platforms that run the applications. SaaS is sometimes referred to as "on-demand software" and is usually priced on a pay-per-use basis. SaaS providers generally price applications using a subscription fee.
In the SaaS model, cloud providers install and operate application software in the cloud and cloud users access the software from cloud clients. Cloud users do not manage the cloud infrastructure and platform where the application runs. This eliminates the need to install and run the application on the cloud user's own computers, which simplifies maintenance and support. Cloud applications are different from other applications in their scalability—which can be achieved by cloning tasks onto multiple virtual machines at run-time to meet changing work demand.[60] Load balancers distribute the work over the set of virtual machines. This process is transparent to the cloud user, who sees only a single access point. To accommodate a large number of cloud users, cloud applications can be multitenant, that is, any machine serves more than one cloud user organization. It is common to refer to special types of cloud based application software with a similar naming convention: desktop as a service, business process as a service, test environment as a service, communication as a service.
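A toy sketch of the load-balancing behaviour described above: requests are spread across a set of cloned application instances while the user sees only a single access point. The instance names and requests are made up.

# Round-robin load balancer over cloned application instances (illustrative only).
import itertools

class LoadBalancer:
    def __init__(self, instances):
        self._cycle = itertools.cycle(instances)   # single access point for callers

    def handle(self, request):
        instance = next(self._cycle)               # pick the next clone
        return f"{instance} handled {request}"

lb = LoadBalancer(["vm-1", "vm-2", "vm-3"])
for req in ["GET /doc/1", "GET /doc/2", "POST /doc", "GET /doc/3"]:
    print(lb.handle(req))
# vm-1, vm-2, vm-3, vm-1 take the requests in turn.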
The pricing model for SaaS applications is typically a monthly or yearly flat fee per user,[61] so price is scalable and adjustable if users are added or removed at any point.[62]
Examples of SaaS include: Google Apps, Microsoft Office 365, Petrosoft, Onlive, GT Nexus, Marketo, Casengo, TradeCard, Rally Software, Salesforce, ExactTarget and CallidusCloud.
Proponents claim SaaS allows a business the potential to reduce IT operational costs by outsourcing hardware and software maintenance and support to the cloud provider. This enables the business to reallocate IT operations costs away from hardware/software spending and personnel expenses, towards meeting other goals. In addition, with applications hosted centrally, updates can be released without the need for users to install new software. One drawback of SaaS is that the users' data are stored on the cloud provider's server. As a result, there could be unauthorized access to the data.
Network as a service (NaaS)
Main article: Network as a service
A category of cloud services where the capability provided to the cloud service user is to use network/transport connectivity services and/or inter-cloud network connectivity services.[63] NaaS involves the optimization of resource allocations by considering network and computing resources as a unified whole.[64]
Traditional NaaS services include flexible and extended VPN, and bandwidth on demand.[63] NaaS concept materialization also includes the provision of a virtual network service by the owners of the network infrastructure to a third party (VNP – VNO).[65][66]
Cloud management

Legacy management infrastructures, which are based on the concept of dedicated system relationships and architecture constructs, are not well suited to cloud environments where instances are continually launched and decommissioned.[67] Instead, the dynamic nature of cloud computing requires monitoring and management tools that are adaptable, extensible and customizable.[68]
Cloud management challenges
Cloud computing presents a number of management challenges. Companies using public clouds do not have ownership of the equipment hosting the cloud environment, and because the environment is not contained within their own networks, public cloud customers don’t have full visibility or control.[68] Users of public cloud services must also integrate with an architecture defined by the cloud provider, using its specific parameters for working with cloud components. Integration includes tying into the cloud APIs for configuring IP addresses, subnets, firewalls and data service functions for storage. Because control of these functions is based on the cloud provider’s infrastructure and services, public cloud users must integrate with the cloud infrastructure management.[69]
Capacity management is a challenge for both public and private cloud environments because end users have the ability to deploy applications using self-service portals. Applications of all sizes may appear in the environment, consume an unpredictable amount of resources, then disappear at any time.[70]
Chargeback—or, pricing resource use on a granular basis—is a challenge for both public and private cloud environments.[71] Chargeback is a challenge for public cloud service providers because they must price their services competitively while still creating profit.[70] Users of public cloud services may find chargeback challenging because it is difficult for IT groups to assess actual resource costs on a granular basis due to overlapping resources within an organization that may be paid for by an individual business unit, such as electrical power.[71] For private cloud operators, chargeback is fairly straightforward, but the challenge lies in guessing how to allocate resources as closely as possible to actual resource usage to achieve the greatest operational efficiency. Exceeding budgets can be a risk.[70]
Hybrid cloud environments, which combine public and private cloud services, sometimes with traditional infrastructure elements, present their own set of management challenges. These include security concerns if sensitive data lands on public cloud servers, budget concerns around overuse of storage or bandwidth and proliferation of mismanaged images.[72] Managing the information flow in a hybrid cloud environment is also a significant challenge. On-premises clouds must share information with applications hosted off-premises by public cloud providers, and this information may change constantly.[73] Hybrid cloud environments also typically include a complex mix of policies, permissions and limits that must be managed consistently across both public and private clouds.[73]
Cloud clients

Users access cloud computing using networked client devices, such as desktop computers, laptops, tablets and smartphones. Some of these devices - cloud clients - rely on cloud computing for all or a majority of their applications so as to be essentially useless without it. Examples are thin clients and the browser-based Chromebook. Many cloud applications do not require specific software on the client and instead use a web browser to interact with the cloud application. With Ajax and HTML5 these Web user interfaces can achieve a similar, or even better, look and feel to native applications. Some cloud applications, however, support specific client software dedicated to these applications (e.g., virtual desktop clients and most email clients). Some legacy applications (line of business applications that until now have been prevalent in thin client computing) are delivered via a screen-sharing technology.

Deployment models

[Figure: cloud computing types]

Private cloud

Private cloud is cloud infrastructure operated solely for a single organization, whether managed internally or by a third-party and hosted internally or externally.[2] Undertaking a private cloud project requires a significant level and degree of engagement to virtualize the business environment, and requires the organization to reevaluate decisions about existing resources. When done right, it can improve business, but every step in the project raises security issues that must be addressed to prevent serious vulnerabilities.[74]
They have attracted criticism because users "still have to buy, build, and manage them" and thus do not benefit from less hands-on management,[75] essentially "[lacking] the economic model that makes cloud computing such an intriguing concept".[76][77]
Comparison between Public and Private Clouds
Aspect | Public cloud | Private cloud
Initial cost | Typically zero | Typically high
Running cost | Unpredictable | Unpredictable
Customization | Impossible | Possible
Privacy | No (host has access to the data) | Yes
Single sign-on | Impossible | Possible
Scaling up | Easy while within defined limits | Laborious but no limits

Public cloud

A cloud is called a "public cloud" when the services are rendered over a network that is open for public use. Technically there is no difference between public and private cloud architecture; however, security considerations may be substantially different for services (applications, storage, and other resources) that are made available by a service provider to a public audience and where communication takes place over a non-trusted network. Generally, public cloud service providers like Amazon AWS, Microsoft and Google own and operate the infrastructure and offer access only via the Internet (direct connectivity is not offered).[32]


Community cloud

Community cloud shares infrastructure between several organizations from a specific community with common concerns (security, compliance, jurisdiction, etc.), whether managed internally or by a third-party and hosted internally or externally. The costs are spread over fewer users than a public cloud (but more than a private cloud), so only some of the cost savings potential of cloud computing are realized.[2]

Hybrid cloud

Hybrid cloud is a composition of two or more clouds (private, community or public) that remain unique entities but are bound together, offering the benefits of multiple deployment models.[2] Such composition expands deployment options for cloud services, allowing IT organizations to use public cloud computing resources to meet temporary needs.[78] This capability enables hybrid clouds to employ cloud bursting for scaling across clouds.[2]
Cloud bursting is an application deployment model in which an application runs in a private cloud or data center and "bursts" to a public cloud when the demand for computing capacity increases. A primary advantage of cloud bursting and a hybrid cloud model is that an organization only pays for extra compute resources when they are needed.[79]
Cloud bursting enables data centers to create an in-house IT infrastructure that supports average workloads, and use cloud resources from public or private clouds, during spikes in processing demands.[80]
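To make the bursting idea concrete, here is a minimal, illustrative sketch of a cloud-bursting decision in Python. The thresholds, the Capacity structure and the rebalance function are all hypothetical stand-ins; a real deployment would call the public provider's API to rent and release capacity.

# Minimal sketch of a cloud-bursting decision (illustrative only).
from dataclasses import dataclass

@dataclass
class Capacity:
    private_vms: int        # VMs available in the private cloud / data center
    public_vms: int = 0     # extra VMs currently rented from a public cloud

BURST_THRESHOLD = 0.85      # hypothetical: burst when private utilisation exceeds 85%
RELEASE_THRESHOLD = 0.60    # hypothetical: release public VMs once load falls back

def rebalance(capacity: Capacity, demanded_vms: int) -> Capacity:
    """Decide whether to burst to, or retreat from, the public cloud."""
    utilisation = demanded_vms / capacity.private_vms
    if utilisation > BURST_THRESHOLD:
        # Pay-per-use: rent only the shortfall, only while it exists.
        capacity.public_vms = max(0, demanded_vms - capacity.private_vms)
    elif utilisation < RELEASE_THRESHOLD:
        capacity.public_vms = 0
    return capacity

cap = Capacity(private_vms=100)
print(rebalance(cap, demanded_vms=130))   # bursts: 30 public VMs
print(rebalance(cap, demanded_vms=50))    # retreats: 0 public VMs

The point of the sketch is the billing consequence described above: the organization pays for the 30 extra VMs only while demand exceeds the in-house capacity sized for average workloads.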
By utilizing "hybrid cloud" architecture, companies and individuals are able to obtain degrees of fault tolerance combined with locally immediate usability without dependency on internet connectivity. Hybrid cloud architecture requires both on-premises resources and off-site (remote) server-based cloud infrastructure.
Critics note that hybrid clouds lack the flexibility, security and certainty of in-house applications,[81] while proponents argue that a hybrid cloud combines the flexibility of in-house applications with the fault tolerance and scalability of cloud-based services.

Personal cloud

Personal cloud is an application of cloud computing for individuals, analogous to a personal computer. While a vendor organization may help manage or maintain a personal cloud, it never takes possession of the data on the personal cloud, which remains under the control of the individual.[82]
Distributed cloud
Cloud computing can also be provided by a distributed set of machines that are running at different locations, while still connected to a single network or hub service. Older examples of this include distributed computing platforms such as BOINC and Folding@Home, as well as new crowd-sourced cloud providers such as Slicify.
Cloud management strategies
Public clouds are managed by public cloud service providers, whose responsibilities cover the public cloud environment's servers, storage, networking and data center operations.[83] Users of public cloud services can generally select from three basic categories; a minimal pay-per-use billing sketch follows the list:
User self-provisioning: Customers purchase cloud services directly from the provider, typically through a web form or console interface. The customer pays on a per-transaction basis.
Advance provisioning: Customers contract in advance a predetermined amount of resources, which are prepared in advance of service. The customer pays a flat fee or a monthly fee.
Dynamic provisioning: The provider allocates resources when the customer needs them, then decommissions them when they are no longer needed. The customer is charged on a pay-per-use basis.
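The following is a minimal sketch of how pay-per-use billing under dynamic provisioning can be reasoned about; the hourly rate and the usage_charge helper are hypothetical, for illustration only.

# Minimal sketch of pay-per-use billing under dynamic provisioning.
from datetime import datetime, timedelta

HOURLY_RATE = 0.10  # hypothetical price per instance-hour

def usage_charge(started: datetime, stopped: datetime, rate: float = HOURLY_RATE) -> float:
    """Charge only for the hours a dynamically provisioned instance actually ran."""
    hours = (stopped - started).total_seconds() / 3600
    return round(hours * rate, 2)

start = datetime(2013, 9, 5, 9, 0)
print(usage_charge(start, start + timedelta(hours=6)))   # 0.6 for six hours of use

Contrast this with advance provisioning, where the flat or monthly fee is owed whether or not the contracted resources are actually consumed.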
Managing a private cloud requires software tools to help create a virtualized pool of compute resources, provide a self-service portal for end users and handle security, resource allocation, tracking and billing.[84] Management tools for private clouds tend to be service driven, as opposed to resource driven, because cloud environments are typically highly virtualized and organized in terms of portable workloads.[85]
In hybrid cloud environments, compute, network and storage resources must be managed across multiple domains, so a good management strategy should start by defining what needs to be managed, and where and how to do it.[72] Policies to help govern these domains should include configuration and installation of images, access control, and budgeting and reporting.

Aspects of cloud management systems

A cloud management system is a combination of software and technologies designed to manage cloud environments.[86] The industry has responded to the management challenges of cloud computing with such systems: HP, Novell, Eucalyptus, OpenNebula and Citrix are among the vendors that offer management systems specifically for cloud environments.
At a minimum, a cloud management solution should be able to manage a pool of heterogeneous compute resources, provide access to end users, monitor security, manage resource allocation and manage tracking.[84] For composite applications, cloud management solutions also encompass frameworks for workflow mapping and management.[87]
Enterprises with large-scale cloud implementations may require more robust cloud management tools with specific characteristics, such as the ability to manage multiple platforms from a single point of reference and intelligent analytics to automate processes like application lifecycle management. High-end cloud management tools should also be able to handle system failures automatically, with capabilities such as self-monitoring, an explicit notification mechanism, failover and self-healing.

Architecture

Cloud computing sample architecture

Cloud architecture,[88] the systems architecture of the software systems involved in the delivery of cloud computing, typically involves multiple cloud components communicating with each other over a loose coupling mechanism such as a messaging queue. Elastic provisioning implies intelligence in the choice of tight or loose coupling as applied to mechanisms such as these.
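As a minimal illustration of the loose coupling described above, the sketch below uses Python's in-process queue as a stand-in for a real message broker: the front-end component only publishes work items, and a worker consumes them at its own pace, so either side can be replaced or scaled independently. The component names are hypothetical.

# Minimal sketch of two loosely coupled cloud components exchanging work
# through a message queue (an in-process queue stands in for a real broker).
import queue
import threading

messages = queue.Queue()

def frontend():
    """Publishes work items without knowing who will process them."""
    for i in range(3):
        messages.put({"job_id": i, "payload": f"render report {i}"})
    messages.put(None)  # sentinel: no more work

def worker():
    """Consumes work items at its own pace; can be scaled out independently."""
    while True:
        job = messages.get()
        if job is None:
            break
        print("processed", job["job_id"])

t = threading.Thread(target=worker)
t.start()
frontend()
t.join()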

The Intercloud

Main article: Intercloud
The Intercloud[89] is an interconnected global "cloud of clouds"[90][91] and an extension of the Internet "network of networks" on which it is based.[92][93][94]
Cloud engineering
Cloud engineering is the application of engineering disciplines to cloud computing. It brings a systematic approach to the high-level concerns of commercialisation, standardisation, and governance in conceiving, developing, operating and maintaining cloud computing systems. It is a multidisciplinary method encompassing contributions from diverse areas such as systems, software, web, performance, information, security, platform, risk, and quality engineering.
Issues

Threats and opportunities of the cloud
Critics, including GNU Project initiator Richard Stallman and Oracle founder Larry Ellison, have warned that the whole concept is rife with privacy and ownership concerns and amounts to little more than a fad.[95]
However, cloud computing continues to gain steam,[96] with 56% of major European technology decision-makers regarding the cloud as a priority for 2013 and 2014, and cloud budgets potentially reaching 30% of the overall IT budget.
According to the survey-based TechInsights Report 2013: Cloud Succeeds, cloud implementations generally meet or exceed expectations across major service models, such as Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS).
Several deterrents to the widespread adoption of cloud computing remain. Among them are reliability, availability of services and data, security, complexity, costs, regulations and legal issues, performance, migration, reversion, the lack of standards, limited customization and issues of privacy. The cloud also offers many strong points: infrastructure flexibility, faster deployment of applications and data, cost control, adaptation of cloud resources to real needs, improved productivity, etc. The early-2010s cloud market is dominated by software and services in SaaS mode and IaaS (infrastructure), especially the private cloud; PaaS and the public cloud lag further behind.

Privacy

Privacy advocates have criticized the cloud model for giving hosting companies greater ease to control, and thus to monitor at will, the communication between host company and end user, and to access user data (with or without permission). Instances such as the secret NSA program that, working with AT&T and Verizon, recorded over 10 million telephone calls between American citizens cause uncertainty among privacy advocates, as do the greater powers such arrangements give telecommunication companies to monitor user activity.[99][100] A cloud service provider (CSP) can complicate data privacy because of the extent of virtualization (virtual machines) and cloud storage used to implement cloud services.[101] In CSP operations, customer or tenant data may not remain on the same system, in the same data center, or even within the same provider's cloud; this can lead to legal concerns over jurisdiction. While there have been efforts (such as US-EU Safe Harbor) to "harmonise" the legal environment, providers such as Amazon still cater to major markets (typically the United States and the European Union) by deploying local infrastructure and allowing customers to select "availability zones."[102] Cloud computing poses privacy concerns because the service provider can access the data on the cloud at any time, and could accidentally or deliberately alter or even delete information.[103]

Compliance

To comply with regulations including FISMA, HIPAA, and SOX in the United States, the Data Protection Directive in the EU and the credit card industry's PCI DSS, users may have to adopt community or hybrid deployment modes that are typically more expensive and may offer restricted benefits. This is how Google is able to "manage and meet additional government policy requirements beyond FISMA"[104][105] and Rackspace Cloud or QubeSpace are able to claim PCI compliance.[106]
Many providers also obtain a SAS 70 Type II audit, but this has been criticised on the grounds that the hand-picked set of goals and standards determined by the auditor and the auditee are often not disclosed and can vary widely.[107] Providers typically make this information available on request, under non-disclosure agreement.[108][109]
Customers in the EU contracting with cloud providers outside the EU/EEA have to adhere to the EU regulations on export of personal data.[110]
U.S. federal agencies have been directed by the Office of Management and Budget to use a process called FedRAMP (Federal Risk and Authorization Management Program) to assess and authorize cloud products and services. Federal CIO Steven VanRoekel issued a memorandum to federal agency Chief Information Officers on December 8, 2011 defining how federal agencies should use FedRAMP. FedRAMP consists of a subset of NIST Special Publication 800-53 security controls specifically selected to provide protection in cloud environments. Subsets have been defined for the FIPS 199 low categorization and the FIPS 199 moderate categorization. The FedRAMP program has also established a Joint Authorization Board (JAB) consisting of Chief Information Officers from DoD, DHS and GSA. The JAB is responsible for establishing accreditation standards for third-party organizations that perform the assessments of cloud solutions. The JAB also reviews authorization packages and may grant provisional authorization (to operate). The federal agency consuming the service still retains the final authority to operate.[111]
A multitude of laws and regulations have forced specific compliance requirements onto many companies that collect, generate or store data. These policies may dictate a wide array of data storage policies, such as how long information must be retained, the process used for deleting data, and even certain recovery plans. Below are some examples of compliance laws or regulations.
In the United States, the Health Insurance Portability and Accountability Act (HIPAA) requires a contingency plan that includes data backups, data recovery, and data access during emergencies.
The privacy laws of Switzerland demand that private data, including emails, be physically stored in Switzerland.
In the United Kingdom, the Civil Contingencies Act of 2004 sets forth guidance for a business contingency plan that includes policies for data storage.
In a virtualized cloud computing environment, customers may never know exactly where their data is stored. In fact, data may be stored across multiple data centers in an effort to improve reliability, increase performance, and provide redundancies. This geographic dispersion may make it more difficult to ascertain legal jurisdiction if disputes arise.

Legal

As with other changes in the landscape of computing, certain legal issues arise with cloud computing, including trademark infringement, security concerns and sharing of proprietary data resources.
The Electronic Frontier Foundation has criticized the United States government during the Megaupload seizure process for considering that people lose property rights by storing data on a cloud computing service.[113]
One important but not often mentioned problem with cloud computing is the problem of who is in "possession" of the data. If a cloud company is the possessor of the data, the possessor has certain legal rights. If the cloud company is the "custodian" of the data, then a different set of rights would apply. The next problem in the legalities of cloud computing is the problem of legal ownership of the data. Many Terms of Service agreements are silent on the question of ownership.[114]
These legal issues are not confined to the time period in which the cloud based application is actively being used. There must also be consideration for what happens when the provider-customer relationship ends. In most cases, this event will be addressed before an application is deployed to the cloud. However, in the case of provider insolvencies or bankruptcy the state of the data may become blurred.[112]
Vendor lock-in
Because cloud computing is still relatively new, standards are still being developed.[115] Many cloud platforms and services are proprietary, meaning that they are built on the specific standards, tools and protocols developed by a particular vendor for its particular cloud offering.[115] This can make migrating off a proprietary cloud platform prohibitively complicated and expensive.[115]
Three types of vendor lock-in can occur with cloud computing:[116]
Platform lock-in: cloud services tend to be built on one of several possible virtualization platforms, for example VMware or Xen. Migrating from a cloud provider using one platform to a cloud provider using a different platform could be very complicated.
Data lock-in: since the cloud is still new, standards of ownership, i.e. who actually owns the data once it lives on a cloud platform, are not yet developed, which could make it complicated if cloud computing users ever decide to move data off of a cloud vendor's platform.
Tools lock-in: if tools built to manage a cloud environment are not compatible with different kinds of both virtual and physical infrastructure, those tools will only be able to manage data or apps that live in the vendor's particular cloud environment.
Heterogeneous cloud computing is described as a type of cloud environment that prevents vendor lock-in and aligns with enterprise data centers that are operating hybrid cloud models.[117] The absence of vendor lock-in lets cloud administrators select their choice of hypervisor for specific tasks, or deploy virtualized infrastructures to other enterprises without needing to consider the flavor of hypervisor in the other enterprise.[118]
A heterogeneous cloud is considered one that includes on-premise private clouds, public clouds and software-as-a-service clouds. Heterogeneous clouds can work with environments that are not virtualized, such as traditional data centers.[119] Heterogeneous clouds also allow for the use of piece parts, such as hypervisors, servers, and storage, from multiple vendors.[120]
Cloud piece parts, such as cloud storage systems, offer APIs, but these are often incompatible with each other.[121] The result is complicated migration between back ends and difficulty integrating data spread across various locations.[121] This has been described as a problem of vendor lock-in.[121] The solution is for clouds to adopt common standards.[121]
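Pending such common standards, one common way to limit this form of lock-in is to isolate application code behind a thin abstraction layer. The sketch below is illustrative only: both vendor back-end classes are hypothetical stand-ins, and a real adapter would wrap a provider's actual storage API the same way.

# Minimal sketch of insulating application code from incompatible storage
# back ends behind one common interface (illustrative, hypothetical vendors).
from abc import ABC, abstractmethod

class BlobStore(ABC):
    """The single interface the application codes against."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class VendorAStore(BlobStore):
    def __init__(self): self._objects = {}
    def put(self, key, data): self._objects[key] = data              # vendor A's call shape
    def get(self, key): return self._objects[key]

class VendorBStore(BlobStore):
    def __init__(self): self._buckets = {"default": {}}
    def put(self, key, data): self._buckets["default"][key] = data   # vendor B's call shape
    def get(self, key): return self._buckets["default"][key]

def archive(report: bytes, store: BlobStore) -> None:
    store.put("monthly-report", report)   # unchanged whichever vendor sits behind it

archive(b"q3 figures", VendorAStore())
archive(b"q3 figures", VendorBStore())

Migration then means swapping the adapter, not rewriting every call site, although the data itself still has to be copied between providers.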
Heterogeneous cloud computing differs from homogeneous clouds, which have been described as those using consistent building blocks supplied by a single vendor.[122] Intel General Manager of high-density computing, Jason Waxman, is quoted as saying that a homogenous system of 15,000 servers would cost $6 million more in capital expenditure and use 1 megawatt of power.[122]
Open source
See also: Category:Free software for cloud computing
Open-source software has provided the foundation for many cloud computing implementations, prominent examples being the Hadoop framework[123] and VMware's Cloud Foundry.[124] In November 2007, the Free Software Foundation released the Affero General Public License, a version of GPLv3 intended to close a perceived legal loophole associated with free software designed to run over a network.[125]

Open standards

Most cloud providers expose APIs that are typically well-documented (often under a Creative Commons license[126]) but also unique to their implementation and thus not interoperable. Some vendors have adopted others' APIs and there are a number of open standards under development, with a view to delivering interoperability and portability.[127] As of November 2012, the Open Standard with broadest industry support is probably OpenStack, founded in 2010 by NASA and Rackspace, and now governed by the OpenStack Foundation.[128] OpenStack supporters include AMD, Intel, Canonical, SUSE Linux, Red Hat, Cisco, Dell, HP, IBM, Yahoo and now VMware.[129]
Security
Main article: Cloud computing security
As cloud computing is achieving increased popularity, concerns are being voiced about the security issues introduced through adoption of this new model.[1][130] The effectiveness and efficiency of traditional protection mechanisms are being reconsidered as the characteristics of this innovative deployment model can differ widely from those of traditional architectures.[131] An alternative perspective on the topic of cloud security is that this is but another, although quite broad, case of "applied security" and that similar security principles that apply in shared multi-user mainframe security models apply with cloud security.[132]
The relative security of cloud computing services is a contentious issue that may be delaying its adoption.[133] Physical control of private cloud equipment is more secure than having the equipment off site and under someone else's control, and physical control with the ability to visually inspect data links and access ports is required to ensure data links are not compromised. Issues barring the adoption of cloud computing are due in large part to the private and public sectors' unease surrounding the external management of security-based services. It is the very nature of cloud computing-based services, private or public, that promotes external management of provided services. This delivers a great incentive to cloud computing service providers to prioritize building and maintaining strong management of secure services.[134] Security issues have been categorised into sensitive data access, data segregation, privacy, bug exploitation, recovery, accountability, malicious insiders, management console security, account control, and multi-tenancy issues. Solutions to these issues vary, from cryptography, particularly public key infrastructure (PKI), to the use of multiple cloud providers, standardisation of APIs, and improved virtual machine and legal support.[131][135][136]
Cloud computing offers many benefits, but is vulnerable to threats. As cloud computing use increases, it is likely that more criminals will find new ways to exploit system vulnerabilities. Many underlying challenges and risks in cloud computing increase the threat of data compromise. To mitigate the threat, cloud computing stakeholders should invest heavily in risk assessment to ensure that the system encrypts data to protect it, establishes a trusted foundation to secure the platform and infrastructure, and builds higher assurance into auditing to strengthen compliance. Security concerns must be addressed to maintain trust in cloud computing technology.[1]
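One concrete way to reduce the exposure described above is client-side encryption, so the provider only ever stores ciphertext. The sketch below is a minimal illustration using the third-party Python "cryptography" package; key storage and rotation, which are the hard part in practice, are out of scope here.

# Minimal sketch of client-side encryption before data leaves for the cloud.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # keep this secret, outside the cloud
cipher = Fernet(key)

record = b"customer ledger 2013-09"
ciphertext = cipher.encrypt(record)  # this is what actually gets uploaded

# ... later, after downloading the object back from the provider ...
assert cipher.decrypt(ciphertext) == record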
Sustainability
Although cloud computing is often assumed to be a form of green computing, no published study substantiates this assumption.[137]
The primary environmental problem associated with the cloud is energy use. In fact, the cloud uses so much electricity that if it were a country it would be in the top five nations in terms of electricity consumption. Greenpeace ranks the energy usage of the top ten big brands in cloud computing, and successfully urged several companies to switch to clean energy. Phil Radford of Greenpeace said “we are concerned that this new explosion in electricity use could lock us into old, polluting energy sources instead of the clean energy available today.”[138] On Thursday, December 15, 2011, Greenpeace and Facebook announced together that Facebook would shift to use clean and renewable energy to power its own operations.[139][140] Soon thereafter, Apple agreed to make all of its data centers ‘coal free’ by the end of 2013 and doubled the amount of solar energy powering its Maiden, NC data center.[141] Following suit, Salesforce agreed to shift to 100% clean energy by 2020.[142]
As for the servers' contribution to the environmental effects of cloud computing: in areas where the climate favors natural cooling and renewable electricity is readily available, the environmental effects will be more moderate. (The same holds true for "traditional" data centers.) Countries with favorable conditions, such as Finland,[143] Sweden and Switzerland,[144] are therefore trying to attract cloud computing data centers. Energy efficiency in cloud computing can result from energy-aware scheduling and server consolidation.[145] In the case of distributed clouds spanning data centers with different sources of energy, including renewable energy, energy-aware placement could also yield a significant reduction of the carbon footprint.[146]
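To illustrate the server-consolidation idea mentioned above, here is a minimal sketch of a first-fit-decreasing packing of VM loads onto hosts so that idle hosts can be powered down. The figures are illustrative; real schedulers also weigh memory, I/O, network locality and SLAs.

# Minimal sketch of server consolidation via first-fit decreasing packing.
def consolidate(vm_loads, host_capacity=1.0):
    """Return a list of hosts, each a list of the VM loads placed on it."""
    hosts = []
    for load in sorted(vm_loads, reverse=True):
        for host in hosts:
            if sum(host) + load <= host_capacity:
                host.append(load)
                break
        else:
            hosts.append([load])   # no existing host fits: power one on
    return hosts

vms = [0.6, 0.3, 0.3, 0.2, 0.5, 0.1]
placement = consolidate(vms)
print(len(placement), "hosts needed instead of", len(vms))
print(placement)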
Abuse
As with privately purchased hardware, customers can purchase the services of cloud computing for nefarious purposes. This includes password cracking and launching attacks using the purchased services.[147] In 2009, a banking trojan illegally used the popular Amazon service as a command and control channel that issued software updates and malicious instructions to PCs that were infected by the malware.[148]
IT governance
Main article: Corporate governance of information technology
The introduction of cloud computing requires an appropriate IT governance model to ensure a secure computing environment and to comply with all relevant organizational information technology policies.[149][150] As such, organizations need a set of capabilities that are essential to effectively implementing and managing cloud services, including demand management, relationship management, data security management, application lifecycle management, and risk and compliance management.[151] A danger lies in the explosion of companies joining the growth of cloud computing by becoming providers; many of the infrastructural and logistical concerns regarding the operation of cloud computing businesses are still unknown, and this over-saturation may have ramifications for the industry as a whole.[152]
Consumer end storage
The increased use of cloud computing could lead to a reduction in demand for high-storage-capacity consumer devices, as cheaper low-storage devices that stream all content via the cloud become more popular.[citation needed] In a Wired article, Jake Gardner explains that while unregulated usage is beneficial for IT and tech moguls like Amazon, the anonymous nature of the cost of cloud consumption makes it difficult for businesses to evaluate and incorporate it into their business plans.[152] The popularity of the cloud and cloud computing in general is increasing so quickly among all sorts of companies that in May 2013 Amazon, through its Amazon Web Services business, started a certification program for cloud computing professionals.
Ambiguity of terminology
Outside of the information technology and software industry, the term "cloud" can be found referring to a wide range of services, some of which fall under the category of cloud computing and some of which do not. The cloud is often used to refer to a product or service that is discovered, accessed and paid for over the Internet but is not necessarily a computing resource. Examples of services sometimes referred to as "the cloud" include, but are not limited to, crowdsourcing, cloud printing, crowdfunding and cloud manufacturing.[153][154]
Performance interference and noisy neighbors
Because of its multi-tenant nature and resource sharing, cloud computing must also deal with the "noisy neighbor" effect: in a shared infrastructure, the activity of a virtual machine on a neighboring core of the same physical host can degrade the performance of the other VMs on that host, due to issues such as cache contamination. Because neighboring VMs may be activated or deactivated at arbitrary times, the result is increased variation in the actual performance of cloud resources. The effect also depends on the nature of the applications running inside the VMs, as well as other factors such as scheduling parameters, and careful selection can lead to an optimized assignment that minimizes the phenomenon. This has also made it difficult to compare cloud providers on cost and performance using traditional benchmarks for service and application performance, as the time period and location in which a benchmark is performed can produce widely varying results.[155]
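A minimal sketch of quantifying this run-to-run variation follows; the workload function is an arbitrary stand-in, and on a real cloud VM the measured spread would also reflect contention from co-tenants rather than just local noise.

# Minimal sketch: repeat a fixed workload and report the performance spread.
import statistics
import time

def workload():
    return sum(i * i for i in range(200_000))

samples = []
for _ in range(20):
    t0 = time.perf_counter()
    workload()
    samples.append(time.perf_counter() - t0)

mean = statistics.mean(samples)
stdev = statistics.stdev(samples)
print(f"mean {mean*1000:.1f} ms, stdev {stdev*1000:.1f} ms, "
      f"coefficient of variation {stdev/mean:.1%}")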
Monopolies and privatization of cyberspace
Philosopher Slavoj Žižek points out that, although cloud computing enhances content accessibility, this access is "increasingly grounded in the virtually monopolistic privatization of the cloud which provides this access". According to him, this access, necessarily mediated through a handful of companies, ensures a progressive privatization of global cyberspace. Žižek criticises the argument put forward by supporters of cloud computing that this phenomenon is part of the "natural evolution" of the Internet, maintaining that the quasi-monopolies "set prices at will but also filter the software they provide to give its 'universality' a particular twist depending on commercial and ideological interests."[156]
Research

Many universities, vendors, research institutes and government organizations are investing in research around the topic of cloud computing:[157][158]
In October 2007, the Academic Cloud Computing Initiative (ACCI) was announced as a multi-university project designed to enhance students' technical knowledge to address the challenges of cloud computing.[159]
In April 2009, UC Santa Barbara released the first open source platform-as-a-service, AppScale, which is capable of running Google App Engine applications at scale on a multitude of infrastructures.
In April 2009, the St Andrews Cloud Computing Co-laboratory was launched, focusing on research in the important new area of cloud computing. Unique in the UK, StACC aims to become an international centre of excellence for research and teaching in cloud computing and provides advice and information to businesses interested in cloud-based services.[160]
In October 2010, the TClouds (Trustworthy Clouds) project was started, funded by the European Commission's 7th Framework Programme. The project's goal is to research the legal foundations and architectural design needed to build a resilient and trustworthy cloud-of-clouds infrastructure, and to develop a prototype to demonstrate its results.[161]
In December 2010, the TrustCloud research project [162][163] was started by HP Labs Singapore to address transparency and accountability of cloud computing via detective, data-centric approaches[164] encapsulated in a five-layer TrustCloud Framework. The team identified the need for monitoring data life cycles and transfers in the cloud,[162] leading to the tackling of key cloud computing security issues such as cloud data leakages, cloud accountability and cross-national data transfers in transnational clouds.
In June 2011, two Indian universities, the University of Petroleum and Energy Studies and the University of Technology and Management, introduced cloud computing as a subject in India, in collaboration with IBM.[165]
In July 2011, the High Performance Computing Cloud (HPCCLoud) project was kicked off, aiming to explore the possibilities of enhancing performance in cloud environments when running scientific applications, including development of the HPCCLoud Performance Analysis Toolkit. It was funded by the CIM-Returning Experts Programme, under the coordination of Prof. Dr. Shajulin Benedict.
In June 2011, the Telecommunications Industry Association developed a Cloud Computing White Paper, to analyze the integration challenges and opportunities between cloud services and traditional U.S. telecommunications standards.[166]
In December 2011, the VISION Cloud EU-funded project proposed an architecture along with an implementation of a cloud environment for data-intensive services aiming to provide a virtualized Cloud Storage infrastructure.[167]
In December 2012, a study released by Microsoft and the International Data Corporation (IDC) showed that millions of cloud-skilled workers would be needed: millions of cloud-related IT jobs are sitting open, and millions more will open in the coming couple of years, due to a shortage of cloud-certified IT workers.
In February 2013, the BonFIRE project launched a multi-site cloud experimentation and testing facility. The facility provides transparent access to cloud resources, with the control and observability necessary to engineer future cloud technologies, in a way that is not restricted, for example, by current business models.[168]
In April 2013, a report by IT research and advisory firm Gartner, Inc. predicted that app developers will embrace cloud services, with 40% of mobile app development projects using cloud-backed services within three years. Cloud mobile back-end services offer a new kind of PaaS, used to enable the development of mobile apps.

Monday, September 2, 2013

Virtualization Concept and History






What is Virtualization?

Virtualization is a broad topic. As Bob Muglia, senior vice president of the Server and Tools Business at Microsoft Corporation, puts it, "Virtualization is an approach to deploying computing resources that isolates different layers - hardware, software, data, network, storage - from each other."
So simply we can define virtualization as:
A framework or methodology of dividing the resources of a computer hardware into multiple execution environments, by applying one or more concepts or technologies such as hardware and software partitioning, time sharing, partial or complete machine simulation, emulation, quality of service, and many others.
Muglia goes on to say, "Typically today, the operating system is installed directly onto the computer's hardware. Applications are installed directly onto the operating system. The interface is presented through a display connected directly to the local machine. Altering one layer often affects the others, making changes difficult to implement."
"By using software to isolate these layers from each other, virtualization makes it easier to implement changes. The result is simplified management, more efficient use of IT resources, and the flexibility to provide the right computing resources, when and where they are needed."
To understand the concept of virtualization better, let us take a closer look at the history of virtualization.
History of Virtualization
In its conceived form, virtualization was better known in the 1960s as time sharing. Christopher Strachey, the first Professor of Computation at Oxford University and leader of the Programming Research Group, brought this term to life in his paper Time Sharing in Large Fast Computers. Strachey, who was a staunch advocate of maintaining a balance between practical and theoretical work in computing, was referring to what he called multiprogramming. This technique would allow one programmer to develop a program on his console while another programmer was debugging his, thus avoiding the usual wait for peripherals. Multiprogramming, as well as several other groundbreaking ideas, began to drive innovation, resulting in a series of computers that burst onto the scene. Two are considered part of the evolutionary lineage of virtualization as we currently know it: the Atlas and IBM's M44/44X.
The Atlas Computer
The first of the supercomputers of the early 1960s took advantage of concepts such as time sharing, multiprogramming, and shared peripheral control, and was dubbed the Atlas computer. A project run by the Department of Electrical Engineering at Manchester University and funded by Ferranti Limited, the Atlas was the fastest computer of its time. The speed it enjoyed was partially due to a separation of operating system processes in a component called the supervisor and the component responsible for executing user programs. The supervisor managed key resources, such as the computer's processing time, and was passed special instructions, or extra codes, to help it provision and manage the computing environment for the user program's instructions. In essence, this was the birth of the hypervisor, or virtual machine monitor. In addition, Atlas introduced the concept of virtual memory, called one-level store, and paging techniques for the system memory. This core store was also logically separated from the store used by user programs, although the two were integrated. In many ways, this was the first step towards creating a layer of abstraction that all virtualization technologies have in common.
The M44/44X Project
Determined to maintain its title as the supreme innovator of computers, and motivated by the competitive atmosphere that existed, IBM answered back with the M44/44X Project. Based at the IBM Thomas J. Watson Research Center in Yorktown, New York, the project created an architecture similar to that of the Atlas computer. This architecture was the first to coin the term "virtual machines" and became IBM's contribution to the emerging time-sharing system concepts. The main machine was an IBM 7044 (M44) scientific computer hosting several simulated 7044 virtual machines, or 44Xs, implemented using hardware and software, virtual memory, and multiprogramming.
Unlike later implementations of time-sharing systems, the M44/44X virtual machines did not implement a complete simulation of the underlying hardware. Instead, the project fostered the notion that virtual machines could be as efficient as more conventional approaches. To cement that notion, IBM released successors of the M44/44X project that showed this idea was not only true, but could lead to a successful approach to computing.
CP/CMS
A later design, the IBM 7094, was finalized by MIT researchers and IBM engineers and introduced the Compatible Time Sharing System (CTSS). The term "compatible" refers to compatibility with the standard batch-processing operating system used on the machine, the Fortran Monitor System (FMS). CTSS not only ran FMS on the main 7094 as the primary facility for the standard batch stream, but also ran an unmodified copy of FMS in each virtual machine in a background facility. The background jobs could access all peripherals, such as tapes, printers, punch-card readers, and graphic displays, in the same fashion as the foreground FMS jobs, as long as they did not interfere with foreground time-sharing processes or any supporting resources.
MIT continued to value the prospects of time sharing, and developed Project MAC as an effort to develop the next generation of advances in time-sharing technology, pressuring hardware manufacturers to deliver improved platforms for their work. IBM's response was a modified and customized version of its System/360 (S/360) that would include virtual memory and time-sharing concepts not previously released by IBM. This proposal to Project MAC was rejected by MIT, a crushing blow to the team at the Cambridge Scientific Center (CSC), whose only purpose was to support the MIT/IBM relationship through technical guidance and lab activities.
The fallout between the two, however, led to one of the most pivotal points in IBM's history. The CSC team, led by Norm Rasmussen and Bob Creasy, a defector from Project MAC, contributed to the development of CP/CMS. In the late 1960s, the CSC developed the first successful virtual machine operating system based on fully virtualized hardware, CP-40. CP-67 was released as a reimplementation of CP-40 for the S/360-67, and was later carried forward to the S/370. The success of this platform won back IBM's credibility at MIT, as well as several of IBM's largest customers. It also led to the evolution of the platform and of the virtual machine operating systems that ran on it, the most popular being VM/370. VM/370 was capable of running many virtual machines, with larger virtual memory, on virtual copies of the hardware, all managed by a component called the virtual machine monitor (VMM) running on the real hardware. Each virtual machine was able to run a unique installation of IBM's operating system stably and with great performance.
Virtualization Explosion (1990s and after)
Many companies, such as Sun, Microsoft, and VMware, have released enterprise class products that have wide acceptance, due in part to their existing customer bases.
CIO Magazine even has an article on up-and-coming virtualization vendors to keep your eyes on ("10 virtualization vendors to watch in 2008", http://www.cio.com/article/print/160951 ). But why did all this happen so suddenly, and why the intense interest from all kinds of customers in implementing virtualization technologies in their environments?

What is the difference between Aryan and Dravidian?

First, let us look at these terms. "Arya" simply means noble or great: one born into a good lineage, a master, one who does not abandon the work he has taken up...