
Q1. For a long time, organizations have viewed their information systems as proprietary. This means that they developed systems in-house, running on their own hardware. In other words, managing IT meant managing both the systems and the underlying infrastructure. Over time, as they developed and maintained various systems, the underlying infrastructure became a mix of sometimes incompatible systems that are dispersed throughout various organizational units.

Organizations have begun to recognize the need to simplify infrastructure by focusing on common platforms. One approach is the "Software as a Service" model, where organizations buy services such as order processing or customer management rather than developing systems in-house. This is often considered part of what has been termed "Cloud Computing."

How do software services and cloud computing change the way that organizations view their IT infrastructure and IT investments?


Q2. IT Security and Risk

Some of this week's readings deal with risk and security. These are extremely important topics, and they have been widely discussed with the recent news of security breaches at Marriott and other companies (there are almost too many to keep track of).

Are companies taking IT security seriously enough?  Why or why not? 

Communications of the ACM | January 2013 | Vol. 56 | No. 1


Living in a condominium (commonly known as a condo) has its constraints and its services. By defining the lifestyle and limits on usage patterns, it is possible to pack many homes close together and to provide the residents with many conveniences. Condo living can offer a great value to those interested and willing to live within its constraints and enjoy the sharing of common services.

Similarly, in cloud computing, applications run on a shared infrastructure and can gain many benefits of flexibility and cost savings. To get the most out of this arrangement, a clear model is needed for the usage pattern and constraints to be imposed in order to empower sharing and concierge services. It is the clarity of the usage pattern that can empower new Platform as a Service (PaaS) offerings supporting the application pattern and providing services, easing the development and operations of applications complying with that pattern.

Just as there are many different ways of using buildings, there are many styles of application patterns. This article looks at a typical pattern of implementing a Software as a Service (SaaS) application and shows how, by constraining the application to this pattern, it is possible to provide many concierge services that ease the development of a cloud-based application.

Over the past 50 years, it has become increasingly common for buildings to be constructed for an expected usage pattern. Not all buildings fit this mold. Some buildings have requirements that are so unique, they simply need to be constructed on demand—steel mills, baseball stadiums, and even Super Walmarts are so specialized you cannot expect to find one using a real estate agent.

Such custom buildings are becoming increasingly rare, however, while more and more buildings—whether industrial parks, retail offices, or housing—are being constructed in a common fashion and with a usage pattern in mind. They are built with a clear idea of how they will be used but not necessarily who will use them. Each has standard specifications for the occupants, and the new occupants must fit into the space.

A building’s usage pattern may impose constraints but, in turn, offers shared concierge services. A condominium housing development, for example, imposes constraints on parking, noise levels, and barbequing. Residents cannot work on garage projects or gardening projects. In exchange, someone is always on hand to accept their packages and dry cleaning. They may have a shared exercise facility and pool. Somebody else fixes things when they break.

Condos and Clouds

DOI: 10.1145/2398356.2398374

Article development led by

Constraints in an environment empower the services.

By Pat Helland





An office building may have shared bathrooms, copy rooms, and lobby. The engineering for the building is typically common for the whole structure. To get these shared benefits, tenants may have a fixed office layout, as well as some rules for usage. Normally, people cannot sleep at work, cannot have pets at work, and may even have a dress code for the building.

A retail mall provides shared engineering, parking, security, and common space. An advertising budget may benefit all the mall tenants. In exchange, there are common hours, limits on the allowable retail activities, and constraints on the appearance of each store.

Each of these building types mandates constraints on usage patterns and offers concierge services in exchange. To enjoy the benefits, you need to accept the constraints.

Similarly, cloud computing typically has certain constraints and, hence, can offer concierge services in return. What can the shared infrastructure do to make life easier for a sharing application? What constraints must a sharing application live within to fit into the shared cloud?

What Is Cloud Computing?

Cloud computing delivers applications as services over an intranet or the Internet. A number of terms have emerged to characterize the layers of cloud-computing solutions.

˲ SaaS. This refers to the user’s ability to access an application across the Internet or intranet. SaaS has been around for years now (although the term for it is more recent). What is new is the ability to project the application over the Web without building a data center.

˲ PaaS. This is a nascent area in which higher-level application services are supplied by the vendor with two goals: first, a good PaaS can make developing an application easier; second, a good PaaS can make it easier for the cloud provider to share resources efficiently and provide concierge services to the app. Today, the leading examples of PaaS are Salesforce’s Force.com5 and Google’s App Engine.2

˲ IaaS (infrastructure as a service). Sometimes called utility computing, this is virtualized hardware and computing available over the Web. The user of an IaaS can access virtual machines (VMs) and storage to accompany them on demand.

Figure 1 shows the relationship between the cloud and SaaS providers and users. (The figure was derived from a technical report from the University of California at Berkeley, “Above the Clouds: A Berkeley View of Cloud Computing.”3) As observed in “Above the Clouds,” cloud computing has three new aspects: the illusion of infinite computing resources on demand; the elimination of upfront commitment by cloud users; and the ability to pay for computing resources on a short-term basis.

Cloud computing allows the deployment of SaaS—and scaling on demand—without having to build or provision a data center.

Public and private clouds. Clouds are about sharing. The question is whether you share within a company or go to a third-party provider and share across companies.

In a public cloud, a cloud-computing provider owns the data center. Other companies access their computing and storage for a pay-as-you-go fee. This has tremendous advantages of scale, but it is more challenging to manage the trust relationship. Trust ensures that the computing resources are available when they are needed (this could be called an SLA, or service-level agreement). In addition, there are issues of privacy trust in which the subscribing company needs to have confidence its private data will not be accessed by prying eyes. Demonstrating privacy is easier if the company owns the data center.

A public cloud can project its shared resources as VMs and low-level storage, requiring the application to build on what appears to be a pool of physical machines (even though they are really virtual). This would be a public-cloud IaaS. Amazon’s AWS (Amazon Web Service)1 is a leading example of this.

Alternatively, in a public-cloud PaaS, higher-level abstractions can be presented to the applications that allow finer-grained multitenancy than a VM. The shape and form of these abstractions are undergoing rapid evolution. Again, Force.com and App Engine are emerging examples.

Figure 1. Cloud computing, utility computing, and software as a service. (Diagram labels: Web applications; SaaS user; SaaS provider/cloud user; cloud provider; utility computing.)

Figure 2. SaaS computing and storage. (Diagram labels: Internet; data feeds; results; user and system data; back-end data analysis; front-end online Web serving; large read-only and/or updateable reference data for online use.)




In a private cloud, the data center, physical machines, and storage are all owned by the company that uses them. Sharing happens within the company. The usage of the resources can ebb and flow as determined by different departments and applications. The size of the shared cloud is likely to be smaller than within a public cloud, which may reduce the value of the sharing. Still, it is attractive to many companies because they do not need to trust an outside cloud provider. So far, we have seen only private-cloud IaaS. The new PaaS offerings are not yet being made available for individual companies to use in their private clouds.

Forces driving us to the cloud. A number of forces are prompting increased movement of applications to the cloud:

Data-center economics. Very large data centers can offer computation, storage, and networking at a relatively cost-effective price. Power is an ever-increasing portion of data-center costs, and it can be obtained more effectively by placing the data center near inexpensive sources of electricity such as hydroelectric dams. Internet ingress and egress is less expensive near Internet main lines. Containerized servers with thousands of machines delivered in a shipping container offer lower cost for computation and storage. Shared administration of the servers offers cost savings in operations. All of this is included in the enormous price tag for the data center. Few companies can afford such a large investment. Sharing (and charging for) the large investment reduces the costs. This provides economic drive for both the cloud providers and users.

Shared data. Increasingly, companies are finding huge (and serendipitous) value in maintaining a “big-data” store. More and more, vast amounts of corporate data are placed into one store that can be addressed uniformly and analyzed in large computations. In many cases the value of the discoveries grows as the size of the data store increases. It is becoming a goal to store all of an enterprise’s data in a common store, allow analysis, and see surprising value.

Shared resources. By consolidating computation and storage into a shared cloud, it is possible to provide higher utilization of these resources while maintaining strong SLAs for the higher-priority work. Low-priority work can be done during slack times while being preempted for higher-priority work during the busy times. This requires that resources are fluid and fungible so that the lower-priority work can be bumped aside and the resources reallocated to the higher-priority work.

SaaS: Front end, back end, and decision support. Let’s look more closely at a typical pattern seen in a SaaS implementation. In general, the application has two major sections: the front end, which handles incoming Web requests; and the back end, which performs offline background processing to prepare the information needed by the front end. In addition to its work preparing data for the front end, the back-end application is usually shared with decision-support processing (see Figure 2).

In a typical SaaS implementation the front end offers user-facing services dealing with Web services or HTML. It is normal for this Web-serving code to have aggressive SLAs, typically of only 300ms–500ms, sometimes even tighter. The back-end processing consumes crawled data, partner feeds, and logged information generated by the front end and other sources, and it generates reference data for use by the front end. You may see product catalogs and price lists as reference data, or you may see inverted search indices to support systems such as Google or Bing search. In addition to the generation of reference data, the back-end processing typically performs decision-support functions for the SaaS owner. These allow “what-if” analyses that guide the business.

Patterns in SaaS Apps: The Front End

Here, I explore a common pattern used in building the front-end portion of SaaS applications. By leveraging the pattern used by these applications, a number of very useful concierge services can be supplied by the PaaS plumbing.

Many service applications fit nicely within a pattern of behavior. The goal of these applications is to implement the front end of a SaaS application. Incoming Web-service requests or HTML



Session-state management. As each request with a partner is processed, it has the option to record session state that describes the work in progress for that partner. This is automatically managed so that subsequent requests can easily fetch the state to continue work. The session-state manager works with dynamically scalable and load-balanced services. It implements the application’s policy for session-state survival and fault tolerance.

Each of these concierge services depends on the application abiding by the constraints of the usage pattern as described for the typical front-end SaaS application.

Stateless Request Processing

Incoming requests for a service are routed to one of many servers capable of responding to the request. At this point, no state is associated with the session present in the target server (we will get it later if needed). It is reasonable to consider this a stateless request (at least so far) and select any available server.

The plumbing keeps a pool of servers that can implement the service. Incoming requests are dynamically routed and load balanced. As demand increases and decreases, the concierge services of the plumbing can automatically increase and decrease the number of servers.
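This pool-management behavior can be sketched in a few lines of Python. The class name, per-server threshold, and round-robin routing below are invented for illustration; real plumbing would drive these decisions from live telemetry rather than a fixed ratio.

```python
# Sketch of an auto-scaling, load-balanced server pool. All names and
# thresholds are illustrative, not from the article.
import itertools

class ServerPool:
    def __init__(self, min_servers=2, requests_per_server=100):
        self.min_servers = min_servers
        self.requests_per_server = requests_per_server
        self.servers = [f"server-{i}" for i in range(min_servers)]
        self._rr = itertools.cycle(self.servers)

    def resize_for_demand(self, current_rps):
        """Grow or shrink the pool so each server stays under its target load."""
        needed = max(self.min_servers,
                     -(-current_rps // self.requests_per_server))  # ceiling division
        self.servers = [f"server-{i}" for i in range(needed)]
        self._rr = itertools.cycle(self.servers)

    def route(self):
        """Round-robin a stateless request to any available server."""
        return next(self._rr)

pool = ServerPool()
pool.resize_for_demand(current_rps=450)   # demand rises: pool grows to 5 servers
assert len(pool.servers) == 5
pool.resize_for_demand(current_rps=50)    # demand falls: pool shrinks to the minimum
assert len(pool.servers) == 2
```

Because the request is stateless, any server in the pool is an acceptable target, which is exactly what makes the resize-and-reroute step safe.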

Composite request processing. Frequently, a service calls another service to get the job done. The called service may, in turn, call another service. This composite call graph may get quite complex and very deep. Each of these services will need to complete to build the user’s response. As requests come in, the work fans out, gets processed, and then is collected again. In 2007, Amazon reported that a typical request to one of its e-commerce sites resulted in more than 150 service requests.4 Many SaaS applications follow the pattern shown in Figure 3a:

1. A request arrives from outside (either Web service or HTML).

2. The service optionally requests its session state to refresh its memory about the ongoing work.

3. The response comes back from the session-state manager.

4. Other services are consulted if needed.

requests arrive at the system and are processed with a request-response pattern using session state, other services, and cached reference data.

When a front-end application fits into the constraints of the pattern just described, a lot of concierge services may be supplied to the application. These services simplify the development of the app, ease the operational challenges of the service, and facilitate sharing of cloud resources to efficiently meet SLAs defined for the applications. Some possible concierge services include:

Auto-scaling. As the workload rises, additional servers are automatically allocated for this service. Resources are taken back when load drops.

Auto-placement. Deployment, migration, fault boundaries, and geographic transparency are all included. Applications are blissfully ignorant.

Capacity planning. This includes analysis of traffic patterns of service usage back to incoming user workload. Trends in incoming user workload are tracked.

Resource marketplace. The concierge plumbing automatically tracks a service’s cost as it directly consumes resources and indirectly consumes them (by calling other services). This allows the cost of shared services to be attributed and charged back to the instigating work.
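Cost attribution over a composite call graph is essentially a rollup: a service's charge is its direct cost plus the attributed cost of every service it calls. The call graph and cost figures below are invented for illustration.

```python
# Sketch of cost attribution through a composite call graph: charge each
# externally facing request for the work it instigates downstream.
# Services, call edges, and costs are made up for this example.
direct_cost = {"frontend": 1.0, "cart": 0.5, "pricing": 0.25, "session": 0.25}
calls = {"frontend": ["cart", "session"], "cart": ["pricing"],
         "pricing": [], "session": []}

def attributed_cost(service):
    """Direct cost plus the cost of every service called on its behalf."""
    return direct_cost[service] + sum(attributed_cost(c) for c in calls[service])

assert attributed_cost("pricing") == 0.25
assert attributed_cost("cart") == 0.75       # 0.5 direct + 0.25 from pricing
assert attributed_cost("frontend") == 2.0    # 1.0 + 0.75 + 0.25
```

A real marketplace would do this incrementally from metering data rather than recursing over a static graph, but the chargeback arithmetic is the same.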

A/B testing and experimentation. The plumbing makes it easy to deploy a new version of a service on a subset of the traffic and compare the results with the previous version.
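Hash-based bucketing is one common way such plumbing pins a stable subset of traffic to the new version. The function name, experiment key, and 10% split in this sketch are assumptions, not anything from the article; the point is that hashing the user ID keeps each user on one variant across requests.

```python
# Illustrative sketch of deterministic A/B traffic splitting.
import hashlib

def assign_variant(user_id: str, experiment: str, new_version_share: float = 0.1) -> str:
    """Deterministically assign a user to the 'new' or 'previous' version."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64   # uniform in [0, 1)
    return "new" if bucket < new_version_share else "previous"

# The same user always lands in the same bucket:
assert assign_variant("alice", "checkout-v2") == assign_variant("alice", "checkout-v2")
```

Salting the hash with the experiment name keeps assignments independent across experiments, so one experiment's "new" population is not reused for the next.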

Auto-caching and data distribution. The back end of the SaaS application generates reference data (for example, product catalog and price list) for use by the front end. This data is automatically cached in a scalable way, and changes to items within the reference data are automatically distributed to the caches.

Figure 3. (a) The typical front-end SaaS application pattern. (b) The application's focus on business logic. (Diagram labels: application data cache; back-end feed processing; other service; session-state manager; session state; numbered request/response steps.)


5. The other service responds.

6. The application data cache (curated by the back-end processing) is consulted.

7. Cached reference data is returned to the service for use by its front-end app.

8. The response is issued to the caller.

SLAs and request depth. Requests serviced by the SaaS front end will have an SLA. A typical SLA may be a “300ms response for 99.9% of the requests assuming a traffic rate of 500 requests per second.”
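The eight-step flow can be sketched with stand-in services: here dictionaries and a plain function play the roles of the session-state manager, a downstream service, and the reference cache. Every name is invented for illustration.

```python
# Sketch of the front-end request pattern: fetch session state, call other
# services, read cached reference data, record progress, respond.
session_store = {}                              # session-state manager (steps 2-3)
reference_cache = {"sku-1": {"price": 9.99}}    # back-end-curated cache (steps 6-7)

def other_service(query):                       # a downstream service (steps 4-5)
    return f"result-for-{query}"

def handle_request(session_id, query, sku):
    state = session_store.get(session_id, {})   # steps 2-3: fetch session state
    downstream = other_service(query)           # steps 4-5: consult another service
    reference = reference_cache.get(sku)        # steps 6-7: read reference data
    state["last_query"] = query                 # record progress for later requests
    session_store[session_id] = state
    return {"downstream": downstream, "reference": reference}   # step 8: respond

response = handle_request("sess-42", "shoes", "sku-1")
assert response["reference"]["price"] == 9.99
assert session_store["sess-42"]["last_query"] == "shoes"
```

In a real deployment each of these stand-ins is a network service with its own SLA, which is what makes the depth of the call graph matter so much below.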

It is common practice when building services to measure an SLA with a percentile (for example, 99.9%) rather than an average. Averages are much easier to engineer and deploy but will lead to user dissatisfaction because the outlying cases are typically very annoying.
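A small worked example shows why: with a made-up latency sample, the average can look healthy while the 99.9th percentile exposes the stragglers. The nearest-rank percentile function here is a common textbook definition, not something specified in the article.

```python
# Why percentile SLAs differ from averages, on an invented latency sample.
import math

def percentile(samples, p):
    """Nearest-rank percentile: smallest value with at least p% of samples at or below it."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(len(ordered) * p / 100))
    return ordered[rank - 1]

latencies_ms = [20] * 997 + [900, 1500, 2000]   # 1,000 requests, 3 stragglers
avg = sum(latencies_ms) / len(latencies_ms)
assert avg < 30                                 # the average looks healthy...
assert percentile(latencies_ms, 99.9) >= 900    # ...but p99.9 exposes the outliers
```

Engineering to the 99.9th percentile forces attention onto exactly the requests an average hides.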

Pounding on the Services at the Bottom

To implement a systemwide SLA with a composite call graph, there is a lot of pressure on the bottom of the stack. Because the time is factored into the caller’s SLA, deeper stacks mean more pressure on the SLAs.

In many systems, the lowest-level services (such as the session-state manager and the reference-data caches) may have SLAs of 5ms–10ms 99.9% of the time. Figure 4 shows how composite call graphs can get very complex and put a lot of SLA pressure down the call stack.

A Quick Refresher on Simple Queuing Theory

The expected response time is dependent on both the minimum response time (the response time on an empty system) and the utilization of the system. Indeed, the equation is:

Expected Response Time = Minimum Response Time / (1 − Utilization)

This makes intuitive sense. If the system is 50% busy, then the work must be done in the slack, so it takes twice the minimum time. If the system is 90% busy, then the work must get done in the 10% slack and takes 10 times the minimum time.
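The rule of thumb translates directly into code:

```python
# The queuing rule of thumb from the text, as a function. This is the
# stated approximation, not a full queuing model.
def expected_response_time(min_response_time, utilization):
    """Expected response = minimum response / (1 - utilization)."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return min_response_time / (1 - utilization)

assert expected_response_time(10, 0.5) == 20               # 50% busy: twice the minimum
assert abs(expected_response_time(10, 0.9) - 100) < 1e-9   # 90% busy: ten times
```

Note how quickly the denominator bites: going from 90% to 95% utilization doubles the expected response time again.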

Automatic provisioning to meet SLAs. When the SLA for a service is slipping, one answer is to reduce the utilization of the servers providing the service. This can be done by adding more servers to the server pool and spreading the work thinner.

Suppose each user-facing or externally facing service has an SLA. Also, assume the system plumbing can track the calling pattern and knows which internal services are called by the externally facing services. This means that the plumbing can know the SLA requirements of the nested internal services and track the demands on the services deep in the stack.

Given the prioritized needs and the SLAs of various externally facing services, the plumbing can increase the number of servers allocated to important services and borrow or steal from lower-priority work.
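Combining this with the queuing rule of thumb gives a simple provisioning calculation: to hold an SLA, utilization must stay below 1 − (minimum response time / SLA), and servers are added until it does. The capacities and rates below are invented for illustration.

```python
# Sketch of SLA-driven provisioning using the expected-response-time rule
# from this article. All numbers are illustrative.
import math

def servers_needed(offered_rps, per_server_rps, min_response_ms, sla_ms):
    """Servers required to keep utilization low enough to meet the SLA."""
    max_utilization = 1 - min_response_ms / sla_ms   # from expected = min / (1 - util)
    return math.ceil(offered_rps / (per_server_rps * max_utilization))

# 500 req/s offered, each server handles 100 req/s flat out, 30ms floor,
# 300ms SLA: utilization must stay under 90%, so six servers, not five.
assert servers_needed(500, 100, 30, 300) == 6
```

The plumbing can run this arithmetic per service, deep in the stack, because it knows each service's offered load and SLA budget from the call-graph tracking described above.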

Accessing data and state. When a request lands into a service, it initially has no state other than what arrived with the request. It can fetch the session state and/or cached reference data if needed.

The session state provides information from previous interactions that this service had over the session. It is fetched at the beginning of a request and then stored back with additional information as the request is completing.

Most SaaS applications use application-specific information that is prepared in the background and cached for use by the front end. Product catalog, price list, geographical information, sales quotas, and prescription drug interactions are examples of reference data. Cached reference data is accessed by key. Using the key, the services within the front end can read the data. From the front end, this data is read only. The back-end portion of the application generates changes to (or new versions of) the reference data. An example of read-only cached reference data can be seen on the Amazon.com retail site. Look at any product page for the ASIN (Amazon Standard Identification Number), a 10-character identifier usually beginning with “0” or “B.” This unique identifier is the key for all the product description you see displayed, including images.
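The read-only-from-the-front-end discipline can be sketched like this. The ASIN-style key and function names are invented, and Python's `MappingProxyType` merely stands in for whatever enforcement a real cache layer would provide: the back end publishes whole new versions; the front end only reads.

```python
# Sketch: back end publishes reference data; front end gets a read-only view.
from types import MappingProxyType

_reference = {}          # owned by back-end processing

def backend_publish(version):
    """Back end swaps in a whole new version of the reference data."""
    global _reference
    _reference = dict(version)

def frontend_view():
    """Front end sees a read-only view; any write attempt raises TypeError."""
    return MappingProxyType(_reference)

backend_publish({"B000123456": {"title": "Example Product"}})   # invented key
view = frontend_view()
assert view["B000123456"]["title"] == "Example Product"
try:
    view["B000999999"] = {}          # the front end cannot modify the cache
except TypeError:
    pass
```

Swapping in a whole new dictionary per version mirrors the "new versions of the reference data" model: readers always see a consistent snapshot, never a half-applied update.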

Managing scalable and reliable state. The session state is keyed by a session-state ID. This ID comes in on the request and is used to fetch the state from the session-state manager.

Figure 4. Composite call graphs. A composite call graph in a SaaS front end can get very complex. To meet a systemwide SLA, each service deeper in the call stack must meet an ever tighter SLA. The bottom of the call stack can be under enormous pressure to meet tight SLAs. (Diagram labels: request; response; very tight SLA constraints.)



This provides support for a scalable and robust SaaS application.

The application session state, application reference-data cache, and calls to other services are available as concierge services. The platform prescribes how to access these services, and the application need not know what it takes to build them. By constraining the application functionality, the platform can increase the concierge services.

Patterns in SaaS Applications: The Back End

This section explores the patterns used in the back-end portion of a typical SaaS application. What does this back end do for the application? How does it typically do it?

The back end of a SaaS application receives data from a number of sources:

˲ Crawling. Sometimes the back end has applications that look at the Internet or other systems to see what can be extracted.

˲ Data feeds. Partner companies or departments may send data to be ingested into the back-end system.

˲ Logging. Data is accumulated about the behavior of the front-end system. These logs are submitted for analysis by the back-end system.

˲ Online work. Sometimes, the front end directly calls the back end to do some work on behalf of the front-end part of the application. This may be called synchronously (while the user waits) or asynchronously.

All of these sources of data are fed into the back end where they are remembered and processed.

Many front-end applications use reference data that is periodically updated by the back end of the SaaS application. Applications are designed to deal with reference data that may be stale. The general model for processing reference data is:

1. Incoming information arrives at the back end from partners’ feeds, Web crawling, or logs from system activity. Online work may also stimulate the back-end processing.

2. The application code of the back end processes the data either as batch jobs, event processing with shorter latency, or both.

3. The new entries in the reference-data caches are distributed to the caching machines.

An example of session state is a shopping cart on an e-commerce site such as Amazon.com.

The plumbing for the session-state manager handles scaling. As the number of sessions grows, the session-state manager automatically increases its capacity.

This plumbing is also responsible for the durability of the session state. Typically, the durability requirements mandate that the session state survive the failure of a single system. Should this be written to disk on a set of systems in the cloud? Should it be kept in memory over many systems to provide acceptable durability? Increased durability requires more system implementation cost and may require increased latency to ensure the request is durable.

Typically, a session-state manager is used so frequently that it must provide a very aggressive SLA for both reads and writes. A 5ms or 10ms guarantee is not unusual. This means it is not practical to wait for the session state to be recorded on disk. It is common for the session state to be acknowledged as successfully written when it is present on two or three replicas in memory. Shortly thereafter, it will likely be written to disk.
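A toy version of that acknowledgment rule, with in-process dictionaries standing in for replica machines; the replica and quorum counts are illustrative, and a real manager would write the replicas over the network in parallel.

```python
# Sketch: a session write counts as durable once it reaches a quorum of
# in-memory replicas; the disk write can trail behind.
class SessionStateManager:
    def __init__(self, replicas=3, ack_quorum=2):
        self.replicas = [dict() for _ in range(replicas)]
        self.ack_quorum = ack_quorum

    def write(self, session_id, state):
        acked = 0
        for replica in self.replicas:   # stand-ins for separate machines
            replica[session_id] = state
            acked += 1
            if acked == self.ack_quorum:
                return True             # acknowledge before every replica (or disk) has it
        return acked >= self.ack_quorum

    def read(self, session_id):
        for replica in self.replicas:   # any replica holding the state can serve the read
            if session_id in replica:
                return replica[session_id]
        return None

mgr = SessionStateManager()
assert mgr.write("sess-1", {"cart": ["item-1"]}) is True
assert mgr.read("sess-1") == {"cart": ["item-1"]}
```

Acknowledging at the memory quorum is what makes a 5ms-10ms write SLA plausible; waiting for a disk write on every request would not fit that budget.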

Applying changes to the back end. Sometimes, front-end requests actually “do work” and apply application changes to the back end. For example, the user pushes Submit and asks for its work to be completed.

Application changes to the back end may be either synchronous, in which the user waits while the back end gets the work done and answers the request; or asynchronous, in which the work is enqueued and processed later. Amazon.com provides an example of asynchronous back-end app changes. When the user presses Submit, a portion of the front end quickly acknowledges the receipt of the work and replies that the request has been accepted. Typically, the back end promptly processes the request, and the user receives an email message in a second or two. Occasionally, the email message takes 30 minutes or so when the asynchronous processing at the back end is busy.
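The accept-now, process-later pattern reduces to a queue between the two halves of the application. This sketch uses an in-process deque and list as stand-ins for the back end's real work queue and mail system; all names are invented.

```python
# Sketch of synchronous acknowledgment with asynchronous processing.
from collections import deque

work_queue = deque()
sent_mail = []

def frontend_submit(order):
    """Front end: enqueue and acknowledge immediately; the user does not wait."""
    work_queue.append(order)
    return {"status": "accepted"}

def backend_drain():
    """Back end: process queued work later, e.g., sending a confirmation email."""
    while work_queue:
        order = work_queue.popleft()
        sent_mail.append(f"confirmation for {order['id']}")

ack = frontend_submit({"id": "order-7"})
assert ack == {"status": "accepted"}     # user sees acceptance right away
backend_drain()                          # happens seconds (or minutes) later
assert sent_mail == ["confirmation for order-7"]
```

The queue is what decouples the front end's tight SLA from the back end's variable processing time: when the back end is busy, the queue simply grows and the confirmation arrives later.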

Automatic services, state, and data. By understanding the usage pattern of a SaaS application, the platform can lessen the work needed to develop an application and increase its benefits. As suggested in Figure 3b, the application should simply worry about its business logic and not about the system-level issues. Interfaces to call other services, access cached data, and access session state are easy to call.

Figure 5. SaaS application interface from the back end to the front end. (Diagram labels: feeds from partners; crawl the Web; back-end processing (feed and crawl); incoming read requests; automatic pub-sub distribution.)


The changes may be new versions made by batch updates or incremental updates.

4. The front-end apps read the reference-data caches. These are gradually updated, and the users of the front end see new information.

The reference-data cache is a key-value store. One easy-to-understand model for these caches has partitioned and replicated data in the cache. Each cache machine typically has an in-memory store (since disk access is too slow). The number of partitions increases as the size of the data being cached increases. The number of replicas increases initially to ensure fault tolerance and then to support increases in read traffic from the front end.

It is possible to support this pattern in the plumbing with a full concierge service. The plumbing on the back end can handle the partition for data scale (and repartitioning for growth or shrinkage). It can handle the firing up of new replicas for read-rate scale. Also, the plumbing can manage the distribution of the changes made by the back end (either as a batch or incrementally). This distribution understands partitioning, dynamic repartitioning, and the number of replicas dynamically assigned to partitions.
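A toy version of the partition-and-replicate model makes the moving parts concrete. The class and parameter names are invented, and the hash-mod partitioning shown here is deliberately naive: repartitioning it moves nearly every key, which is the cost the consistent-hashing schemes discussed below avoid.

```python
# Sketch: partitions scale with data size, replicas with read traffic.
# Partition choice here is simple hash-mod; dictionaries stand in for
# in-memory cache machines.
class PartitionedCache:
    def __init__(self, partitions=4, replicas_per_partition=2):
        self.partitions = partitions
        self.replicas = replicas_per_partition
        self.stores = [[dict() for _ in range(self.replicas)]
                       for _ in range(partitions)]

    def _partition(self, key):
        return hash(key) % self.partitions   # stable within one process

    def put(self, key, value):
        """Back end distributes a change to every replica of one partition."""
        for replica in self.stores[self._partition(key)]:
            replica[key] = value

    def get(self, key, reader_id=0):
        """Front end reads from any replica of the key's partition."""
        replicas = self.stores[self._partition(key)]
        return replicas[reader_id % self.replicas].get(key)

cache = PartitionedCache()
cache.put("price-list", {"sku-1": 9.99})
assert cache.get("price-list") == {"sku-1": 9.99}
assert cache.get("price-list", reader_id=1) == {"sku-1": 9.99}   # any replica serves reads
```

Growing read traffic means adding replicas per partition; growing data means adding partitions, and the plumbing's job is to do both without the application noticing.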

Figure 5 illustrates how the interface from the back end to the front end in a SaaS application is typically a key-value cache that is stuffed by the back end and read-only by the front end. This clear pattern allows for the creation of a concierge service in a PaaS system, which eases the implementation and deployment of these applications.

Note that this is not the only scheme for dynamic management of caches. Consistent hashing (such as implemented by Dynamo,4 Cassandra,6 and Riak8) provides an excellent option when dealing with reference data. The consistency semantics of the somewhat stale reference data, which is read-only by the front end and updated by the back end, are a very good match. These systems have excellent self-managing characteristics.
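A minimal consistent-hash ring of the kind these systems use shows why it self-manages well: each node owns several virtual points on a ring, a key maps to the first node clockwise from its hash, and adding a cache machine moves only the keys that fall into its new arcs. The node names and virtual-node count are invented for illustration.

```python
# Minimal consistent-hash ring sketch (virtual nodes, clockwise lookup).
import bisect
import hashlib

class HashRing:
    def __init__(self, nodes, vnodes=64):
        self.ring = sorted(
            (self._hash(f"{node}#{v}"), node)
            for node in nodes for v in range(vnodes))
        self.points = [p for p, _ in self.ring]

    @staticmethod
    def _hash(key):
        return int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")

    def node_for(self, key):
        """First node clockwise from the key's position on the ring."""
        i = bisect.bisect(self.points, self._hash(key)) % len(self.ring)
        return self.ring[i][1]

ring = HashRing(["cache-a", "cache-b", "cache-c"])
node = ring.node_for("price-list")
assert node in {"cache-a", "cache-b", "cache-c"}
assert ring.node_for("price-list") == node   # lookups are deterministic
```

Because only the new node's arcs change ownership, growing the cache fleet invalidates a small fraction of keys instead of reshuffling everything, which is exactly the property that suits slowly changing, read-mostly reference data.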

Styles of back-end processing. The back-end portion of the SaaS app may be implemented in a number of different ways, largely dependent on the scale of processing required. These include:

Relational database and normal app. In this case, the computational approach is reasonably traditional. The data is held in a relational database, and the computation is done in a tried-and-true fashion. You may see database triggers, N-tier apps, or other application forms. Typically in a cloud environment, the N-tier or other form of application will run in a VM. This can produce the reference data needed for the front end, as well as what-if business analytics. This approach has the advantage of a relational database but scales to only a few large machines.

Big data and MapReduce. This approach is a set-oriented massively parallel processing solution. The underlying data is typically stored in a GFS (Google File System) or HDFS (Hadoop Distributed File System), and the computation is performed by large batch jobs using MapReduce, Hadoop, or some similar technology. Increasingly, higher-level languages declaratively express the needed computation. This can be used to produce reference data and/or to perform what-if business analytics. Over time, we will see MapReduce/Hadoop over all of an enterprise’s data.

Big data and event processing. The data is sti
