
Experts.com features Member articles and case scenarios free of charge. Contact Us if you are interested in having your work published on our website and linked to your Profile(s).

Articles on Computers


10/10/2018 · Computers
Dynamic capacity provisioning is a useful technique for handling the multi-time-scale variations seen in Internet workloads. In this article, we propose a novel dynamic provisioning technique for multi-tier Internet applications that employs (1) a flexible queuing model to determine how much of the resources to allocate to each tier of the application, and (2) a combination of predictive and reactive methods that determine when to provision these resources, both at large and small time scales. We propose a novel data center architecture based on virtual machine monitors to reduce provisioning overheads. Our experiments on a forty-machine Xen/Linux-based hosting platform demonstrate the responsiveness of our technique in handling dynamic workloads. In one scenario where a flash crowd caused the workload of a three-tier application to double, our technique was able to double the application capacity within five minutes, thus maintaining response-time targets. Our technique also reduced the overhead of switching servers across applications from several minutes to less than a second, while meeting the performance targets of residual sessions.
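
As a rough illustration of the approach described above (not the authors' system), a predictive forecast can size each tier at a coarse time scale while a reactive check corrects upward when the observed request rate outruns the forecast. The per-server capacities and request rates in this Python sketch are invented placeholders.

    import math

    def servers_needed(arrival_rate, per_server_capacity):
        """Number of servers a tier needs to absorb the given request rate."""
        return math.ceil(arrival_rate / per_server_capacity)

    def provision(predicted_rate, observed_rate, per_server_capacity):
        # Predictive component: allocate for the forecast at a coarse time scale.
        # Reactive component: correct upward quickly when the observed rate
        # exceeds the forecast (e.g., during a flash crowd).
        target_rate = max(predicted_rate, observed_rate)
        return servers_needed(target_rate, per_server_capacity)

    # Hypothetical example: a flash crowd doubles the workload of a three-tier application.
    tiers = {"web": 300.0, "app": 150.0, "db": 75.0}  # requests/sec one server can handle
    predicted = 400.0                                  # forecast requests/sec
    observed = 800.0                                   # observed requests/sec after the surge
    for tier, capacity in tiers.items():
        print(tier, provision(predicted, observed, capacity), "servers")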

9/20/2018 · Computers
Since many Internet applications employ a multi-tier architecture, in this paper, we focus on the problem of analytically modeling the behavior of such applications. We present a model based on a network of queues, where the queues represent different tiers of the application. Our model is sufficiently general to capture (i) the behavior of tiers with significantly different performance characteristics and (ii) application idiosyncrasies such as session-based workloads, concurrency limits, and caching at intermediate tiers. We validate our model using real multi-tier applications running on a Linux server cluster. Our experiments indicate that our model faithfully captures the performance of these applications for a number of workloads and configurations. For a variety of scenarios, including those with caching at one of the application tiers, the average response times predicted by our model were within the 95% confidence intervals of the observed average response times. Our experiments also demonstrate the utility of the model for dynamic capacity provisioning, performance prediction, bottleneck identification, and session policing. In one scenario, where the request arrival rate increased from less than 1500 to nearly 4200 requests/min, a dynamic provisioning technique employing our model was able to maintain response time targets by increasing the capacity of two of the application tiers by factors of 2 and 3.5, respectively.
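
The model described above is a network of queues spanning the tiers; one standard way to evaluate such a closed model is exact Mean-Value Analysis (MVA). The Python sketch below is illustrative only; the service demands, session count and think time are assumptions, not figures from the article.

    def mva(service_demands, sessions, think_time=1.0):
        """Exact MVA for a closed queueing network: returns mean response time,
        throughput and per-tier queue lengths."""
        queues = [0.0] * len(service_demands)
        throughput, response = 0.0, 0.0
        for n in range(1, sessions + 1):
            # Residence time at each tier: service demand inflated by the
            # customers already queued there (arrival theorem).
            residences = [d * (1 + q) for d, q in zip(service_demands, queues)]
            response = sum(residences)
            throughput = n / (think_time + response)
            queues = [throughput * r for r in residences]
        return response, throughput, queues

    # Hypothetical three tiers (web, app, database) with per-request service
    # demands in seconds, 100 concurrent sessions and a 1-second think time.
    resp, tput, q = mva([0.002, 0.010, 0.020], sessions=100)
    print(f"mean response time ~{resp * 1000:.1f} ms, throughput ~{tput:.1f} req/s")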

8/31/2018 · Computers
With public cloud providers poised to become indispensable utility providers, neutrality-related mandates will likely emerge to ensure a level playing field among their customers ("tenants"). We analogize with net neutrality to discuss: (i) what form cloud neutrality might take, (ii) what lessons might the net neutrality debate have to offer, and (iii) in what ways cloud neutrality would be different from (and even more difficult than) net neutrality. We use idealized thought experiments and simple workload case studies to illustrate our points and conclude with a discussion of challenges and future directions. Our paper points to a rich and important area for future work.

8/13/2018 · Computers
Modern Internet-scale distributed networks have hundreds of thousands of servers deployed in hundreds of locations and networks around the world. Canonical examples of such networks are the content delivery networks (CDNs) that we study in this paper. The operating expenses of large distributed networks are increasingly driven by the cost of supplying power to their servers. Typically, CDNs procure power through long-term contracts from co-location providers and pay on the basis of the power (kW) provisioned for them, rather than on the basis of the energy (kWh) actually consumed. We propose the use of batteries to reduce both the required power supply and the incurred power cost of a CDN.
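
The battery proposal amounts to peak shaving against the provisioned power level. The Python sketch below only illustrates the accounting, discharging when demand exceeds the contracted kW and recharging when there is headroom; the demand trace, battery size and efficiency are hypothetical, not values from the paper.

    def peak_shave(demand_kw, provisioned_kw, battery_kwh, interval_h=1.0, efficiency=0.9):
        """Return True if the battery can cover every interval where demand
        exceeds the provisioned power level."""
        charge = battery_kwh
        for demand in demand_kw:
            if demand > provisioned_kw:
                needed = (demand - provisioned_kw) * interval_h / efficiency
                if needed > charge:
                    return False               # battery too small for this peak
                charge -= needed               # discharge to shave the peak
            else:
                spare = (provisioned_kw - demand) * interval_h
                charge = min(battery_kwh, charge + spare * efficiency)  # recharge
        return True

    # Hypothetical diurnal load peaking at 120 kW against a 100 kW contract.
    trace = [80, 90, 110, 120, 115, 95, 85, 80]
    print(peak_shave(trace, provisioned_kw=100, battery_kwh=60))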

7/20/2018 · Computers
Recent technological advances in the development of flash-memory-based devices have consolidated their leadership position as the preferred storage media in the embedded systems market and opened new vistas for deployment in enterprise-scale storage systems. Unlike hard disks, flash devices are free from any mechanical moving parts, have no seek or rotational delays, and consume less power. However, the internal idiosyncrasies of flash technology make its performance highly dependent on workload characteristics. The poor performance of random writes has been a cause of major concern, which needs to be addressed to better utilize the potential of flash in enterprise-scale environments. We examine one of the important causes of this poor performance: the design of the Flash Translation Layer (FTL), which performs the virtual-to-physical address translations and hides the erase-before-write characteristics of flash. We propose a complete paradigm shift in the design of the core FTL engine from the existing techniques with our Demand-based Flash Translation Layer (DFTL), which selectively caches page-level address mappings. We develop a flash simulation framework called FlashSim. Our experimental evaluation with realistic enterprise-scale workloads endorses the utility of DFTL in enterprise-scale storage systems by demonstrating: (i) improved performance, (ii) reduced garbage collection overhead and (iii) better overload behavior compared to state-of-the-art FTL schemes. For example, a predominantly random-write I/O trace from an OLTP application running at a large financial institution shows a 78% improvement in average response time (due to a 3-fold reduction in garbage collection operations), compared to a state-of-the-art FTL scheme. Even for the well-known read-dominant TPC-H benchmark, for which DFTL introduces additional overheads, we improve system response time by 56%.
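
The key idea behind DFTL, as summarized above, is to keep only recently used page-level mappings in memory and fetch the rest from translation pages on flash. The following Python sketch is an illustrative LRU mapping cache, not the paper's implementation; it simply counts the extra flash reads that misses would incur and ignores write-back of dirty mappings.

    from collections import OrderedDict

    class MappingCache:
        def __init__(self, capacity, flash_map):
            self.capacity = capacity      # number of mappings that fit in SRAM
            self.cache = OrderedDict()    # logical page -> physical page (LRU order)
            self.flash_map = flash_map    # full mapping table stored on flash
            self.flash_reads = 0          # extra flash reads caused by misses

        def translate(self, logical_page):
            if logical_page in self.cache:
                self.cache.move_to_end(logical_page)      # cache hit: refresh LRU position
            else:
                self.flash_reads += 1                     # miss: fetch mapping from flash
                self.cache[logical_page] = self.flash_map[logical_page]
                if len(self.cache) > self.capacity:
                    self.cache.popitem(last=False)        # evict least recently used mapping
            return self.cache[logical_page]

    # Hypothetical example: a small cache serving a mostly sequential access pattern.
    full_map = {lpn: lpn + 1000 for lpn in range(10_000)}
    ftl = MappingCache(capacity=256, flash_map=full_map)
    for lpn in list(range(500)) * 2:
        ftl.translate(lpn)
    print("mapping misses:", ftl.flash_reads)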

7/5/2018 · Computers
Companies invest considerable time, effort and money when selecting and implementing a major mission-critical computer system. Successfully completing the process can be complicated and frustrating; as a result, it doesn't take a lot for implementations to fail. The problem is that the business organization that undertook the implementation in the first place is stuck: stuck with the lost time and money and the inconvenience of not having the system it purchased. The company is not only left without the new system, it is back to using the systems it originally thought so inadequate that it committed to spending the time and money on a replacement.

6/28/2018 · Computers
The Reg A Conference is the largest gathering of deal-makers and investors interested in Regulation A, a prime opportunity for companies to network with like-minded business executives, as well as financial professionals who assist in bringing capital to companies (https://theregaconference.com/presenting-companies/). Many such companies are today basing their new business ventures and projects, and their search and submissions for funding, on blockchain technology applications. So-called cryptocurrencies such as bitcoin are just one example of the use of this functionality. The business implications of this secure online record-keeping tech are huge – and not only in cryptocurrency. This presentation provides a probing and extensive expert critique of blockchain, its cryptocurrency, distributed ledger and smart contract applications, and argues for a cautionary, savvy approach to implementing and investing in such business systems, on grounds of professional due diligence, rigorous corporate governance and wide experience of past leading-edge ICT systems failures.

6/27/2018 · Computers
Cloud-based hosting promises cost advantages over conventional in-house (on-premise) application deployment. One important question when considering a move to the cloud is whether it makes sense for 'my' application to migrate to the cloud. This question is challenging to answer for the following reasons. Although many potential benefits of migrating to the cloud can be enumerated, some benefits may not apply to 'my' application. Also, there can be multiple ways in which an application might make use of the facilities offered by cloud providers. Answering these questions requires an in-depth understanding of the cost implications of all the possible choices specific to 'my' circumstances. In this study we identify an initial set of key factors affecting the costs of a deployment choice. Using benchmarks representing two different applications (TPC-W and TPC-E), we investigate the evolution of costs for different deployment choices. We show that application characteristics such as workload intensity, growth rate, storage capacity and software licensing costs produce a complex combined effect on overall costs. We also discuss issues regarding workload variance and horizontal partitioning.
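
A rough sense of the kind of cost evolution the study examines can be conveyed with a toy model: cumulative in-house spend (hardware purchases plus operations) versus pay-as-you-go cloud spend under a growing workload. Every price, capacity and growth rate in this Python sketch is a made-up placeholder, not data from the study.

    def cumulative_costs(months, initial_load, monthly_growth, server_capacity,
                         server_price, server_opex, cloud_price_per_unit):
        """Toy comparison of cumulative in-house versus cloud costs."""
        in_house, cloud, load, servers_bought = 0.0, 0.0, initial_load, 0
        for _ in range(months):
            servers_needed = -(-load // server_capacity)          # ceiling division
            if servers_needed > servers_bought:                   # buy ahead of growth
                in_house += (servers_needed - servers_bought) * server_price
                servers_bought = servers_needed
            in_house += servers_bought * server_opex              # monthly operations
            cloud += load * cloud_price_per_unit                  # pay-as-you-go
            load *= 1 + monthly_growth
        return in_house, cloud

    onprem, cloud = cumulative_costs(months=36, initial_load=1000, monthly_growth=0.05,
                                     server_capacity=500, server_price=4000,
                                     server_opex=150, cloud_price_per_unit=0.5)
    print(f"3-year in-house ~${onprem:,.0f} vs cloud ~${cloud:,.0f}")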

6/7/2018 · Computers
In this paper, we present techniques for provisioning CPU and network resources in shared hosting platforms running potentially antagonistic third-party applications. The primary contribution of our work is to demonstrate the feasibility and benefits of overbooking resources in shared platforms to maximize the platform yield: the revenue generated by the available resources. We do this by first deriving an accurate estimate of application resource needs by profiling applications on dedicated nodes, and then using these profiles to guide the placement of application components onto shared nodes. By overbooking cluster resources in a controlled fashion, our platform can provide performance guarantees to applications even when overbooked; we combine these techniques with commonly used QoS resource allocation mechanisms to provide application isolation and performance guarantees at run-time.
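
The core mechanism described above, profiling applications to estimate their needs and then overbooking shared nodes in a controlled way, can be sketched as sizing each application by a high percentile of its profiled usage and packing the results first-fit onto nodes. The percentile, node capacity and usage profiles in this Python sketch are illustrative assumptions, not the paper's algorithm.

    import random

    def profile_demand(samples, percentile=0.95):
        """Estimate an application's CPU need from profiled usage samples,
        overbooking by ignoring the worst-case tail above the percentile."""
        ordered = sorted(samples)
        return ordered[int(percentile * (len(ordered) - 1))]

    def place(applications, node_capacity=1.0):
        """First-fit placement of overbooked demands onto shared nodes."""
        nodes = []
        for app, demand in applications:
            for node in nodes:
                if node["used"] + demand <= node_capacity:
                    node["apps"].append(app)
                    node["used"] += demand
                    break
            else:
                nodes.append({"apps": [app], "used": demand})
        return nodes

    # Hypothetical profiles: ten applications with CPU usage sampled on dedicated nodes.
    random.seed(0)
    apps = [(f"app{i}", profile_demand([random.uniform(0.05, 0.4) for _ in range(200)]))
            for i in range(10)]
    for node in place(apps):
        print(node["apps"], round(node["used"], 2))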

6/5/2018 · Computers
Implementing a mission-critical computer system has a significant impact upon a business organization. Successfully completing the process can be complicated and frustrating; as a result, it doesn't take a lot for the implementation to fail. Once an implementation fails, there is always enough blame to go around. The problem is that the business organization that undertook the implementation in the first place is stuck: stuck with the lost time and money and the inconvenience of not having the system it purchased. The company is not only left without the new system, it is back to using the systems it originally thought so inadequate that it committed to spending the time and money on a replacement. Successful litigation will help, but it will not make the company whole; worse, a failed litigation only compounds the problem. Preparing the strategy for litigating a failed implementation takes time and thought. This article takes the reader through the development and implementation of a litigation strategy that worked and discusses why it worked.
