
Microcomputers Expert Witnesses

Bhuvan Urgaonkar, PhD
Available Nationally and Internationally
W371 Westgate Building
University Park PA 16802
USA
phone: 814-933-7211
Bhuvan Urgaonkar, PhD, has over 15 years of experience in the fields of Software Engineering and Computer Systems. His work includes research in computer systems software; distributed computing (including systems such as ZooKeeper, Redis, Memcached, Cassandra, and Kafka); datacenters; cloud computing; storage systems; energy efficiency of computers and datacenters; and big data (including systems such as Hadoop and Spark).

Dr. Urgaonkar has published over 80 research papers in competitive peer-reviewed conferences and journals on these topics, several of which have won best paper awards. He has procured research funding totaling more than $3 million USD from federal agencies and industrial labs (Google, HP, IBM, Cisco, Microsoft, Amazon), which he has used to lead and train several PhD and MS students who now work in industry and academia. Dr. Urgaonkar has also delivered lectures on topics including Cloud Computing and Data Centers to faculty at Microsoft and at numerous other technology companies and universities around the world.

Education / Training:
  • Ph.D., Computer Science, University of Massachusetts Amherst (2005)
  • B.Tech (Honors), Computer Science and Engineering, Indian Institute of Technology, Kharagpur, India

Awards / Honors:
  • Test of Time Award, ACM SIGMETRICS, 2016
  • National Science Foundation CAREER Award, 2010
  • IBM Faculty Fellowships, 2014 and 2016
  • Several best paper awards (see CV)

Litigation Support - Dr. Urgaonkar serves as an expert / technical consultant for multiple firms, helping them (i) understand technical content related to state-of-the-art products in areas such as content distribution, distributed computing, and datacenter design, and (ii) interpret patents in these areas and their connections to state-of-the-art products and services. Services are available to law firms, government agencies, schools, firms / corporations, and hospitals, and include case review, deposition, and trial testimony as needed.

Areas of Expertise:
  • Software Design and Analysis
  • Computer Systems
  • Operating Systems
  • File Systems
  • Databases
  • Computer Security
  • Performance
  • Distributed Computing
  • Data Centers
  • Cloud Computing
  • Big Data Systems
  • Web
  • Internet
  • Content Distribution Networks
  • Software Reliability and Testing
  • Fault Tolerance
  • Hadoop
  • Spark
  • Kubernetes
  • Linux
  • AWS
  • Azure
  • Machine Learning
  • Scheduling
Selected Articles:

7/20/2018 · Computers
Recent technological advances in the development of flash-memory-based devices have consolidated their leadership position as the preferred storage media in the embedded systems market and opened new vistas for deployment in enterprise-scale storage systems. Unlike hard disks, flash devices are free of mechanical moving parts, have no seek or rotational delays, and consume less power. However, the internal idiosyncrasies of flash technology make its performance highly dependent on workload characteristics. The poor performance of random writes has been a major concern that must be addressed to better utilize the potential of flash in enterprise-scale environments. We examine one important cause of this poor performance: the design of the Flash Translation Layer (FTL), which performs the virtual-to-physical address translations and hides the erase-before-write characteristics of flash. We propose a complete paradigm shift in the design of the core FTL engine away from existing techniques with our Demand-based Flash Translation Layer (DFTL), which selectively caches page-level address mappings. We also develop a flash simulation framework called FlashSim. Our experimental evaluation with realistic enterprise-scale workloads endorses the utility of DFTL in enterprise-scale storage systems by demonstrating (i) improved performance, (ii) reduced garbage collection overhead, and (iii) better overload behavior compared to state-of-the-art FTL schemes. For example, a predominantly random-write I/O trace from an OLTP application running at a large financial institution shows a 78% improvement in average response time (due to a 3-fold reduction in garbage collector operations) compared to a state-of-the-art FTL scheme. Even for the well-known read-dominant TPC-H benchmark, for which DFTL introduces additional overheads, system response time improves by 56%.
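The core idea above is selective, on-demand caching of page-level address mappings. The following is a minimal Python sketch of that idea, not the paper's actual DFTL implementation: the `flash` back-end and its `read_translation_entry` / `write_translation_entry` methods are hypothetical stand-ins for the on-flash translation pages, and the cache size is an assumed parameter.

```python
from collections import OrderedDict

CACHE_CAPACITY = 4096  # assumed size of the in-SRAM mapping cache

class DemandFTL:
    """Sketch of a demand-based FTL that caches page-level mappings."""

    def __init__(self, flash):
        # `flash` is a hypothetical back-end exposing the on-flash
        # translation pages that hold the full page-level mapping table.
        self.flash = flash
        self.cached_map = OrderedDict()  # logical page -> physical page (LRU order)

    def translate(self, logical_page):
        """Return the physical page for logical_page, caching on demand."""
        if logical_page in self.cached_map:
            self.cached_map.move_to_end(logical_page)  # refresh LRU position
            return self.cached_map[logical_page]
        # Miss: fetch the entry from its translation page on flash.
        physical_page = self.flash.read_translation_entry(logical_page)
        if len(self.cached_map) >= CACHE_CAPACITY:
            victim, phys = self.cached_map.popitem(last=False)  # evict LRU entry
            # Write the evicted mapping back to its translation page; a fuller
            # design would write back only dirty entries and batch write-backs.
            self.flash.write_translation_entry(victim, phys)
        self.cached_map[logical_page] = physical_page
        return physical_page
```

Keeping only the hot slice of the mapping table in memory is what lets a page-level scheme fit in a flash controller's limited SRAM while avoiding the random-write penalties of coarser-grained mapping schemes.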

6/27/2018 · Computers
Cloud-based hosting promises cost advantages over conventional in-house (on-premise) application deployment. One important question when considering a move to the cloud is whether it makes sense for 'my' application to migrate there. This question is challenging to answer for the following reasons. Although many potential benefits of migrating to the cloud can be enumerated, some may not apply to 'my' application. Also, there can be multiple ways in which an application might make use of the facilities offered by cloud providers. Answering these questions requires an in-depth understanding of the cost implications of all the possible choices specific to 'my' circumstances. In this study, we identify an initial set of key factors affecting the costs of a deployment choice. Using benchmarks representing two different applications (TPC-W and TPC-E), we investigate the evolution of costs for different deployment choices. We show that application characteristics such as workload intensity, growth rate, storage capacity, and software licensing costs produce a complex combined effect on overall costs. We also discuss issues regarding workload variance and horizontal partitioning.
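As a rough illustration of how workload growth interacts with the deployment choice, here is a toy cost model in Python. It is not the study's actual methodology: all prices, server capacities, and the three-year horizon are invented for illustration. The point is only that the cheaper option can flip as the growth rate changes.

```python
import math

def servers_needed(base_load, growth_rate, month, capacity_per_server):
    """Machines required for a workload that grows month over month."""
    load = base_load * (1 + growth_rate) ** month
    return math.ceil(load / capacity_per_server)

def in_house_cost(months, base_load, growth_rate):
    server_price, capacity, ops_per_server = 3000.0, 500, 120.0  # assumed
    owned, total = 0, 0.0
    for m in range(months):
        need = servers_needed(base_load, growth_rate, m, capacity)
        if need > owned:
            total += (need - owned) * server_price  # capital outlay up front
            owned = need
        total += owned * ops_per_server             # power, admin, licenses
    return total

def cloud_cost(months, base_load, growth_rate):
    instance_hourly, capacity = 0.25, 500  # assumed pay-as-you-go pricing
    return sum(
        servers_needed(base_load, growth_rate, m, capacity) * instance_hourly * 730
        for m in range(months)
    )

for growth in (0.00, 0.05, 0.10):
    ih = in_house_cost(36, 1000, growth)
    cl = cloud_cost(36, 1000, growth)
    print(f"growth {growth:.0%}/month: in-house ${ih:,.0f} vs cloud ${cl:,.0f}")
```

Under these assumptions, a flat workload slightly favors pay-as-you-go, while fast growth shifts the comparison because in-house capacity must be purchased ahead of demand; real analyses would also fold in the workload variance and partitioning issues the abstract mentions.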

6/7/2018 · Computers
In this paper, we present techniques for provisioning CPU and network resources in shared hosting platforms running potentially antagonistic third-party applications. The primary contribution of our work is to demonstrate the feasibility and benefits of overbooking resources in shared platforms to maximize the platform yield: the revenue generated by the available resources. We do this by first deriving an accurate estimate of application resource needs by profiling applications on dedicated nodes, and then using these profiles to guide the placement of application components onto shared nodes. By overbooking cluster resources in a controlled fashion, our platform can provide performance guarantees to applications even when overbooked; we combine these techniques with commonly used QoS resource allocation mechanisms to provide application isolation and performance guarantees at run-time.
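A small Python sketch of the profiling-plus-overbooking idea, under assumptions of my own rather than the paper's: each application is provisioned for the 95th percentile of its profiled CPU usage instead of its worst-case peak, and components are then packed first-fit onto unit-capacity shared nodes. The synthetic bursty profiles and the percentile choice are illustrative.

```python
import random

def percentile(samples, pct):
    """pct-th percentile of a list of usage samples (nearest-rank)."""
    s = sorted(samples)
    return s[min(len(s) - 1, int(pct / 100.0 * len(s)))]

def place(needs, node_capacity=1.0):
    """First-fit placement of per-application resource needs onto nodes."""
    nodes = []  # remaining capacity on each node
    for need in needs:
        for i, free in enumerate(nodes):
            if need <= free:
                nodes[i] -= need
                break
        else:
            nodes.append(node_capacity - need)
    return len(nodes)

random.seed(1)
# Synthetic bursty profiles: CPU demand is usually 0.10, spiking to 0.90
# about 2% of the time (as if measured on a dedicated profiling node).
profiles = [
    [0.90 if random.random() < 0.02 else 0.10 for _ in range(1000)]
    for _ in range(50)
]

peak_needs = [max(p) for p in profiles]            # worst-case provisioning
p95_needs = [percentile(p, 95) for p in profiles]  # controlled overbooking

print("nodes needed, peak provisioning:", place(peak_needs))
print("nodes needed, 95th-percentile provisioning:", place(p95_needs))
```

With the percentile estimate, many components share a node that peak provisioning would have dedicated to a single one; the risk is that simultaneous spikes exceed node capacity, which is why the abstract pairs controlled overbooking with QoS mechanisms for isolation and run-time guarantees.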