Configurable SOAP Proxy Cache for Data Provisioning Web Services

Peep Küngas1 and Marlon Dumas2

1 Software Technology and Applications Competence Center, Estonia
[email protected]
2 Institute of Computer Science, University of Tartu, Estonia
marlon.dumas@ut.ee

Abstract. This paper presents a proxy cache for data provisioning SOAP-based Web services. The proposed solution relies on a novel caching scheme to efficiently identify cache hits and provides fine-grained configurability by supporting the definition of caching policies at the service and at the operation levels. The proposal has been evaluated using the logs of service calls of a Web service marketplace. The evaluation results show that an appropriately configured SOAP cache provides major performance benefits in practical settings. Furthermore, it is shown that the LRU (Least Recently Used) replacement policy provides the most effective hit ratio, byte hit ratio and delay savings ratio.

1 Introduction

Web caching is an established technology for reducing bandwidth utilization, server load and latency. Three types of Web caches can be distinguished [7]: client-side caches, proxy caches and reverse caches. Client-side caches are built into virtually every modern Web browser and are intended for caching objects for a single user. Proxy caches are often located near network gateways and enable multiple users to share cached objects. Finally, reverse caches (a.k.a. httpd accelerators) are placed directly in front of a server in order to reduce the number of requests the server must handle. Web caching is traditionally associated with HTML pages and associated Web objects [21, 15], but caching has also been investigated for SOAP Web services [8, 18]. The fundamental difference between caching of Web objects and SOAP caching lies in the fact that Web objects are fully identified by a URL, and the caching mechanism uses URLs to match cached objects. In contrast, when caching SOAP Web service responses, the URL only determines the service endpoint. SOAP caches need to manipulate the contents of Web service requests and responses to identify and process cache hits. This requirement imposes an additional overhead on the cache, which needs to be taken into account when designing SOAP caching schemes. Also, in traditional Web content caching, the type of operation (GET, POST) and the associated HTTP fields greatly help in determining whether or not to cache. Meanwhile, in the context of SOAP Web services, the nature of the operation (transaction vs. retrieval) is usually

not explicitly exposed. Accordingly, fine-grained configurability at the service or even at the operation level is essential in a SOAP caching solution. A number of previous studies on SOAP caching are driven by the need to handle communication failures on handheld devices, and thus focus on client-side caching (e.g. [8, 18]). Other relevant studies have focused on server-side SOAP caching with an emphasis on reducing the server load originating from serialization and deserialization of SOAP messages [1, 19]. Only a handful of studies [13, 14] have addressed the problem of SOAP proxy caching. Yet it is understood that proxy caching has potentially high benefits in practical settings [20, 9]. Furthermore, existing SOAP proxy caching methods require cache-specific modifications at the client side and are thus not transparent. This paper presents a high-performance, configurable and transparent proxy cache for SOAP traffic. In order to minimize the performance overhead, the proposed caching solution applies a novel caching scheme to efficiently identify cached SOAP message content that can be reused to serve a given request. Configurability is achieved by allowing users to define caching policies at the service and the operation level. Finally, the proxy cache operates entirely on the SOAP messages and does not require any modifications in the client applications, thus providing full transparency. The proposed caching solution has been evaluated based on real traffic logs of a Web service marketplace. The evaluation results show that SOAP caching can provide substantial benefits in practical settings. Furthermore, we identify that the LRU (Least Recently Used) replacement policy provides the most effective hit ratio, byte hit ratio and delay savings ratio – a result that is in line with similar results in traditional Web object caching. The following section describes the proposed proxy cache solution. Next, Section 3 presents the experimental setup used to evaluate the proposed solution, while Section 4 analyzes the experimental results. Section 5 reviews related work, while Section 6 concludes and outlines future research directions.

2 Proxy Cache Solution

The workflow of the default caching scheme of the proposed proxy cache is depicted in Fig. 1. After receiving a request from a client, the request body content is first normalized using C14N for XML canonicalization and N12N for namespace normalization. The main reason for applying normalization is that different applications generate SOAP requests with the same content in slightly different ways, using different styles for declaring namespaces, writing XML tags and formatting the overall XML structure. After normalization, the MD5 hashing algorithm is applied to the normalized SOAP body – SOAP headers are not considered at this stage of the workflow. The resulting hash value of the normalized Web service request is used to look up whether a response for the request is available in the cache. The rationale for hashing is that matching hash values is significantly faster than comparing XML documents either structure- or content-wise. In the proposed workflow,

full document comparison is only performed for a small number of documents in a hash bucket.
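
To make the scheme concrete, the following Java sketch shows the lookup and store steps of the above workflow. It is a minimal illustration under stated assumptions, not the actual implementation: all class and method names are ours, and the canonicalization step is reduced to a placeholder where a real deployment would invoke an XML canonicalization (C14N) and namespace normalization (N12N) library.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    class SoapCache {
        // Hash buckets: MD5 of the normalized request body -> cached entries.
        private final Map<String, List<CacheEntry>> buckets = new HashMap<>();

        record CacheEntry(String normalizedRequest, String response) {}

        String lookup(String soapBody) throws Exception {
            String normalized = canonicalize(soapBody); // C14N + N12N step
            List<CacheEntry> bucket = buckets.get(md5Hex(normalized));
            if (bucket == null) return null;            // cache miss
            // Guard against hash collisions: full comparison is performed
            // only for the small number of documents in the bucket.
            for (CacheEntry e : bucket) {
                if (e.normalizedRequest().equals(normalized)) return e.response();
            }
            return null;
        }

        void store(String soapBody, String response) throws Exception {
            String normalized = canonicalize(soapBody);
            buckets.computeIfAbsent(md5Hex(normalized), k -> new ArrayList<>())
                   .add(new CacheEntry(normalized, response));
        }

        private static String md5Hex(String s) throws Exception {
            byte[] digest = MessageDigest.getInstance("MD5")
                    .digest(s.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : digest) sb.append(String.format("%02x", b));
            return sb.toString();
        }

        private static String canonicalize(String xml) {
            // Placeholder: a real implementation would apply XML
            // canonicalization and namespace normalization here.
            return xml.trim();
        }
    }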

Fig. 1. Default caching workflow.

If a match is found for a request (an event called a “cache hit”), the corresponding response is returned. If no cache entry is found (a cache miss), the request is forwarded to the service provider and the cache waits for a response. If caching is enabled for the particular operation, the response is first cached (together with the request hash value and a copy of the full request) and then the answer is returned to the requester. If the operation is marked as uncacheable, the response is not cached.

In the case of a cache hit, the default behavior is to return the cached response (including both headers and body). Alternatively, administrators can associate an XSL transformation with an operation or a service. Every time a SOAP message related to this operation/service is reused from the cache, the transformation is applied to a document consisting of the SOAP header of the incoming request, the SOAP header of the cached response, and the SOAP body of the cached response. This transformation is useful, for example, for messages containing WS-Addressing headers, in order to populate the relatesTo field and other WS-Addressing headers appropriately.

To determine which services to cache, caching policies can be defined at the global level (applies to all operations of all services), the service level (applies to all operations of a service) or the operation level (applies to a specific operation of a service). This feature enables administrators to apply differential caching depending on the nature of operations, e.g. to distinguish between a transactional operation whose results should not be cached and an operation that queries a static or rarely changing data source. The following attributes can be used for defining caching policies:

– for error compensation only—use a cached response in case of a fault at the request’s target endpoint. Note however that the cache is not used in the presence of reliability-related headers (sequence numbers), in order to avoid interference with the reliability protocol.
– maximum number of hits—use a cached response a fixed number of times.
– maximum validity period—use a cached response for a fixed time period.
– expiry deadline—use a cached response until a given date/time.

In addition, the following cache-wide configuration parameters are supported (a policy-resolution sketch follows these lists):

– caching with or without SOAP request normalization (C14N, N12N);
– maximum cache size in bytes;
– cache size for triggering replacement;
– the cache replacement policy: LRU (Least Recently Used), LFU (Least Frequently Used) or Size (the largest request-response pair is removed first).
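
To illustrate how the three policy levels interact, the sketch below resolves the most specific applicable policy for an incoming request. The class structure and names are our assumptions; only the attributes themselves come from the lists above.

    import java.util.HashMap;
    import java.util.Map;

    class CachePolicy {
        boolean cacheable = true;
        boolean errorCompensationOnly = false;           // serve from cache only on endpoint fault
        int maxHits = Integer.MAX_VALUE;                 // maximum number of hits
        long maxValidityMillis = Long.MAX_VALUE;         // maximum validity period
        long expiryDeadlineEpochMillis = Long.MAX_VALUE; // expiry deadline
    }

    class PolicyRegistry {
        private final CachePolicy globalPolicy = new CachePolicy();
        private final Map<String, CachePolicy> servicePolicies = new HashMap<>();
        private final Map<String, CachePolicy> operationPolicies = new HashMap<>();

        void setServicePolicy(String service, CachePolicy p) {
            servicePolicies.put(service, p);
        }

        void setOperationPolicy(String service, String operation, CachePolicy p) {
            operationPolicies.put(service + "#" + operation, p);
        }

        // The most specific policy wins: operation level, then service
        // level, then the global default.
        CachePolicy resolve(String service, String operation) {
            CachePolicy p = operationPolicies.get(service + "#" + operation);
            if (p != null) return p;
            p = servicePolicies.get(service);
            return p != null ? p : globalPolicy;
        }
    }

With such a registry, an administrator could, for example, register an operation-level policy with cacheable set to false for a transactional operation, while the surrounding service remains cacheable under the global default.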

The effect of the replacement policy on cache hits is discussed in the following sections, along with the benefits of caching on overall performance.

3 Evaluation setup

The proposed proxy cache solution has been implemented as a servlet for the JBoss AS 4.2.3 application server and uses MySQL 5.0+ as a database for storing cached results. The implementation is available at http://code.google.com/p/soa-trader-web-services-delivery-middleware/. In this section we report on the experimental setup used to evaluate the proposal.

3.1 Methodology

Davison [7] reviews proxy cache evaluation methodologies based on the source of the workload used and the algorithms applied. He identifies the following workload sources for experimental evaluations: artificial workloads, captured logs and current requests. In another dimension, he classifies algorithm implementations as based on simulated systems/network, real systems/isolated network and real systems/real network. He identifies the most commonly used cache evaluation method as simulation over a benchmark log of object requests (mostly actual client request logs, in a few cases an artificial dataset with the necessary characteristics, such as appropriate average and median object sizes, or similar long-tail distributions of object sizes and object repetitions), from which byte and page hit ratio savings can be calculated, as well as estimates of latency improvements. In order to strike a tradeoff between a realistic setup and reproducibility, we chose, following Davison [7], to use actual client request logs and to replay them on a simulated network. Since our collection of logs includes the SOAP message bodies of requests and responses, the results are not affected by the life-cycle of the particular Web services, e.g. availability issues. Furthermore, we used the request-response delivery times recorded in the logs to simulate network latency. A similar technique is used by Gadde et al. [10] to evaluate the Crispy Squid Web proxy.
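
A trace-driven replay of this kind can be sketched as follows, reusing the SoapCache sketch from Section 2. The LogRecord shape mirrors the logged metadata described in Section 3.2 (message bodies plus recorded delivery times); the field names and the way misses are charged with the recorded latency are our assumptions.

    import java.util.List;

    record LogRecord(String requestBody, String responseBody, long deliveryMillis) {}

    class Replay {
        // Replays a captured trace against the cache and returns the total
        // simulated delay, charging each miss with its recorded latency.
        static long simulate(SoapCache cache, List<LogRecord> trace) throws Exception {
            long totalDelay = 0;
            for (LogRecord r : trace) {
                String cached = cache.lookup(r.requestBody());
                if (cached == null) {
                    // Miss: the recorded delivery time stands in for the
                    // network/processing latency of calling the real service.
                    totalDelay += r.deliveryMillis();
                    cache.store(r.requestBody(), r.responseBody());
                }
                // Hit: served from cache; latency assumed negligible here.
            }
            return totalDelay;
        }
    }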

3.2 Data

For the evaluation, we extracted client request logs from an actual SOAP message broker – the SOA Trader marketplace (http://ws.soatrader.com). Logs were

collected for the period 6 June 2007 to 15 March 2009. The logs included the bodies of 12,149 request-response message pairs, with additional metadata for each pair including message delivery and processing latency, timestamps of received and returned messages, user identifiers, etc. The majority of requests were to services of the Estonian Business Registry, regarding business registration details, official contact information and annual reports. All request-responses in the evaluation were cacheable (non-cacheable responses were not included in the dataset because they do not contribute to the evaluation of the solution). Before performing the experiments we processed the data as follows. We replaced the performance (message delivery time) records for already cached results in the logs with the average time to call the particular operation. We also discarded from the dataset requests that resulted in SOAP fault messages and requests without responses. After processing we had 9,996 request-response pairs with an accumulated request size of 6,115,509 bytes and an accumulated response size of 51,154,353 bytes. The cumulative response time for all messages was originally 4,081,234 milliseconds. The response size distribution is shown in Fig. 2. Kim and Rosu [11] have estimated the average size of SOAP messages while analyzing publicly available Web services and concluded that 92% of SOAP messages are smaller than 2kB. Our distribution in Fig. 2 follows the same trend—the majority of requests are below the 2kB limit.

Fig. 2. Response size distribution.

3.3 Cache Size

Cache size determines the amount of space available for storing cached objects. Arlitt et al. [2] examined seven different levels for this factor in their study: 256 MB, 1 GB, 4 GB, 16 GB, 64 GB, 256 GB and 1 TB. Each level was a factor of four larger than the previous, which allowed the authors to easily compare the performance improvement relative to the increase in cache size. The smaller cache sizes (e.g., 256 MB to 16 GB) indicated likely cache sizes for Web proxies.

The larger values (64 GB to 1 TB) indicated the performance of the cache when a significant fraction of the total requested object set was cached. The largest cache size (1 TB) made it possible to store the entire object set and thus indicated the maximum achievable performance of the cache. Although the accumulated response size of our dataset was 51,154,353 bytes, only 8,641,381 bytes were required to store all cacheable responses, owing to the high degree of potential reuse. Thus, in the experiments we varied the cache size across four levels (1000kB, 2000kB, 4000kB and 8000kB) to evaluate cache behavior and the effectiveness of cache replacement policies under different conditions. A cache size of 8000kB allowed us to cache almost all requests without any need to apply cache replacement policies, while 1000kB represented a situation where just enough responses were cached to observe some cache hits.
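
As a sketch of the LRU variant used in these experiments, the byte-bounded cache below evicts least-recently-used entries once the configured byte budget is exceeded. The structure is illustrative and not taken from the actual implementation; a LinkedHashMap in access order conveniently iterates from least to most recently used entries.

    import java.util.Iterator;
    import java.util.LinkedHashMap;
    import java.util.Map;

    class LruByteCache {
        private final long maxBytes;
        private long currentBytes = 0;
        // accessOrder = true: iteration visits least-recently-used entries first.
        private final LinkedHashMap<String, byte[]> entries =
                new LinkedHashMap<>(16, 0.75f, true);

        LruByteCache(long maxBytes) { this.maxBytes = maxBytes; }

        byte[] get(String key) { return entries.get(key); } // refreshes LRU order

        void put(String key, byte[] response) {
            byte[] old = entries.put(key, response);
            if (old != null) currentBytes -= old.length;
            currentBytes += response.length;
            // Evict least-recently-used entries until the byte budget is met.
            Iterator<Map.Entry<String, byte[]>> it = entries.entrySet().iterator();
            while (currentBytes > maxBytes && it.hasNext()) {
                Map.Entry<String, byte[]> eldest = it.next();
                if (eldest.getKey().equals(key)) continue; // keep the new entry
                currentBytes -= eldest.getValue().length;
                it.remove();
            }
        }
    }

Under this sketch, new LruByteCache(1000 * 1000) would correspond to the 1000kB stress setting; LFU and Size variants would differ only in the order in which entries are selected for eviction.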

4 Analysis

4.1 Load balancing

To understand the benefits of the proxy cache from the server load balancing viewpoint, we ran two simulations with a cache size of 1000kB: one with a single client and another with 25 concurrent clients. We used the Size replacement policy since it has been recommended for Web proxy caches by Arlitt et al. [2]. Simulation results for single access are depicted in Fig. 3(a), while simulation results for 25 concurrent accesses are depicted in Fig. 3(b). The figures show average response times of the services from the load simulator with respect to the cache hit ratio, varying from 0 to 1, where 0 means that the cache is empty and 1 means that the cache is full.

Fig. 3. Average response time with (b) and without (a) concurrent access.

When running simulations without concurrent access, no new request was issued until the reply to the current request had been received. With concurrent access, at most 25 requests were issued simultaneously, but each client did not issue a new request before receiving a response. Response times were simulated according to the response times of the corresponding requests in the original logs.

We found that caching significantly helps to balance the load under concurrent access, especially in heavy traffic situations. Fig. 3(a) shows that the cache in its warm-up phase does not contribute much to balancing the peak load, while Fig. 3(b) demonstrates that the proxy cache has a significant effect on balancing peak loads from concurrent access even in its warm-up phase, where the hit ratio is relatively small. Specifically, Fig. 3(a) shows that the proxy cache with a hit ratio of about 0.05 exhibits about 100 times higher response times than the cache with a hit ratio of nearly 1. The same trend, though less pronounced, applies to concurrent access, where usage of the cache balances the load more effectively.

4.2 Hit ratios

Two common metrics for evaluating the performance of a Web proxy cache are hit ratio and byte hit ratio [3]. While the hit ratio is the percentage of all requests that can be satisfied by searching the cache for a copy of the requested object, the byte hit ratio represents the percentage of all data that is transferred directly from the cache rather than from the origin server. Sometimes the delay savings ratio [15], introduced to capture the improved performance of algorithms that use download delays (as in the HYBRID replacement policy), is also used. Due to the statistical fluctuations of download delays, performance results based on this metric can be unstable, so it is seldom used; we nevertheless made use of it, since our dataset included the required timing information. Finally, the cost savings ratio as well as different QoS metrics have been mentioned in the literature. To figure out which cache replacement policy suits best for Web services caching, we measured hit ratio, byte hit ratio and delay savings ratio with LFU, LRU and Size for a cache size of 1000 kB. This cache size turned out to be suitable for identifying the main characteristics of the particular replacement policies with our dataset under stress conditions. Hit ratio, byte hit ratio and delay savings ratio are depicted in Fig. 4(a), Fig. 4(b) and Fig. 4(c), respectively. In all these figures the x-axis shows the proportion of the cache filled throughout the entire simulation session. The cache hit ratio curves for Size and LRU in Fig. 4(a) fit well the curve presented by Li et al. [13] in their paper on Web services caching without any replacement policies, and thus confirm that their synthetically generated dataset has the characteristics exhibited by the execution of real Web services in practical settings. All the figures show that LRU outperforms the other replacement policies under stress conditions. LFU has significant drawbacks compared to Size and LRU in stressed settings because under LFU the records discarded first are those that have not yet been reused; after a while the cache is therefore filled with requests that have been reused several times, while no space is left for storing new requests. The Size replacement policy provides a mediocre byte hit ratio since it deletes first the requests with the largest responses, whose reuse would give the highest performance gain. Finally, the near-perfect performance of LRU can be explained by the usage pattern of the Web services, where similar requests are made within the same time window and, as the window shifts, new requests become relevant. Therefore removing older request-responses has virtually no effect on the hit, byte hit and delay savings ratios.

(a)

(b)

(c) Fig. 4. Ratios for different replacement policies.

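In terms of computation, the three ratios reduce to simple counters over the replayed trace. The sketch below is a direct transcription of the definitions given at the beginning of this subsection; the field names are our own.

    class Metrics {
        long requests, hits;
        long bytesTotal, bytesFromCache;
        long delayTotal, delaySaved;

        // Record one replayed request: whether it hit the cache, the size of
        // its response, and the origin-server delay it would otherwise incur.
        void record(boolean hit, long responseBytes, long originDelayMillis) {
            requests++;
            bytesTotal += responseBytes;
            delayTotal += originDelayMillis;
            if (hit) {
                hits++;
                bytesFromCache += responseBytes;
                delaySaved += originDelayMillis;
            }
        }

        double hitRatio()          { return (double) hits / requests; }
        double byteHitRatio()      { return (double) bytesFromCache / bytesTotal; }
        double delaySavingsRatio() { return (double) delaySaved / delayTotal; }
    }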

Fig. 5. LRU ratios with respect to cache usage with cache size of 1000 kB.

To better understand the behavior of LRU on our dataset, we analyzed how the ratios evolve as the cache fills up when the maximum cache size is set to 1000kB. The results in Fig. 5 show a very sharp rise in hit ratio in the early phase of cache usage, which then grows almost monotonically until the cache is filled. A similar growth pattern is observed for the byte hit ratio and the delay savings ratio, though both show a sharp decrease after about 5% of the cache has been filled, while the hit ratio undergoes only a minor decrease. The reason is that the beginning of our dataset contains a large number of requests with small response sizes and response times. Although the cache hit ratio remains almost the same while caching such requests, a negative effect on the byte hit and delay savings ratios is seen, due to the relatively small benefit of reusing such responses. Finally, given the good performance of LRU under stress conditions, we were interested in how LRU and the other replacement policies perform with different cache capacities. Therefore we ran simulations with maximum cache sizes of 1000kB, 2000kB, 4000kB and 8000kB to evaluate cache behavior and effectiveness with respect to cache replacement policies under different stress conditions, where the maximum cache size of 1000kB was the stress condition used as a baseline and 8000kB was enough to simulate a cache with unlimited capacity. Experimental results of the simulations with LRU, Size and LFU are depicted in Fig. 6(a), Fig. 6(b) and Fig. 6(c), respectively. The performance history in Fig. 6(a) shows that a 1000kB cache under LRU gives the same hit ratio performance as a cache with unlimited capacity. At the same time, the same capacity of 1000kB gives near-optimal hit ratio performance for Size, while LFU shows relatively poor performance with cache capacities of 1000kB and 2000kB.

Fig. 6. Effect of cache size under LRU.

4.3 Threats to Validity

By choosing to use captured logs and by using local servers and an isolated network, we eliminated the variability in service performance, which could otherwise affect the reliability of the measured performance. This methodology is, however, widely accepted in the caching community because it allows reproducibility of the experimental results. The dataset was relatively small for drawing definite conclusions. On the other hand, the data was collected over a period of about two years and characterizes actual Web service usage. Furthermore, the cache hit ratio curves for Size and LRU in Fig. 4(a) fit well the curve presented by Li et al. [13], meaning that our dataset has characteristics similar to the data in previous research. Finally, we did not measure the throughput of our proxy cache. The reason is that when measuring the performance of the individual modules of the cache, we observed that the application of MD5, N12N, C14N and other operations took only a few nanoseconds per request. Thus we did not see a need to precisely measure and report the processing overhead.

5 Related Work

Arlitt et al. [2] studied the effects of different workload characteristics on the replacement decisions made by a Web cache. The workload characteristics include object size, recency of reference, frequency of reference and turnover in the active set of objects. The evaluation was based on more than 117 million requests recorded at an active proxy cache. The authors conclude that frequency-based policies work best when the goal is to achieve higher byte hit ratios, as they keep the most popular objects, regardless of size, in the cache. At the same time, the probability of achieving a high hit ratio would have been increased if the cache were used to store a large number of small objects, since most requests are for small objects, as suggested by the study. Arlitt et al. [2] also identify that over 60% of the distinct objects seen in the proxy log were requested only one time—such objects are also called one-timers in the literature [4]. Similar observations have been made by other researchers [5, 16]. Liu and Deters [14] focus on the challenges of enabling PDAs to host Web service consumers and introduce a dual caching approach to overcome problems arising from temporary loss of connectivity and fluctuations in bandwidth. The authors describe an implemented model-driven dual caching system. Two transparent caches, one on the client side and one on the server side, are required to overcome loss of connectivity during sending or receiving of SOAP traffic. The pre-fetching component of the solution is responsible for pre-fetching service response messages based on a pre-fetching algorithm, which takes advantage of idle time and uses background threads to send pre-fetching requests to the service provider. On the client side, the prediction algorithm is based on utilizing BPEL files that describe the user workflows. On the server side, the prediction algorithm is based on the dependency relationships among services.

Terry and Ramasubramanian [20] provide a solution applicable to all Web services by interposing storage and computation transparently in the communication path between the client and the server, without modifications to the Web service implementations on the client or the server. Continued access to Web services from mobile devices during disconnections is provided by a client-side request-response cache that mimics the behavior of Web services to a limited extent. To study the suitability of caching to support disconnected operation on Web services, the authors conducted an experiment in which a caching proxy was placed between Microsoft's .NET My Services and the sample clients that ship with these services. The authors built an HTTP proxy server as defined in the HTTP protocol standard RFC 2616, deployed it on the client device, and added a cache for storing SOAP messages to the proxy server. All cache policies for expiration and replacement were implemented as recommended in the HTTP standard. Compared to our solution, this approach does not allow specifying caching policies for Web services, nor does it recognize queries from different users/clients. Finally, the paper provided only high-level details of the implementation and did not provide any performance metrics, nor suggestions for particular caching schemes for SOAP communication. Andresen et al. [1] present a new approach to implementing the SOAP protocol using caching on the SOAP server. Compared to their previous work, in which they implemented caching in the client-side SOAP engine (using Apache SOAP 1.2 with the Tomcat 4.1 application server) and achieved speedups of over 800% for the client [8], in this paper they optimize the server-side processing of SOAP requests, achieving speedups of 250% for structured data types and at least a small improvement for all transactions. The authors used the Java implementation in Tomcat 5.0.14 and chose the RPC style of SOAP. The main advances in both client-side and server-side caching are achieved by eliminating the need to regenerate the XML from scratch on every call. Thus the main effect of caching is the reduced serialization/deserialization cost of SOAP messages. In this approach a response in XML format is cached using the client request as the key, in conjunction with hash tables as a fast response discovery mechanism. Although hash-based and thus similar to our approach, this solution is a server-side cache and is not suitable as a proxy cache. Takase and Tatsubori [18] describe a transparent response cache mechanism for Web services client middleware. Three optimization methods are introduced to improve the performance of the response cache. The first optimization is caching the post-parsing representation instead of the XML message itself. The second is caching application objects; for this optimization, it is shown that some copying processes depend on the type of the cached objects. The third optimization targets read-only objects. These methods reduce the overhead of XML processing and object copying. Tatemura et al. [19] propose a middleware architecture called WReX that bridges the gap between cached Web service responses and the backend dynamic data source. The authors assume that the back-end source has been described with a general XML logical data model.

The authors show how their solution can be implemented when the XML data source is implemented on top of an RDBMS. This approach, however, requires cache-specific extensions to the services, limiting the applicability of the solution. Elbashir and Deters [9] focus on the use of caching SOAP request-response pairs in order to compensate for fluctuating bandwidth and loss of connectivity. The authors introduce the concept of embedded SOAP caching and implement a novel SOAP cache (CRISP) that can be embedded into the application or used as an independent proxy cache. Their evaluation shows that caching of SOAP traffic is not only an effective means to compensate for loss of connectivity but also enables reducing network loads, which is particularly interesting when dealing with bandwidth-constrained wireless connections. Li et al. [13] propose a consistency-preserving mechanism for Web services response caching, which reduces the volume of data transmitted through the use of hashing to detect similarities with previous results. Experiments with an initial prototype called SigsitAcclerator indicate that the mechanism can lead to significant performance improvements over more straightforward techniques. The solution attempts to cache Web services response messages at both ends (client side and service side). SigsitAcclerator relies, similarly to our approach, on hashing to detect similarities with previous results. The paper mentions SHA-1 as the hash function; each compact result therefore weighs 160 bits. The main difference compared to our approach is that SigsitAcclerator assumes its presence at both the client side and the proxy side to allow delivery of hashed results instead of complete results, making this proxy cache non-transparent to the client. The proxy is responsible for inspecting response results received from the Web services provider, generating hash-based encodings of these results and caching these encodings. If the hash of a result is identical to the hash of a cached result, only the result's hash encoding is sent to the client-side installation, which is expected to retrieve the locally stored response based on the hash value. Podlipnig and Böszörmenyi [15] identify differentiated cache replacement as one of the new areas for future research in Web caching. Differentiated cache replacement is a way to make the replacement process QoS-aware. Traditional caching does not support quality of service (QoS), meaning that caching can be seen as a best-effort service where all objects are handled equally. In SOAP caching, differentiated cache replacement is especially important since different Web services have different characteristics such as QoS, pricing, etc. Bahn et al. [6] evaluate a number of Web cache replacement algorithms and propose the least-unified value algorithm, which performs better than existing algorithms for replacing nonuniform data objects in wide-area distributed environments. The same source provides a quick overview of Web cache replacement algorithms. Finally, Wang [21] summarizes the elements of a Web caching system and its desirable properties. The properties include fast access, robustness (availability from the users' point of view), transparency, scalability, efficiency, adaptivity, stability, load balancing, ability to deal with heterogeneity, and simplicity. He then systematically surveys the state-of-the-art techniques which have been used in Web caching systems in the light of the identified properties.

6 Conclusion

In this paper we described a proxy cache solution for SOAP traffic. The proposed solution provides higher configurability than previous solutions by supporting multiple cache replacement policies as well as service-level and operation-level caching policies. In addition, the proposed solution allows administrators to associate transformations with each operation (or service). These transformations allow one to manipulate client- or request-specific SOAP headers before delivering a message either to a service endpoint or back to a client. We also described a novel caching scheme for Web services which, combined with effective cache replacement policies, provides minimal cache overhead and high hit ratios. Furthermore, we showed that the LRU replacement policy provides better hit ratios – a result that mirrors similar results for traditional Web caching [2]. Our future work includes extending the proposed cache solution to support pre-fetching of Web service requests, which is technically more challenging than pre-fetching of Web objects. Yang et al. [22] have shown that Web object access patterns extracted by means of Web log mining can significantly increase Web caching performance for certain caching and pre-fetching policies. We plan to explore this idea for SOAP caching and pre-fetching. New caching schemes and technological combinations, such as the usage of MemCache for higher performance, semantic-aware caching schemes, and selective hashing of request elements, are also of interest for our future research.

Acknowledgement. We would like to thank Andrei Porvkin for performing the initial experiments with different caching schemes and Enn Petersoo for implementing the caching proxy and making it possible to run the experiments.

References

1. D. Andresen, D. Sexton, K. Devaram, and V. P. Ranganath. LYE: a high-performance caching SOAP implementation. In Proceedings of the 2004 International Conference on Parallel Processing (ICPP 2004), pages 143–150, Aug. 2004.
2. M. Arlitt, R. Friedrich, and T. Jin. Performance evaluation of Web proxy cache replacement policies. Performance Evaluation, 39(1-4):149–164, 2000.
3. M. Arlitt, R. Friedrich, and T. Jin. Workload characterization of a Web proxy in a cable modem environment. SIGMETRICS Performance Evaluation Review, 27(2):25–36, 1999.
4. M. F. Arlitt and C. L. Williamson. Internet Web servers: workload characterization and performance implications. IEEE/ACM Transactions on Networking, 5(5):631–645, 1997.
5. M. Baentsch, L. Baum, G. Molter, S. Rothkugel, and P. Sturm. Enhancing the Web's infrastructure: From caching to replication. IEEE Internet Computing, 1:18–27, 1997.
6. H. Bahn, K. Koh, S. L. Min, and S. H. Noh. Efficient replacement of nonuniform objects in Web caches. Computer, 35:65–73, 2002.
7. B. D. Davison. A survey of proxy cache evaluation techniques. In Proceedings of the Fourth International Web Caching Workshop (WCW99), pages 67–77, 1999.
8. K. Devaram and D. Andresen. SOAP optimization via parameterized client-side caching. In Proceedings of the IASTED International Conference on Parallel and Distributed Computing and Systems (PDCS 2003), pages 785–790, 2003.
9. K. Elbashir and R. Deters. Transparent caching for nomadic WS clients. In ICWS '05: Proceedings of the IEEE International Conference on Web Services, pages 177–184, Washington, DC, USA, 2005. IEEE Computer Society.
10. S. Gadde, J. Chase, and M. Rabinovich. A taste of crispy squid. In Workshop on Internet Server Performance, pages 129–136, 1998.
11. S. M. Kim and M. C. Rosu. A survey of public Web services. In Proceedings of the 5th International Conference on E-Commerce and Web Technologies (EC-Web 2004), Zaragoza, Spain, August 31–September 3, 2004, volume 3182 of Lecture Notes in Computer Science, pages 96–105. Springer-Verlag, 2004.
12. F. Lelli, G. Maron, and S. Orlando. Improving the performance of XML based technologies by caching and reusing information. In ICWS '06: Proceedings of the IEEE International Conference on Web Services, pages 689–700, Washington, DC, USA, 2006. IEEE Computer Society.
13. W. Li, Z. Zhao, K. Qi, J. Fang, and W. Ding. A consistency-preserving mechanism for Web services response caching. In ICWS '08: Proceedings of the 2008 IEEE International Conference on Web Services, pages 683–690, Washington, DC, USA, 2008. IEEE Computer Society.
14. X. Liu and R. Deters. An efficient dual caching strategy for Web service-enabled PDAs. In SAC '07: Proceedings of the 2007 ACM Symposium on Applied Computing, pages 788–794, New York, NY, USA, 2007. ACM.
15. S. Podlipnig and L. Böszörmenyi. A survey of Web cache replacement strategies. ACM Computing Surveys, 35(4):374–398, 2003.
16. L. Rizzo and L. Vicisano. Replacement policies for a proxy cache. IEEE/ACM Transactions on Networking, 8:158–170, 1998.
17. T. Suzumura, T. Takase, and M. Tatsubori. Optimizing Web services performance by differential deserialization. In ICWS '05: Proceedings of the IEEE International Conference on Web Services, pages 185–192, Washington, DC, USA, 2005. IEEE Computer Society.
18. T. Takase and M. Tatsubori. Efficient Web services response caching by selecting optimal data representation. In ICDCS '04: Proceedings of the 24th International Conference on Distributed Computing Systems, pages 188–197, Washington, DC, USA, 2004. IEEE Computer Society.
19. J. Tatemura, O. Po, A. Sawires, D. Agrawal, and K. Selçuk Candan. WReX: a scalable middleware architecture to enable XML caching for Web services. In Middleware '05: Proceedings of the ACM/IFIP/USENIX 2005 International Conference on Middleware, pages 124–143, New York, NY, USA, 2005. Springer-Verlag New York, Inc.
20. D. B. Terry and V. Ramasubramanian. Caching XML Web services for mobility. ACM Queue, 1(3):70–78, 2003.
21. J. Wang. A survey of Web caching schemes for the Internet. SIGCOMM Computer Communication Review, 29(5):36–46, 1999.
22. Q. Yang, H. H. Zhang, and T. Li. Mining Web logs for prediction models in WWW caching and prefetching. In KDD '01: Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 473–478, New York, NY, USA, 2001. ACM.
