Web optimization options

From load balancers to client caching, pick the proper product to fix specific performance problems.

When Web site delivery is no longer fast and furious, but more slow and spurious, visitors might quickly click to the competition. The challenge for Web managers is to provide speedy, robust Web services to a growing audience, often using fewer resources.

We surveyed the range of current Web acceleration approaches from first look-up to last mile, looking for real network speed without the hype. What follows is a guide to some of the products you should consider for solving specific performance problems.

Starting strong

The race is on from the first DNS query. Does your name server resolve the name quickly? Fail here and it's all over, so stop putting DNS on the oldest server in the farm and treat it as the critical service it is. To handle lookup volume, gain geographic diversity and improve overall name-service reliability, consider outsourcing at least your backup DNS to a vendor such as UltraDNS or Nominum. Finally, you can also use DNS for global load balancing by directing initial requests to the closest server or farm.
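
As a rough, back-of-the-envelope check (not any vendor's tool), a few lines of Python will time how long a visitor's first lookup takes; www.example.com is a placeholder for your own host name, and later attempts will usually hit the resolver cache.

    import socket
    import time

    def time_lookup(hostname, attempts=3):
        # Time forward DNS resolution; the first attempt may have to reach a
        # remote name server, later ones typically hit the resolver cache.
        timings = []
        for _ in range(attempts):
            start = time.perf_counter()
            socket.getaddrinfo(hostname, 80, proto=socket.IPPROTO_TCP)
            timings.append(time.perf_counter() - start)
        return timings

    if __name__ == "__main__":
        # www.example.com is a placeholder; substitute your own host name.
        for elapsed in time_lookup("www.example.com"):
            print("%.1f ms" % (elapsed * 1000))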

Stop a slowdown

Once the request is on its way to a server, the real slowdown begins. The causes run the gamut from connections being held open by slow dial-up visitors to sudden flash crowds to excessive page-generation time or underpowered Web farms. These problems fall into two broad areas: page compute-related issues, and network and protocol effects.

On the compute side, page-generation time can be excessive. Whatever Web technology you use, from Active Server Pages to Zope, build a page only when necessary. Executing scripts or running database queries to fetch page components for each visitor makes no sense unless each visitor actually sees something different. Software caches such as SpiderCache can hold pregenerated pages, and hardware caches such as those from CacheFlow can even take requests off the server entirely.
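
SpiderCache's actual interface isn't shown here; the Python sketch below is only the general idea: hold pregenerated pages in memory with a time-to-live so the scripts and database queries run once, not on every hit.

    import time

    class PageCache:
        # Minimal in-memory store for pregenerated pages (illustration only).
        def __init__(self, ttl_seconds=300):
            self.ttl = ttl_seconds
            self.store = {}               # url -> (expires_at, html)

        def get(self, url):
            entry = self.store.get(url)
            if entry and entry[0] > time.time():
                return entry[1]           # fresh copy: no scripts, no queries
            return None

        def put(self, url, html):
            self.store[url] = (time.time() + self.ttl, html)

    def render(url, cache, build_page):
        # Serve from cache when possible; otherwise build once and remember it.
        html = cache.get(url)
        if html is None:
            html = build_page(url)        # the expensive template/database work
            cache.put(url, html)
        return html

    cache = PageCache(ttl_seconds=60)
    print(render("/products", cache, lambda url: "<html>built at %s</html>" % time.ctime()))
    print(render("/products", cache, lambda url: "this will not be rebuilt"))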

Yet neither acceleration approach works well for truly dynamic content. So-called dynamic caches such as Chutney Technologies' PreLoader can be used to hold portions of pages. But performance gains for dynamic sites are often hard won, so you're better served trying other Web site acceleration techniques first.
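
Again, this is not PreLoader's API, only a Python illustration of fragment caching: cache the stable portions of a page and regenerate only the pieces that genuinely vary per visitor.

    import time

    fragments = {}                         # fragment key -> (expires_at, html)

    def cached_fragment(key, ttl, build):
        # Reuse a stable page fragment until its time-to-live lapses.
        entry = fragments.get(key)
        if entry and entry[0] > time.time():
            return entry[1]
        html = build()
        fragments[key] = (time.time() + ttl, html)
        return html

    def build_product_page(user_name):
        # Navigation and the catalog listing change rarely: cache them.
        nav = cached_fragment("nav", 3600, lambda: "<nav>...</nav>")
        catalog = cached_fragment("catalog", 300, lambda: "<table>...</table>")
        # The greeting really is per-visitor, so it is rebuilt on every request.
        return nav + "<p>Welcome back, %s</p>" % user_name + catalog

    print(build_product_page("Pat"))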

When you get to the network, speed problems run deep. It's well known that HTTP and TCP congestion-avoidance algorithms don't work well together. Moreover, Web servers are busy serving numerous small requests and often become network I/O bound, holding connections open for slow downloads.

A simple remedy is to avoid wasteful connections to a server. Don't force your server to hand out needless "304 Not Modified" responses for items that haven't changed. Make your site cache-friendly and set expiration dates far in the future for elements that rarely change, such as navigation graphics.
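
The sketch below shows one hypothetical way to do this with Python's standard http.server module: stamp far-future Cache-Control and Expires headers on static elements so browsers stop asking about them.

    import http.server
    import time
    from email.utils import formatdate

    ONE_YEAR = 365 * 24 * 60 * 60

    class CacheFriendlyHandler(http.server.SimpleHTTPRequestHandler):
        # Serve files with far-future expiry for elements that rarely change.
        def end_headers(self):
            if self.path.endswith((".gif", ".jpg", ".png", ".css", ".js")):
                # Browsers and proxies may keep these for a year, so they stop
                # asking for them (no more needless revalidation round trips).
                self.send_header("Cache-Control", "public, max-age=%d" % ONE_YEAR)
                self.send_header("Expires",
                                 formatdate(time.time() + ONE_YEAR, usegmt=True))
            super().end_headers()

    if __name__ == "__main__":
        http.server.HTTPServer(("", 8000), CacheFriendlyHandler).serve_forever()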

Another possibility is to off-load your server by having another device -- often dubbed an accelerator -- receive connections. Effectively connection multiplexers, these devices manage and maintain all the TCP connections with clients, minimizing the number of connections the server itself must hold open. Several vendors employ such connection-pooling ideas in their accelerators, the most impressive being Redline Networks.
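
A production accelerator does far more -- it reconnects when the origin drops a connection, balances across servers and juggles thousands of clients at once -- but the core idea of reusing a few persistent origin connections can be sketched in Python like this; www.example.com again stands in for your origin server.

    import http.client

    class PooledUpstream:
        # Keep a handful of persistent connections to the origin server and
        # round-robin requests over them, so the origin sees a few long-lived
        # connections instead of one per (possibly slow) client.
        def __init__(self, host, size=2):
            self.pool = [http.client.HTTPConnection(host) for _ in range(size)]
            self.turn = 0

        def fetch(self, path):
            conn = self.pool[self.turn % len(self.pool)]
            self.turn += 1
            conn.request("GET", path)
            response = conn.getresponse()
            body = response.read()        # drain before reusing the socket
            return response.status, body

    if __name__ == "__main__":
        # Placeholder origin; a real accelerator also reconnects when the
        # origin closes a connection, which this sketch does not.
        upstream = PooledUpstream("www.example.com")
        for path in ("/", "/"):
            status, body = upstream.fetch(path)
            print(status, len(body), "bytes")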

Lighten the load

The amount of content being delivered can also reduce speed. Deliver a smaller payload and you not only save bandwidth, you also free up your servers to accommodate more users. Binary data such as images and Flash content can hog a lot of space, so question the value of every image, rollover effect or multimedia element before including it. You'll find many popular sites adopting extremely lightweight designs.

With the site pared down to the essentials, the administrator is then faced with how to deliver what's left. The first task is to consider compression. Web image-compression techniques such as color reduction, often dismissed as out of vogue now that few users run at low resolution, still make sense. Further reduction comes from stripping white space out of HTML, JavaScript and CSS files.
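
A real minifier is careful about pre-formatted blocks, inline scripts and string literals; the Python sketch below only collapses the whitespace between tags, which is enough to show the idea.

    import re

    def strip_whitespace(html):
        # Crude reduction: collapse whitespace between tags and runs of spaces.
        # Real minifiers are smarter about <pre>, inline scripts and strings.
        html = re.sub(r">\s+<", "><", html)
        html = re.sub(r"[ \t]{2,}", " ", html)
        return html.strip()

    page = """
    <html>
        <body>
            <p>Spacious    markup    costs bandwidth</p>
        </body>
    </html>
    """
    print(strip_whitespace(page))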

Many servers such as Internet Information Server support HTTP compression natively, although server add-ons such as BPVN Technologies' PipeBoost improve on basic compression offerings. High-traffic sites such as Google selectively compress HTML content, depending on the type of browser the user has, for huge aggregate savings in bandwidth. But be careful before rushing to simple HTTP compression, as bugs exist even in popular browsers such as Internet Explorer.
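
Here is a minimal sketch, in Python rather than any particular server add-on, of compressing a response only when the client advertises gzip support; screening out known-buggy browsers is left as a comment.

    import gzip

    def compress_if_supported(body, request_headers):
        # Gzip an HTML response only for clients that advertise support.
        # A careful deployment also skips browsers with known compression
        # bugs (the Internet Explorer caution above).
        accepts = request_headers.get("Accept-Encoding", "")
        if "gzip" not in accepts.lower():
            return body, {}
        compressed = gzip.compress(body)
        if len(compressed) >= len(body):      # tiny responses may not shrink
            return body, {}
        return compressed, {"Content-Encoding": "gzip"}

    html = b"<html><body>" + b"<tr><td>repetitive markup</td></tr>" * 200 + b"</body></html>"
    out, extra_headers = compress_if_supported(html, {"Accept-Encoding": "gzip, deflate"})
    print(len(html), "->", len(out), "bytes", extra_headers)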

Diminish distance

The issue of latency arises when content is sent to end users. Serving content as close to the edge of the Internet as possible using a content delivery network (CDN) such as Akamai makes sense. But CDNs tend to be expensive and suffer from the same problems as caches when it comes to dynamic content. The Edge Side Includes markup language aims to make it easier to develop Web content that can be dynamically assembled at the edge of the network.
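
To make the ESI idea concrete, here is a toy Python assembler that replaces esi:include tags with cached fragments; a real edge cache would fetch the fragments from your origin and apply its own expiration rules.

    import re

    ESI_INCLUDE = re.compile(r'<esi:include\s+src="([^"]+)"\s*/>')

    def assemble(template, fetch_fragment):
        # Replace each <esi:include src="..."/> tag with fragment content,
        # the way an ESI-capable edge node assembles a page near the user.
        return ESI_INCLUDE.sub(lambda m: fetch_fragment(m.group(1)), template)

    # Stand-ins for fragments the edge node has cached (or fetched from origin).
    fragments = {
        "/fragments/header": "<div>Site header -- cacheable for hours</div>",
        "/fragments/quotes": "<div>Stock quotes -- refreshed every minute</div>",
    }

    page = """<html><body>
    <esi:include src="/fragments/header"/>
    <esi:include src="/fragments/quotes"/>
    </body></html>"""

    print(assemble(page, fragments.__getitem__))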

For truly exotic acceleration, look to techniques that work not only on the server or network, but also on the client side. For example, FineGround's Condenser takes advantage of the fact that many Web pages are very similar. Condenser tries to deliver only the "deltas," or differences between pages, and reassemble them client-side using JavaScript. It may sound unusual, but it actually works pretty well. And FireClick's NetFlame2 takes a unique approach, combining JavaScript object prefetching with site-usage analytics to predictively precache the content a user is most likely to request next while the current page is still being read.
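
FineGround's wire format isn't public here, but the general delta idea is easy to illustrate with Python's standard difflib: encode the new page as spans of the page the client already has, plus only the text that actually changed.

    import difflib

    base = "<html><body><h1>Catalog</h1><p>Item A</p><p>Item B</p></body></html>"
    new  = "<html><body><h1>Catalog</h1><p>Item A</p><p>Item C</p></body></html>"

    def make_delta(old, new_text):
        # Server side: describe the new page as "keep this span of the old
        # page" plus the handful of spans that actually changed.
        ops = difflib.SequenceMatcher(None, old, new_text).get_opcodes()
        return [("keep", i1, i2) if op == "equal" else ("put", new_text[j1:j2])
                for op, i1, i2, j1, j2 in ops]

    def apply_delta(old, delta):
        # Client side (JavaScript in the real product): rebuild the new page.
        out = []
        for entry in delta:
            out.append(old[entry[1]:entry[2]] if entry[0] == "keep" else entry[1])
        return "".join(out)

    delta = make_delta(base, new)
    assert apply_delta(base, delta) == new
    changed = sum(len(e[1]) for e in delta if e[0] == "put")
    print("full page: %d bytes, changed text in delta: %d bytes" % (len(new), changed))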

Many products attempt to combine Web acceleration approaches, addressing caching, compression and connection multiplexing with Secure Sockets Layer acceleration or CDN integration thrown in. Vendors such as NetScaler, Packeteer and Redline offer popular acceleration gear that pays for itself through simple bandwidth reduction and hardware savings.

Yet before speeding up your site, make sure it's built properly and that it strikes a balance between delivery demands and customer experience. Otherwise, you'll go nowhere very fast.

  1. As site traffic increases and your server becomes overwhelmed, add a load balancer and build out a server farm.
  2. To keep servers from working too hard, try one or all of these methods:
    1. Pregenerate your frequently accessed pages if they are built from a database.
    2. Add a reverse proxy cache or cache appliance to serve content.
    3. Add hardware accelerators to free up servers from handling inbound requests.
  3. When addressing a large load across a widespread geographical area:
    1. Build a few server farms and globally load balance between them.
    2. Use a content-delivery network.
  4. To reduce bandwidth requirements:
    1. Compress your responses.
    2. Author Web pages to be cache-aware and take particular advantage of local browser caches.
  5. When dealing with delivery-sensitive data such as video, move close to the user and deliver from the edge of the network.
Originally published in Network World, September 30, 2002.

 
