-3-

Client/Server Computing

In its broadest sense, client/server computing can be described as the decentralization and redistribution of information technologies. The development of the client/server architecture was fueled by the introduction of local area network (LAN) technologies to the workplace. Research in the area of new LAN technologies originally was driven by the need to share resources, such as printers and magnetic storage, to reduce the relatively high cost of computer peripherals. As LAN technologies became more advanced, communications applications, such as electronic mail, extended the benefits of resource sharing on LANs to distributed, shared business applications and information.

Today, the client/server model of computing has given rise to business, scientific, and a variety of other applications that rely heavily on server-based processing. Individual users no longer require the massive computer technologies and processing power once needed to access huge data warehouses. The success and acceptance of the Web, intranets, and the Internet, for example, can be directly attributed to client/server technologies that enable both business and home users with desktop computers to access vast amounts of information.

Through the use of client/server technologies, corporate resources now can merge online transaction processing (OLTP), decision-support systems, intranet applications, and office automation tasks into a single environment. This mix of transactions results in a more complex information technology environment.

A Client/Server Overview

Client/server computing has been described as a logical extension of modular programming. The fundamental assumption of modular programming is that a large, complex piece of software can be separated into a set of constituent modules, each of which is designed to perform a limited set of functions very well. Each module then is invoked as part of a main program. Modular programming results in an environment where software development and maintainability are vastly improved. In a client/server computing model, these modules need not all be executed by the same program, and each part of the application need not execute on the same machine. Instead, an application can request that some processing be performed by another process or program. In this kind of client/server computing environment, the process or program that requests a service is considered the client, whereas the process or program that provides a service is considered the server.

Taking this approach one step further, the client and server processes can be distributed. In other words, the processes can run on separate and dissimilar machines, each of which may run different software and operating systems. Many commercial and governmental organizations use client/server technologies to provide both intranet and Internet access to their vast amounts of information. A relational database management system (RDBMS) could be running on a UNIX server in California, while a Windows-based application that queries the database could be running on a user's desktop computer in Tokyo. The details of accessing a database system across the Pacific are hidden from the user in Japan. The user might know that the database does not reside locally, but he doesn't need to worry about it; the client application takes care of the server communication details and obtains the requested data. This is part of the power of client/server computing. The method enables users to access computing and informational resources in a way not possible with conventional, non-distributed systems. An idea that began as a useful tool for local networks now works with global networks because of client/server computing.

The Client/Server Model

The client/server model is based on the concept that each application consists of two functional parts: one that initiates communication and another that responds to it. Generally, the process that initiates communication is considered the client, whereas the process that responds to the initial request is considered the server. The server waits for incoming communications requests from a client, performs the requested actions, and returns the results to the client for further processing.

Many conventional client/server application programs--such as FTP, Telnet, UseNet, Internet Relay Chat (IRC), and Mail--helped popularize the Internet and the Web. To perform file transfers, an FTP client process uses the File Transfer Protocol to communicate with an FTP server. To establish an interactive logon, a Telnet client uses the Telnet protocol to communicate with a Telnet server. News-reader programs use the Network News Transfer Protocol (NNTP) to communicate with UseNet news servers. IRC clients use the IRC protocol to send and receive messages from an IRC server. And, finally, Internet Mail client applications use the Simple Mail Transfer Protocol (SMTP) to send messages and the Post Office Protocol (POP) to retrieve them from a mail server. In each of these applications, the client initiates a connection with a server using the specified protocol, sends a request, and waits for a response. After receipt of the response, the client processes the information and continues its normal processing. Note that, in most cases, the client and server processes exist on separate computing platforms, but this does not always need to be the case. It certainly is possible to Telnet to the machine you currently are logged on to, for example. Although this capability might not be very practical, it does enable you to change the active user account without logging off the machine.
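The connect/request/respond cycle common to all these applications can be sketched in a few lines. The following is an illustrative Python example (Python is not used in the text itself); the "HELLO" protocol is invented, standing in for FTP, Telnet, or SMTP, and both processes run on one machine purely for demonstration.

```python
# A minimal request/response exchange over TCP using Python's standard
# socket module. The server stands in for any daemon (FTP, Telnet, mail):
# it waits for a request, performs an action, and returns a result.
import socket
import threading

def run_server(listener):
    conn, _ = listener.accept()            # wait for an incoming client
    request = conn.recv(1024)              # read the client's request
    conn.sendall(b"HELLO " + request)      # perform the action, reply
    conn.close()

# Create a listening socket on an ephemeral local port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
threading.Thread(target=run_server, args=(server,), daemon=True).start()

# The client initiates the connection, sends a request, and waits.
client = socket.create_connection(server.getsockname())
client.sendall(b"client")
reply = client.recv(1024)
client.close()
```

Here the two processes happen to share one machine, but nothing in the client code would change if the server were across the Pacific; only the address passed to `create_connection` would differ.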

Web servers and browsers use the client/server architecture with the Hypertext Transfer Protocol (HTTP). The client (browser) process requests a document from the server with a simple mouse click or keystroke, and the server returns the document for display by the browser. Behind the scenes, a client/server application handles all the transactions to request, receive, and process the document. By using event-driven architectures, preprocessing and post-processing can accomplish such tasks as building selection lists based on database queries, user input validation, and dynamic results presentation based on the returned content. The end-user is not required to know what is happening during the processing and most likely cares only that he gets the information requested. This ease of use has resulted in the phenomenal popularity of the WWW.
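What happens behind that single mouse click can be made concrete with a short sketch, again in Python (an illustration, not part of the text): the standard library's `http.server` plays the Web server, and `urllib.request` plays the browser, exchanging a document over HTTP on the local machine.

```python
# A sketch of the Web's request/response cycle: http.server plays the
# Web server, urllib.request plays the browser.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class DocHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>Hello</body></html>"
        self.send_response(200)                     # HTTP status line
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)                      # return the document
    def log_message(self, *args):                   # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), DocHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "browser" requests a document and receives it for display.
url = "http://127.0.0.1:%d/index.html" % server.server_port
with urllib.request.urlopen(url) as resp:
    document = resp.read()
server.shutdown()
```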

The Underlying Network

Client/server applications assume that an underlying network structure exists. The existence of this network structure is based on the need of multiple hosts to effectively communicate with each other. A user might access hundreds of Web servers during a single client session, for example. Consider what it would be like if communications on the Internet were not standardized or even reliable. With the number of machines now connected to the Web exceeding 30 million, it's extremely important to have an effective communications network.

Protocols enable client/server processes to communicate and exchange packets of data. Most protocols are based on some form of send-and-receive negotiation. A client might send a request to a server and expect the resulting information or an error message, for example. More advanced protocols include features such as packet identification and error-correction handlers.

The standard protocol used to guarantee reliable communications on the Web is TCP/IP. Before the Web became such a popular medium, TCP/IP was used primarily in UNIX-based networks. Almost all operating systems today support TCP/IP, however. All the latest versions of the Microsoft Windows 95 and NT operating systems include TCP/IP as part of the bundled software, for example. By using the Serial Line Internet Protocol (SLIP) and Point-to-Point Protocol (PPP), remote users also can use TCP/IP over dial-up phone lines. Most home-based Web users use dial-up communications methods such as SLIP and PPP to access the Web through local Internet service providers.

The TCP/IP protocol is divided into five conceptual layers, as Figure 3.1 shows.

FIGURE 3.1. Conceptual TCP/IP protocol layers.

The Hardware Layer


The hardware (physical) layer is the lowest layer in the protocol stack. This layer is dependent on the actual physical implementation of the network. The most common hardware implementations are Ethernet variants (Thin Wire Ethernet, Thick Wire Ethernet, 10BASE-T, and others), Token Ring, or FDDI. For Ethernet implementations, the hardware layer uses an address (known as an Ethernet address) to uniquely identify each specific hardware component connected on the network. This layer is ultimately responsible for the delivery of network information packets.

The Network Interface Layer

The network interface layer is responsible for controlling the network hardware; it maps IP addresses to hardware addresses. Additionally, the network interface layer encapsulates outgoing data into standardized packets and transmits them, and it accepts and de-multiplexes incoming data packets.

The Internet Protocol Layer


The Internet Protocol (IP) layer sits above the physical layers (hardware and network interface) in the TCP/IP protocol stack. Internet datagrams are universal packets that resemble the frames of physical networks, and they enable hosts that use different technologies and different frame formats to communicate. The IP layer accepts incoming datagrams and determines which module will process each one. If a datagram contains a Transmission Control Protocol (TCP) segment, for example, it must go to a TCP module for processing; if it contains a User Datagram Protocol (UDP) segment, it must go to a UDP module for processing. (UDP provides a connectionless, datagram-oriented service alongside TCP's reliable, connection-oriented one.)
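This demultiplexing step can be sketched as a simple dispatch table. The Python below is illustrative only, but the protocol numbers are the real IANA assignments: 6 for TCP and 17 for UDP, carried in every IP header.

```python
# The IP layer's demultiplexing step, sketched in Python: the protocol
# field of each datagram decides which transport module receives it.
# Protocol numbers 6 (TCP) and 17 (UDP) are the actual IANA values.

def tcp_module(payload):
    return "TCP segment: %r" % payload

def udp_module(payload):
    return "UDP datagram: %r" % payload

PROTOCOL_MODULES = {6: tcp_module, 17: udp_module}

def deliver(datagram):
    """Route a datagram to the transport module named in its header."""
    handler = PROTOCOL_MODULES[datagram["protocol"]]
    return handler(datagram["payload"])

result = deliver({"protocol": 6, "payload": b"GET / HTTP/1.0"})
```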


NOTE: It's important to note that the IP layer is independent of the network hardware. This independence enables the IP layer to encompass different types of networks regardless of how they are implemented physically. The IP layer uses an address (an IP address) to identify host computers and network-enabled peripherals connected to the network. Because the IP layer is not dependent on the underlying hardware layer, it can identify hosts on different physical networks.

The Transport Layer


The transport layer, where TCP operates, is independent of both the physical network and IP layers. After TCP receives a segment, it uses the TCP protocol port numbers to find the connection to which the segment belongs. If the segment contains data, TCP adds the data to a buffer associated with the connection and returns an acknowledgment to the sender. TCP uses this acknowledgment to guarantee reliable delivery.

The Application Layer

At the top of the TCP/IP stack is the application layer, where an end user typically interacts. The protocols underlying the application layer are virtually transparent to the user. As a packet of information leaves the application layer, it enters the TCP layer, where a TCP header containing the destination and source port numbers is added. The packet then enters the IP layer, where an IP header containing destination and source IP addresses is added. Next, the network frames are built, and the packet is passed to the hardware layer. Finally, the hardware layer adds its header and the appropriate address information and then sends the packet to its destination. When the packet reaches its destination, each layer removes and decodes its header information and then passes the remainder of the packet on until it reaches the destination application.
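The trip down and back up the stack can be pictured as nested envelopes. The Python sketch below is purely illustrative; the field names are invented stand-ins for real header fields, not actual TCP, IP, or Ethernet formats.

```python
# Encapsulation as nested envelopes: each layer wraps the packet in its
# own header on the way down, and strips it on the way up at the
# destination. Field names here are illustrative, not real header layouts.

def send(data, dest_port, dest_ip, dest_mac):
    packet = data
    packet = {"tcp": {"dst_port": dest_port}, "payload": packet}   # transport
    packet = {"ip": {"dst_addr": dest_ip}, "payload": packet}      # internet
    packet = {"eth": {"dst_mac": dest_mac}, "payload": packet}     # hardware
    return packet

def receive(frame):
    packet = frame["payload"]     # hardware layer strips its header
    segment = packet["payload"]   # IP layer strips its header
    data = segment["payload"]     # TCP layer strips its header
    return data

frame = send(b"hello", 80, "10.0.0.2", "aa:bb:cc:dd:ee:ff")
assert receive(frame) == b"hello"
```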

Architecture Diagrams

This section presents some architectural diagrams depicting sample intranet and WWW client/server applications.

Figure 3.2 shows an example of a typical intranet application. In this case, the departmental server may be a Web server, database server, or both. The local personal computer terminals in this figure may represent Telnet sessions, Structured Query Language (SQL) sessions, or even Web client sessions. Once connected to a server, users can execute protocol transactions supported by the specific server. A client database application can execute data queries, generate reports, and insert or update data in the database, for example.

FIGURE 3.2. A simple intranet application.

Figure 3.3 takes the example in Figure 3.2 one step further by segmenting the LAN into three smaller LANs: the Administrative LAN, the Sales LAN, and the Technical Support LAN. A network bridge can be used to segment network traffic between the LANs. This certainly would be the case in organizations that want to segment their intranet from the Internet. Many organizations today use hardware components that not only include the features available in bridges, but also can handle the routing of network traffic. These components sometimes are nicknamed brouters because of their capability to perform both the bridging and routing functions.

In this example, the Human Resources group, which probably resides on the Administrative LAN, can post internal job opportunities, healthcare information, and stock dividend data onto the Web Server for corporate employees to access.

FIGURE 3.3. A corporate intranet application.

The term intranet does not mean that the network is physically internal to a location; it means that the network is internal to an organization and can be geographically distributed, as shown in Figure 3.4.

FIGURE 3.4. A corporate WAN intranet application.

This figure depicts the same scenario as Figure 3.3, except that some branch offices are dispersed across the country and probably are connected with a high-speed connection line. In this example, a sales manager in Gatlinburg still can review corporate postings on the internal Web server just as simply as he can send e-mail to another sales manager in Boston.

Figure 3.5 incorporates Internet connectivity into an intranet LAN. This increases complexity by giving outside users access to the internal network. The figure illustrates the insertion of a security firewall when allowing Internet access. Firewalls manage and restrict access based on specified configuration parameters. A firewall can be configured to permit only specified protocol traffic access beyond the firewall point, for example. Firewalls generally are used to restrict the access of external users and to help safeguard the organizational data, resources, and information made available. This information can come in the form of simple marketing material, access to online technical support material, contact phone numbers, e-mail access to company personnel, or, in some cases, Telnet access to organizational computers.

Although enabling Internet access to your site introduces some complexities, it also provides benefits. Company employees who have an ISP can work from home, for example. More and more companies today are acting as internal ISPs for corporate employees by providing dial-up network access. Also, by providing access to online technical-support materials, a company requires fewer technical specialists to work support lines.

FIGURE 3.5. A corporate WWW application.

Figure 3.6 illustrates a case where access to the corporate database is provided. Note that no two functional components need to be on the same machine. The Web server can reside in Dallas, the product database in Washington, and the sales group in Tokyo; the Internet users can be anywhere in the world. Several client/server applications actually are being used in this environment. The interaction between the Web server and the browser clients, as well as the communications between the Web server and the database server, are examples of client/server applications.

Security Issues

For many organizations, security might best be defined as the safeguarding of corporate resources. These resources can include such things as corporate secrets, computer data, or employee information. Many would argue that knowledge is absolute power. If a competitor knows your organization's weaknesses, it can use that knowledge to your disadvantage. When an organization provides Web access to its network resources, it also increases the potential for misguided intentions. This is why so much emphasis is placed on Web security.


RESOURCE: You can find the World Wide Web Security Frequently Asked Questions (FAQs) at
http://www-genome.wi.mit.edu/WWW/faqs/www-security-faq.html. 



FIGURE 3.6. A corporate WWW database application.

Many levels of security access controls are involved in the operation of a WWW database site. Each level has a specific relationship to the client/server applications that depend on its existence. This section discusses each type of security access control to provide a global view of the security measures that make up the WWW database site. This section is not the only source for security matters related to WWW database development. Hundreds of online resources provide details about security-related matters.

Web Server Security

This section is not intended to discuss the myriad Web server security nuances; entire books sometimes are devoted to the subject. Instead, this section informs you about the types of available security. For a more in-depth look at specific Web server security, check out the numerous online resources available for each of the major Web servers.

In basic HTTP authentication, passwords are passed over the network as Base64-encoded text. Although this is not human-readable text, anyone with a knowledge of network packets easily can capture and decipher a password. If you've ever attempted to download an HTML document and instead were prompted with a user ID/password dialog box, you've experienced basic HTTP authentication. This topic is covered more fully later in this chapter.
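Just how little protection this encoding provides can be shown in three lines of illustrative Python. The user ID/password pair below is the well-known example from the HTTP specification, not a real credential.

```python
# Why basic HTTP authentication is easy to decipher: the credentials are
# only Base64-encoded, not encrypted, so anyone who captures the packet
# can reverse the encoding with a single library call.
import base64

header_value = "Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ=="    # as seen on the wire
encoded = header_value.split(" ", 1)[1]
credentials = base64.b64decode(encoded).decode()        # user:password
username, password = credentials.split(":", 1)
```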

Web browsers, such as Netscape Navigator and Microsoft Internet Explorer, have options for notifying the user when the browser is about to transmit data that is not secure. By default, the dialog box appears every time your browser is about to transmit non-secure data. You can disable this warning by enabling the appropriate checkbox in the dialog box or by using an Options page entry. Figure 3.7 shows the Netscape Navigator Preferences dialog box used to set security options.

FIGURE 3.7. Netscape Navigator security preferences.

You can disable the use of the Java programming language in your Web browser by enabling the appropriate checkbox.


NOTE: Several security issues related to the use of Java currently are being addressed by Sun Microsystems.

Additional security notifications, such as entering a secure document space, leaving a secure document space, and viewing a document with a secure/nonsecure mix, can be configured in the Security Preferences dialog box.

Figure 3.8 shows the Security Information alert box that warns the user about non-secure data transmission.

Many additional security methods for controlling access to documents on your Web server exist. This section uses the National Center for Supercomputing Applications (NCSA) HTTP Web server as the basis for a brief discussion of security features available at the Web server level. The intent of this section is not to give exact details for configuration but to let you know that such security measures exist.

FIGURE 3.8. The Netscape Navigator Security Information alert box.

Global Access

Hypertext Transfer Protocol daemon (httpd) global access security is maintained based on the directives contained in a global configuration file.


NOTE: Unless specified otherwise in the global configuration file, the default global access configuration file is conf/srm.conf, relative to the Web server root directory.

These directives determine host-level and user-level access to directories in the Web server directory tree. The global access configuration file is read only once, when the Web server is started, so the Web server must be restarted after a modification is made to the configuration file.

Per-Directory Access

Per-directory access configuration enables users with write access permission to set up and configure access to documents they own. Users simply create a document in the directory tree to which they want to control access. This document maintains all the configuration information needed to authenticate hosts, users, and user passwords. This type of configuration does not require root access privileges on the system or write access to the server's primary configuration files. Unlike global access security, per-directory configuration files are read and parsed for each access. Any modifications made to configuration files take effect immediately and are applied during the next file access.


CAUTION: Because per-directory configuration files are read and parsed by the server on each access, a degradation in file-access speeds can occur.

Host Filtering

Host filtering enables specified hosts to access a specified document tree. You can specify host filtering on a global or per-directory basis. Hosts with access are identified in the configuration file by a domain name or IP address.
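The check a server performs on each request can be sketched as follows. This is an illustrative Python sketch, not actual server code, and the domain and address values are hypothetical.

```python
# A sketch of host filtering: the server consults an allow list of
# domain suffixes and IP prefixes before serving a request.
# The domains and addresses below are hypothetical examples.

ALLOWED_DOMAINS = (".corp.example.com",)
ALLOWED_IP_PREFIXES = ("192.168.",)

def host_allowed(hostname, ip_address):
    """Return True if the requesting host may access the document tree."""
    if hostname.endswith(ALLOWED_DOMAINS):
        return True
    return ip_address.startswith(ALLOWED_IP_PREFIXES)

assert host_allowed("dev1.corp.example.com", "192.168.1.5")
assert not host_allowed("evil.example.org", "10.9.8.7")
```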

Suppose that an organization has two separate and distinct environments. The first, a production environment, supports software that is currently in use. The second, a development environment, is used for the development of new software technologies. To control access to the development environment, the Web administrator can use host filtering so that only corporate hosts can access new developments. This configuration still permits corporate access to the secured environment, regardless of the geographic location of the hosts.

User Authentication

User authentication enables you to authenticate usernames and passwords before a user retrieves documents from the Web server. As Figure 3.9 shows, the Username and Password Required dialog box appears in Netscape Navigator when user authentication is specified for a directory structure on the Web server.

FIGURE 3.9. The Netscape Navigator Username and Password Required dialog box.

Securing Your Server

Previous sections focused on user security and how to control access to your documents. This section looks at Web server security from a different perspective--that of securing the Web server itself. Although the following examples are not the only possible security holes in your Web server configuration, they are some of the most common.

Control Your Document Domain

One of the nicest features of most Web servers is the capability to search directories. If your server is configured for automatic directory listings, any user has free access to view all documents in your Web document domain. Of course, this assumes that you aren't managing document access on a directory or user level. You might think that is why you have documents in your Web domain--so that users can access them. But what about documents that internal users place in the document root? Temporary files you forget to remove from your document domain are another potential risk, because they might contain information about your domain.

Web administrators should closely monitor and manage both domain access and document placement. One method of controlling the document domain is to enforce strict document security so that only the Web administrator can store and modify documents in the Web domain. This might sound a bit harsh, but, for some environments, it might be the only means of ensuring that your Web site is safeguarded. Some sites maintain standard operating procedures for the placement of documents on the Web server for public consumption. In many government agencies, this is standard practice to ensure that classified material is not accidentally published. Every piece of information you leave available to hackers is just another piece of the puzzle to them.

Server-Side Includes

Server-Side Includes (SSIs) are specially formatted HTML commands that enable Web authors to accomplish tasks such as including standard document text in all their Web documents, executing CGI scripts (operating system programs that are executed and can return HTML-formatted data), executing operating system commands, and so on.

You can use SSIs to provide a means for reusing code, automatically generating document updates, and time-stamping your documents. Unfortunately, SSIs also have downsides. One is the use of the exec feature, which permits the execution of a program by the Web server.

Here's a sample exec SSI entry:

Visitor odometer: <em><!--#exec cgi="/cgi-bin/counter"--></em>

In this code, the CGI script /cgi-bin/counter is executed by the Web server, and any resulting text is displayed in place within the document. If /cgi-bin/counter accidentally is configured with world-write permissions (that is, anybody can edit and save the document), however, the file can be changed to execute a remove command that deletes the contents of any file or file system.
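The text does not show the counter script itself, so here is a hedged sketch of what a /cgi-bin/counter program might do, written in Python for illustration (the file path and implementation are hypothetical): it keeps the count in a small file and returns the new value for the server to splice into the document.

```python
# A sketch of a hit-counter CGI script: read the stored count, increment
# it, write it back, and return the new value. The counter file path is
# a hypothetical example.
import os

COUNTER_FILE = "counter.txt"

def bump_counter(path=COUNTER_FILE):
    count = 0
    if os.path.exists(path):
        with open(path) as f:
            count = int(f.read().strip() or 0)
    count += 1
    with open(path, "w") as f:
        f.write(str(count))
    return count
```

Notice that the script's only writable resource is its own counter file; the danger described above arises when the script file itself is world-writable and can be replaced with something destructive.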

Good hackers (if there are such things) can find ways to penetrate your system. One hacker managed to modify a CGI script and have it mail him the password file for the system. Although this procedure is a cut above most hackers, the point is that the moment you connect to the outside world, your system is vulnerable to attack. Turning off the exec form of SSIs makes good sense.

Child Process Execution Privileges

If your Web server is configured to start child processes as the user root, you're more vulnerable than you realize. On such a system, any CGI script executed with root privileges is a major hole in your security, because any user effectively can execute the program with root privileges. Child processes, launched after the server accepts an incoming connection, can be configured to set the effective user ID (setuid) to another user. By default, most well-administered Web servers are configured to start child processes as user nobody or some other non-root user. Check your server configuration file to ensure that you're not executing child processes as root.
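A script can also defend itself with a runtime check. The Python sketch below is an illustration (assuming a UNIX system, where root is user ID 0): a CGI program can detect that it has been launched with root's effective user ID and warn or refuse to continue.

```python
# A sketch of a defensive check for the misconfiguration described
# above: detect whether this process is running with root's effective
# user ID (0 on UNIX systems).
import os

def running_as_root():
    """Report whether this process has root's effective user ID."""
    return os.geteuid() == 0

if running_as_root():
    print("warning: this child process is running as root")
```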

Secure Sockets

The Secure Sockets Layer (SSL) protocol is a security protocol that gives client/server applications the capability to communicate over a secure session on the Internet. As illustrated in Figure 3.10, SSL sits between the transport and application protocol layers and prevents unauthorized eavesdropping, tampering, and message forgery.

FIGURE 3.10. A Secure Sockets Layer diagram.

The SSL protocol is used to handle the transmission of secure information between the Web server and the Web client. This security can be in the form of server authentication, data encryption, and message integrity.
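The layering in Figure 3.10 is visible in modern code: a program opens an ordinary transport socket and then wraps it in an SSL session before the application protocol speaks. The Python sketch below uses the standard ssl module for illustration (this module postdates the text); the default context provides exactly the protections described--server authentication via certificates, encryption, and integrity.

```python
# SSL sits between the transport and application layers: first a TCP
# socket, then an encrypted session wrapped around it.
import socket
import ssl

context = ssl.create_default_context()    # verifies server certificates

def open_secure_channel(host, port=443):
    """Connect a TCP socket, then wrap it in an SSL session."""
    raw = socket.create_connection((host, port))      # transport layer
    return context.wrap_socket(raw, server_hostname=host)  # SSL layer

# The default context authenticates the server and checks hostnames.
assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname is True
```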


TIP: Netscape Navigator users can verify whether they are transmitting data using SSL by looking at the Key icon in the lower-left corner of the browser. If the key is not broken, SSL is being used.

CGI Execution Privileges

Similar to the exec feature of a server-side include, CGI scripts are executed by the server process that services the client's request. This doesn't mean that CGI scripts are inherently non-secure, but it does mean they are a potential vulnerability that must be monitored closely.

A good practice for Web administrators is to require that every CGI script written by an internal user or developer be reviewed and approved before it is implemented. Additionally, only the Web administrator should have write permission to any CGI-executable directory. By following this type of strict CGI execution privilege management, you can more readily circumvent potential problems before they occur.
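Part of that management can be automated. The following is an illustrative Python sketch (assuming a UNIX permission model; the directory path is supplied by the caller) that audits a CGI directory for the most dangerous misconfiguration: entries that any user on the system can modify.

```python
# A sketch of a CGI-directory audit: list entries that are
# world-writable, meaning any local user could rewrite a script the
# server will later execute.
import os
import stat

def world_writable_entries(directory):
    """Return the names of entries anyone on the system can modify."""
    risky = []
    for name in os.listdir(directory):
        mode = os.stat(os.path.join(directory, name)).st_mode
        if mode & stat.S_IWOTH:          # the "others may write" bit
            risky.append(name)
    return risky
```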

Transaction Security

Transaction security over the Internet recently has become a hot commodity. Companies are vying intensely for a market share of Internet commerce. Many companies are looking to transaction security to guarantee that personal information (such as credit-card numbers, Social Security numbers, phone numbers, and so on) remains confidential.

One such company, CyberCash, has developed a Secure Internet Payment Service, which uses strong encryption to scramble charge and credit-card transaction information so that it can pass safely over the Internet. CyberCash provides the software free of charge as a service to its customers. Secure Internet Payment Service has been approved for global export by the United States government.

Secure servers are used to manage secure commerce transactions on the Internet. Most secure servers provide advanced security features, such as server authentication, data encryption, data integrity, and user authorization. Secure servers use WWW standards such as HTML, HTTP, HTTPS, CGI, and SSL to communicate with client browsers.

Database Access Control

Database access control is dependent on the database management system (DBMS) software in use. Standard database security is inherent to most database platforms. It is implemented with a user ID and password, which are maintained in database files. The user (or process) inputs a user ID and password combination and is provided access privileges granted to that user ID. These access privileges are granted to the user ID by database object owners.

You might wonder how this relates to Web database development. Because you can't possibly provide the entire WWW community with user ID access to your corporate database, you need to find a practical means to accurately and feasibly control database access.

You can use many mechanisms to provide access to your data repository. One of the simplest is the use of gateway software. Gateway software is available commercially for almost all databases. Additionally, some scripting languages (such as Perl) contain support for accessing a database through a defined set of database commands. The output of these commands then can be manipulated to present output in any desired format. Using gateway software, Web developers can maintain greater control over access to the data repository by specifying which database accounts to use when accessing the database. A Web application that provides a read-only interface to a database query, for example, can use a database account with read-only privileges. This removes the possibility of accidentally deleting data by disallowing write access. Gateway software is discussed in greater detail in Chapter 6, "Database and Data Repository Access Methods," and Chapter 11, "Developing HTML Forms for Database Access and Application Interfaces."
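The read-only gateway idea can be demonstrated without commercial gateway software. The sketch below uses Python's built-in sqlite3 module purely as a stand-in (the database file, table, and data are hypothetical): the Web-facing query path opens the database in read-only mode, so even a buggy or hostile request cannot modify the data.

```python
# A sketch of the read-only gateway idea: the query path used by Web
# requests opens the database read-only, so write access is impossible
# by construction. Database name and contents are hypothetical.
import sqlite3

DB_PATH = "products.db"

def create_sample_db(path=DB_PATH):
    """Administrative setup: create and populate the database."""
    with sqlite3.connect(path) as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS products (name TEXT)")
        conn.execute("INSERT INTO products VALUES ('widget')")

def query_products(path=DB_PATH):
    """The gateway's query path: a connection that can only read."""
    conn = sqlite3.connect("file:%s?mode=ro" % path, uri=True)
    try:
        return [row[0] for row in conn.execute("SELECT name FROM products")]
    finally:
        conn.close()
```

With a full RDBMS, the same effect comes from granting the gateway's database account only SELECT privileges on the tables it serves.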

Firewalls

Firewalls provide another means of securing your client/server environment. Firewall implementations can take the form of hardware or software, and they easily can be configured so that only traffic that meets specified rules can pass through. You could configure a firewall to enable all outgoing traffic but allow only incoming Internet e-mail traffic, for example. This doesn't make your network invulnerable to e-mail attack, but it does reduce the chance of attack by limiting resource access from the outside world.

A Host-Based Firewall

A host-based firewall implementation is based on software running on a computer generally dedicated to security. This machine authenticates the transaction before passing it to the destination machine. Although host-based firewalls usually are designed to manage security at the application level, they can be configured to handle security at the network level as well.

A Router-Based Firewall

A router-based firewall implementation bases its security on a screening router hardware device. Screening routers use network data-packet filtering to manage security. This usually is accomplished with a set of rules configured in the router. A router-based firewall can be configured to enable outbound packets (those from the intranet to the Internet) to pass through the router, for example, while implementing filtering on inbound packets to determine whether access is allowed. Figure 3.11 depicts a router-based firewall implementation.
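The screening logic can be sketched as a rule table. The Python below is an illustration of the policy just described, not router software; the rules and port numbers shown (25 for SMTP, 80 for HTTP) are examples of a configuration an administrator might choose.

```python
# A sketch of screening-router behavior: outbound packets pass freely,
# while inbound packets must match a filtering rule to be admitted.
# The rule set here is a hypothetical example configuration.

INBOUND_RULES = [
    {"protocol": "tcp", "dst_port": 25},    # allow incoming mail (SMTP)
    {"protocol": "tcp", "dst_port": 80},    # allow incoming Web traffic
]

def router_allows(packet):
    """Apply the screening rules to one packet."""
    if packet["direction"] == "outbound":
        return True                          # intranet -> Internet passes
    return any(packet["protocol"] == rule["protocol"] and
               packet["dst_port"] == rule["dst_port"]
               for rule in INBOUND_RULES)

assert router_allows({"direction": "outbound", "protocol": "tcp", "dst_port": 9999})
assert router_allows({"direction": "inbound", "protocol": "tcp", "dst_port": 80})
assert not router_allows({"direction": "inbound", "protocol": "tcp", "dst_port": 23})
```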

In this figure, the term gateway should not be confused with the term database gateway. A gateway can be either hardware or software and provides an interface between two or more entities. A database gateway allows a software program to access a database, for example, just as a mail gateway allows external mail users to exchange messages with internal mail users. The IP provider backbone in Figure 3.11 is simply the organization's link to the remainder of the Internet.

Pretty Good Privacy Program

The Pretty Good Privacy (PGP) program, written by Phil Zimmermann, encrypts e-mail and data files. An additional feature of PGP is its capability to add a digital signature to a message without encrypting the signature. If the original message or signature is altered, PGP detects the modification.

PGP uses public key encryption, in which the encryption and decryption keys are different. Because it is computationally infeasible to derive one key from the other, the encryption key can be made public. To send a PGP message, a user encrypts the message with the recipient's public encryption key; only the recipient's private decryption key can decipher it. The private key itself is stored encrypted under a passphrase on your computer, so that someone who gains access to your computer still cannot use it.
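A toy RSA example makes the public/private split concrete. This is not how PGP itself is implemented, and the numbers are deliberately tiny for readability; real keys are hundreds of digits long, which is what makes deriving the private key from the public one infeasible:

```python
# Toy RSA illustration of public key encryption (educational only).
p, q = 61, 53                # two secret primes
n = p * q                    # modulus, shared by both keys
phi = (p - 1) * (q - 1)
e = 17                       # public exponent: (e, n) is the public key
d = pow(e, -1, phi)          # private exponent: (d, n) is the private key

message = 42                 # a message encoded as a number < n
ciphertext = pow(message, e, n)    # anyone can encrypt with the public key
recovered = pow(ciphertext, d, n)  # only the private key decrypts it

print(ciphertext, recovered)       # the ciphertext bears no resemblance to 42
```

Publishing `(e, n)` lets anyone send you mail only you can read, which is exactly the property PGP exploits.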

FIGURE 3.11. A router-based firewall network diagram.


RESOURCE: Information on downloading and configuring PGP is available at
http://netaccess.on.ca/ugali/crypt/ 



Client/Server Benefits

If you ask a group of computer users how client/server computing has benefited them, you'll get a mixed bag of answers. Some will even tell you there are no advantages to client/server computing. After looking more closely at the true savings, though, you'll see that client/server computing has many advantages over mainframe technologies. This section presents a few examples of the benefits you can reap from a client/server environment.

Platform Independence

Platform independence is probably one of the greatest advantages that client/server applications offer. With a standard communications protocol suite, applications can communicate from around the world and from completely different hardware architectures. The platform independence of Web technologies is most evident in the fact that virtually all operating system platforms today include support for Web access. Information technology (IT) managers can seamlessly integrate existing hardware and software platforms into Web-based solutions without having to purchase compatible, homogeneous hardware platforms or single-vendor solutions. The client/server model provides a tremendous amount of flexibility and previously unavailable functionality. System designs now can be formulated by selecting the hardware, software, and networking components that best fit an organization's application requirements without having to worry about interoperability.


NOTE: System and network administrators might argue that client/server computing actually makes things worse. There is some truth to their viewpoint. Distributing applications and systems (sometimes across large distances) and using heterogeneous hardware and software platforms can result in a difficult administration task when compared to a centralized system.

Other Resources

As mentioned earlier in this chapter, a primary benefit of client/server computing is the capability to make use of others' computing resources and information. The Internet and WWW are built on the TCP/IP protocol and the huge suite of client/server applications (such as HTTP, FTP, SMTP, NNTP, and so on) that run in a TCP/IP environment. You could argue that it was the Internet that brought about many of the advances in client/server computing, subsequently spawning the WWW phenomenon.
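The request/response roles that all these TCP/IP applications share can be shown with plain sockets. The one-line protocol below is invented for illustration; real services such as HTTP and SMTP layer much richer protocols on the same socket mechanics:

```python
import socket
import threading

def serve(listener):
    """The server role: wait for a request, provide a service."""
    conn, _ = listener.accept()
    request = conn.recv(1024).decode()
    conn.sendall(f"HELLO {request}".encode())   # the "service" is a greeting
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))      # port 0 asks the OS for any free port
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=serve, args=(listener,)).start()

# The client role: connect, send a request, read the response.
client = socket.socket()
client.connect(("127.0.0.1", port))
client.sendall(b"world")
reply = client.recv(1024).decode()
client.close()
print(reply)                         # HELLO world
```

The same program is both client and server here only for compactness; in practice the two roles usually run on different machines, connected by the TCP/IP protocol suite.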

Database applications fit naturally into the client/server model of computing. WWW database applications are a logical extension of the WWW client/server environment. Visual Basic is a well-suited development platform for bringing the WWW and database application development together. A user interacting with an embedded HTML form or Visual Basic application on a WWW browser (a database client) can launch a query to a database server at a location belonging to a different organization, for example. The server responds to the request by interacting with the database management system, formulates a response to the query, formats the response for display on the user's browser, and ships it back to the requesting client.
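The server-side steps in that round trip can be sketched compactly: accept a query parameter from a browser form, run it against the DBMS, and format the result as HTML for the requesting client. The table, field names, and helper function here are hypothetical, and SQLite stands in for the database management system:

```python
import sqlite3

# A stand-in DBMS with some sample data (names are invented).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (item TEXT, qty INTEGER)")
db.executemany("INSERT INTO orders VALUES (?, ?)",
               [("disk", 4), ("modem", 2)])

def handle_form_request(min_qty):
    """Simulate servicing one HTML-form submission from a browser."""
    # 1. Interact with the DBMS to formulate a response to the query.
    rows = db.execute(
        "SELECT item, qty FROM orders WHERE qty >= ?", (min_qty,)).fetchall()
    # 2. Format the response for display on the user's browser.
    cells = "".join(f"<tr><td>{item}</td><td>{qty}</td></tr>"
                    for item, qty in rows)
    return f"<table>{cells}</table>"   # 3. Shipped back to the client

html = handle_form_request(3)
print(html)
```

The browser never talks SQL directly; it sees only the finished HTML, which is what lets any client platform display the result.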

This book details how to bridge WWW client/server and database client/server environments in a Visual Basic development arena. Given a strong understanding of how these environments operate separately, you can begin to see how powerfully the three technologies complement each other.

Summary

This chapter first presented the client/server model and described the network architecture that underlies client/server computing. Architecture diagrams of intranet, Internet, WWW, and WWW database configurations illustrated the concepts.

The section on security issues focused on some of the key ways to manage security in a client/server database environment. The concept that security must be implemented at several levels was introduced, followed by a brief description of security at the Web server level and the Secure Sockets Layer (SSL). You then looked at secure servers and how they aid electronic commerce security through encrypted data transmissions. A brief explanation of firewalls and database access control followed.

Finally, you examined some of the benefits of client/server computing and learned about the complementary nature of client/server and database technologies.

The next chapter adds to your client/server understanding with a presentation of the WWW components that make up the WWW client/server computing environment. In doing so, it describes some of the application platforms available for both client and server WWW applications; it also covers the features that make up those platforms.