
30.6 TCP Preforked Server, No Locking Around accept

Our first of the "enhanced" TCP servers uses a technique called preforking. Instead of generating one fork per client, the server preforks some number of children when it starts, and then the children are ready to service the clients as each client connection arrives. Figure 30.8 shows a scenario where the parent has preforked N children and two clients are currently connected.

Figure 30.8. Preforking of children by server.


The advantage of this technique is that new clients can be handled without the cost of a fork by the parent. The disadvantage is that the parent must guess how many children to prefork when it starts. If the number of clients at any time equals the number of children, additional clients are ignored until a child becomes available. But recall from Section 4.5 that the clients are not completely ignored. The kernel will complete the three-way handshake for any additional clients, up to the listen backlog for this socket, and then pass the completed connections to the server when it calls accept. Nevertheless, the client application can notice a degradation in response time because even though its connect might return immediately, its first request might not be handled by the server for some time.

With some extra coding, the server can always handle the client load. What the parent must do is continually monitor the number of available children, and if this value drops below some threshold, the parent must fork additional children. Also, if the number of available children exceeds another threshold, the parent can terminate some of the excess children, because as we'll see later in this chapter, having too many available children can degrade performance, too.
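
One way the parent might do this is sketched below. This is not code from the text: the shared idle-child counter nidle, the thresholds MINIDLE and MAXIDLE, and the upper bound maxchildren are all hypothetical names, the pids array is assumed to have room for maxchildren entries, and a real version would have to protect the counter against concurrent updates by the children (e.g., with the locking techniques of the next sections).

    /* Sketch only: replaces the pause() loop in the parent (Figure 30.9).
     * Assumes *nidle lives in shared memory (allocated as in the meter()
     * function shown later) and that each child decrements it when accept
     * returns and increments it when it finishes with a client. */
    for ( ; ; ) {
        sleep(1);               /* parent checks once per second */
        if (*nidle < MINIDLE && nchildren < maxchildren) {
            /* too few idle children: fork another one */
            pids[nchildren] = child_make(nchildren, listenfd, addrlen);
            nchildren++;
        } else if (*nidle > MAXIDLE && nchildren > MINIDLE) {
            /* too many idle children: retire the newest one
             * (a real server would let it finish its current client) */
            kill(pids[nchildren - 1], SIGTERM);
            waitpid(pids[nchildren - 1], NULL, 0);
            nchildren--;
        }
    }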

But before worrying about these enhancements, let's examine the basic structure of this type of server. Figure 30.9 shows the main function for the first version of our preforked server.

Figure 30.9 main function for preforked server.

server/serv02.c

 1 #include    "unp.h"

 2 static int nchildren;
 3 static pid_t *pids;

 4 int
 5 main(int argc, char **argv)
 6 {
 7     int     listenfd, i;
 8     socklen_t addrlen;
 9     void    sig_int(int);
10     pid_t   child_make(int, int, int);

11     if (argc == 3)
12         listenfd = Tcp_listen(NULL, argv[1], &addrlen);
13     else if (argc == 4)
14         listenfd = Tcp_listen(argv[1], argv[2], &addrlen);
15     else
16         err_quit("usage: serv02 [ <host> ] <port#> <#children>");
17     nchildren = atoi(argv[argc - 1]);
18     pids = Calloc(nchildren, sizeof(pid_t));

19     for (i = 0; i < nchildren; i++)
20         pids[i] = child_make(i, listenfd, addrlen); /* parent returns */

21     Signal(SIGINT, sig_int);

22     for ( ; ; )
23         pause();                /* everything done by children */
24 }

11–18 An additional command-line argument is the number of children to prefork. An array is allocated to hold the PIDs of the children, which we need when the program terminates to allow the main function to terminate all the children.

19–20 Each child is created by child_make, which we will examine in Figure 30.11.

Our signal handler for SIGINT, which we show in Figure 30.10, differs from Figure 30.5.

30–34 getrusage reports on the resource utilization of terminated children, so we must terminate all the children before calling pr_cpu_time. We do this by sending SIGTERM to each child, and then we wait for all the children.
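
The pr_cpu_time function itself is not shown in this section; a minimal sketch of what such a function could look like, adding the parent's times to those of its waited-for children with getrusage, is the following:

#include    "unp.h"
#include    <sys/resource.h>

void
pr_cpu_time(void)
{
    double          user, sys;
    struct rusage   myusage, childusage;

    /* RUSAGE_CHILDREN counts only children that have terminated and
       been waited for, which is why sig_int waits for them first. */
    if (getrusage(RUSAGE_SELF, &myusage) < 0)
        err_sys("getrusage error");
    if (getrusage(RUSAGE_CHILDREN, &childusage) < 0)
        err_sys("getrusage error");

    user = (double) myusage.ru_utime.tv_sec +
           myusage.ru_utime.tv_usec / 1000000.0;
    user += (double) childusage.ru_utime.tv_sec +
            childusage.ru_utime.tv_usec / 1000000.0;
    sys = (double) myusage.ru_stime.tv_sec +
          myusage.ru_stime.tv_usec / 1000000.0;
    sys += (double) childusage.ru_stime.tv_sec +
           childusage.ru_stime.tv_usec / 1000000.0;

    printf("\nuser time = %g, sys time = %g\n", user, sys);
}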

Figure 30.11 shows the child_make function, which is called by main to create each child.

7–9 fork creates each child and only the parent returns. The child calls the function child_main, which we show in Figure 30.12 and which is an infinite loop.

Figure 30.10 Signal handler for SIGINT.

server/serv02.c

25 void
26 sig_int(int signo)
27 {
28     int     i;
29     void    pr_cpu_time(void);

30         /* terminate all children */
31     for (i = 0; i < nchildren; i++)
32         kill(pids[i], SIGTERM);
33     while (wait(NULL) > 0)     /* wait for all children */
34         ;
35     if (errno != ECHILD)
36         err_sys("wait error");

37     pr_cpu_time();
38     exit(0);
39 }
Figure 30.11 child_make function: creates each child.

server/child02.c

 1 #include    "unp.h"

 2 pid_t
 3 child_make(int i, int listenfd, int addrlen)
 4 {
 5     pid_t   pid;
 6     void    child_main(int, int, int);

 7     if ( (pid = Fork()) > 0)
 8         return (pid);            /* parent */

 9     child_main(i, listenfd, addrlen);     /* never returns */
10 }
Figure 30.12 child_main function: infinite loop executed by each child.

server/child02.c

11 void
12 child_main(int i, int listenfd, int addrlen)
13 {
14     int     connfd;
15     void    web_child(int);
16     socklen_t clilen;
17     struct sockaddr *cliaddr;

18     cliaddr = Malloc(addrlen);

19     printf("child %ld starting\n", (long) getpid());
20     for ( ; ; ) {
21         clilen = addrlen;
22         connfd = Accept(listenfd, cliaddr, &clilen);

23         web_child(connfd);      /* process the request */
24         Close(connfd);
25     }
26 }

20–25 Each child calls accept, and when this returns, the function web_child (Figure 30.7) handles the client request. The child continues in this loop until terminated by the parent.

4.4BSD Implementation

If you have never seen this type of arrangement (multiple processes calling accept on the same listening descriptor), you probably wonder how it can even work. It's worth a short digression on how this is implemented in Berkeley-derived kernels (e.g., as presented in TCPv2).

The parent creates the listening socket before spawning any children, and if you recall, all descriptors are duplicated in each child each time fork is called. Figure 30.13 shows the arrangement of the proc structures (one per process), the one file structure for the listening descriptor, and the one socket structure.

Figure 30.13. Arrangement of proc, file, and socket structures.


A descriptor is just an index into an array in the proc structure that references a file structure. One of the properties of the duplication of descriptors in the child that occurs with fork is that a given descriptor in the child references the same file structure as that same descriptor in the parent. Each file structure has a reference count that starts at one when the file or socket is opened and is incremented by one each time fork is called or each time the descriptor is duped. In our example with N children, the reference count in the file structure would be N + 1 (don't forget the parent, which still has the listening descriptor open even though the parent never calls accept).

When the program starts, N children are created, and all N call accept and all are put to sleep by the kernel (line 140, p. 458 of TCPv2). When the first client connection arrives, all N children are awakened. This is because all N have gone to sleep on the same "wait channel," the so_timeo member of the socket structure, because all N share the same listening descriptor, which points to the same socket structure. Even though all N are awakened, the first of the N to run will obtain the connection and the remaining N - 1 will all go back to sleep, because when each of the remaining N - 1 execute the statement on line 135 of p. 458 of TCPv2, the queue length will be 0 since the first child to run already took the connection.

This is sometimes called the thundering herd problem because all N are awakened even though only one will obtain the connection. Nevertheless, the code works, with the performance side effect of waking up too many processes each time a connection is ready to be accepted. We now measure this performance effect.

Effect of Too Many Children

The CPU time of 1.8 for the server in row 2 of Figure 30.1 is for 15 children and a maximum of 10 simultaneous clients. We can measure the effect of the thundering herd problem by just increasing the number of children for the same maximum number of clients (10). We don't show the results of increasing the number of children because the individual test results aren't that interesting. Since any number greater than 10 introduces superfluous children, the thundering herd problem worsens and the timing results increase.

Some Unix kernels have a function, often named wakeup_one, that wakes up only one process that is waiting for some event, instead of waking up all processes waiting for the event [Schimmel 1994].

Distribution of Connections to the Children

The next thing to examine is the distribution of the client connections to the pool of available children that are blocked in the call to accept. To collect this information, we modify the main function to allocate an array of long integer counters in shared memory, one counter per child. This is done with the following:


long   *cptr, *meter(int);     /* for counting # clients/child */

cptr = meter(nchildren);       /* before spawning children */

Figure 30.14 shows the meter function.

We use anonymous memory mapping, if supported (e.g., 4.4BSD), or the mapping of /dev/zero (e.g., SVR4). Since the array is created by mmap before the children are spawned, the array is then shared between this process (the parent) and all its children created later by fork.

We then modify our child_main function (Figure 30.12) so that each child increments its counter when accept returns and our SIGINT handler prints this array after all the children are terminated.
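
A minimal sketch of those two changes follows; this is not the exact code, and it assumes cptr is a global pointing at the shared array returned by meter.

    /* In child_main (Figure 30.12), right after accept returns: */
        connfd = Accept(listenfd, cliaddr, &clilen);
        cptr[i]++;              /* one more client for this child */

    /* In the SIGINT handler (Figure 30.10), after all the children
       have been terminated and waited for: */
    for (i = 0; i < nchildren; i++)
        printf("child %d, %ld connections\n", i, cptr[i]);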

Figure 30.14 meter function to allocate an array in shared memory.

server/meter.c

 1 #include    "unp.h"
 2 #include    <sys/mman.h>

 3 /*
 4  * Allocate an array of "nchildren" longs in shared memory that can
 5  * be used as a counter by each child of how many clients it services.
 6  * See pp. 467-470 of "Advanced Programming in the Unix Environment."
 7  */

 8 long *
 9 meter(int nchildren)
10 {
11     int     fd;
12     long   *ptr;

13 #ifdef MAP_ANON
14     ptr = Mmap(0, nchildren * sizeof(long), PROT_READ | PROT_WRITE,
15                MAP_ANON | MAP_SHARED, -1, 0);
16 #else
17     fd = Open("/dev/zero", O_RDWR, 0);

18     ptr = Mmap(0, nchildren * sizeof(long), PROT_READ | PROT_WRITE,
19                 MAP_SHARED, fd, 0);
20     Close(fd);
21 #endif

22     return (ptr);
23 }

Figure 30.2 shows the distribution. When the available children are blocked in the call to accept, the kernel's scheduling algorithm distributes the connections uniformly to all the children.

select Collisions

While looking at this example under 4.4BSD, we can also examine another poorly understood, but rare phenomenon. Section 16.13 of TCPv2 talks about collisions with the select function and how the kernel handles this possibility. A collision occurs when multiple processes call select on the same descriptor, because room is allocated in the socket structure for only one process ID to be awakened when the descriptor is ready. If multiple processes are waiting for the same descriptor, the kernel must wake up all processes that are blocked in a call to select since it doesn't know which processes are affected by the descriptor that just became ready.

We can force select collisions with our example by preceding the call to accept in Figure 30.12 with a call to select, waiting for readability on the listening socket. The children will spend their time blocked in this call to select instead of in the call to accept. Figure 30.15 shows the portion of the child_main function that changes, using plus signs to note the lines that have changed from Figure 30.12.

Figure 30.15 Modification to Figure 30.12 to block in select instead of accept.
     printf("child %ld starting\n", (long) getpid());
+    FD_ZERO(&rset);
     for   ( ; ; ) {
+          FD_SET(listenfd, &rset);
+          Select(listenfd+1, &rset, NULL, NULL, NULL);
+          if (FD_ISSET(listenfd, &rset) == 0)
+              err_quit("listenfd readable");
+
           clilen = addrlen;
           connfd = Accept(listenfd, cliaddr, &clilen);

           web_child(connfd);      /* process request */
           Close(connfd);
     }

If we make this change and then examine the kernel's select collision counter before and after, we see 1,814 collisions one time we run the server and 2,045 collisions the next time. Since the two clients create a total of 5,000 connections for each run of the server, this corresponds to about 35–40% of the calls to select invoking a collision.

If we compare the server's CPU time for this example, the value of 1.8 in Figure 30.1 increases to 2.9 when we add the call to select. Part of this increase is probably because of the additional system call (since we are calling select and accept instead of just accept), and another part is probably because of the kernel overhead in handling the collisions.

The lesson to be learned from this discussion is that when multiple processes are blocked on the same descriptor, it is better to block in a function such as accept than in a call to select.
