4.8. Solaris Doors

Doors provide a facility for processes to issue procedure calls to functions in other processes running on the same system. Using the APIs, a process can become a door server, exporting a function through a door it creates with the door_create(3X) interface. Other processes can then invoke the procedure by issuing a door_call(3X), specifying the correct door descriptor. Our goal here is not to provide a programmer's guide to doors but rather to focus on the kernel implementation, data structures, and algorithms. Some discussion of the APIs is, of course, necessary to keep things in context, but we suggest that you refer to the manual pages and to Stevens's book [35] to understand how to develop applications with doors. The door APIs were first available in Solaris 2.6. The Solaris kernel ships with a shared object library, libdoor.so, that must be linked to applications using the doors APIs. Table 4.10 describes the door APIs available in Solaris. During our coverage of doors, we refer to the interfaces as necessary for clarity.
4.8.1. Doors Overview

Figure 4.6 illustrates broadly how doors provide an interprocess communication mechanism. The file abstraction used by doors is the means by which client kernel threads retrieve the proper door handle required to issue a door_call(3X). It is similar to the methodology employed when POSIX IPC facilities are used: a path name in the file system namespace is opened, and the returned file descriptor is passed as an argument in the door_call(3X) to call into the desired door. An argument structure, door_arg_t, is declared by the client code and used for passing arguments to the door server function being called. The address of the door_arg_t structure is passed as the second argument by the client in door_call(3X).

Figure 4.6. Solaris Doors
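The client-side flow just described might look like the following hedged sketch. The path name and request payload are hypothetical, and the code is Solaris-only (door.h does not exist on other systems); it is an illustration of the pattern, not code from the doors library.

```c
/* Hedged client-side sketch (Solaris-only): open a hypothetical door
 * file, fill in a door_arg_t, and issue door_call(3X). */
#include <door.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
	door_arg_t arg;
	int args[2] = { 20, 22 };   /* hypothetical request payload */
	int result;
	int fd;

	/* The door descriptor comes back from a plain open(2) of the
	 * file the server bound the door to with fattach(3C). */
	if ((fd = open("/tmp/example_door", O_RDONLY)) < 0) {
		perror("open");
		exit(1);
	}
	arg.data_ptr = (char *)args;    /* arguments to the door function */
	arg.data_size = sizeof (args);
	arg.desc_ptr = NULL;            /* no door descriptors passed */
	arg.desc_num = 0;
	arg.rbuf = (char *)&result;     /* where return data lands */
	arg.rsize = sizeof (result);

	if (door_call(fd, &arg) < 0) {  /* invoke the exported function */
		perror("door_call");
		exit(1);
	}
	(void) printf("result = %d\n", result);
	return (0);
}
```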
On the server side, a function defined in the process can be made available to external client processes through the creation of a door (door_create(3X)). The server must also bind the door to a file in the file system namespace. This is done with fattach(3C), which binds a STREAMS-based or door file descriptor to a file system path name. Once the binding has been established, a client can issue an open(2) on the path name and use the returned file descriptor in door_call(3X).

4.8.2. Doors Implementation

Doors are implemented in the kernel as a pseudo file system, doorfs, which is loaded from the /kernel/sys directory during boot. Within a process, a door is referenced through its door descriptor, which is similar in form and function to a file descriptor; in fact, the allocation of a door descriptor in a process uses an available file descriptor slot. The major data structures required for doors support are illustrated in Figure 4.7. The two main structures are door_node, linked to the process structure with the p_door_list pointer, and door_data, linked to the door_node with the door_data pointer. A process can be a door server for multiple functions (multiple doors). Each call to door_create(3X) creates another door_node, which links to an existing door_node (if one already exists) through the door_list. door_data is created as part of the setup of a server thread during the create process, which we're about to walk through. door_data includes a door_arg structure that manages the argument list passed in door_call(3X), and a link to a door descriptor (door_desc) used to pass door descriptors when a door function is called.

Figure 4.7. Solaris Doors Structures

To continue: A call to door_create(3X) enters the libdoor.so library door_create() entry point (as is the case with any library call). The kernel door_create() is invoked from the library and performs the following actions.
The next bit of code in door_return() applies to argument handling, return data, and other conditions that need to be dealt with when a kernel thread issues door_call(3X). We're still in the door create phase, so a bit later we'll revisit what happens in door_return() as a result of door_call(3X). Continuing with the door create in the door_return() kernel function:
We now digress slightly to explain shuttle synchronization objects. Typically, execution control flow is managed by the kernel dispatcher (see Chapter 5), using condition variables and sleep queues. The other synchronization primitives, mutex locks and reader/writer locks, are managed by turnstiles, an implementation of sleep queues that provides a priority inheritance mechanism. The shuttle object is a relatively new synchronization primitive (introduced in Solaris 2.5, when doors first shipped); it allows very fast transfer of control of a processor from one kernel thread to another without incurring the overhead of dispatcher queue searching and normal kernel thread processing. In the case of a door_call(), control can be transferred directly from the caller (the client in this case) to a thread in the door server pool, which executes the door function on behalf of the caller. When the door function has completed, control is transferred directly back to the client (caller), all using the kernel shuttle interfaces to set thread state and to enter the dispatcher at the appropriate places. This direct transfer of processor control contributes significantly to the IPC performance attainable with doors. Shuttle objects are currently used only by the doors subsystem in Solaris. Kernel threads sleeping on shuttle objects have a 0 value in their wait channel field (t_wchan) and a value of 1 in t_wchan0. The thread's t_sobj_ops (synchronization object operations table) pointer is set to the shuttle object's operations structure (shuttle_sops); the thread's state is, of course, TS_SLEEP, and the thread's T_WAKEABLE flag is set. Getting back to door creation, we see the following.
This completes the creation of a door server. A server thread in the door pool is left sleeping on a shuttle object (the call to shuttle_swtch()), ready to execute the door function. Application code that creates a door to a function (becomes a door server) typically creates a file in the file system to which the door descriptor can be attached, using the standard open(2) and fattach(3C) APIs, to make the door more easily accessible to other processes. The fattach(3C) API has traditionally been used for STREAMS code, where it is desirable to associate a STREAM or STREAMS-based pipe with a file in the file system namespace, for precisely the same reason one would associate a door descriptor with a file name: that is, to make the descriptor easily accessible to other processes on the system so application software can take advantage of the IPC mechanism. The door code can build from the fact that the binding of an object to a file name, when that object does not meet the traditional definition of what a file is, has already been solved. fattach(3C) is implemented with a pseudo file system called namefs, the name file system. namefs allows the mounting of file systems on nondirectory mount points, as opposed to the traditional mounting of a file system that requires the selected mount point to be a directory file. Currently, fattach(3C) is the only client application of namefs; it calls the mount(2) system call, passing namefs as the file system name character string and a pointer to a namefs file descriptor. The mount(2) system call enters the VFS switch table through the VFS_MOUNT macro and enters the namefs mount code, nm_mount(). With the door server in place, client processes are free to issue a door_call(3X) to invoke the exported server function.
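The server-side sequence just described, creating a door and binding it to a file name with fattach(3C), might look like the following hedged sketch. The path name, payload format, and exported function are all hypothetical, and the code is Solaris-only.

```c
/* Hedged sketch of a door server (Solaris-only). The path name and the
 * function exported through the door are hypothetical. */
#include <door.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stropts.h>    /* fattach(3C) */
#include <unistd.h>

#define DOOR_PATH "/tmp/example_door"   /* hypothetical binding point */

/* The exported function; the kernel runs it on a server-pool thread. */
static void
sum_proc(void *cookie, char *argp, size_t arg_size,
    door_desc_t *dp, uint_t n_desc)
{
	int args[2], sum = -1;

	/* Arguments arrive as the flat byte buffer the client described
	 * in its door_arg_t. */
	if (arg_size >= sizeof (args)) {
		(void) memcpy(args, argp, sizeof (args));
		sum = args[0] + args[1];
	}
	/* door_return(3X) sends the result back to the client's rbuf and
	 * leaves this thread in the pool awaiting the next door_call(). */
	(void) door_return((char *)&sum, sizeof (sum), NULL, 0);
}

int
main(void)
{
	int did, fd;

	if ((did = door_create(sum_proc, NULL, 0)) < 0) {
		perror("door_create");
		exit(1);
	}
	/* Create the file, then bind the door descriptor to its name. */
	(void) unlink(DOOR_PATH);
	if ((fd = open(DOOR_PATH, O_CREAT | O_RDWR, 0644)) < 0 ||
	    fattach(did, DOOR_PATH) < 0) {
		perror("bind door");
		exit(1);
	}
	(void) close(fd);
	(void) pause();     /* server-pool threads service door_call()s */
	return (0);
}
```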
Just to get back to the forest for a moment (in case you're lost among the trees): we're in shuttle_resume() as a result of a kernel thread issuing door_call(3X). The door_call() kernel code up to this point essentially allocated or initialized the necessary data structures for the server thread to execute the exported function on behalf of the caller. The shuttle_resume() function is entered from door_call(), so the kernel thread now executing in shuttle_resume() is the door client. What needs to happen is really pretty simple (relatively speaking): the server thread, which was passed to shuttle_resume() as an argument, needs to get control of the processor, and the current thread executing the shuttle_resume() code needs to be put to sleep on a shuttle object, since the current thread and the door client thread are one and the same.
A few final points regarding doors. There's a fair amount of code in the kernel doorfs module designed to deal with error conditions and the premature termination of the calling thread or server thread. In general, if the calling thread is awakened early, that is, before door_call() has completed, the code figures out why the wakeup occurred (signal, exit call, etc.) and sends a cancel signal (SIGCANCEL) to the server thread. If a server thread is interrupted because of a signal, exit, error condition, etc., the door_call() code bails out. In the client, an EINTR (interrupted system call) error is set, signifying that door_call() terminated prematurely.
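A client can detect this premature termination and, where it makes sense, reissue the call. The hedged Solaris-only sketch below assumes the door function is idempotent, so a retry after EINTR is safe; the wrapper name is invented.

```c
/* Hypothetical client wrapper (Solaris-only): retry a door_call() that
 * was interrupted by a signal. Safe only if the door function is
 * idempotent, since the server may or may not have run before the
 * interruption. */
#include <door.h>
#include <errno.h>

int
door_call_retry(int d, door_arg_t *arg)
{
	int rv;

	while ((rv = door_call(d, arg)) < 0 && errno == EINTR)
		continue;   /* door_call() terminated prematurely; reissue */
	return (rv);
}
```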