The multi-threaded server (MTS) is not something that should be implemented just because it exists. MTS has a specific purpose and, like any other feature, it comes with a set of trade-offs that you should consider before deciding to use it.

Organizations over the years have greatly increased the number of users connecting to their databases. Under the traditional dedicated server model of connecting, each client user is given a dedicated process that runs on the database server. These dedicated processes require memory, and, as the number of users increases, the demand for memory increases linearly. It also takes time to create and delete these dedicated server processes. If you have a lot of churn (in other words, if clients are constantly connecting and disconnecting throughout the day), a good percentage of your server's CPU resources will end up being devoted to creating and deleting dedicated server processes. These are the two issues that MTS was designed to solve.

Benefits of MTS

In many environments, especially online transaction environments, users spend much of their time reading and editing data on their screens, and comparatively little time actively executing SQL statements. What that means is that many of those dedicated server processes are sitting out there idle. MTS allows one server process to service many clients instead of just one. Such a server process is referred to as a shared server process. Several advantages accrue as a result of using MTS:

  • Fewer server processes are needed, since one server process can now handle many clients. This leads directly to a reduced demand for memory.
  • Server processes are used more efficiently; there is less idle time.
  • Server processes are no longer created and deleted each time a client connects and disconnects. Consequently, fewer resources are spent on the overhead involved with creating and deleting processes on the server.
  • TCP/IP port numbers can be explicitly specified for MTS dispatchers. Knowing in advance the port numbers that will be used is sometimes helpful when dealing with communications through a firewall.
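As a sketch of that last point, a dispatcher can be tied to a known TCP/IP port through the MTS_DISPATCHERS initialization parameter. The host name and port number shown here are placeholders, not recommendations:

    MTS_DISPATCHERS = "(ADDRESS=(PROTOCOL=TCP)(HOST=myhost)(PORT=5000))(DISPATCHERS=2)"

With the port fixed at 5000, a firewall rule permitting that traffic can be written in advance.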

In the scenario shown in the figure above, the use of dedicated servers requires four server processes, one for each connection. If each process consumes 5 megabytes of memory, you will need 20 megabytes just to support this scenario. If there were 100 connections, then 500 megabytes of memory would be used for the dedicated servers. With the use of shared server processes, the memory requirements drop dramatically. In the figure above, those same four client connections are supported by only two shared server processes. You might even be able to support 100 connections with those same two server processes. Consequently, the memory requirements are dramatically lower when MTS is used.

Because shared server processes are not directly tied to client connections, they are not created and deleted each time a client connection is created or deleted. Instead, Oracle actually maintains an active pool of shared server processes. New processes are only created when demand exceeds supply, and shared server processes are only deleted when demand drops off and those extra shared server processes are no longer needed.
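The size of that pool is governed by two initialization parameters: MTS_SERVERS sets the number of shared server processes started with the instance, and MTS_MAX_SERVERS caps the number Oracle will create in response to demand. The values shown here are illustrative only; appropriate settings depend on your workload:

    MTS_SERVERS = 5
    MTS_MAX_SERVERS = 20

MTS_SERVERS can also be adjusted on a running instance with ALTER SYSTEM SET MTS_SERVERS.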

Drawbacks of MTS

  • The use of MTS is predicated on the assumption that each client will use only a small fraction of its connect time to do any work against the database. Thus, the remaining time can be used to do work for other clients. The more active your clients are, the greater the number of shared server processes you will need in order to respond to their requests in a timely manner. At some point, the use of MTS ceases to make sense at all. A batch job, or a client that extracts a large amount of data, should not connect through MTS.
  • Latency will increase with MTS. When a client submits a SQL statement to be executed, it now has to wait until a shared server process is freed up and can take on the task of executing that statement. The amount of wait time depends on the number of shared server processes currently running and on how busy they are.
  • MTS can reduce the demand for memory and CPU resources on the server, but it's not going to result in increased performance from the client's perspective. At best, in an environment where the users require lots of "think time," the use of MTS won't be noticed because there will always be enough available shared server processes to service incoming requests.

Implementing MTS

Two key issues needed to be addressed in order for Oracle to implement MTS and allow one server process to be shared by multiple client connections:

  • The issue of session-specific data. Dedicated server processes store session-specific data in their own memory space. To enable one connection to be served by two or more processes, this session-specific memory had to be moved to a commonly accessible location. 
  • The issue of routing client requests to shared server processes.

The next two sections describe how Oracle resolved these issues.

Session-specific memory

Every connection to an Oracle database has a session-specific memory area associated with it. This memory is sometimes referred to as the User Global Area (UGA) and is used to hold the values of PL/SQL variables, the values of SQL bind variables, and other items specific to a session. The UGA also contains that part of the sort area specified by the SORT_AREA_RETAINED_SIZE initialization parameter.

When a traditional dedicated server connection is made, the UGA is stored within the dedicated server process's Program Global Area (PGA). However, when an MTS connection is made, the UGA is stored in either the large pool or the shared pool. 

The reason for moving the UGA into a common memory structure, such as the large pool or shared pool, is to enable any shared server process to access it. Remember that under MTS, there is no direct correlation between shared server processes and client connections. Any shared server process may be called upon to process a SQL statement issued by a client, so every shared server process needs access to each client's session state.

It's important to understand that the use of MTS does not affect the overall amount of memory used for session-specific data. That memory is simply moved from the memory space of the individual server processes into a common memory space accessible by all processes. The memory savings that results from MTS comes about from reducing the overall number of server processes that you need to run at any one time.

If you don't have a large pool configured, then the UGA for MTS connections will be placed in the shared pool. However, placing the UGA in the shared pool can result in fragmentation, and Oracle recommends that the large pool be used.
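One rough way to size the large pool, sketched here, is to look at the peak UGA memory that sessions have actually used. The relevant statistic is named 'session uga memory max':

    SELECT SUM(value) "Total peak UGA (bytes)"
      FROM v$sesstat s, v$statname n
     WHERE n.name = 'session uga memory max'
       AND s.statistic# = n.statistic#;

The result, plus a margin for growth, suggests a starting point for the LARGE_POOL_SIZE initialization parameter (for example, LARGE_POOL_SIZE = 40M; the figure is a placeholder).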


Routing client requests

When a dedicated server connection is used, and a client needs to execute a SQL statement, that statement is simply passed to the process on the server dedicated to that particular client. When a client is connected via MTS, there is no dedicated process on the server, so the SQL statement gets sent to a dispatcher process.

In a multi-threaded server environment, dispatcher processes assume the task of communicating with clients. Instead of the listener handing off each new client connection to a dedicated server process, each new client connection is given to the most lightly loaded dispatcher process. That dispatcher then handles all communication with the client for the duration of the session.
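To see how heavily loaded your dispatchers are, you can query the V$DISPATCHER view. This query, a common tuning sketch, reports the busy percentage for each protocol:

    SELECT network "Protocol",
           SUM(busy) / (SUM(busy) + SUM(idle)) * 100 "% Busy"
      FROM v$dispatcher
     GROUP BY network;

If the dispatchers for a protocol are consistently busy a large fraction of the time, adding dispatchers for that protocol is worth considering.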

Use caution when killing dispatcher processes from the operating-system prompt. Unlike when you kill a dedicated server process, killing a dispatcher process can adversely affect a large number of users.

Dispatcher processes take requests (often SQL statements) from clients and place them into a request queue. The request queue is an area of memory in the System Global Area (SGA) into which all incoming requests are placed. The shared server processes for an instance monitor the request queue. When a request is entered, the next available shared server process pulls it out of the queue and services it. Requests are handled on a first-in/first-out basis as shared servers become available.

When a shared server process finishes a task, the results are placed into a response queue. A response queue is also an area of memory within the SGA. Response queues are dispatcher-specific. Each dispatcher has its own response queue; however, all the dispatchers for an instance share the same request queue. When a dispatcher detects a response in its response queue, it sends that response back to the client that first initiated the request.
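The time requests spend in the request queue can be checked through the V$QUEUE view; the request queue has a TYPE of COMMON. A sketch of such a query:

    SELECT DECODE(totalq, 0, 'No requests',
                  wait/totalq || ' hundredths of seconds') "Average wait"
      FROM v$queue
     WHERE type = 'COMMON';

A steadily increasing average wait suggests that more shared server processes are needed.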

The number of dispatchers that you have in an MTS environment depends on several factors. You must have at least one dispatcher for each network protocol that you want to support. Thus, if you are supporting TCP/IP and SPX connections, you need two dispatchers, one for each protocol. The number of connections you expect to support is also a factor. Each dispatcher process can only support a certain number of connections (that number is operating-system specific). If your operating system allows 50 connections per process, and you expect to have up to 100 users, then you need at least two dispatcher processes. If you are still supporting two protocols, then you may need four dispatcher processes (two for each protocol).
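Continuing that example, MTS_DISPATCHERS can be specified once per protocol in the initialization file. The dispatcher counts below are placeholders drawn from the scenario just described:

    MTS_DISPATCHERS = "(PROTOCOL=TCP)(DISPATCHERS=2)"
    MTS_DISPATCHERS = "(PROTOCOL=SPX)(DISPATCHERS=2)"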
