Is there a way for multiple processes to share a listening socket?
In socket programming, you create a listening socket, and then for each client that connects you get an ordinary stream socket that you can use to serve that client's requests. The OS manages the queue of incoming connections behind the scenes.
Two processes cannot bind to the same port at the same time, by default.
I'm wondering if there's a way (on any well-known OS, especially Windows) to launch multiple instances of a process such that they all bind to the socket and effectively share the queue. Each process instance could then be single-threaded, blocking on accepting a new connection. When a client connects, one of the idle process instances would accept that client.
This would allow each process to have a very simple, single-threaded implementation that shares nothing unless it does so through explicit shared memory, and the user would be able to adjust the processing bandwidth by starting more instances.
Does such a feature exist?
Edit: For those asking "Why not use threads?" — obviously threads are an option. But with multiple threads in a single process, all objects are shareable, and it takes great care to ensure that objects are either not shared, visible to only one thread at a time, or absolutely immutable; and the most popular languages and runtimes have no built-in support for managing that complexity.
By starting a handful of identical worker processes, you get a concurrent system in which the default is no sharing, making it much easier to build a correct and scalable implementation.
You can share a socket between two (or more) processes on both Linux and Windows.
Under Linux (or a POSIX-type OS), using fork() causes the forked child to have copies of all the parent's file descriptors. Any that it does not close will continue to be shared, and (in the case of a TCP listening socket, for example) can be used to accept() new sockets for clients. This is how many servers, Apache included, work in most cases.
On Windows the same thing basically applies, except that there is no fork() system call, so the parent process will need to create a child process with CreateProcess or similar (which can of course use the same executable) and needs to pass it an inheritable handle.
Making a listening socket an inheritable handle is not a completely trivial activity, but not too tricky either. DuplicateHandle() needs to be used to create a duplicate handle with the inheritable flag set (note that it still stays in the parent process). Then you can supply that handle in the STARTUPINFO structure to CreateProcess as the child's STDIN, OUT, or ERR handle (assuming you didn't want to use those for anything else).
EDIT:
Reading the MSDN library, it appears that WSADuplicateSocket is a more robust or correct mechanism for doing this; it is still nontrivial because the parent and child processes need to work out which handle needs to be duplicated by some IPC mechanism (although this could be as simple as a file in the filesystem).
CLARIFICATION:
In answer to the OP's original question, no, multiple processes cannot bind(); just the original parent process would call bind(), listen() etc., and the child processes would just process requests with accept(), send(), recv() etc.
Most of the others have given the technical reasons why this works. Here's some Python code you can run to demonstrate it for yourself:
import socket
import os

def main():
    serversocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    serversocket.bind(("127.0.0.1", 8888))
    serversocket.listen(0)

    # Child process: inherits the listening socket across fork()
    if os.fork() == 0:
        accept_conn("child", serversocket)

    accept_conn("parent", serversocket)

def accept_conn(message, s):
    while True:
        c, addr = s.accept()
        print('Got connection from in %s' % message)
        c.send(('Thank you for your connecting to %s\n' % message).encode())
        c.close()

if __name__ == "__main__":
    main()
Indeed, two process IDs are listening:
$ lsof -i :8888
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
Python 26972 avaitla 3u IPv4 0xc26aa26de5a8fc6f 0t0 TCP localhost:ddi-tcp-1 (LISTEN)
Python 26973 avaitla 3u IPv4 0xc26aa26de5a8fc6f 0t0 TCP localhost:ddi-tcp-1 (LISTEN)
Here's the result of running telnet and the program:
$ telnet 127.0.0.1 8888
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Thank you for your connecting to parent
Connection closed by foreign host.
$ telnet 127.0.0.1 8888
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Thank you for your connecting to child
Connection closed by foreign host.
$ telnet 127.0.0.1 8888
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Thank you for your connecting to parent
Connection closed by foreign host.
$ python prefork.py
Got connection from in parent
Got connection from in child
Got connection from in parent
Looks like this question has already been answered fully by MarkR and zackthehack but I would like to add that Nginx is an example of the listening socket inheritance model.
Here is a good description:
Implementation of HTTP Auth Server Round-Robin and Memory Caching for NGINX Email Proxy June 6, 2007 Md. Mansoor Peerbhoy <mansoor@zimbra.com>
...
Flow of an NGINX worker process
After the main NGINX process reads the configuration file and forks into the configured number of worker processes, each worker process enters into a loop where it waits for any events on its respective set of sockets.
Each worker process starts off with just the listening sockets, since there are no connections available yet. Therefore, the event descriptor set for each worker process starts off with just the listening sockets.
(NOTE) NGINX can be configured to use any one of several event polling mechanisms: aio/devpoll/epoll/eventpoll/kqueue/poll/rtsig/select
When a connection arrives on any of the listening sockets (POP3/IMAP/SMTP), each worker process emerges from its event poll, since each NGINX worker process inherits the listening socket. Then, each NGINX worker process will attempt to acquire a global mutex. One of the worker processes will acquire the lock, whereas the others will go back to their respective event polling loops.
Meanwhile, the worker process that acquired the global mutex will examine the triggered events, and will create necessary work queue requests for each event that was triggered. An event corresponds to a single socket descriptor from the set of descriptors that the worker was watching for events from.
If the triggered event corresponds to a new incoming connection, NGINX accepts the connection from the listening socket. Then, it associates a context data structure with the file descriptor. This context holds information about the connection (whether POP3/IMAP/SMTP, whether the user is yet authenticated, etc). Then, this newly constructed socket is added into the event descriptor set for that worker process.
The worker now relinquishes the mutex (which means that any events that arrived on other workers can proceed), and starts processing each request that was earlier queued. Each request corresponds to an event that was signaled. From each socket descriptor that was signaled, the worker process retrieves the corresponding context data structure that was earlier associated with that descriptor, and then calls the corresponding callback functions that perform actions based on the state of that connection. For instance, in the case of a newly established IMAP connection, the first thing that NGINX will do is write the standard IMAP welcome message onto the connected socket (* OK IMAP4 ready). By and by, each worker process completes processing the work queue entry for each outstanding event, and returns back to its event polling loop. Once any connection is established with a client, the events usually are more rapid, since whenever the connected socket is ready for reading, the read event is triggered, and the corresponding action must be taken.
I would like to add that sockets can be shared on Unix/Linux via AF_UNIX sockets (inter-process sockets). What seems to happen is that a new socket descriptor is created that is somewhat of an alias to the original one. This new socket descriptor is sent via the AF_UNIX socket to the other process. This is especially useful in cases where a process cannot fork() to share its file descriptors, for example when using libraries that prevent this due to threading issues. You should create a Unix domain socket and use libancillary to send over the descriptor.
See:
- https://www.linuxquestions.org/questions/programming-9/how-to-share-socket-between-processes-289978/
For creating AF_UNIX Sockets:
For example code:
- http://lists.canonical.org/pipermail/kragen-hacks/2002-January/000292.html
- http://cpansearch.perl.org/src/SAMPO/Socket-PassAccessRights-0.03/passfd.c
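These days the same SCM_RIGHTS descriptor-passing trick can be tried without any C: Python 3.9 added socket.send_fds()/socket.recv_fds(), which wrap the same ancillary-data mechanism libancillary uses. Here is a minimal sketch (assuming POSIX and Python 3.9+; it uses fork() only as a convenient way to get a second process — in the real use-case the two processes would be unrelated and would talk over a named Unix domain socket instead of a socketpair):

```python
import os
import socket

# Parent creates a TCP listening socket...
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))  # any free port
listener.listen(1)
port = listener.getsockname()[1]

# ...and an AF_UNIX channel over which to ship its descriptor.
parent_end, child_end = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

if os.fork() == 0:  # child: receive the fd and serve on it
    parent_end.close()
    msg, fds, _, _ = socket.recv_fds(child_end, 1024, 1)
    shared = socket.socket(fileno=fds[0])  # alias of the parent's listener
    conn, _ = shared.accept()
    conn.sendall(b"hello from child\n")
    conn.close()
    os._exit(0)

child_end.close()
socket.send_fds(parent_end, [b"take it"], [listener.fileno()])

# Parent connects as an ordinary client to prove the child is serving.
client = socket.create_connection(("127.0.0.1", port))
data = client.recv(64)
os.wait()
```

The received descriptor refers to the same open socket description as the parent's listener, so both processes end up sharing one accept queue.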
Not sure how relevant this is to the original question, but in Linux kernel 3.9 there is a patch adding a TCP/UDP feature: TCP and UDP support for the SO_REUSEPORT socket option. The new socket option allows multiple sockets on the same host to bind to the same port, and is intended to improve the performance of multithreaded network server applications running on top of multicore systems. More information can be found in the LWN article SO_REUSEPORT in Linux Kernel 3.9. As mentioned in the reference link:
the SO_REUSEPORT option is non-standard, but available in a similar form on a number of other UNIX systems (notably, the BSDs, where the idea originated). It seems to offer a useful alternative for squeezing the maximum performance out of network applications running on multicore systems, without having to use the fork pattern.
Starting with Linux 3.9, you can set the SO_REUSEPORT on a socket and then have multiple non-related processes share that socket. That's simpler than the prefork scheme, no more signal troubles, fd leak to child processes, etc.
Linux 3.9 introduced a new way of writing socket servers:
The SO_REUSEPORT socket option
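As a minimal sketch of the option (assuming Linux 3.9+, where SO_REUSEPORT is defined; in a real server the two sockets would live in separate, unrelated processes):

```python
import socket

def reuseport_listener(host, port):
    # SO_REUSEPORT must be set on every socket that wants to share
    # the port, and it must be set before bind().
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    s.bind((host, port))
    s.listen(16)
    return s

# Let the kernel pick a free port for the first socket, then bind a
# second, completely independent socket to the very same address/port.
a = reuseport_listener("127.0.0.1", 0)
port = a.getsockname()[1]
b = reuseport_listener("127.0.0.1", port)  # no EADDRINUSE on Linux 3.9+
```

The kernel then load-balances incoming connections across all sockets bound to that port, which is what makes this a drop-in replacement for the prefork pattern.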
Have a single task whose sole job is to listen for incoming connections. When a connection is received, it accepts the connection - this creates a separate socket descriptor. The accepted socket is passed to one of your available worker tasks, and the main task goes back to listening.
s = socket();
bind(s);
listen(s);
while (1) {
    s2 = accept(s);
    send_to_worker(s2);
}
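The pseudocode above can be sketched concretely in Python (assuming POSIX and Python 3.9+ for socket.send_fds; a forked worker plays the role of send_to_worker's recipient, and names like worker_end are invented for the example):

```python
import os
import socket

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))  # any free port
listener.listen(1)
addr = listener.getsockname()

# Channel over which the main task hands accepted sockets to the worker.
to_worker, worker_end = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

if os.fork() == 0:  # worker: wait for a connected socket, serve it, exit
    to_worker.close()
    _, fds, _, _ = socket.recv_fds(worker_end, 16, 1)
    conn = socket.socket(fileno=fds[0])
    conn.sendall(b"handled by worker\n")
    conn.close()
    os._exit(0)

worker_end.close()
client = socket.create_connection(addr)      # stand-in for a real client
s2, _ = listener.accept()                    # main task accepts...
socket.send_fds(to_worker, [b"job"], [s2.fileno()])  # ...and hands off s2
s2.close()                                   # main task drops its copy
reply = client.recv(64)
os.wait()
```

Note that the main task can close its copy of the accepted socket as soon as it has been sent; the SCM_RIGHTS message keeps the underlying connection alive for the worker.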
Under Windows (and Linux) it is possible for one process to open a socket and then pass that socket to another process such that that second process can also then use that socket (and pass it on in turn, should it wish to do so).
The crucial function call is WSADuplicateSocket().
This populates a structure with information about an existing socket. This structure then, via an IPC mechanism of your choice, is passed to another existing process (note I say existing - when you call WSADuplicateSocket(), you must indicate the target process which will receive the emitted information).
The receiving process can then call WSASocket(), passing in this structure of information, and receive a handle to the underlying socket.
Both processes now hold a handle to the same underlying socket.
It sounds like what you want is one process listening for new clients, which then hands off the connection once a client connects. Doing that across threads is easy, and in .Net you even have the BeginAccept etc. methods to take care of a lot of the plumbing for you. Handing off connections across process boundaries would be complicated and would not have any performance advantage.
Alternatively you can have multiple processes bound and listening on the same socket.
TcpListener tcpServer = new TcpListener(IPAddress.Loopback, 10090);
tcpServer.Server.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true);
tcpServer.Start();

while (true)
{
    TcpClient client = tcpServer.AcceptTcpClient();
    Console.WriteLine("TCP client accepted from " + client.Client.RemoteEndPoint + ".");
}
If you fire up two processes each executing the above code, it will work, and the first process seems to get all the connections. If the first process is killed, the second one then gets the connections. With socket sharing like that, I'm not sure exactly how Windows decides which process gets new connections, although a quick test points to the oldest process getting them first. As to whether it shares when the first process is busy, or anything like that, I don't know.
Another approach (that avoids many complex details) in Windows if you are using HTTP, is to use HTTP.SYS. This allows multiple processes to listen to different URLs on the same port. On Server 2003/2008/Vista/7 this is how IIS works, so you can share ports with it. (On XP SP2 HTTP.SYS is supported, but IIS5.1 does not use it.)
Other high level APIs (including WCF) make use of HTTP.SYS.