stop doesn't really stop the rocket server

Bug #1205891 reported by gbin
This bug affects 1 person
Affects: Rocket Web Server
Status: New
Importance: Undecided
Assigned to: Unassigned

Bug Description

See the minimal reproduction code here:
https://github.com/gbin/err/issues/175

Basically, the listening socket is still open long after stop is called.

Since you pass the host/port to the start method and it begins listening, you would expect stop to undo this.
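The core of the problem can be shown in isolation. The sketch below is a minimal model of the bug, not Rocket's actual code: a thread blocked in accept() keeps the listening port open, and flipping a flag alone changes nothing while accept() is still blocking; the socket itself has to be shut down (behavior of shutdown() on a listening socket is platform-dependent; on Linux it makes the blocked accept() raise OSError):

```python
import socket
import threading
import time

def listener_thread(sock, exited):
    try:
        sock.accept()  # blocks until a connection arrives or the socket dies
    except OSError:
        exited.set()   # shutdown() unblocked accept() with an error

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("127.0.0.1", 0))  # ephemeral port
sock.listen(5)

exited = threading.Event()
t = threading.Thread(target=listener_thread, args=(sock, exited))
t.start()
time.sleep(0.1)  # let the thread block in accept()

# The essence of the proposed fix: forcibly shut the socket down so the
# blocked accept() returns and the thread can exit.
sock.shutdown(socket.SHUT_RDWR)
t.join(timeout=2)
sock.close()
```

Without the shutdown() call, the join would hang and the port would stay bound for as long as the process runs.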

Revision history for this message
gbin (gbin) wrote :

I propose this patch to force the server to quit:

diff --git a/rocket/listener.py b/rocket/listener.py
index 7b54bea..157663b 100644
--- a/rocket/listener.py
+++ b/rocket/listener.py
@@ -113,40 +113,44 @@ class Listener(Thread):
              # secure socket. We don't do anything because it will be detected
              # by Worker and dealt with appropriately.
              pass
-
+
          return sock
 
      def start(self):
          if not self.ready:
              self.err_log.warning('Listener started when not ready.')
              return
-
+
          if self.thread is not None and self.thread.isAlive():
              self.err_log.warning('Listener already running.')
              return
-
+
          self.thread = Thread(target=self.listen, name="Port" + str(self.port))
-
+
          self.thread.start()
-
+
+    def stop(self):
+        self.ready = False
+        self.listener.shutdown(socket.SHUT_RDWR)
+
      def isAlive(self):
          if self.thread is None:
              return False
-
+
          return self.thread.isAlive()
 
      def join(self):
          if self.thread is None:
              return
-
+
          self.ready = False
-
+
          self.thread.join()
-
+
          del self.thread
          self.thread = None
          self.ready = True
-
+
      def listen(self):
          if __debug__:
              self.err_log.debug('Entering main loop.')
@@ -161,15 +165,13 @@ class Listener(Thread):
                                         self.interface[1],
                                         self.secure))
 
-            except socket.timeout:
+            except:
                  # socket.timeout will be raised every THREAD_STOP_CHECK_INTERVAL
                  # seconds. When that happens, we check if it's time to die.
-
                  if not self.ready:
                      if __debug__:
                          self.err_log.debug('Listener exiting.')
                      return
                  else:
+                    self.err_log.error(str(traceback.format_exc()))
                      continue
-            except:
-                self.err_log.error(str(traceback.format_exc()))
diff --git a/rocket/main.py b/rocket/main.py
index b8d7a01..baf4499 100644
--- a/rocket/main.py
+++ b/rocket/main.py
@@ -159,7 +159,7 @@ class Rocket(object):
          try:
              # Stop listeners
              for l in self.listeners:
-                l.ready = False
+                l.stop()
 
              # Encourage a context switch
              time.sleep(0.01)

Revision history for this message
Nick Groenen (zoni) wrote :

[Crossposting from the aforementioned https://github.com/gbin/err/issues/175]

With that patch applied, I got an AttributeError about `stop()` not being defined.

However, while playing around with it, I noticed that my earlier observation was wrong, and it appears `socket.timeout` is being raised at the right times after all.

This gave me the idea, however, to forcibly close the socket at that point, as your patch does, like so:

diff --git a/rocket/listener.py b/rocket/listener.py
index e1a522b..3d2e9b1 100644
--- a/rocket/listener.py
+++ b/rocket/listener.py
@@ -165,6 +165,7 @@ class Listener(Thread):
                  if not self.ready:
                      if __debug__:
                          self.err_log.debug('Listener exiting.')
+                    self.listener.shutdown(socket.SHUT_RDWR)
                      return
                  else:
                      continue
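For context, Rocket's listen() loop wakes up via a socket timeout to check whether it should keep running, which is why the shutdown can be placed inside the except handler. A rough sketch of that pattern (the flag and interval names here are illustrative, not Rocket's actual identifiers):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("127.0.0.1", 0))
sock.listen(5)
sock.settimeout(0.1)  # stand-in for Rocket's THREAD_STOP_CHECK_INTERVAL

ready = False  # stop()/join() in the patches would flip this flag
while True:
    try:
        conn, addr = sock.accept()
        conn.close()
    except socket.timeout:
        # Raised every 0.1 s when no connection arrives: time to check the flag.
        if not ready:
            # The patch above shuts the listening socket down here,
            # releasing the port before the thread returns.
            sock.shutdown(socket.SHUT_RDWR)
            break
sock.close()
```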

With this change, the reproduction case passes as well. I then ran some more tests, and as far as I can see it doesn't cause any bad side effects.

I made some long-running requests and stopped Rocket while they were still being processed. The server waited for the requests to complete before fully shutting down, and didn't cut any connections off in the middle.

So I think this is the correct way to do it.
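That graceful behavior can also be reproduced in isolation. A sketch (assuming Linux semantics for shutdown() on a listening socket): shutting down the listening socket refuses new connections, while an already-accepted connection keeps working until it is closed on its own.

```python
import socket

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(5)
port = listener.getsockname()[1]

# A client connects while the server is still listening...
client = socket.create_connection(("127.0.0.1", port))
conn, _ = listener.accept()

# ...then the listener is shut down, as in the patch.
listener.shutdown(socket.SHUT_RDWR)

# The accepted connection is unaffected: data still flows.
client.sendall(b"still alive")
data = conn.recv(64)

# But new connection attempts are now refused.
try:
    socket.create_connection(("127.0.0.1", port), timeout=0.5)
    refused = False
except OSError:
    refused = True

conn.close()
client.close()
listener.close()
```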
