
Currently I'm working on my backend web server using Tornado.

The problem I have right now:
- when a request is made and the server is processing it, all other requests are blocked

My RequestHandler:

class UpdateServicesRequestHandler( RequestHandler ):

    @gen.coroutine
    def get( self ):

        update = ServiceUpdate()
        response = yield update.update_all( )

        if self.request.headers.get('Origin'):
            self.set_header( 'Access-Control-Allow-Origin', self.request.headers.get('Origin') )
        self.set_header( 'Content-Type', 'application/json')
        self.write( response )

My update_all():

@gen.coroutine
def update_all( self ):

    for service in self.port_list:
        response = yield self.update_service( str( service.get( 'port' ) ) )
        self.response_list.append( response )

    self.response = json.dumps( self.response_list )

    return self.response

My update_service():

process = Popen( [ command ], stdout=PIPE, stderr=PIPE, shell=True )
output, error = process.communicate()

The thing is that I need the result of the update_all() method. So is there a way to make this request not block my whole server for other requests?

Thank you!

  • Is update.update_all() a coroutine? Does it use non-blocking I/O to do its work? Commented Aug 14, 2015 at 15:31
  • Just updated my post. Commented Aug 14, 2015 at 15:35
  • Now we need to know what update_service looks like. :) Ultimately, we need to know if you're making a slow, blocking call somewhere inside update_all. Commented Aug 14, 2015 at 15:48
  • I'm using a subprocess, process = Popen( [ command ], stdout=PIPE, stderr=PIPE, shell=True ), to run a generated command. Generally I'm calling 'git pull' on several directories. Commented Aug 14, 2015 at 15:57
  • Are you then waiting for the Popen command to finish? Because that will definitely block the event loop. Commented Aug 14, 2015 at 15:57

2 Answers


In addition to using tornado.process.Subprocess as dano suggests, you should use stdout=tornado.process.Subprocess.STREAM instead of PIPE and read from stdout/stderr asynchronously. Using PIPE will work for small amounts of output, but if you use PIPE and the subprocess tries to write too much data, you will deadlock in wait_for_exit() (the pipe buffer used to be 4KB, but the limit is higher on most modern Linux systems).

process = Subprocess([command], 
    stdout=Subprocess.STREAM, stderr=Subprocess.STREAM,
    shell=True)
out, err = yield [process.stdout.read_until_close(),
    process.stderr.read_until_close()]
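The deadlock described above is a general property of OS pipes, not something Tornado-specific. A quick standard-library illustration (using a hypothetical child process that prints ~1 MB, well past the pipe buffer): communicate() drains both pipes while waiting, so it completes, whereas calling wait() with PIPE would hang once the buffer filled.

```python
import subprocess
import sys

# Hypothetical child: writes ~1 MB to stdout, far beyond the pipe buffer.
child_code = "import sys; sys.stdout.write('x' * 1_000_000)"

p = subprocess.Popen(
    [sys.executable, "-c", child_code],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE,
)

# communicate() reads stdout/stderr concurrently while waiting for exit,
# so it cannot deadlock. Calling p.wait() here instead would block forever
# once the child filled the pipe buffer.
out, err = p.communicate()
assert len(out) == 1_000_000
```

This is the same failure mode the answer warns about: waiting for exit without draining the pipes.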

2 Comments

What's the idiomatic way of getting the return code and raising an exception if non-zero in this case? Thanks -
yield process.wait_for_exit(): tornadoweb.org/en/stable/…

You need to use tornado's wrapper around subprocess.Popen to avoid blocking the event loop:

from tornado.process import Subprocess
from subprocess import PIPE
from tornado import gen

@gen.coroutine
def run_command(command):
    process = Subprocess([command], stdout=PIPE, stderr=PIPE, shell=True)
    yield process.wait_for_exit()  # This waits without blocking the event loop.
    out, err = process.stdout.read(), process.stderr.read()
    # Do whatever you do with out and err
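Since the question's update_all() yields each service in turn, the 'git pull' calls still run one after another even once they stop blocking the event loop. The same non-blocking subprocess pattern exists in stdlib asyncio (which modern Tornado coroutines run on), and there it is easy to launch all commands concurrently. A sketch with placeholder echo commands standing in for the poster's generated commands:

```python
import asyncio

async def run(cmd):
    # Launches the command without blocking the event loop.
    proc = await asyncio.create_subprocess_shell(
        cmd,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    # communicate() drains both pipes while waiting, so no deadlock.
    out, err = await proc.communicate()
    return proc.returncode, out.decode()

async def main():
    # e.g. one 'git pull' per directory, all running at the same time.
    return await asyncio.gather(run("echo one"), run("echo two"))

results = asyncio.run(main())
# Each result is a (returncode, stdout) pair.
```

With Tornado's gen.coroutine style, the equivalent idea is yielding a list of futures, as shown in the other answer's yield of two read_until_close() calls.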

2 Comments

This really helped me out! Thanks a lot!! :-)
Consider wait_for_exit(raise_error=False), otherwise a non-zero return from the subprocess will raise an exception.
