I have a class that continuously updates data and analyzes it, every 30 seconds. The problem is that since the functions you see in the pseudo code must remain separate, I am forced to download all the data first and only then analyze all of it. My goal is to rewrite the code so that as soon as the data for a ticker is available, a function immediately analyzes it. This workflow is shown in Pseudo Code 02, but is it possible to achieve it without merging the functions? Thanks for the suggestions :)
Pseudo Code 01:
import datetime, multiprocessing

class myclass:
    def __init__(self, list_of_symbols):
        self.last_update = 0
        self.list_of_symbols = list_of_symbols

    def get_data(self, symbol):
        request_data_to_api(symbol)

    def analyze_data(self, symbol):
        analyze(symbol)

    def run(self):
        while True:
            if (self.last_update == 0) or ((datetime.datetime.now() - self.last_update).seconds >= 30):
                self.last_update = datetime.datetime.now()
                # First, update the data for every symbol
                pool = multiprocessing.Pool(20)
                pool.map(self.get_data, self.list_of_symbols)
                pool.close()
                pool.join()
                # Only then analyze everything
                pool = multiprocessing.Pool(20)
                pool.map(self.analyze_data, self.list_of_symbols)
                pool.close()
                pool.join()
Pseudo Code 02:
import datetime, multiprocessing

class myclass:
    def __init__(self, list_of_symbols):
        self.last_update = 0
        self.list_of_symbols = list_of_symbols

    def get_all_in_one(self, symbol):
        # Fetch and analyze each symbol in a single step
        request_data_to_api(symbol)
        analyze(symbol)

    def run(self):
        while True:
            if (self.last_update == 0) or ((datetime.datetime.now() - self.last_update).seconds >= 30):
                self.last_update = datetime.datetime.now()
                pool = multiprocessing.Pool(20)
                pool.map(self.get_all_in_one, self.list_of_symbols)
                pool.close()
                pool.join()
You can use the concurrent.futures package, which is just a slightly different way to do multiprocessing. You submit your data fetch job, get a Future object back, then add a callback to that Future that submits the follow-up analyze job to the thread or process pool.
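Here is a minimal sketch of that pattern, keeping get_data and analyze_data separate. It assumes the downloads are I/O-bound, so it uses a ThreadPoolExecutor; request_data_to_api and analyze are the placeholders from your pseudo code:

import concurrent.futures
import datetime

class myclass:
    def __init__(self, list_of_symbols):
        self.last_update = 0
        self.list_of_symbols = list_of_symbols
        # One pool, reused across cycles, shared by fetch and analyze jobs
        self.pool = concurrent.futures.ThreadPoolExecutor(max_workers=20)

    def get_data(self, symbol):
        request_data_to_api(symbol)
        return symbol  # pass the symbol along to the callback

    def analyze_data(self, symbol):
        analyze(symbol)

    def _on_data_ready(self, future):
        # Runs as soon as one fetch finishes; immediately chains the analyze step
        self.pool.submit(self.analyze_data, future.result())

    def run(self):
        while True:
            if (self.last_update == 0) or ((datetime.datetime.now() - self.last_update).total_seconds() >= 30):
                self.last_update = datetime.datetime.now()
                for symbol in self.list_of_symbols:
                    future = self.pool.submit(self.get_data, symbol)
                    future.add_done_callback(self._on_data_ready)

Note that the callback stays tiny on purpose: it runs on a pool thread, so it just forwards the analyze work back to the executor rather than doing it inline. The same chaining works with a ProcessPoolExecutor (the callback then runs in the parent process), which would be the better choice if analyze is CPU-heavy.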