[SOLVED] Building some kind of self-running queue script that waits for input from other Python scripts


This content is from Stack Overflow. Question asked by Dreak.

I have a problem which, from my perspective, is somewhat special.

I am running a system (which I cannot change) that runs the same Python script 10-100 times simultaneously. Not all the time, but when it does, it launches them all at once.

This script, which is executed x times at the exact same moment (or with a delay of only a few milliseconds), needs to ask a Web API for certain data.
The Web API can't handle that many requests at once (which I can't change either, nor can I modify the API in any way).

So what I would like to build is some kind of separate Python script that runs all the time and waits for input from all those other scripts.
This separate script would receive the request payloads for the API, queue them, and fetch the data one request at a time. Afterwards, it would hand the data back to the script that asked for it.

Is this somehow possible? Can someone even understand my problem? Sorry for the complicated description 😀
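Such a "queue script" is indeed possible. One minimal sketch of the idea, using only the standard library's `multiprocessing.connection` module: the long-running script accepts one connection at a time, so requests are naturally serialized. The port, auth key, and the `fetch_from_api` placeholder are assumptions, not part of the question.

```python
# Sketch of a long-running "queue server" that serializes requests from many
# worker scripts.  fetch_from_api() stands in for the real Web API call.
import threading
from multiprocessing.connection import Listener, Client

ADDRESS = ('localhost', 6001)   # assumed free local port
AUTHKEY = b'change-me'          # assumed shared secret

ready = threading.Event()

def fetch_from_api(payload):
    # placeholder for the real (rate-limited) Web API request
    return {'echo': payload}

def serve(n_requests):
    # accepting connections one at a time queues the requests naturally
    with Listener(ADDRESS, authkey=AUTHKEY) as listener:
        ready.set()
        for _ in range(n_requests):
            with listener.accept() as conn:
                payload = conn.recv()
                conn.send(fetch_from_api(payload))

def request(payload):
    # what each worker script would do: send the payload, block for the answer
    with Client(ADDRESS, authkey=AUTHKEY) as conn:
        conn.send(payload)
        return conn.recv()

if __name__ == '__main__':
    server = threading.Thread(target=serve, args=(3,), daemon=True)
    server.start()
    ready.wait()
    for i in range(3):
        print(request({'id': i}))
```

In practice the server would run `serve()` in an endless loop as its own process, and each of the 10-100 worker scripts would call `request()` instead of hitting the Web API directly.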

Actually, I worked around this problem with an RNG in the script that is executed multiple times: before the scripts perform the API request, each one pauses for rng(x) milliseconds, so they don't all execute the request at once. But this solution is not really fail-proof.
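That random-delay workaround can be sketched as follows; `do_request` stands in for the real API call, and `max_delay_ms` is an assumed tuning knob:

```python
# Sketch of the random-delay workaround: stagger each script's API call by a
# random pause so simultaneously started scripts spread their requests out.
from random import randint
from time import sleep

def call_api_staggered(do_request, max_delay_ms=2000):
    # pause a random number of milliseconds before firing the request
    sleep(randint(0, max_delay_ms) / 1000.0)
    return do_request()
```

As the question notes, this only makes collisions less likely; two scripts can still draw similar delays and hit the API at the same moment.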

Maybe there is a better solution to my problem than my first idea.

Thanks for your help!


fcntl.flock – how to implement a timeout?

This command executes 5 instances of a Python script as fast as possible; the wait command then waits for all background processes to finish.

for ((i=0;i<5;i++)) ; do ./same-lock.py &  done ; wait
[1] 66023
[2] 66024
[3] 66025
[4] 66026
[5] 66027
[1]   Done                    ./same-lock.py
[2]   Done                    ./same-lock.py
[3]   Done                    ./same-lock.py
[4]-  Done                    ./same-lock.py
[5]+  Done                    ./same-lock.py

The python code below ensures that only one of those scripts runs at a time.


# same-lock.py

import os
from random import randint
from time import sleep
import signal, errno
from contextlib import contextmanager
import fcntl

lock_file = '/tmp/same.lock_file'

@contextmanager
def timeout(seconds):
    def timeout_handler(signum, frame):
        # raising here interrupts the blocking flock() call with EINTR
        raise IOError(errno.EINTR, "lock wait interrupted")

    original_handler = signal.signal(signal.SIGALRM, timeout_handler)
    try:
        signal.alarm(seconds)
        yield
    finally:
        signal.alarm(0)
        signal.signal(signal.SIGALRM, original_handler)

# wait up to 600 seconds for a lock
with timeout(600):
    with open(lock_file, "w") as f:
        try:
            fcntl.flock(f.fileno(), fcntl.LOCK_EX)
            # Print the process ID of the current process
            pid = os.getpid()
            print(pid)
            # Sleep a random number of seconds (between 1 and 5)
            sleep(randint(1, 5))
            fcntl.flock(f.fileno(), fcntl.LOCK_UN)
        except IOError as e:
            if e.errno != errno.EINTR:
                raise e
            print("Lock timed out")
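Applied to the question's scenario, each of the simultaneously launched scripts could wrap its API request in the same exclusive lock, so the calls run one at a time. This is a sketch under assumptions: `lock_path` and the `perform_request` callable are illustrative, not part of the original answer.

```python
# Sketch: serialize an arbitrary function call across processes with flock.
import fcntl

def serialized(perform_request, lock_path='/tmp/api.lock'):
    with open(lock_path, 'w') as f:
        fcntl.flock(f.fileno(), fcntl.LOCK_EX)   # blocks until we own the lock
        try:
            return perform_request()             # e.g. the real Web API call
        finally:
            fcntl.flock(f.fileno(), fcntl.LOCK_UN)
```

Each script would call `serialized(its_api_call)` instead of calling the API directly; the kernel releases the lock automatically if a process dies, which is what makes this more fail-proof than random delays.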

This question was asked on Stack Overflow by Dreak and answered by atl. It is licensed under CC BY-SA 2.5, CC BY-SA 3.0, or CC BY-SA 4.0.
