The current `addprocs_sge` from ClusterManagers.jl relies on the blocking `addprocs` from Distributed.jl. Therefore, every time I request a node, I must wait until the node is acquired before proceeding. So, while the following snippet returns immediately, the second `qsub` command is not submitted until the first `qsub` job has started running:

```julia
using ClusterManagers

for i = 1:2
    @async ClusterManagers.addprocs_sge(1)
end
```

The `@async` is effectively useless here. The simplest workaround I found is to bypass the lock and call `addprocs_locked` directly:

```julia
using Distributed, ClusterManagers

function get_proc()
    sge = SGEManager(1, "")
    Distributed.cluster_mgmt_from_master_check()
    Distributed.addprocs_locked(sge; qsub_env = "", res_list = "")
end

a = []
for i = 1:2
    @async push!(a, get_proc())
end
```

I do this because `addprocs` is just a locked wrapper around `addprocs_locked`:

```julia
function addprocs(manager::ClusterManager; kwargs...)
    cluster_mgmt_from_master_check()
    lock(worker_lock)
    try
        addprocs_locked(manager::ClusterManager; kwargs...)
    finally
        unlock(worker_lock)
    end
end
```

My questions are: what issues can arise from calling `addprocs_locked` concurrently without holding the lock? Why is there a lock in the first place? Finally, is there a better way to do this? I am aware of this thread, but apparently nothing came of it.
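For context on why skipping the lock worries me: a lock-wrapper like `addprocs` typically serializes read-modify-write access to shared state (in Distributed's case, presumably the worker table and id counters). A minimal toy sketch of that same pattern, assuming nothing about Distributed internals beyond what the wrapper above shows (`state_lock`, `next_id`, and `claim_id` are hypothetical names for illustration only):

```julia
# Toy version of the lock-wrapper pattern: a shared counter whose
# read-modify-write update is guarded by a ReentrantLock, so that
# concurrent tasks cannot interleave their updates.
const state_lock = ReentrantLock()
const next_id = Ref(0)

function claim_id()
    lock(state_lock) do
        next_id[] += 1   # read-modify-write guarded by the lock
        next_id[]
    end
end

@sync for i in 1:4
    @async println(claim_id())
end
```

If concurrent `addprocs_locked` calls mutate comparable shared state without such a lock, their updates could in principle interleave, which is presumably what the lock in `addprocs` is there to prevent.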