Hi, I noticed that when adding measurements to a campaign object, only one core is being utilized. Is there a way to parallelize this process to decrease runtime? It is currently very slow for me.
By contrast, I noticed that when running the simulate_experiment module, all cores are in use. I know these are different processes, but I was just curious why that module can utilize multiple cores.
Thanks!
Hi @brandon-holt, as always, thanks for reporting the issue. The fact that merely adding measurements (i.e., without even recommending) causes delays is clearly suboptimal and needs to be fixed. Ideally, this should not be noticeable at all, but the current overhead stems from a design choice that we might need to rethink: it is most likely caused by the process of "marking" measured parameter configurations in the search space metadata. This process is currently by no means optimized for speed, and I see several potential ways around it that we'd need to discuss in our team:
- Making the involved fuzzy matching more efficient
- Switching to a more performant dataframe backend like polars
- Following an entirely different approach to metadata handling
- ...
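To illustrate the first option, here is a minimal sketch of how a row-wise fuzzy scan could be replaced by a single vectorized merge. This is not BayBE's actual implementation; the function name, the rounding-based tolerance, and the dataframe layout are all assumptions made for illustration:

```python
import pandas as pd


def mark_measured(searchspace: pd.DataFrame, measurements: pd.DataFrame,
                  params: list[str], decimals: int = 6) -> pd.Series:
    """Return a boolean mask flagging search-space rows that match a measurement.

    Hypothetical sketch: rounding numeric columns is a crude stand-in for
    tolerance-based fuzzy matching, but it turns an O(n_measurements *
    n_searchspace) row-wise scan into one hash-based merge.
    """
    left = searchspace[params].round(decimals)
    right = measurements[params].round(decimals).drop_duplicates()
    # A left merge with indicator=True marks which rows have a counterpart
    # in the measurements; right has unique keys, so row order and length
    # of the left frame are preserved.
    merged = left.merge(right, on=params, how="left", indicator=True)
    return pd.Series((merged["_merge"] == "both").to_numpy(),
                     index=searchspace.index)
```

Whether such an exact-match-after-rounding scheme is an acceptable substitute for the current fuzzy matching would of course depend on how tolerances are defined per parameter.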
I suspect your search space is quite big and that this is what causes the delays? Can you give me a rough estimate of its dimensions so that I have something to work with?
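For anyone wanting to gauge whether their case is comparable: a back-of-envelope size estimate for a discrete search space is the product of the value counts per parameter. The parameter names and counts below are made up for illustration:

```python
import math

# Made-up parameter cardinalities, for illustration only.
n_values = {"temperature": 20, "solvent": 8, "concentration": 15}

# The full Cartesian product is the number of discrete candidates
# that any marking/matching step has to scan through.
size = math.prod(n_values.values())
print(size)  # 2400
```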
Thanks for sharing. That is indeed already quite a bit. I'll take this into our team meeting and see what we can do about it. Perhaps we can find a quick fix for you... But priority-wise, a full fix could take a while since my focus is currently still on the surrogate / SHAP issue 😋