Fix (all?) parallel hyperopt issues #2415
Conversation
fbd2fde to 7c06f3b Compare
This is now ready. The only point I haven't really been able to fix is the creation of the cache hashes in the working directory. One solution is to run each worker in a separate node, with only the one holding the database running locally, but this is a cluster-dependent solution. Btw, in principle you can run in parallel in different folders, nodes or computers, and the …
b50218f to e1f73dc Compare
Radonirinaunimi left a comment
Sorry for the delay. Here are some comments.
n3fit run: the server is now started only if a connection to a database fails; --restart is removed, now only restart is allowed
Co-authored-by: Tanjona R. Rabemananjara <[email protected]>
f5506c4 to 33d7aec Compare
With this PR we can now submit to multiple nodes in parallel (technically even to multiple clusters if you are feeling lucky). The memory footprint of mongodb itself is much smaller as well.

Now an instance of `n3fit` can only spawn one single mongo worker, so it doesn't matter whether they are running in the same node or not; every worker connects to the database in the same manner. The num mongo workers option has been removed.

Now when a parallel hyperopt is going to run, each worker checks whether there's a database already running; if there is none, it starts the database and writes down the address. All other instances of `n3fit` will find an address and will try to connect.
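Not the actual implementation, but a minimal sketch of that connect-or-start logic, assuming pymongo and a shared address file; the paths and the `get_or_start_database` helper are made up for illustration:

```python
import pathlib
import socket
import subprocess

from pymongo import MongoClient
from pymongo.errors import PyMongoError

# Hypothetical locations, not the actual n3fit paths
ADDRESS_FILE = pathlib.Path("nnfit/mongo_address.txt")
PORT = 27017


def _can_connect(address):
    """Return True if a mongod instance answers at the given address."""
    try:
        client = MongoClient(address, serverSelectionTimeoutMS=2000)
        client.admin.command("ping")
        return True
    except PyMongoError:
        return False


def get_or_start_database(dbpath="nnfit/hyperopt_db"):
    """Connect to the database another worker wrote down, or start one here."""
    if ADDRESS_FILE.exists():
        address = ADDRESS_FILE.read_text().strip()
        if _can_connect(address):
            return address
    # No reachable database yet: start mongod locally and write down where it lives
    pathlib.Path(dbpath).mkdir(parents=True, exist_ok=True)
    subprocess.Popen(["mongod", "--dbpath", dbpath, "--port", str(PORT), "--bind_ip_all"])
    address = f"mongodb://{socket.gethostname()}:{PORT}"
    ADDRESS_FILE.parent.mkdir(parents=True, exist_ok=True)
    ADDRESS_FILE.write_text(address)
    return address
```

The only coordination between workers in this sketch is the address file: whichever instance finds no reachable database becomes the one hosting it.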
There's also no longer a `--restart` option. Every new run is always a `--restart`. If you didn't want to continue a previous run, just change the name of the runcard; overwriting previous runs is very impolite.

Also, the database is no longer written in a separate folder and then compressed only when finishing cleanly and so on. Now the database is always in the `nnfit` folder; it will be compressed at `vp-upload` but not before (this is a problem because it is so big...), and only if the `--upload-db` option is used. Otherwise the upload skips the database. This makes the hyperopt not as heavy on the nnpdf server.
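A rough illustration of that upload behaviour; the function, the `upload_db` flag wiring and the database folder name are assumptions, not the real `vp-upload` interface:

```python
import tarfile
from pathlib import Path


def prepare_upload(output_folder, upload_db=False):
    """Collect the paths to upload; the database is compressed only on request."""
    output_folder = Path(output_folder)
    database = output_folder / "nnfit" / "hyperopt_database"  # made-up folder name
    # Everything except the (possibly huge) database folder itself
    paths = [
        p for p in output_folder.rglob("*")
        if p != database and database not in p.parents
    ]
    if upload_db and database.exists():
        # Only now, at upload time, is the database turned into a tarball
        tarball = output_folder / "hyperopt_database.tar.gz"
        with tarfile.open(tarball, "w:gz") as tar:
            tar.add(database, arcname=database.name)
        paths.append(tarball)
    return paths
```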
To do:

- Make sure `mongodb` works well, in particular finding automatically in which node the database is running. Seems to work, but maybe a lockfile to ensure two databases don't start at once (a possible shape for that is sketched after this list).
- Remove the hash files in the working directory. Not sure whether this is possible but it is quite ugly. Can't find a way to do this 🤷♂️
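A possible shape for that lockfile, assuming all workers see the same filesystem; the paths and function names below are invented for the example and are not part of this PR:

```python
import os
import time
from pathlib import Path

# Illustrative paths inside the shared output folder
LOCKFILE = Path("nnfit/mongo_start.lock")
ADDRESS_FILE = Path("nnfit/mongo_address.txt")


def acquire_start_lock():
    """Atomically create the lockfile; only the winning worker may start mongod."""
    try:
        fd = os.open(LOCKFILE, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
        return True
    except FileExistsError:
        return False


def wait_for_address(timeout=60.0, poll=1.0):
    """Workers that lost the race wait for the winner to write the database address."""
    waited = 0.0
    while waited < timeout:
        if ADDRESS_FILE.exists():
            return ADDRESS_FILE.read_text().strip()
        time.sleep(poll)
        waited += poll
    raise TimeoutError("No database address appeared; did the starting worker fail?")
```

Creating the file with `O_EXCL` makes the check-and-create step atomic, so (on filesystems that honour it) two workers cannot both believe they were the first to start the database.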