H3 pred#434

Open
kyle-messier wants to merge 33 commits into main from h3_pred

Conversation

@kyle-messier
Member

H3 prediction grid in progress

☑️ Updated the crew controller to the MLP approach that dispatches a SLURM job per branch
☑️ Able to use the normal, highmem, and geo nodes and to call for significantly more workers

  • Need to clean up the controller name and launch scripts
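
A minimal sketch of what a per-branch SLURM controller could look like with crew.cluster; the controller name, partition, and resource values below are illustrative placeholders, not the actual launch-script settings from this PR:

    library(crew.cluster)

    # One transient SLURM job (worker) per dispatched branch
    controller_geo <- crew.cluster::crew_controller_slurm(
      name = "controller_geo",              # placeholder name
      workers = 50L,                        # upper bound on concurrent jobs
      seconds_idle = 30L,                   # release workers between branches
      slurm_partition = "geo",              # e.g. normal / highmem / geo
      slurm_memory_gigabytes_per_cpu = 50L,
      slurm_time_minutes = 120L
    )

    # Attach in _targets.R so tar_make() dispatches branches through it:
    # targets::tar_option_set(controller = controller_geo)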

@kyle-messier
Member Author

@mitchellmanware Is this still a relevant warning/comment for the koppen prediction grid?

    # should be revised 
    targets::tar_target(
      list_pred_calc_koppen,

@sigmafelix
Collaborator

@kyle-messier I think I added that line, which can be ignored now.

@sigmafelix
Collaborator

sigmafelix commented Sep 14, 2025

@kyle-messier FYI, in 00c8806, most of the targets completed successfully on my end. I am still working out the selective use of level 2 or level 3 partitions of the level 8 h3 points. Node heterogeneity might be the reason for the errors in SLURM deployment. 🤔
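
For context, choosing a coarser partition of level-8 cells amounts to grouping them by a parent cell. A hedged sketch, assuming the h3jsr package (object names are illustrative):

    library(h3jsr)

    # pts: an sf POINT object of prediction locations
    cells_l8 <- h3jsr::point_to_cell(pts, res = 8)

    # Coarser spatial groups via parent cells
    grp_l2 <- h3jsr::get_parent(cells_l8, res = 2)  # fewer, larger groups
    grp_l3 <- h3jsr::get_parent(cells_l8, res = 3)  # more, smaller groups

    # One dynamic branch per level-2 parent cell
    branches <- split(cells_l8, grp_l2)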

@kyle-messier
Member Author

@sigmafelix Level 2 is used when the calculation time is driven by the variable or time dimension rather than the number of locations, and when HPC memory can handle the larger number of rows per branch; it has ~90 spatial groups compared to ~700 at level 3. I was having an issue where, after ~170 branches were dispatched, subsequent branches would fail immediately. Reducing the memory request from 100G to 50G resolves that, and I think 50G will be sufficient for the NARR and NLCD variables. Currently, those are the only ones left to do, so it is also going pretty well on our end. 🚀
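
The memory change described here would be a one-argument tweak in the controller definition; purely illustrative, assuming crew.cluster's crew_controller_slurm():

    # Requesting 100G caused branches beyond ~170 to fail immediately;
    # 50G resolved it and should suffice for the NARR and NLCD variables.
    slurm_memory_gigabytes_per_cpu = 50L  # was 100L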
