- Run `preprocessing/pickle/fits_to_pkl.py` on the FITS file.
- Run `preprocessing/pickle/add_splits.py` to add training splits.
- Run `preprocessing/pickle/mark_pkl.py` to compute corruption and filter data based on star properties.
- Run `preprocessing/pickle/clean_pkl.py` to filter data based on corruption.
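The split-assignment step above can be sketched as follows. This is a minimal illustration only: the record fields, split names, and ratios are assumptions, not the repository's actual implementation.

```python
import random

def add_splits(rows, ratios=(0.8, 0.1, 0.1), seed=42):
    """Assign each datapoint a train/val/test split (illustrative ratios)."""
    rng = random.Random(seed)  # fixed seed keeps splits reproducible
    out = []
    for row in rows:
        r = rng.random()
        if r < ratios[0]:
            split = "train"
        elif r < ratios[0] + ratios[1]:
            split = "val"
        else:
            split = "test"
        out.append({**row, "split": split})  # copy, don't mutate input
    return out

# Hypothetical datapoints keyed by a TIC-like identifier:
rows = [{"tic_id": i} for i in range(10)]
tagged = add_splits(rows)
```

Seeding the RNG makes the assignment deterministic, so re-running the preprocessing does not shuffle datapoints between splits.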
- Download the TESS FFI scripts for the desired sectors from TESS Bulk Downloads.
- Create a directory for the FFIs. Inside, create a folder for each sector and place that sector's download script in it.
- Run `preprocessing/cubes/download_sectors.py` with the directory and sector names to start downloading.
- Run `preprocessing/cubes/check_download_completed.py` to verify the downloads.
- Run `preprocessing/cubes/build_datacubes.py` to convert FITS frames into datacubes.
- Run `preprocessing/hdf5/zarr_to_hdf5.py` to store datapoints efficiently. (Requires the cleaned labels pickle file.)
- Run `preprocessing/cubes/save_fits_header_timestamps.py` to save timestamps for each sector.
- Update `training/configs/dataset` with the correct data paths and adjust hyperparameters if needed.
- Run `training/pipeline.py` to train the model.
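Conceptually, building a datacube amounts to stacking the time-ordered 2D FFI frames along a new time axis. A minimal NumPy sketch (the frame shapes and dtype here are placeholders, not TESS's actual FFI dimensions or the repository's code):

```python
import numpy as np

def build_datacube(frames):
    """Stack time-ordered 2D image frames into a (T, H, W) datacube."""
    return np.stack(frames, axis=0)

# Synthetic stand-ins for FITS image data (one constant-valued frame per timestamp):
frames = [np.full((4, 4), t, dtype=np.float32) for t in range(3)]
cube = build_datacube(frames)
# cube[t] is the frame at timestamp index t
```

Storing frames as one contiguous cube lets downstream code slice a pixel's full light curve (`cube[:, y, x]`) without reopening individual FITS files.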
- Run `preprocessing/catalog/download_files.py` to download files from the TESS Catalog. The `header.csv` and `md5sum.txt` included in the repository come from this URL.
- Run `preprocessing/catalog/split_pkl.py` to clean and split input pickle files into manageable chunks.
- Run `training/eval_catalog.py` to process input chunks and generate outputs.
- (Optional) Run `preprocessing/catalog/join_preds.py` to merge prediction chunks into a single file.
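Merging the prediction chunks can be as simple as concatenating the per-chunk pickles in order. A stdlib-only sketch, assuming each chunk file holds a list of prediction records and follows a `preds_*.pkl` naming scheme (both assumptions; the repository's actual chunk format may differ):

```python
import pickle
from pathlib import Path

def join_preds(chunk_dir, out_path):
    """Concatenate pickled prediction chunks into a single pickle file."""
    merged = []
    # Sorted order keeps the merged predictions aligned with the input chunks.
    for chunk in sorted(Path(chunk_dir).glob("preds_*.pkl")):
        with open(chunk, "rb") as f:
            merged.extend(pickle.load(f))
    with open(out_path, "wb") as f:
        pickle.dump(merged, f)
    return merged
```

Because `eval_catalog.py` above processes chunks independently, a merge step like this is what restores one catalog-wide prediction file at the end.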