I am running chart-to-table training on the ChartQA dataset, but I keep hitting CUDA out-of-memory errors on a 24 GB GPU. I also tried torch.distributed, splitting training across two GPUs, but it still runs out of memory. I have already reduced batch_size to 1 and input_size to 224.
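One thing worth checking: torch.distributed data parallelism (DDP) replicates the full model, gradients, and optimizer state on every GPU, so it does not reduce per-GPU memory, and batch_size=1 only shrinks the activation portion. A rough back-of-envelope estimate of the static footprint (a sketch, assuming fp32 weights and an Adam-style optimizer with two moment buffers; the 3B parameter count below is purely hypothetical, substitute your model's actual size):

```python
def training_memory_gb(n_params, bytes_per_param=4, optimizer_states=2):
    """Rough lower bound for training memory: weights + gradients +
    optimizer moment buffers. Activations come on top of this and
    scale with batch size and input resolution."""
    weights = n_params * bytes_per_param
    grads = n_params * bytes_per_param          # one gradient per weight
    opt = n_params * bytes_per_param * optimizer_states  # Adam: m and v
    return (weights + grads + opt) / 1024**3

# hypothetical ~3B-parameter chart-to-table model
print(round(training_memory_gb(3e9), 1))  # -> 44.7 (GB), already over 24 GB
```

If the static footprint alone exceeds 24 GB, options that actually lower per-GPU memory include gradient checkpointing, mixed-precision (fp16/bf16) training, or sharded training (e.g. FSDP or DeepSpeed ZeRO), rather than plain DDP.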