README.md: 1 addition & 1 deletion
@@ -95,7 +95,7 @@ Our ViT implementations are in `vit`. We provide utility notebooks in the `notebooks` directory.
* [`dino-attention-maps-video.ipynb`](https://github.com/sayakpaul/probing-vits/blob/main/notebooks/dino-attention-maps-video.ipynb) shows how to generate attention heatmaps from a video. Visually, the best results were obtained with DINO.
* [`dino-attention-maps.ipynb`](https://github.com/sayakpaul/probing-vits/blob/main/notebooks/dino-attention-maps.ipynb) shows how to generate attention maps from the individual attention heads of the final transformer block. Visually, the best results were obtained with DINO (a heatmap sketch follows this list).
* [`load-dino-weights-vitb16.ipynb`](https://github.com/sayakpaul/probing-vits/blob/main/notebooks/load-dino-weights-vitb16.ipynb) shows how to populate the pre-trained DINO parameters into our implementation (only for ViT B-16 but can easily be extended to others).
-* [`load-jax-weights-vitb16.ipynb`](https://github.com/sayakpaul/probing-vits/blob/main/notebooks/load-jax-weights-vitb16.ipynb) shows how to populate the pre-trained ViT parameters into our implementation (only for ViT B-16 but can easily be extended to others). .
+* [`load-jax-weights-vitb16.ipynb`](https://github.com/sayakpaul/probing-vits/blob/main/notebooks/load-jax-weights-vitb16.ipynb) shows how to populate the pre-trained ViT parameters into our implementation (only for ViT B-16 but can easily be extended to others).
* [`mean-attention-distance-1k.ipynb`](https://github.com/sayakpaul/probing-vits/blob/main/notebooks/mean-attention-distance-1k.ipynb) shows how to plot the mean attention distances of different transformer blocks of different ViTs, computed over 1000 images (the metric is sketched after this list).
* [`single-instance-probing.ipynb`](https://github.com/sayakpaul/probing-vits/blob/main/notebooks/single-instance-probing.ipynb) shows how to compute the mean attention distance and attention-rollout map for a single prediction instance (the rollout is also sketched after this list).
* [`visualizing-linear-projections.ipynb`](https://github.com/sayakpaul/probing-vits/blob/main/notebooks/visualizing-linear-projections.ipynb) shows visualizations of the linear projection filters learned by ViTs.
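
Generating per-head attention heatmaps like those in the DINO notebooks boils down to taking the CLS row of the final block's attention matrix, reshaping it into the patch grid, and upsampling to the input resolution. A minimal NumPy sketch of that idea; the function name, argument shapes, and defaults are assumptions, not the notebooks' exact code:

```python
import numpy as np

def cls_attention_heatmaps(attn, image_size=224, patch_size=16):
    """Hypothetical sketch: per-head CLS attention heatmaps from the final block.

    `attn` is assumed to have shape (num_heads, num_tokens, num_tokens),
    with the CLS token first.
    """
    grid = image_size // patch_size                     # e.g. 14 for ViT-B/16 at 224x224
    cls_to_patches = attn[:, 0, 1:]                     # CLS -> patch attention, per head
    heatmaps = cls_to_patches.reshape(-1, grid, grid)   # (num_heads, grid, grid)
    # Nearest-neighbour upsample each head's map back to the input resolution.
    return heatmaps.repeat(patch_size, axis=1).repeat(patch_size, axis=2)
```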
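
Mean attention distance, as used in `mean-attention-distance-1k.ipynb` and `single-instance-probing.ipynb`, weights the spatial distances between a query patch and all key patches by the attention weights, and then averages over queries. A minimal NumPy sketch under the assumption of a single-head attention map with the CLS token already dropped; the function name and arguments are hypothetical:

```python
import numpy as np

def mean_attention_distance(attention, patch_size=16):
    """Hypothetical sketch: mean attention distance in pixels for one head.

    `attention` is assumed to be a (num_patches, num_patches) softmax map.
    """
    num_patches = attention.shape[0]
    grid = int(np.sqrt(num_patches))                    # e.g. 14 for a 224x224 ViT-B/16 input
    coords = np.stack(
        np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij"), axis=-1
    ).reshape(-1, 2)                                    # (num_patches, 2) patch coordinates
    # Pairwise Euclidean distances between patches, converted to pixels.
    distances = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1) * patch_size
    # Attention-weighted distance per query token, averaged over queries.
    return float((attention * distances).sum(axis=-1).mean())
```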
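
The attention-rollout map recursively multiplies the head-averaged attention matrices of successive blocks, mixing in the identity matrix to account for residual connections. A minimal NumPy sketch of the general technique, not necessarily the notebook's exact implementation:

```python
import numpy as np

def attention_rollout(attentions):
    """Hypothetical sketch: attention rollout across transformer blocks.

    `attentions` is assumed to be a list of per-block maps, each of shape
    (num_heads, num_tokens, num_tokens), with the CLS token first.
    """
    rollout = None
    for attn in attentions:
        head_avg = attn.mean(axis=0)                    # fuse heads by averaging
        identity = np.eye(head_avg.shape[0])
        a = (head_avg + identity) / 2.0                 # account for residual connections
        a = a / a.sum(axis=-1, keepdims=True)           # re-normalize rows
        rollout = a if rollout is None else a @ rollout # propagate attention through blocks
    # Row 0 gives how much the CLS token attends to each patch after rollout.
    return rollout[0, 1:]
```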