
Commit e848ff4

Add thumbnails for posts
1 parent 837d041 commit e848ff4

File tree

3 files changed: +7 −2 lines changed


_posts/2024-06-07-rethink-lora-init.md

Lines changed: 3 additions & 0 deletions
@@ -7,6 +7,9 @@ categories: [LoRA, Fine tuning, LLM]
 tags: [LoRA, Fine tuning, LLM]
 render_with_liquid: false
 math: true
+image:
+  path: /assets/img/blogs/know_lora/lora.png
+  alt: LoRA Fine tuning, modification, analysis and findings
 ---
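Both posts gain the same front-matter pattern. For reference, a minimal sketch of how the head of the LoRA post reads after this commit (only the keys visible in the hunk; earlier keys such as `title` are omitted). The `image.path`/`image.alt` keys follow the Chirpy-style Jekyll theme convention (an assumption, suggested by keys like `render_with_liquid` and `math: true`), where `image` supplies the post thumbnail and social preview:

```yaml
# Front matter of _posts/2024-06-07-rethink-lora-init.md after this commit
# (sketch: only the lines visible in the diff hunk; title/date/description omitted)
categories: [LoRA, Fine tuning, LLM]
tags: [LoRA, Fine tuning, LLM]
render_with_liquid: false
math: true
image:
  path: /assets/img/blogs/know_lora/lora.png  # rendered as the post thumbnail
  alt: LoRA Fine tuning, modification, analysis and findings
---
```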

_posts/2025-01-22-transformer-showdown.md

Lines changed: 4 additions & 2 deletions
@@ -4,9 +4,11 @@ title: Transformer showdown MHA vs MLA vs nGPT vs Differential Transformer
 description: Comparing various transformer architectures like MHA, GQA, Multi Latent Attention, nGPT, Differential Transformer.
 date: 2025-01-22 20:27 +0530
 categories: [Transformer, Architectures]
-tags: [MLA, MHA, GQA, Multi Latent Attention, nGPT, Differential Transformer]
-render_with_liquid: false
+tags: [MLA, MHA, GQA, Multi Latent Attention, nGPT, Differential Transformer, kv cache, activations, memory, training, nanoformer]
 math: true
+image:
+  path: /assets/img/blogs/transformer_showdown/attn_variants.png
+  alt: Transformer and Attention variants
 ---

 # Transformer Showdown
(binary image file added, 80.4 KB; preview not rendered in this view)
