x-transformers 1.27.4__tar.gz → 1.27.6__tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- {x-transformers-1.27.4/x_transformers.egg-info → x-transformers-1.27.6}/PKG-INFO +1 -1
- {x-transformers-1.27.4 → x-transformers-1.27.6}/README.md +2 -0
- {x-transformers-1.27.4 → x-transformers-1.27.6}/setup.py +1 -1
- {x-transformers-1.27.4 → x-transformers-1.27.6}/x_transformers/x_transformers.py +4 -1
- {x-transformers-1.27.4 → x-transformers-1.27.6/x_transformers.egg-info}/PKG-INFO +1 -1
- {x-transformers-1.27.4 → x-transformers-1.27.6}/LICENSE +0 -0
- {x-transformers-1.27.4 → x-transformers-1.27.6}/setup.cfg +0 -0
- {x-transformers-1.27.4 → x-transformers-1.27.6}/x_transformers/__init__.py +0 -0
- {x-transformers-1.27.4 → x-transformers-1.27.6}/x_transformers/attend.py +0 -0
- {x-transformers-1.27.4 → x-transformers-1.27.6}/x_transformers/autoregressive_wrapper.py +0 -0
- {x-transformers-1.27.4 → x-transformers-1.27.6}/x_transformers/continuous.py +0 -0
- {x-transformers-1.27.4 → x-transformers-1.27.6}/x_transformers/nonautoregressive_wrapper.py +0 -0
- {x-transformers-1.27.4 → x-transformers-1.27.6}/x_transformers/xl_autoregressive_wrapper.py +0 -0
- {x-transformers-1.27.4 → x-transformers-1.27.6}/x_transformers/xval.py +0 -0
- {x-transformers-1.27.4 → x-transformers-1.27.6}/x_transformers.egg-info/SOURCES.txt +0 -0
- {x-transformers-1.27.4 → x-transformers-1.27.6}/x_transformers.egg-info/dependency_links.txt +0 -0
- {x-transformers-1.27.4 → x-transformers-1.27.6}/x_transformers.egg-info/requires.txt +0 -0
- {x-transformers-1.27.4 → x-transformers-1.27.6}/x_transformers.egg-info/top_level.txt +0 -0
{x-transformers-1.27.4 → x-transformers-1.27.6}/README.md

@@ -1171,6 +1171,8 @@ This flavor of attention also has <a href="https://arxiv.org/abs/2111.05498">a c

 Update: I have discovered a way to remove the learned temperature altogether, by grouping the feature dimension and doing l2-normalization on each group. This allows the queries and keys to have a similarity that is upper bounded by the number of groups. A group size of 8 or 16 was sufficient in my tests. Decided to name this technique "Grouped QK Normalization". The drawback is that I believe an attention head dimension 32 is too small to use this tactic (a dimension often used in vision)

+Update 2: Tero Karras has successfully used cosine sim attention in <a href="https://arxiv.org/abs/2312.02696">a new paper</a>.
+
 You can use it as follows

 ```python
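The README text being diffed describes "Grouped QK Normalization": split each head's feature dimension into groups and l2-normalize every group, so the query-key similarity is bounded by the number of groups. Below is a rough, self-contained sketch of that idea only; the function name, shapes, and group size are illustrative and are not the library's API.

```python
import torch
import torch.nn.functional as F

def grouped_l2norm(t: torch.Tensor, groups: int = 8) -> torch.Tensor:
    # t: (batch, heads, seq, dim_head); dim_head must be divisible by `groups`
    b, h, n, d = t.shape
    t = t.reshape(b, h, n, groups, d // groups)
    t = F.normalize(t, p = 2, dim = -1)      # unit norm within each group
    return t.reshape(b, h, n, d)

q = torch.randn(1, 8, 128, 64)
k = torch.randn(1, 8, 128, 64)

q, k = grouped_l2norm(q), grouped_l2norm(k)

# each group contributes a cosine similarity in [-1, 1], so the raw attention
# logits are bounded in magnitude by the number of groups (here 8)
sim = torch.einsum('b h i d, b h j d -> b h i j', q, k)
assert sim.abs().max() <= 8 + 1e-4
```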
{x-transformers-1.27.4 → x-transformers-1.27.6}/x_transformers/x_transformers.py

@@ -435,7 +435,7 @@ class RotaryEmbedding(nn.Module):

     @autocast(enabled = False)
     def forward(self, t):
-        device = self.inv_freq.device
+        device, seq_len = self.inv_freq.device, t.shape[-1]

         t = t.type_as(self.inv_freq)

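The only change in this hunk is that `forward` now also reads the sequence length from the incoming positions tensor. The sketch below is a minimal stand-in for a rotary-frequency module in the same shape as the diff; it is not the library's implementation, and the way `seq_len` is consumed here (a simple length-based cache check) is purely an assumption for illustration.

```python
import torch
from torch import nn

class RotaryFreqs(nn.Module):
    """Illustrative rotary-frequency generator, not the x-transformers class."""

    def __init__(self, dim, theta = 10000.):
        super().__init__()
        inv_freq = 1. / (theta ** (torch.arange(0, dim, 2).float() / dim))
        self.register_buffer('inv_freq', inv_freq)
        self.cached_freqs = None   # plain attribute, rebuilt on demand

    def forward(self, t):
        # mirrors the 1.27.6 change: grab device and sequence length up front
        device, seq_len = self.inv_freq.device, t.shape[-1]

        # hypothetical use of seq_len: reuse a cached table when it is long enough
        if self.cached_freqs is not None and self.cached_freqs.shape[0] >= seq_len:
            return self.cached_freqs[:seq_len]

        t = t.to(device).type_as(self.inv_freq)
        freqs = torch.einsum('i , j -> i j', t, self.inv_freq)   # (seq_len, dim / 2)
        freqs = torch.cat((freqs, freqs), dim = -1)              # (seq_len, dim)

        self.cached_freqs = freqs
        return freqs

rotary = RotaryFreqs(dim = 64)
freqs = rotary(torch.arange(128))   # (128, 64)
```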
@@ -1329,6 +1329,9 @@ class AttentionLayers(nn.Module):
             if exists(pre_norm):
                 x = pre_norm(x)

+                if exists(layer_mem):
+                    layer_mem = pre_norm(layer_mem)
+
             if layer_type == 'a':
                 out, inter = block(x, mask = mask, context_mask = self_attn_kv_mask, attn_mask = attn_mask, rel_pos = self.rel_pos, rotary_pos_emb = rotary_pos_emb, prev_attn = prev_attn, cache = next(iter_attn_cache, None), mem = layer_mem, mem_mask = layer_mem_mask, return_intermediates = True)
             elif layer_type == 'c':
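The added lines run the block's pre-norm over the layer memories as well as over `x`, presumably so that the memories fed to the attention block (`mem = layer_mem` in the call above) live in the same normalized space as the current tokens. A toy sketch of that intent, using stock PyTorch modules as stand-ins; none of the names below come from x-transformers:

```python
import torch
from torch import nn

norm = nn.LayerNorm(512)                                    # stand-in for the block's pre_norm
attn = nn.MultiheadAttention(512, 8, batch_first = True)    # stand-in for the attention block

x         = torch.randn(2, 128, 512)    # current tokens
layer_mem = torch.randn(2, 32, 512)     # memories carried for this layer

x = norm(x)
layer_mem = norm(layer_mem)             # the added step: memories share the pre-norm

# memories are concatenated onto the keys / values before attending
kv = torch.cat((layer_mem, x), dim = 1)
out, _ = attn(x, kv, kv)
```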