x-transformers 1.27.4.tar.gz → 1.27.6.tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (18)
  1. {x-transformers-1.27.4/x_transformers.egg-info → x-transformers-1.27.6}/PKG-INFO +1 -1
  2. {x-transformers-1.27.4 → x-transformers-1.27.6}/README.md +2 -0
  3. {x-transformers-1.27.4 → x-transformers-1.27.6}/setup.py +1 -1
  4. {x-transformers-1.27.4 → x-transformers-1.27.6}/x_transformers/x_transformers.py +4 -1
  5. {x-transformers-1.27.4 → x-transformers-1.27.6/x_transformers.egg-info}/PKG-INFO +1 -1
  6. {x-transformers-1.27.4 → x-transformers-1.27.6}/LICENSE +0 -0
  7. {x-transformers-1.27.4 → x-transformers-1.27.6}/setup.cfg +0 -0
  8. {x-transformers-1.27.4 → x-transformers-1.27.6}/x_transformers/__init__.py +0 -0
  9. {x-transformers-1.27.4 → x-transformers-1.27.6}/x_transformers/attend.py +0 -0
  10. {x-transformers-1.27.4 → x-transformers-1.27.6}/x_transformers/autoregressive_wrapper.py +0 -0
  11. {x-transformers-1.27.4 → x-transformers-1.27.6}/x_transformers/continuous.py +0 -0
  12. {x-transformers-1.27.4 → x-transformers-1.27.6}/x_transformers/nonautoregressive_wrapper.py +0 -0
  13. {x-transformers-1.27.4 → x-transformers-1.27.6}/x_transformers/xl_autoregressive_wrapper.py +0 -0
  14. {x-transformers-1.27.4 → x-transformers-1.27.6}/x_transformers/xval.py +0 -0
  15. {x-transformers-1.27.4 → x-transformers-1.27.6}/x_transformers.egg-info/SOURCES.txt +0 -0
  16. {x-transformers-1.27.4 → x-transformers-1.27.6}/x_transformers.egg-info/dependency_links.txt +0 -0
  17. {x-transformers-1.27.4 → x-transformers-1.27.6}/x_transformers.egg-info/requires.txt +0 -0
  18. {x-transformers-1.27.4 → x-transformers-1.27.6}/x_transformers.egg-info/top_level.txt +0 -0
@@ -1,6 +1,6 @@
  Metadata-Version: 2.1
  Name: x-transformers
- Version: 1.27.4
+ Version: 1.27.6
  Summary: X-Transformers - Pytorch
  Home-page: https://github.com/lucidrains/x-transformers
  Author: Phil Wang
@@ -1171,6 +1171,8 @@ This flavor of attention also has <a href="https://arxiv.org/abs/2111.05498">a c

  Update: I have discovered a way to remove the learned temperature altogether, by grouping the feature dimension and doing l2-normalization on each group. This allows the queries and keys to have a similarity that is upper bounded by the number of groups. A group size of 8 or 16 was sufficient in my tests. Decided to name this technique "Grouped QK Normalization". The drawback is that I believe an attention head dimension 32 is too small to use this tactic (a dimension often used in vision)

+ Update 2: Tero Karras has successfully used cosine sim attention in <a href="https://arxiv.org/abs/2312.02696">a new paper</a>.
+
  You can use it as follows

  ```python
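The grouped QK normalization described in the README excerpt above lends itself to a short illustration. The sketch below is a standalone approximation and not the x-transformers API (whose usage snippet is cut off in the hunk above); the helper name `grouped_l2norm` and the shapes are assumptions for the example. Queries and keys are l2-normalized in groups of 8 along the head dimension, so each group contributes a dot product in [-1, 1] and the total similarity is bounded by the number of groups.

```python
# Standalone sketch of grouped QK normalization (illustrative; not the x-transformers implementation).
# The head dimension is split into groups of `group_size` and each group is l2-normalized, so each
# group's dot product lies in [-1, 1] and the total similarity is bounded by dim_head // group_size.

import torch
import torch.nn.functional as F

def grouped_l2norm(t, group_size = 8):
    # t: (batch, heads, seq, dim_head), with dim_head divisible by group_size
    *lead, dim_head = t.shape
    t = t.reshape(*lead, dim_head // group_size, group_size)
    t = F.normalize(t, dim = -1)            # l2-normalize each group of features
    return t.reshape(*lead, dim_head)

q = torch.randn(2, 8, 1024, 64)
k = torch.randn(2, 8, 1024, 64)

q, k = grouped_l2norm(q), grouped_l2norm(k)

sim = q @ k.transpose(-1, -2)               # attention logits, bounded by dim_head // group_size = 8
assert sim.abs().amax() <= 8 + 1e-4
```

With a head dimension of 64 and a group size of 8 there are 8 groups, so the attention logits are bounded in magnitude by 8 before any scaling, which is what removes the need for a learned temperature.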
@@ -3,7 +3,7 @@ from setuptools import setup, find_packages
  setup(
  name = 'x-transformers',
  packages = find_packages(exclude=['examples']),
- version = '1.27.4',
+ version = '1.27.6',
  license='MIT',
  description = 'X-Transformers - Pytorch',
  author = 'Phil Wang',
@@ -435,7 +435,7 @@ class RotaryEmbedding(nn.Module):

  @autocast(enabled = False)
  def forward(self, t):
- device = self.inv_freq.device
+ device, seq_len = self.inv_freq.device, t.shape[-1]

  t = t.type_as(self.inv_freq)

@@ -1329,6 +1329,9 @@ class AttentionLayers(nn.Module):
  if exists(pre_norm):
  x = pre_norm(x)

+ if exists(layer_mem):
+ layer_mem = pre_norm(layer_mem)
+
  if layer_type == 'a':
  out, inter = block(x, mask = mask, context_mask = self_attn_kv_mask, attn_mask = attn_mask, rel_pos = self.rel_pos, rotary_pos_emb = rotary_pos_emb, prev_attn = prev_attn, cache = next(iter_attn_cache, None), mem = layer_mem, mem_mask = layer_mem_mask, return_intermediates = True)
  elif layer_type == 'c':
@@ -1,6 +1,6 @@
  Metadata-Version: 2.1
  Name: x-transformers
- Version: 1.27.4
+ Version: 1.27.6
  Summary: X-Transformers - Pytorch
  Home-page: https://github.com/lucidrains/x-transformers
  Author: Phil Wang