Linear Self-Attention
2026/3/9 · less than 1 minute read
Problem Statement
Implement linear attention: LinearAttn(Q, K, V) = φ(Q) (φ(K)^T V) / (φ(Q) Σ_j φ(K_j)), where φ is an elementwise feature map (e.g. φ(x) = ELU(x) + 1) and the division normalizes each output row by its scalar denominator.
Implementation Requirements
- Use only native features (external libraries are not permitted)
- The solve function signature must remain unchanged
- Write the result to output
Examples
See the two example sets on the page (2×4 and 2×2).
Constraints
- Q,K,V: M×d;1 ≤ M ≤ 10000;1 ≤ d ≤ 128;float32
- Performance: the solution is evaluated at M = 10,000, so the full M×M attention matrix should be avoided
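A minimal sketch of the computation, using only the standard library as the rules require. It accumulates S = φ(K)^T V (a d×d_v matrix) and z = Σ_j φ(K_j) in a single pass over the rows, then normalizes each query, giving O(M·d·d_v) time instead of O(M²). The function name `linear_attn`, the list-of-lists layout, and the choice of φ(x) = ELU(x) + 1 are assumptions for illustration; the required `solve` signature and I/O handling are not shown.

```python
import math

def phi(x):
    # Feature map φ(x) = ELU(x) + 1; always positive, so the
    # denominator below is never zero (assumed from the statement's example)
    return x + 1.0 if x > 0 else math.exp(x) + 1.0

def linear_attn(Q, K, V):
    """Linear attention over M x d lists of floats, pure Python.

    Returns an M x d_v list of lists:
    out_i = φ(q_i)^T (Σ_j φ(k_j) v_j^T) / (φ(q_i)^T Σ_j φ(k_j)).
    """
    M, d, dv = len(Q), len(Q[0]), len(V[0])
    phiK = [[phi(x) for x in row] for row in K]
    phiQ = [[phi(x) for x in row] for row in Q]

    # One pass over keys/values: S = φ(K)^T V (d x dv), z = Σ_j φ(K_j) (d,)
    S = [[0.0] * dv for _ in range(d)]
    z = [0.0] * d
    for j in range(M):
        vj = V[j]
        for a in range(d):
            ka = phiK[j][a]
            z[a] += ka
            Sa = S[a]
            for b in range(dv):
                Sa[b] += ka * vj[b]

    # Per query: numerator φ(q_i)^T S, scalar denominator φ(q_i)^T z
    out = []
    for i in range(M):
        qi = phiQ[i]
        denom = sum(qi[a] * z[a] for a in range(d))
        out.append([sum(qi[a] * S[a][b] for a in range(d)) / denom
                    for b in range(dv)])
    return out
```

Because φ is applied elementwise and S, z are shared across all queries, the per-query cost is independent of M, which is what makes the M = 10,000 constraint tractable in pure Python.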