RNN/Transformer hybrid with O(n)-time inference: linear time, constant memory per token, no KV cache, effectively unbounded context. Trains like a GPT (parallel across the sequence), infers like an RNN (sequential, O(1) state per token). Linux Foundation AI project; used in production in Windows, Office, and NVIDIA NeMo. Latest version: RWKV-7 (March 2025), with models up to 14B parameters.
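The "infer like an RNN" claim can be sketched with a toy recurrence. This is a simplified, RWKV-4-style WKV step (real RWKV-7 uses a generalized delta rule; the decay `w`, bonus `u`, and dimensions here are hypothetical illustration values). The point it demonstrates: the per-token state is a fixed-size pair of vectors, so memory does not grow with sequence length, unlike a transformer's KV cache.

```python
import numpy as np

d = 4                                   # toy channel dimension (hypothetical)
rng = np.random.default_rng(0)
w = rng.uniform(0.1, 1.0, d)            # per-channel decay (illustrative)
u = rng.normal(size=d)                  # "bonus" weight for the current token

def wkv_step(state, k, v):
    """One recurrent step: update running (numerator, denominator) sums
    of exp(k)-weighted values and emit the current output channel-wise."""
    num, den = state
    out = (num + np.exp(u + k) * v) / (den + np.exp(u + k))
    num = np.exp(-w) * num + np.exp(k) * v   # decayed running sum of values
    den = np.exp(-w) * den + np.exp(k)       # decayed running normalizer
    return (num, den), out

state = (np.zeros(d), np.zeros(d))
for t in range(8):                      # stream tokens one at a time
    k, v = rng.normal(size=d), rng.normal(size=d)
    state, out = wkv_step(state, k, v)

# State stays two vectors of size d no matter how many tokens we process.
print(state[0].shape, state[1].shape, out.shape)
```

Training-time parallelism comes from the fact that this same recurrence can be unrolled and computed for all positions at once, GPT-style; the sequential form above is what makes inference O(n) overall with O(1) memory per step.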
/plugin marketplace add zechenzhangAGI/AI-research-SKILLs
/plugin install rwkv-architecture@zechenzhangAGI/AI-research-SKILLs