Requirements:

  • CUDA 11.4 and above.

  • PyTorch 1.12 and above.

  1. Make sure that PyTorch is installed.

  2. Make sure that packaging is installed (pip install packaging)

  3. Make sure that ninja is installed and that it works correctly (e.g. ninja --version followed by echo $? should return exit code 0). If not (sometimes ninja --version; echo $? returns a nonzero exit code), uninstall and reinstall ninja (pip uninstall -y ninja && pip install ninja). Without ninja, compiling can take a very long time (around 2 hours) since it does not use multiple CPU cores; with ninja, compiling takes 3-5 minutes on a 64-core machine. Quick commands for all three checks follow this list.
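A quick way to run all three checks (assuming pip-managed packages and a POSIX shell; each command should print a version, and the last one should end with exit code 0):

python -c "import torch; print(torch.__version__, torch.version.cuda)"
python -c "import packaging; print(packaging.__version__)"
ninja --version; echo $?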

Installation

Method 1: install directly with pip:

pip install flash-attn --no-build-isolation

Because of the GPU model, network environment, and similar factors, this install often fails.

Installing from source is therefore recommended.

Method 2: install from source:

1. Check whether your machine supports the 2.x version (a quick check command follows the list below)

FlashAttention-2 currently supports:

  1. Ampere, Ada, or Hopper GPUs (e.g., A100, RTX 3090, RTX 4090, H100). Support for Turing GPUs (T4, RTX 2080) is coming soon, please use FlashAttention 1.x for Turing GPUs for now.

  2. Datatype fp16 and bf16 (bf16 requires Ampere, Ada, or Hopper GPUs).

  3. All head dimensions up to 256. Head dim > 192 backward requires A100/A800 or H100/H800.
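A quick way to see which generation your GPU belongs to is to read its compute capability with PyTorch: Ampere/Ada/Hopper report a major version of 8 or 9, while Turing reports (7, 5). A minimal check:

python -c "import torch; print(torch.cuda.get_device_capability())"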

If your machine supports 2.x, download the main branch.

If it does not support 2.x, download a 1.x release instead.
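For example, a clone along these lines (the 1.x tag shown is only illustrative; substitute whichever 1.x release you actually need):

git clone https://github.com/Dao-AILab/flash-attention.git
cd flash-attention
# only if your GPU does not support 2.x: switch to a 1.x release tag, e.g.
# git checkout v1.0.9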

python setup.py install

During installation, the build automatically downloads NVIDIA's cutlass package via git. On machines whose network environment cannot fetch code over git, the build fails with an error about missing cutlass files. In that case, browse to flash-attention/csrc on the GitHub page for the version you are installing, download the matching cutlass version manually, and then run the install again.
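If git access does work on your machine, initializing the cutlass submodule directly is usually all that is needed; a minimal sketch, assuming the standard repo layout with the submodule under csrc/cutlass:

cd flash-attention
git submodule update --init csrc/cutlass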

If you see errors saying rotary or xentropy is not installed, install each one separately:

cd flash-attention/csrc/rotary && python setup.py install
cd ../xentropy && python setup.py install
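Once everything is installed, a quick sanity check that the package imports from the active environment:

python -c "import flash_attn; print(flash_attn.__version__)"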

Done.

Reference

GitHub - Dao-AILab/flash-attention: Fast and memory-efficient exact attention (https://github.com/Dao-AILab/flash-attention)
