Installing flash-attention
Requirements:
- CUDA 11.4 and above.
- PyTorch 1.12 and above.
- Make sure that PyTorch is installed.
- Make sure that packaging is installed (pip install packaging).
- Make sure that ninja is installed and that it works correctly (e.g. ninja --version then echo $? should return exit code 0). If not (sometimes ninja --version then echo $? returns a nonzero exit code), uninstall then reinstall ninja (pip uninstall -y ninja && pip install ninja). Without ninja, compiling can take a very long time (2h) since it does not use multiple CPU cores. With ninja, compiling takes 3-5 minutes on a 64-core machine.
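The checks above can be bundled into a small pre-flight script. This is only a sketch: it assumes python is on the PATH, and the helper names check_pkg / check_ninja are ad hoc, not part of any package.

```shell
# Pre-flight checks for the requirements above, as shell functions
# so each one can be re-run on its own.
check_pkg() {
  # usage: check_pkg torch  /  check_pkg packaging
  python -c "import $1" >/dev/null 2>&1
}
check_ninja() {
  # mirrors the check above: `ninja --version` then `echo $?` should be 0
  ninja --version >/dev/null 2>&1
}

check_pkg torch     && echo "torch ok"     || echo "torch missing"
check_pkg packaging && echo "packaging ok" || echo "packaging missing"
check_ninja         && echo "ninja ok"     || echo "ninja broken or missing"
```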
Installation
Method 1: direct pip install:
pip install flash-attn --no-build-isolation
Because of the GPU model, network environment, and similar factors, this install often fails.
Method 2 (recommended): install from source:
1. Check whether the machine supports the 2.x version.
FlashAttention-2 currently supports:
- Ampere, Ada, or Hopper GPUs (e.g., A100, RTX 3090, RTX 4090, H100). Support for Turing GPUs (T4, RTX 2080) is coming soon; please use FlashAttention 1.x for Turing GPUs for now.
- Datatype fp16 and bf16 (bf16 requires Ampere, Ada, or Hopper GPUs).
- All head dimensions up to 256. Head dim > 192 backward requires A100/A800 or H100/H800.
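Whether a GPU falls in the supported list can be read off its CUDA compute capability: FlashAttention-2 needs 8.0 or higher (Ampere/Ada/Hopper), while Turing is 7.5. A sketch, assuming a driver recent enough that nvidia-smi supports the compute_cap query field:

```shell
# Print the compute capability of each visible GPU.
# 8.0 (A100), 8.6 (RTX 3090), 8.9 (RTX 4090), 9.0 (H100) -> 2.x is fine
# 7.5 (T4, RTX 2080) -> stay on FlashAttention 1.x for now
nvidia-smi --query-gpu=compute_cap --format=csv,noheader \
  || echo "no nvidia-smi here; run this on the target machine"
```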
- If the machine supports 2.x, download the main branch.
- If the machine does not support 2.x, download the 1.x branch.
2. Build and install from the repository root:
python setup.py install
During the install, the NVIDIA cutlass package is downloaded automatically via git. On machines whose network environment blocks git downloads, the build fails with an error about missing cutlass files. In that case, open flash-attention/csrc on the GitHub page for the version you checked out, download the matching cutlass revision, place it there, and install again.
If the build reports that rotary or xentropy is not installed, install each one separately (note &&, not &, so the install only runs after the cd succeeds; run each command from the directory that contains flash-attention):
cd flash-attention/csrc/rotary && python setup.py install
cd flash-attention/csrc/xentropy && python setup.py install
Done.
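As a final smoke test, it is worth confirming that the freshly built package is actually importable from Python. A sketch, assuming python on the PATH:

```shell
# Confirm the installed package imports and report its version.
python -c "import flash_attn; print('flash-attn', flash_attn.__version__)" \
  || echo "flash_attn not importable; revisit the steps above"
```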
References
GitHub - Dao-AILab/flash-attention: Fast and memory-efficient exact attention