https://github.com/happyfish100

FastDFS is an open-source, lightweight distributed file system.

  • Written in pure C; supports Linux, FreeBSD, and other UNIX-like systems;
  • Similar to Google FS: it is not a general-purpose file system and can only be accessed through its proprietary API; C, Java, and PHP APIs are currently provided;
  • Tailored for Internet applications, aiming for high performance and high scalability;
  • FastDFS can be viewed as a file-system-based key/value pair store, so "distributed file storage service" is a more fitting description;
  • Best suited for storing small and medium-sized files (4KB~500MB);

FastDFS Components

Tracker Server

The tracker server mainly performs scheduling and acts as a load balancer for access. It keeps the status of every group and storage server in the cluster in memory, serving as the hub between clients and storage servers. Because all of this information lives in memory, a tracker server performs very well; three trackers are enough even for a fairly large cluster (say, over a hundred groups).

Storage Server

Storage servers hold the files themselves along with their attributes (meta data).

FastDFS Upload Mechanism

title FastDFS upload flow
participant Client
participant Tracker_Server
participant Storage_Server

Storage_Server -> Tracker_Server: 1. Report status to the tracker periodically
Client -> Tracker_Server: 2. Request a connection for upload
Tracker_Server -> Tracker_Server: 3. Look up an available storage server
Client <- Tracker_Server: 4. Return the storage server's IP and port
Client -> Storage_Server: 5. Upload the file (file content and meta data)
Storage_Server -> Storage_Server: 6. Generate a file_id
Storage_Server -> Storage_Server: 7. Write the uploaded content to disk
Client <- Storage_Server: 8. Return the file_id (path information and file name)
Client -> Client: 9. Store the file information


FastDFS Use Cases

FastDFS Architecture

The FastDFS architecture diagram is as follows:
(architecture diagram omitted)

Lab Environment

| Hostname | IP Address | Description |
| --- | --- | --- |
| tracker01.lavenliu.com | 192.168.20.160 | Tracker server 01 |
| tracker02.lavenliu.com | 192.168.20.161 | Tracker server 02 |
| storage01.lavenliu.com | 192.168.20.162 | Storage server 01 |
| storage02.lavenliu.com | 192.168.20.163 | Storage server 02 |
| client01.lavenliu.com | 192.168.20.164 | Client 01 |

The table above is deprecated; use the one below instead. All services run on a single machine.

| Hostname | IP Address | Description |
| --- | --- | --- |
| tracker.lavenliu.com | 192.168.16.137 | Tracker server |
| storage.lavenliu.com | 192.168.16.137 | Storage server |
| client.lavenliu.com | 192.168.16.137 | Client |
| xxx | 10.20.20.35 | Migration target server |
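Since every role resolves to the same machine in this lab, the hostnames can be mapped locally. A possible sketch, assuming a root shell and the names and IP from the table above (adjust to your own environment):

```shell
# Append local name resolution for the single-node lab (run as root;
# hostnames and IP taken from the environment table above)
cat >> /etc/hosts <<'EOF'
192.168.16.137  tracker.lavenliu.com storage.lavenliu.com client.lavenliu.com
EOF
```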

Versions used in this lab:

  • Nginx 1.21.1
  • FastDFS 6.0.7
  • FastDFS libfastcommon 1.0.48
  • Nginx FastDFS Module 1.22

Deploying FastDFS

Installing Dependencies

The official documentation reads as follows:

# step 1. download libfastcommon source codes and install it,
#   github address:  https://github.com/happyfish100/libfastcommon.git
#   gitee address:   https://gitee.com/fastdfs100/libfastcommon.git
# command lines as:

   git clone https://github.com/happyfish100/libfastcommon.git
   cd libfastcommon; git checkout V1.0.47
   ./make.sh clean && ./make.sh && ./make.sh install


# step 2. download fastdfs source codes and install it, 
#   github address:  https://github.com/happyfish100/fastdfs.git
#   gitee address:   https://gitee.com/fastdfs100/fastdfs.git
# command lines as:

   git clone https://github.com/happyfish100/fastdfs.git
   cd fastdfs; git checkout V6.07
   ./make.sh clean && ./make.sh && ./make.sh install


# step 3. setup the config files
#   the setup script does NOT overwrite existing config files,
#   please feel free to execute this script (take easy :)

./setup.sh /etc/fdfs


# step 4. edit or modify the config files of tracker, storage and client
such as:
 vi /etc/fdfs/tracker.conf
 vi /etc/fdfs/storage.conf
 vi /etc/fdfs/client.conf

 and so on ...


# step 5. run the server programs
# start the tracker server:
/usr/bin/fdfs_trackerd /etc/fdfs/tracker.conf restart

# start the storage server:
/usr/bin/fdfs_storaged /etc/fdfs/storage.conf restart

# (optional) in Linux, you can start fdfs_trackerd and fdfs_storaged as a service:
/sbin/service fdfs_trackerd restart
/sbin/service fdfs_storaged restart


# step 6. (optional) run monitor program
# such as:
/usr/bin/fdfs_monitor /etc/fdfs/client.conf


# step 7. (optional) run the test program
# such as:
/usr/bin/fdfs_test <client_conf_filename> <operation>
/usr/bin/fdfs_test1 <client_conf_filename> <operation>

# for example, upload a file for test:
/usr/bin/fdfs_test /etc/fdfs/client.conf upload /usr/include/stdlib.h


tracker server config file sample please see conf/tracker.conf

storage server config file sample please see conf/storage.conf

client config file sample please see conf/client.conf

Item detail
1. server common items
---------------------------------------------------
|  item name            |  type  | default | Must |
---------------------------------------------------
| base_path             | string |         |  Y   |
---------------------------------------------------
| disabled              | boolean| false   |  N   |
---------------------------------------------------
| bind_addr             | string |         |  N   |
---------------------------------------------------
| network_timeout       | int    | 30(s)   |  N   |
---------------------------------------------------
| max_connections       | int    | 256     |  N   |
---------------------------------------------------
| log_level             | string | info    |  N   |
---------------------------------------------------
| run_by_group          | string |         |  N   |
---------------------------------------------------
| run_by_user           | string |         |  N   |
---------------------------------------------------
| allow_hosts           | string |   *     |  N   |
---------------------------------------------------
| sync_log_buff_interval| int    |  10(s)  |  N   |
---------------------------------------------------
| thread_stack_size     | string |  1M     |  N   |
---------------------------------------------------
memo:
   * base_path is the base path of sub dirs: 
     data and logs. base_path must exist and its sub dirs will 
     be automatically created if they do not exist.
       $base_path/data: store data files
       $base_path/logs: store log files
   * log_level is the standard log level as syslog, case insensitive
     # emerg: for emergency
     # alert
     # crit: for critical
     # error
     # warn: for warning
     # notice
     # info
     # debug
   * allow_hosts can occur more than once, host can be hostname or ip address,
     "*" means match all ip addresses, can use range like this: 10.0.1.[1-15,20]
      or host[01-08,20-25].domain.com, for example:
        allow_hosts=10.0.1.[1-15,20]
        allow_hosts=host[01-08,20-25].domain.com

2. tracker server items
---------------------------------------------------
|  item name            |  type  | default | Must |
---------------------------------------------------
| port                  | int    | 22000   |  N   |
---------------------------------------------------
| store_lookup          | int    |  0      |  N   |
---------------------------------------------------
| store_group           | string |         |  N   |
---------------------------------------------------
| store_server          | int    |  0      |  N   |
---------------------------------------------------
| store_path            | int    |  0      |  N   |
---------------------------------------------------
| download_server       | int    |  0      |  N   |
---------------------------------------------------
| reserved_storage_space| string |  1GB    |  N   |
---------------------------------------------------

memo: 
  * the value of store_lookup is:
    0: round robin (default)
    1: specify group
    2: load balance (supported since V1.1)
  * store_group is the name of group to store files.
    when store_lookup set to 1(specify group), 
    store_group must be set to a specified group name.
  * reserved_storage_space is the reserved storage space for system 
    or other applications. if the free(available) space of any storage
    server in a group <= reserved_storage_space, no file can be uploaded
    to this group (since V1.1)
    bytes unit can be one of follows:
      # G or g for gigabyte(GB)
      # M or m for megabyte(MB)
      # K or k for kilobyte(KB)
      # no unit for byte(B)

3. storage server items
-------------------------------------------------
|  item name          |  type  | default | Must |
-------------------------------------------------
| group_name          | string |         |  Y   |
-------------------------------------------------
| tracker_server      | string |         |  Y   |
-------------------------------------------------
| port                | int    | 23000   |  N   |
-------------------------------------------------
| heart_beat_interval | int    |  30(s)  |  N   |
-------------------------------------------------
| stat_report_interval| int    | 300(s)  |  N   |
-------------------------------------------------
| sync_wait_msec      | int    | 100(ms) |  N   |
-------------------------------------------------
| sync_interval       | int    |   0(ms) |  N   |
-------------------------------------------------
| sync_start_time     | string |  00:00  |  N   |
-------------------------------------------------
| sync_end_time       | string |  23:59  |  N   |
-------------------------------------------------
| store_path_count    | int    |   1     |  N   |
-------------------------------------------------
| store_path0         | string |base_path|  N   |
-------------------------------------------------
| store_path#         | string |         |  N   |
-------------------------------------------------
|subdir_count_per_path| int    |   256   |  N   |
-------------------------------------------------
|check_file_duplicate | boolean|    0    |  N   |
-------------------------------------------------
| key_namespace       | string |         |  N   |
-------------------------------------------------
| keep_alive          | boolean|    0    |  N   |
-------------------------------------------------
| sync_binlog_buff_interval| int |   60s |  N   |
-------------------------------------------------

memo:
  * tracker_server can occur more than once, and tracker_server format is
    "host:port", host can be hostname or ip address.
  * store_path#, # for digital, based 0
  * check_file_duplicate: when set to true, must work with FastDHT server, 
    more detail please see INSTALL of FastDHT. FastDHT download page: 
    http://code.google.com/p/fastdht/downloads/list
  * key_namespace: FastDHT key namespace, can't be empty when 
    check_file_duplicate is true. the key namespace should be as short as possible

The simplified steps:

git clone https://gitee.com/fastdfs100/libfastcommon.git
cd libfastcommon
git checkout V1.0.48
./make.sh clean && ./make.sh && ./make.sh install

Installing the Server

git clone https://gitee.com/fastdfs100/fastdfs.git
cd fastdfs
git checkout V6.07
./make.sh clean
./make.sh
[root@node01 fastdfs]# ./make.sh install
mkdir -p /usr/bin
mkdir -p /etc/fdfs
cp -f fdfs_trackerd /usr/bin
if [ ! -f /etc/fdfs/tracker.conf ]; then cp -f ../conf/tracker.conf /etc/fdfs/tracker.conf; fi
if [ ! -f /etc/fdfs/storage_ids.conf ]; then cp -f ../conf/storage_ids.conf /etc/fdfs/storage_ids.conf; fi
mkdir -p /usr/bin
mkdir -p /etc/fdfs
cp -f fdfs_storaged  /usr/bin
if [ ! -f /etc/fdfs/storage.conf ]; then cp -f ../conf/storage.conf /etc/fdfs/storage.conf; fi
mkdir -p /usr/bin
mkdir -p /etc/fdfs
mkdir -p /usr/lib64
mkdir -p /usr/lib
cp -f fdfs_monitor fdfs_test fdfs_test1 fdfs_crc32 fdfs_upload_file fdfs_download_file fdfs_delete_file fdfs_file_info fdfs_appender_test fdfs_appender_test1 fdfs_append_file fdfs_upload_appender fdfs_regenerate_filename /usr/bin
if [ 0 -eq 1 ]; then cp -f libfdfsclient.a /usr/lib64; cp -f libfdfsclient.a /usr/lib/;fi
if [ 1 -eq 1 ]; then cp -f libfdfsclient.so /usr/lib64; cp -f libfdfsclient.so /usr/lib/;fi
mkdir -p /usr/include/fastdfs
cp -f ../common/fdfs_define.h ../common/fdfs_global.h ../common/mime_file_parser.h ../common/fdfs_http_shared.h ../tracker/tracker_types.h ../tracker/tracker_proto.h ../tracker/fdfs_shared_func.h ../tracker/fdfs_server_id_func.h ../storage/trunk_mgr/trunk_shared.h tracker_client.h storage_client.h storage_client1.h client_func.h client_global.h fdfs_client.h /usr/include/fastdfs
if [ ! -f /etc/fdfs/client.conf ]; then cp -f ../conf/client.conf /etc/fdfs/client.conf; fi

Installation is complete. The config files are under /etc/fdfs, the binaries under /usr/bin, and the startup unit files under /lib/systemd/system. The startup scripts need a few tweaks, and we will not use them in what follows.

Even so, the services cannot be started yet, because the tracker and storage servers have not been configured. The next task, then, is configuring the tracker and storage services.

Configuring the Tracker Server

The default configuration file is:

[root@node01 fdfs]# grep -E -v "^#|^$" tracker.conf 
disabled = false
bind_addr =
port = 22122
connect_timeout = 5
network_timeout = 60
base_path = /home/yuqing/fastdfs
max_connections = 1024
accept_threads = 1
work_threads = 4
min_buff_size = 8KB
max_buff_size = 128KB
store_lookup = 2
store_group = group2
store_server = 0
store_path = 0
download_server = 0
reserved_storage_space = 20%
log_level = info
run_by_group=
run_by_user =
allow_hosts = *
sync_log_buff_interval = 1
check_active_interval = 120
thread_stack_size = 256KB
storage_ip_changed_auto_adjust = true
storage_sync_file_max_delay = 86400
storage_sync_file_max_time = 300
use_trunk_file = false 
slot_min_size = 256
slot_max_size = 1MB
trunk_alloc_alignment_size = 256
trunk_free_space_merge = true
delete_unused_trunk_files = false
trunk_file_size = 64MB
trunk_create_file_advance = false
trunk_create_file_time_base = 02:00
trunk_create_file_interval = 86400
trunk_create_file_space_threshold = 20G
trunk_init_check_occupying = false
trunk_init_reload_from_binlog = false
trunk_compress_binlog_min_interval = 86400
trunk_compress_binlog_interval = 86400
trunk_compress_binlog_time_base = 03:00
trunk_binlog_max_backups = 7
use_storage_id = false
storage_ids_filename = storage_ids.conf
id_type_in_filename = id
store_slave_file_use_link = false
rotate_error_log = false
error_log_rotate_time = 00:00
compress_old_error_log = false
compress_error_log_days_before = 7
rotate_error_log_size = 0
log_file_keep_days = 0
use_connection_pool = true
connection_pool_max_idle_time = 3600
http.server_port = 8080
http.check_alive_interval = 30
http.check_alive_type = tcp
http.check_alive_uri = /status.html

Perform the following on the tracker server (in the multi-node layout, on both tracker01 and tracker02); only base_path needs to be changed:

cat /etc/fdfs/tracker.conf
# the base path to store data and log files
base_path=/data/fdfs_tracker
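Note that fdfs_trackerd expects base_path to already exist; only the data and logs subdirectories are created for you. A minimal sketch, using the path from the config above:

```shell
# base_path must exist before the tracker starts; data/ and logs/
# are then created under it automatically by fdfs_trackerd
base_path=/data/fdfs_tracker
mkdir -p "$base_path"
ls -ld "$base_path"
```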

Start the trackerd process:

[root@node01 fdfs]# /usr/bin/fdfs_trackerd /etc/fdfs/tracker.conf restart
starting ...

[root@node01 fdfs]# ps -ef |grep fdfs
root       2531      1  0 20:56 ?        00:00:00 /usr/bin/fdfs_trackerd /etc/fdfs/tracker.conf restart

[root@node01 fdfs]# netstat -natup |grep fdfs
tcp        0      0 0.0.0.0:22122           0.0.0.0:*               LISTEN      2531/fdfs_trackerd

After the tracker service starts, it creates two directories, data and logs, under base_path. As follows:

[root@node01 ~]# tree /data/fdfs_tracker/
/data/fdfs_tracker/
├── data
│   ├── fdfs_trackerd.pid
│   ├── storage_changelog.dat
│   ├── storage_groups_new.dat
│   ├── storage_servers_new.dat
│   └── storage_sync_timestamp.dat
└── logs
    └── trackerd.log

2 directories, 6 files

Configuring the Storage Server

The configuration file is:

[root@node01 fdfs]# grep -E -v "(^#|^$)" storage.conf
disabled = false
group_name = group1
bind_addr =
client_bind = true
port = 23000
connect_timeout = 5
network_timeout = 60
heart_beat_interval = 30
stat_report_interval = 60
base_path = /data/fdfs_storage
max_connections = 1024
buff_size = 256KB
accept_threads = 1
work_threads = 4
disk_rw_separated = true
disk_reader_threads = 1
disk_writer_threads = 1
sync_wait_msec = 50
sync_interval = 0
sync_start_time = 00:00
sync_end_time = 23:59
write_mark_file_freq = 500
disk_recovery_threads = 3
store_path_count = 1
store_path0 = /data/fdfs_file
subdir_count_per_path = 256
tracker_server = 192.168.16.137:22122
log_level = info
run_by_group =
run_by_user =
allow_hosts = *
file_distribute_path_mode = 0
file_distribute_rotate_count = 100
fsync_after_written_bytes = 0
sync_log_buff_interval = 1
sync_binlog_buff_interval = 1
sync_stat_file_interval = 300
thread_stack_size = 512KB
upload_priority = 10
if_alias_prefix =
check_file_duplicate = 0
file_signature_method = hash
key_namespace = FastDFS
keep_alive = 0
use_access_log = false
rotate_access_log = false
access_log_rotate_time = 00:00
compress_old_access_log = false
compress_access_log_days_before = 7
rotate_error_log = false
error_log_rotate_time = 00:00
compress_old_error_log = false
compress_error_log_days_before = 7
rotate_access_log_size = 0
rotate_error_log_size = 0
log_file_keep_days = 0
file_sync_skip_invalid_record = false
use_connection_pool = true
connection_pool_max_idle_time = 3600
compress_binlog = true
compress_binlog_time = 01:30
check_store_path_mark = true
http.domain_name =
http.server_port = 8888
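As with the tracker, the directories named by base_path and store_path0 must exist before fdfs_storaged will start. A sketch using the paths from the config above:

```shell
# create the storage daemon's base_path and the actual file store path
# (base_path and store_path0 from storage.conf above)
mkdir -p /data/fdfs_storage /data/fdfs_file
ls -ld /data/fdfs_storage /data/fdfs_file
```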

Next, start the service:

[root@node01 fdfs]# /usr/bin/fdfs_storaged /etc/fdfs/storage.conf start

[root@node01 fdfs]# tail /data/fdfs_storage/logs/storaged.log 
mkdir data path: FA ...
mkdir data path: FB ...
mkdir data path: FC ...
mkdir data path: FD ...
mkdir data path: FE ...
mkdir data path: FF ...
data path: /data/fdfs_storage/data, mkdir sub dir done.
[2021-07-10 21:00:54] INFO - file: storage_param_getter.c, line: 217, use_storage_id=0, id_type_in_filename=ip, storage_ip_changed_auto_adjust=1, store_path=0, reserved_storage_space=20.00%, use_trunk_file=0, slot_min_size=256, slot_max_size=1024 KB, trunk_alloc_alignment_size=256, trunk_file_size=64 MB, trunk_create_file_advance=0, trunk_create_file_time_base=02:00, trunk_create_file_interval=86400, trunk_create_file_space_threshold=20 GB, trunk_init_check_occupying=0, trunk_init_reload_from_binlog=0, trunk_free_space_merge=1, delete_unused_trunk_files=0, trunk_compress_binlog_min_interval=86400, trunk_compress_binlog_interval=86400, trunk_compress_binlog_time_base=03:00, trunk_binlog_max_backups=7, store_slave_file_use_link=0
[2021-07-10 21:00:54] INFO - file: storage_func.c, line: 273, tracker_client_ip: 192.168.16.137, my_server_id_str: 192.168.16.137, g_server_id_in_filename: -1995396928
[2021-07-10 21:00:55] INFO - file: tracker_client_thread.c, line: 299, successfully connect to tracker server 192.168.16.137:22122, as a tracker client, my ip is 192.168.16.137

Verify the ports:

[root@node01 fdfs]# netstat -antup |grep fdfs
tcp        0      0 0.0.0.0:23000           0.0.0.0:*               LISTEN      2579/fdfs_storaged
tcp        0      0 0.0.0.0:22122           0.0.0.0:*               LISTEN      2531/fdfs_trackerd
tcp        0      0 192.168.16.137:40968    192.168.16.137:22122    ESTABLISHED 2579/fdfs_storaged
tcp        0      0 192.168.16.137:22122    192.168.16.137:40968    ESTABLISHED 2531/fdfs_trackerd

After the storage service starts, it creates the data and logs directories under base_path, which record the storage server's information. Under store_path0, it creates N*N subdirectories:

[root@node01 fdfs]# ls /data/fdfs_file/data/
00  05  0A  0F  14  19  1E  23  28  2D  32  37  3C  41  46  4B  50  55
5A  5F  64  69  6E  73  78  7D  82  87  8C  91  96  9B  A0  A5  AA  AF
B4  B9  BE  C3  C8  CD  D2  D7  DC  E1  E6  EB  F0  F5  FA  FF  01  06
0B  10  15  1A  1F  24  29  2E  33  38  3D  42  47  4C  51  56  5B  60
65  6A  6F  74  79  7E  83  88  8D  92  97  9C  A1  A6  AB  B0  B5  BA
BF  C4  C9  CE  D3  D8  DD  E2  E7  EC  F1  F6  FB  02  07  0C  11  16
1B  20  25  2A  2F  34  39  3E  43  48  4D  52  57  5C  61  66  6B  70
75  7A  7F  84  89  8E  93  98  9D  A2  A7  AC  B1  B6  BB  C0  C5  CA
CF  D4  D9  DE  E3  E8  ED  F2  F7  FC  03  08  0D  12  17  1C  21  26
2B  30  35  3A  3F  44  49  4E  53  58  5D  62  67  6C  71  76  7B  80
85  8A  8F  94  99  9E  A3  A8  AD  B2  B7  BC  C1  C6  CB  D0  D5  DA
DF  E4  E9  EE  F3  F8  FD  04  09  0E  13  18  1D  22  27  2C  31  36
3B  40  45  4A  4F  54  59  5E  63  68  6D  72  77  7C  81  86  8B  90
95  9A  9F  A4  A9  AE  B3  B8  BD  C2  C7  CC  D1  D6  DB  E0  E5  EA
EF  F4  F9  FE

Next, check whether the storage service is communicating with the tracker properly:

[root@node01 fdfs]# /usr/bin/fdfs_monitor /etc/fdfs/storage.conf
[2021-07-10 21:03:50] DEBUG - base_path=/data/fdfs_storage, connect_timeout=5, network_timeout=60, tracker_server_count=1, anti_steal_token=0, anti_steal_secret_key length=0, use_connection_pool=1, g_connection_pool_max_idle_time=3600s, use_storage_id=0, storage server id count: 0

server_count=1, server_index=0

tracker server is 192.168.16.137:22122

group count: 1

Group 1:
group name = group1
disk total space = 17,394 MB
disk free space = 12,464 MB
trunk free space = 0 MB
storage server count = 1
active server count = 1
storage server port = 23000
storage HTTP port = 8888
store path count = 1
subdir count per path = 256
current write server index = 0
current trunk file id = 0

	Storage 1:
		id = 192.168.16.137
		ip_addr = 192.168.16.137  ACTIVE
		http domain = 
		version = 6.07
		join time = 2021-07-10 21:00:51
		up time = 2021-07-10 21:00:51
		total storage = 17,394 MB
		free storage = 12,464 MB
		upload priority = 10
		store_path_count = 1
		subdir_count_per_path = 256
		storage_port = 23000
		storage_http_port = 8888
		current_write_path = 0
		source storage id = 
		if_trunk_server = 0
		connection.alloc_count = 256
		connection.current_count = 0
		connection.max_count = 0
		total_upload_count = 0
		success_upload_count = 0
		total_append_count = 0
		success_append_count = 0
		total_modify_count = 0
		success_modify_count = 0
		total_truncate_count = 0
		success_truncate_count = 0
		total_set_meta_count = 0
		success_set_meta_count = 0
		total_delete_count = 0
		success_delete_count = 0
		total_download_count = 0
		success_download_count = 0
		total_get_meta_count = 0
		success_get_meta_count = 0
		total_create_link_count = 0
		success_create_link_count = 0
		total_delete_link_count = 0
		success_delete_link_count = 0
		total_upload_bytes = 0
		success_upload_bytes = 0
		total_append_bytes = 0
		success_append_bytes = 0
		total_modify_bytes = 0
		success_modify_bytes = 0
		stotal_download_bytes = 0
		success_download_bytes = 0
		total_sync_in_bytes = 0
		success_sync_in_bytes = 0
		total_sync_out_bytes = 0
		success_sync_out_bytes = 0
		total_file_open_count = 0
		success_file_open_count = 0
		total_file_read_count = 0
		success_file_read_count = 0
		total_file_write_count = 0
		success_file_write_count = 0
		last_heart_beat_time = 2021-07-10 21:03:25
		last_source_update = 1970-01-01 08:00:00
		last_sync_update = 1970-01-01 08:00:00
		last_synced_timestamp = 1970-01-01 08:00:00

File Upload Test

Modify the client configuration file on the tracker server: change the base_path and tracker_server settings and leave the rest at their defaults. As follows:

[root@node01 fdfs]# grep -E "(^base_path|^tracker_server)" client.conf 
base_path = /data/fastdfs
tracker_server = 192.168.16.137:22122

Run the following command to test an upload:

[root@node01 ~]# fdfs_upload_file /etc/fdfs/client.conf lv.png 
group1/M00/00/00/wKgQiWDqSPyAJhjVAAYouwsIOJc104.png

A successful upload returns the file ID group1/M00/00/00/wKgQiWDqSPyAJhjVAAYouwsIOJc104.png. The file ID is the concatenation of the group name, the storage path, a two-level subdirectory, the generated file name, and the file extension (specified by the client, mainly to distinguish file types).
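The structure of the returned ID can be pulled apart with plain shell string handling (a sketch, using the ID returned above):

```shell
# split a FastDFS file ID into its components
file_id="group1/M00/00/00/wKgQiWDqSPyAJhjVAAYouwsIOJc104.png"
group=${file_id%%/*}                          # group name: group1
store_path=$(echo "$file_id" | cut -d/ -f2)   # virtual disk M00, maps to store_path0
subdir=$(echo "$file_id" | cut -d/ -f3,4)     # two-level subdirectory: 00/00
name=${file_id##*/}                           # generated file name with extension
echo "group=$group path=$store_path subdir=$subdir name=$name"
```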

Installing and Configuring Nginx

[root@node01 ~]# wget -c https://nginx.org/download/nginx-1.21.1.tar.gz

[root@node01 ~]# mkdir -pv /data/apps

# Install the dependencies required to build Nginx
[root@node01 ~]# yum install -y pcre-devel zlib-devel openssl-devel
[root@node01 ~]# tar -xvf nginx-1.21.1.tar.gz
[root@node01 ~]# cd nginx-1.21.1

# The default configure options are fine
[root@node01 nginx-1.21.1]# ./configure --prefix=/data/apps/nginx-1.21.1
[root@node01 nginx-1.21.1]# make && make install

Verify the Nginx installation:

[root@node01 fdfs]# /data/apps/nginx-1.21.1/sbin/nginx -V
nginx version: nginx/1.21.1
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-44) (GCC) 
configure arguments: --prefix=/data/apps/nginx-1.21.1

A simple Nginx configuration lets us access the file we just uploaded: just add a location for /group1/M00 to the config file. As follows:

location /group1/M00 {
    alias /data/fdfs_file/data;
} 
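The alias maps the /group1/M00 URL prefix onto the data directory of store_path0; the rewrite Nginx performs is essentially this substitution (a sketch, with paths from the configuration above):

```shell
# how a request URI maps to a path on disk under the alias above
uri="/group1/M00/00/00/wKgQiWDqSPyAJhjVAAYouwsIOJc104.png"
disk=$(echo "$uri" | sed 's@^/group1/M00@/data/fdfs_file/data@')
echo "$disk"   # /data/fdfs_file/data/00/00/wKgQiWDqSPyAJhjVAAYouwsIOJc104.png
```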

Then start Nginx and verify that the file is reachable.

[root@node01 ~]# /data/apps/nginx-1.21.1/sbin/nginx -t
nginx: the configuration file /data/apps/nginx-1.21.1/conf/nginx.conf syntax is ok
nginx: configuration file /data/apps/nginx-1.21.1/conf/nginx.conf test is successful

[root@node01 ~]# ps -ef |grep nginx
root      12560      1  0 09:31 ?        00:00:00 nginx: master process /data/apps/nginx-1.21.1/sbin/nginx
nobody    12561  12560  0 09:31 ?        00:00:00 nginx: worker process

Open a browser and visit the uploaded file at: http://192.168.16.137/group1/M00/00/00/wKgQiWDqSPyAJhjVAAYouwsIOJc104.png


Access it from the command line with curl:

[root@node01 ~]# curl -I http://192.168.16.137/group1/M00/00/00/wKgQiWDqSPyAJhjVAAYouwsIOJc104.png
HTTP/1.1 200 OK
Server: nginx/1.21.1
Date: Sun, 11 Jul 2021 01:36:57 GMT
Content-Type: image/png
Content-Length: 403643
Last-Modified: Sun, 11 Jul 2021 01:27:24 GMT
Connection: keep-alive
ETag: "60ea48fc-628bb"
Accept-Ranges: bytes

Uploading the same image several times returns a different file ID each time, yet the stored files share the same md5 checksum:

# md5sum /data/fdfs_file/data/00/00/*.png
314410817cc9dc792a1f9b6e5d171239  /data/fdfs_file/data/00/00/ChQUD2DuQkCAFh2MAC9azGybwbI646.png
314410817cc9dc792a1f9b6e5d171239  /data/fdfs_file/data/00/00/ChQUD2DuQnWANATCAC9azGybwbI469.png

Integrating the FastDFS Module into Nginx

FastDFS places files on storage servers via the tracker, but storage servers within the same group must replicate files to one another, which introduces sync delay. Suppose the tracker directs an upload to 192.168.16.137 and the file ID has already been returned to the client. The FastDFS storage cluster will then replicate the file to its group peer 192.168.16.138; if the client fetches the file from 192.168.16.138 using that ID before replication completes, the file will not be found. fastdfs-nginx-module redirects such requests to the source server, avoiding access errors caused by replication lag.

Next, install the FastDFS Nginx module:

[root@node01 ~]# git clone https://gitee.com/fastdfs100/fastdfs-nginx-module.git
[root@node01 ~]# cd fastdfs-nginx-module/

# Check out the latest tag
[root@node01 fastdfs-nginx-module]# git tag
V1.20
V1.21
V1.22
[root@node01 fastdfs-nginx-module]# git checkout V1.22

Now rebuild Nginx:

[root@node01 nginx-1.21.1]# /data/apps/nginx-1.21.1/sbin/nginx -s stop
[root@node01 nginx-1.21.1]# ps -ef |grep nginx |grep -v grep # make sure there is no output
[root@node01 nginx-1.21.1]# ./configure \
--prefix=/data/apps/nginx-1.21.1 \
--add-module=/root/fastdfs-nginx-module/src

[root@node01 nginx-1.21.1]# make && make install

[root@node01 nginx-1.21.1]# /data/apps/nginx-1.21.1/sbin/nginx -V
nginx version: nginx/1.21.1
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-44) (GCC) 
configure arguments: --prefix=/data/apps/nginx-1.21.1 --add-module=/root/fastdfs-nginx-module/src

Copy the config file from the fastdfs-nginx-module source into the /etc/fdfs directory and modify it:

[root@node01 ~]# cd fastdfs-nginx-module/
[root@node01 fastdfs-nginx-module]# ls src/
common.c  common.h  config  mod_fastdfs.conf  ngx_http_fastdfs_module.c
[root@node01 fastdfs-nginx-module]# cp src/mod_fastdfs.conf /etc/fdfs/

The changes:

# connection timeout
connect_timeout=10

# Tracker Server
tracker_server=192.168.16.137:22122

# default storage server port
storage_server_port=23000

# the group name of the local storage server
group_name=group1

# set to true if the file ID's uri contains /group**
url_have_group_name = true

# must be identical to store_path0 in storage.conf
store_path0=/data/fdfs_file/
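Because a mismatched store_path0 is a common source of 404s, it is worth checking that storage.conf and mod_fastdfs.conf agree. The sketch below simulates the check on throwaway copies; on a real node, point grep at the files under /etc/fdfs instead:

```shell
# verify that storage.conf and mod_fastdfs.conf name the same store_path0
# (simulated here on temp copies; use /etc/fdfs/*.conf on a real node)
dir=$(mktemp -d)
printf 'store_path0 = /data/fdfs_file\n' > "$dir/storage.conf"
printf 'store_path0=/data/fdfs_file\n'   > "$dir/mod_fastdfs.conf"
distinct=$(grep -h '^store_path0' "$dir"/*.conf | tr -d ' ' | sort -u | wc -l)
[ "$distinct" -eq 1 ] && echo "store_path0 consistent" || echo "store_path0 MISMATCH"
rm -rf "$dir"
```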

Modify the Nginx configuration:

location ~ /group([0-9])/M00 {
    ngx_fastdfs_module;
}

Start Nginx:

[root@node01 conf]# /data/apps/nginx-1.21.1/sbin/nginx
ngx_http_fastdfs_set pid=15338 # this line means the module loaded and startup succeeded

[root@node01 fastdfs-nginx-module]# ps -ef |grep nginx
root      15235      1  0 09:51 ?        00:00:00 nginx: master process /usr/local/nginx/sbin/nginx
nobody    15236  15235  0 09:51 ?        00:00:00 nginx: worker process
root      15238   1486  0 09:51 pts/0    00:00:00 grep --color=auto nginx

Starting FastDFS on Boot

[root@node01 fastdfs]# cp systemd/fdfs_* /usr/lib/systemd/system/

Building RPM Packages

Simulating a File Migration

Upload files in bulk, then simulate a migration:

[root@node01 fastdfs]# for i in {1..1000}; do fdfs_upload_file /etc/fdfs/client.conf /root/test.png; sleep 0.2; done
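A quick way to cross-check the upload loop is to count the files on disk; the number should match total_upload_count in the fdfs_monitor output below (store path from the config above; the error redirect keeps the command usable even if the path is absent):

```shell
# count stored files under store_path0's data directory
count=$(find /data/fdfs_file/data -type f 2>/dev/null | wc -l)
echo "$count files on disk"
```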

Then check the cluster status:

[root@node01 fastdfs]# /usr/bin/fdfs_monitor /etc/fdfs/storage.conf

server_count=1, server_index=0

tracker server is 10.20.20.15:22122

group count: 1

Group 1:
group name = group1
disk total space = 102,388 MB
disk free space = 90,811 MB
trunk free space = 0 MB
storage server count = 1
active server count = 1
storage server port = 23000
storage HTTP port = 80
store path count = 1
subdir count per path = 256
current write server index = 0
current trunk file id = 0

	Storage 1:
		id = 10.20.20.15
		ip_addr = 10.20.20.15  ACTIVE
		http domain =
		version = 6.07
		join time = 2021-07-13 17:34:42
		up time = 2021-07-13 17:34:42
		total storage = 102,388 MB
		free storage = 90,811 MB
		upload priority = 10
		store_path_count = 1
		subdir_count_per_path = 256
		storage_port = 23000
		storage_http_port = 80
		current_write_path = 0
		source storage id =
		if_trunk_server = 0
		connection.alloc_count = 256
		connection.current_count = 0
		connection.max_count = 1
		total_upload_count = 1006
		success_upload_count = 1006
...
		total_upload_bytes = 3122056616
		success_upload_bytes = 3122056616
...
		total_file_open_count = 1006   # 1006 files in the cluster in total
		success_file_open_count = 1006
		total_file_read_count = 0
		success_file_read_count = 0
		total_file_write_count = 12072
		success_file_write_count = 12072
		last_heart_beat_time = 2021-07-14 16:01:49
		last_source_update = 2021-07-14 15:59:55
		last_sync_update = 1970-01-01 08:00:00
		last_synced_timestamp = 1970-01-01 08:00:00

Check how much disk space the files occupy in the storage directory:

[root@node01 fastdfs]# pwd
/data/fdfs_file/data
[root@node01 fastdfs]# du  -sh
3.0G    .

We now have about 3 GB of data ready, so the migration can begin.

Stopping the Old Cluster Services

[root@node01 fastdfs]# pkill -9 fdfs
[root@node01 fastdfs]# ps -ef |grep fdfs |grep -v grep # make sure there is no output
[root@node01 fastdfs]# /data/apps/nginx-1.21.1/sbin/nginx  -s stop
ngx_http_fastdfs_set pid=12021

Copy the old cluster's config files to the new machine and update the IP addresses; the paths can stay the same.

[root@node01 fastdfs]# scp conf/nginx.conf 10.20.20.35:/data/apps/nginx-1.21.1/conf/
[root@node01 fastdfs]# scp -r /etc/fdfs 10.20.20.35:/etc/
[root@node01 data]# pwd
/data
[root@node01 data]# scp -r fdfs_file/ fdfs_storage/ fdfs_tracker fastdfs 10.20.20.35:/data

Starting the New Cluster Services

Modify the config files under /etc/fdfs, mainly the IP addresses:

[root@node02 fdfs]# pwd
/etc/fdfs

# Find the old IP
[root@node02 fdfs]# find ./ -name "*.conf" |xargs grep 10.20.20.15
./storage.conf:tracker_server = 10.20.20.15:22122
./client.conf:tracker_server = 10.20.20.15:22122
./mod_fastdfs.conf:tracker_server=10.20.20.15:22122

# Replace it with the new IP
[root@node02 fdfs]# find ./ -name "*.conf" |xargs sed -i 's@10.20.20.15@10.20.20.35@g'

# Verify the new IP
[root@node02 fdfs]# find ./ -name "*.conf" |xargs grep 10.20.20.35
./storage.conf:tracker_server = 10.20.20.35:22122
./client.conf:tracker_server = 10.20.20.35:22122
./mod_fastdfs.conf:tracker_server=10.20.20.35:22122

Modify the configuration info under the data directories:

[root@node02 data]# pwd
/data/fdfs_tracker/data

[root@node02 data]# find ./ -type f |xargs grep 10.20.20.15
./storage_servers_new.dat:# storage 10.20.20.15:23000
./storage_servers_new.dat:      ip_addr=10.20.20.15
./storage_sync_timestamp.dat:group1,10.20.20.15,0

[root@node02 data]# find ./ -type f |xargs sed -i 's@10.20.20.15@10.20.20.35@g'
[root@node02 data]# find ./ -type f |xargs grep 10.20.20.35
./storage_servers_new.dat:# storage 10.20.20.35:23000
./storage_servers_new.dat:      ip_addr=10.20.20.35
./storage_sync_timestamp.dat:group1,10.20.20.35,0

[root@node02 data]# pwd
/data/fdfs_storage/data

[root@node02 data]# find ./ -type f |xargs grep 10.20.20.15
./.data_init_flag:last_ip_addr=10.20.20.15

[root@node02 data]# find ./ -type f |xargs sed -i 's@10.20.20.15@10.20.20.35@g'
[root@node02 data]# find ./ -type f |xargs grep 10.20.20.35
./.data_init_flag:last_ip_addr=10.20.20.35

With the changes done, start the services on the new cluster:

[root@node02 data]# /etc/init.d/fdfs_trackerd restart
[root@node02 data]# /etc/init.d/fdfs_storaged restart
[root@node02 data]# ps -ef |grep fdfs |grep -v grep
root      4800     1  0 17:09 ?        00:00:00 /usr/bin/fdfs_trackerd /etc/fdfs/tracker.conf
root      4836     1  0 17:10 ?        00:00:00 /usr/bin/fdfs_storaged /etc/fdfs/storage.conf
[root@node02 data]# netstat -antup |grep fdfs
tcp        0      0 0.0.0.0:23000           0.0.0.0:*               LISTEN      4836/fdfs_storaged  
tcp        0      0 0.0.0.0:22122           0.0.0.0:*               LISTEN      4800/fdfs_trackerd  
tcp        0      0 10.20.20.35:43132       10.20.20.35:22122       ESTABLISHED 4836/fdfs_storaged  
tcp        0      0 10.20.20.35:22122       10.20.20.35:43132       ESTABLISHED 4800/fdfs_trackerd

Check the cluster status:

[root@node02 data]# /usr/bin/fdfs_monitor /etc/fdfs/storage.conf
server_count=1, server_index=0

tracker server is 10.20.20.35:22122

group count: 1

Group 1:
group name = group1
disk total space = 102,388 MB
disk free space = 93,206 MB
trunk free space = 0 MB
storage server count = 1
active server count = 1
storage server port = 23000
storage HTTP port = 80
store path count = 1
subdir count per path = 256
current write server index = 0
current trunk file id = 0

        Storage 1:
                id = 10.20.20.35
                ip_addr = 10.20.20.35  ACTIVE
                http domain = 
                version = 6.07
                join time = 2021-07-13 17:34:42
                up time = 2021-07-14 17:10:13
                total storage = 102,388 MB
                free storage = 93,206 MB
                upload priority = 10
                store_path_count = 1
                subdir_count_per_path = 256
                storage_port = 23000
                storage_http_port = 80
                current_write_path = 0
                source storage id = 
                if_trunk_server = 0
                connection.alloc_count = 256
                connection.current_count = 0
                connection.max_count = 0
                total_upload_count = 1006
                success_upload_count = 1006
                total_append_count = 0
                success_append_count = 0
                total_modify_count = 0
                success_modify_count = 0
                total_truncate_count = 0
                success_truncate_count = 0
                total_set_meta_count = 0
                success_set_meta_count = 0
                total_delete_count = 0
                success_delete_count = 0
                total_download_count = 0
                success_download_count = 0
                total_get_meta_count = 0
                success_get_meta_count = 0
                total_create_link_count = 0
                success_create_link_count = 0
                total_delete_link_count = 0
                success_delete_link_count = 0
                total_upload_bytes = 3122056616
                success_upload_bytes = 3122056616
                total_append_bytes = 0
                success_append_bytes = 0
                total_modify_bytes = 0
                success_modify_bytes = 0
                total_download_bytes = 0
                success_download_bytes = 0
                total_sync_in_bytes = 0
                success_sync_in_bytes = 0
                total_sync_out_bytes = 0
                success_sync_out_bytes = 0
                total_file_open_count = 1006
                success_file_open_count = 1006
                total_file_read_count = 0
                success_file_read_count = 0
                total_file_write_count = 12072
                success_file_write_count = 12072
                last_heart_beat_time = 2021-07-14 17:12:44
                last_source_update = 2021-07-14 15:59:55
                last_sync_update = 1970-01-01 08:00:00
                last_synced_timestamp = 1970-01-01 08:00:00
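A quick health check is to count the storages reported as ACTIVE. In production you would pipe the real command, `fdfs_monitor /etc/fdfs/storage.conf | grep -c ' ACTIVE'`; here a trimmed sample of the output above stands in for the live command:

```shell
# Count ACTIVE storage servers in fdfs_monitor output.
# (A trimmed sample of the monitor output stands in for the live command.)
sample='        Storage 1:
                id = 10.20.20.35
                ip_addr = 10.20.20.35  ACTIVE'
active=$(printf '%s\n' "$sample" | grep -c ' ACTIVE')
echo "active storages: $active"   # → active storages: 1
```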

Start Nginx:

[root@node02 data]# /data/apps/nginx-1.21.1/sbin/nginx -t
ngx_http_fastdfs_set pid=4901
nginx: the configuration file /data/apps/nginx-1.21.1/conf/nginx.conf syntax is ok
nginx: configuration file /data/apps/nginx-1.21.1/conf/nginx.conf test is successful

[root@node02 data]# /data/apps/nginx-1.21.1/sbin/nginx
ngx_http_fastdfs_set pid=4902

Download a file as a test:

[root@node02 data]# fdfs_download_file  /etc/fdfs/client.conf group1/M00/00/0A/ChQUI2DutyGACKWzAADhZvSQ2cY200.png
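The file_id passed to `fdfs_download_file` maps directly onto the storage server's directory tree: `M00` selects store path index 0, and the next two components are the two-level data subdirectories. A sketch of that mapping (the base path `/data/fastdfs` is an assumption; check `store_path0` in storage.conf):

```shell
# Map a FastDFS file_id to its location on the storage server's disk.
# /data/fastdfs is an assumed store_path0; adjust to your storage.conf.
file_id="group1/M00/00/0A/ChQUI2DutyGACKWzAADhZvSQ2cY200.png"
rel="${file_id#*/}"       # drop the group name   -> M00/00/0A/...
rel="data/${rel#M00/}"    # M00 -> <store_path0>/data
echo "/data/fastdfs/${rel}"
# → /data/fastdfs/data/00/0A/ChQUI2DutyGACKWzAADhZvSQ2cY200.png
```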

Test with curl:

[root@node02 data]# curl -I "http://10.20.20.35/group1/M00/00/00/ChQUD2Dul5mAQKCoAC9azGybwbI681.png"
HTTP/1.1 200 OK
Server: nginx/1.21.1
Date: Wed, 14 Jul 2021 10:19:00 GMT
Content-Type: image/png
Content-Length: 3103436
Last-Modified: Wed, 14 Jul 2021 07:51:53 GMT
Connection: keep-alive
Accept-Ranges: bytes

The migration appears to have succeeded.

Common FastDFS Commands

Check cluster status

fdfs_monitor /etc/fdfs/client.conf

Delete a node (note: a storage can generally only be deleted once it is offline; an ACTIVE node cannot be removed)

fdfs_monitor /etc/fdfs/client.conf delete group1 10.20.20.15

My steps were as follows:

[root@ecs-3f8a-0003 fdfs]# hostname -I
172.16.2.12
[root@ecs-3f8a-0003 fdfs]# pwd
/etc/fdfs
[root@ecs-3f8a-0003 fdfs]# find -type f |xargs grep 172.23.12.46
./client.conf:tracker_server=172.23.12.46:22122
./storage.conf:tracker_server=172.23.12.46:22122
./mod_fastdfs.conf:tracker_server=172.23.12.46:22122

[root@ecs-3f8a-0003 ~]# fdfs_download_file /etc/fdfs/client.conf group1/M00/00/00/rBcMLl8g7kyAEDkzAAEay2hc49g500.jpg
[root@ecs-3f8a-0003 ~]# ls
logs  nacos  rBcMLl8g7kyAEDkzAAEay2hc49g500.jpg
[root@ecs-3f8a-0003 ~]# pwd
/root
[root@ecs-3f8a-0003 ~]# ll
total 80
drwxr-xr-x 3 root root  4096 Jul 17 15:11 logs
drwxr-xr-x 4 root root  4096 Jul 17 15:28 nacos
-rw-r--r-- 1 root root 72395 Jul 17 16:03 rBcMLl8g7kyAEDkzAAEay2hc49g500.jpg
[root@ecs-3f8a-0003 ~]# ll -h
total 80K
drwxr-xr-x 3 root root 4.0K Jul 17 15:11 logs
drwxr-xr-x 4 root root 4.0K Jul 17 15:28 nacos
-rw-r--r-- 1 root root  71K Jul 17 16:03 rBcMLl8g7kyAEDkzAAEay2hc49g500.jpg

Integrating Nginx

[root@ecs-3f8a-0003 conf]# curl -I 172.16.2.12/group1/M00/00/00/rBcMLl8g7kyAEDkzAAEay2hc49g500.jpg
HTTP/1.1 200 OK
Server: nginx/1.21.1
Date: Sat, 17 Jul 2021 08:22:51 GMT
Content-Type: image/jpeg
Content-Length: 72395
Last-Modified: Wed, 29 Jul 2020 03:34:36 GMT
Connection: keep-alive
Accept-Ranges: bytes

[root@ecs-3f8a-0003 conf]# curl -I file01.lavenliu.com/group1/M00/00/00/rBcMLl8g7kyAEDkzAAEay2hc49g500.jpg
curl: (7) Failed connect to file01.lavenliu.com:80; Connection refused

The domain-based request is refused, most likely because file01.lavenliu.com does not resolve to this host or nothing is listening on port 80 for that name, while direct access by IP works.

The front-end pages return 404 and the images cannot be found:

# 404
http://11.11.11.11:10081/file/group1/M01/00/04/rBcMLl_gE7uATJR1AAWvWeZi32s062.png
http://11.11.11.11:10081/file/group1/M01/00/06/rBcMLmA2A7eAdv6QAAC79Kkdcmk374.jpg
http://11.11.11.11:10081/file/group1/M01/00/05/rBcMLmAFAM6AZSeOAASnT2e3fUY073.png

# 200
https://h5.lavenliu.com/file/group1/M01/00/06/rBcMLmAfS3iAXnYiAADE-h-zygs369.png
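One way to narrow down such a 404 is to strip the gateway's `/file/` prefix and request the object from the storage Nginx directly, which tells you whether the file itself is missing or only the gateway routing is broken. A sketch of that URL rewrite (the gateway prefix `/file/` and the storage host below are assumptions from the URLs above):

```shell
# Rewrite a failing gateway URL into a direct storage-Nginx URL for testing.
# The /file/ prefix and storage host 10.20.20.35 are assumptions; in practice
# you would then run: curl -I "$direct"
url="http://11.11.11.11:10081/file/group1/M01/00/04/rBcMLl_gE7uATJR1AAWvWeZi32s062.png"
path="${url#*/file/}"                 # -> group1/M01/00/04/...
direct="http://10.20.20.35/${path}"   # storage Nginx serves /group1/... directly
echo "$direct"
```

If the direct URL returns 200 while the gateway URL returns 404, the problem lies in the gateway/front-end routing rather than in FastDFS itself.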