FastDFS, the Painful Cluster and Load Balancing (Part 2): tracker and storage cluster configuration


Interesting things

Continuing from the previous post.

What did you do today

With RHEL 7 / CentOS 7, firewalld was introduced to manage iptables. IMHO, firewalld is more suited for workstations than for server environments.
It is possible to go back to a more classic iptables setup. First, stop and mask the firewalld service.

  • In the previous post we noticed that there was no iptables file under /etc/sysconfig/, and when we ran iptables -P INPUT ACCEPT followed by
    service iptables save to write the firewall rules, it threw "The service command supports only basic LSB actions (start, stop, restart, try-restart, reload, force-reload, status). For other actions, please try to use systemctl." Checking the firewall state with service iptables status also reported "Unit iptables.service could not be found".
  • Some digging reveals that CentOS 7 introduced firewalld (enabled by default) to manage the firewall. To go back to an iptables setup, we first need to stop firewalld with systemctl stop firewalld, then mask it with systemctl mask firewalld, which creates a symbolic link from /etc/systemd/system/firewalld.service to /dev/null so the service can no longer be started; see the sketch below.
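A minimal sketch of the switch, assuming a stock CentOS 7 box; the status call at the end is only there to confirm the mask took effect:

    # stop the running firewalld service
    systemctl stop firewalld
    # mask it: this links /etc/systemd/system/firewalld.service to /dev/null
    systemctl mask firewalld
    # verify: the unit should now show up as masked and inactive (dead)
    systemctl status firewalld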
  • Install iptables-services: yum install iptables-services


  • Enable iptables: systemctl enable iptables


  • Start iptables: systemctl start iptables. If we now look in /etc/sysconfig/, the iptables file has appeared.


  • Open /etc/sysconfig/iptables and add a rule allowing access to port 22122; simply copy the existing ACCEPT line above it and change the port number (the edited file is sketched below):
    -A INPUT -p tcp -m state --state NEW -m tcp --dport 22122 -j ACCEPT

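On a stock iptables-services install the edited /etc/sysconfig/iptables ends up looking roughly like this (the surrounding defaults may differ slightly on your machine); the key point is that the new rule has to sit above the final REJECT lines, otherwise it is never reached:

    *filter
    :INPUT ACCEPT [0:0]
    :FORWARD ACCEPT [0:0]
    :OUTPUT ACCEPT [0:0]
    -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
    -A INPUT -p icmp -j ACCEPT
    -A INPUT -i lo -j ACCEPT
    -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
    # FastDFS tracker port
    -A INPUT -p tcp -m state --state NEW -m tcp --dport 22122 -j ACCEPT
    -A INPUT -j REJECT --reject-with icmp-host-prohibited
    -A FORWARD -j REJECT --reject-with icmp-host-prohibited
    COMMIT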
  • Make the firewall rules take effect: service iptables save


  • Restart the firewall; either of the two commands works: systemctl restart iptables
    service iptables restart

  • Start the tracker with /etc/init.d/fdfs_trackerd start, then use ps -ef|grep fdfs to check whether the tracker started properly (a consolidated sketch follows).
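Put together, starting the tracker and sanity-checking it looks roughly like this (the paths follow the base_path we set in tracker.conf):

    # start the tracker daemon via the init script installed by FastDFS
    /etc/init.d/fdfs_trackerd start
    # confirm the fdfs_trackerd process is running
    ps -ef | grep fdfs
    # base_path now contains the data/ and logs/ directories
    ls /fastdfs/tracker
    # follow the tracker log while the cluster comes up
    tail -f /fastdfs/tracker/logs/trackerd.log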
  • Remember that when we configured tracker.conf earlier we changed base_path to /fastdfs/tracker; going into /fastdfs/tracker we now find two new directories, data and logs.


  • Opening the logs directory, we find trackerd.log:

[2017-12-27 10:38:41] INFO - FastDFS v5.05, base_path=/fastdfs/tracker, run_by_group=, run_by_user=, connect_timeout=30s, network_timeout=60s, port=22122, bind_addr=, max_connections=256, accept_threads=1, work_threads=4, store_lookup=0, store_group=, store_server=0, store_path=0, reserved_storage_space=10.00%, download_server=0, allow_ip_count=-1, sync_log_buff_interval=10s, check_active_interval=120s, thread_stack_size=64 KB, storage_ip_changed_auto_adjust=1, storage_sync_file_max_delay=86400s, storage_sync_file_max_time=300s, use_trunk_file=0, slot_min_size=256, slot_max_size=16 MB, trunk_file_size=64 MB, trunk_create_file_advance=0, trunk_create_file_time_base=02:00, trunk_create_file_interval=86400, trunk_create_file_space_threshold=20 GB, trunk_init_check_occupying=0, trunk_init_reload_from_binlog=0, trunk_compress_binlog_min_interval=0, use_storage_id=0, id_type_in_filename=ip, storage_id_count=0, rotate_error_log=0, error_log_rotate_time=00:00, rotate_error_log_size=0, log_file_keep_days=0, store_slave_file_use_link=0, use_connection_pool=0, g_connection_pool_max_idle_time=3600s

  • Opening the data directory, we find fdfs_trackerd.pid (containing 3185, the process ID) and storage_changelog.dat (which is empty).

  • Next we need to configure storage, using group1 (192.168.12.33, 192.168.12.44) and group2 (192.168.12.55, 192.168.12.66) as our storage nodes.

  • Copy storage.conf.sample and rename it to storage.conf.


  • Copy a storage.conf to 192.168.12.44, 192.168.12.55 and 192.168.12.66 as well.


  • Edit group1's storage.conf and change base_path to /fastdfs/storage.


  • We can see that the storage server port is 23000. Set group_name to the group the node belongs to: group_name=group1 for 192.168.12.33 and 192.168.12.44, and group_name=group2 for 192.168.12.55 and 192.168.12.66.


  • The default number of storage store paths is 1; change store_path0 to /fastdfs/storage.


  • Configure tracker_server=192.168.12.11:22122 and tracker_server=192.168.12.22:22122;
    these are the addresses of tracker1 and tracker2.


  • We can also see that the storage web port is 8888. A consolidated sketch of the edited storage.conf follows.


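Pulling those edits together, the relevant lines of /etc/fdfs/storage.conf on 192.168.12.33 end up roughly as follows (everything else stays at its default; treat this as a sketch rather than a full file):

    # group this node belongs to
    group_name=group1
    # data and log directory of the storage node
    base_path=/fastdfs/storage
    # one store path, pointing at the same directory
    store_path_count=1
    store_path0=/fastdfs/storage
    # storage service port (default)
    port=23000
    # list both trackers so either one can be used
    tracker_server=192.168.12.11:22122
    tracker_server=192.168.12.22:22122
    # built-in web server port (default)
    http.server_port=8888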
  • Then copy the finished storage.conf to 192.168.12.44, 192.168.12.55 and 192.168.12.66. 192.168.12.44 needs no changes; on 192.168.12.55 and 192.168.12.66 only group_name has to be changed to group2.

  • Create /fastdfs/storage: mkdir -p /fastdfs/storage


  • In the same way, add port 23000 to iptables:
    -A INPUT -p tcp -m state --state NEW -m tcp --dport 23000 -j ACCEPT


  • Start the storage service on 192.168.12.33; the roll-out to all four nodes is sketched below.


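A sketch of rolling the configuration out to the other three nodes; the scp/sed steps are just one convenient way to do it, and any copy-and-edit method works:

    # from 192.168.12.33: push the finished config to the other storage nodes
    scp /etc/fdfs/storage.conf root@192.168.12.44:/etc/fdfs/
    scp /etc/fdfs/storage.conf root@192.168.12.55:/etc/fdfs/
    scp /etc/fdfs/storage.conf root@192.168.12.66:/etc/fdfs/
    # on 192.168.12.55 and 192.168.12.66 only: switch the group
    sed -i 's/^group_name=group1/group_name=group2/' /etc/fdfs/storage.conf
    # on every storage node: create the data directory and open port 23000
    mkdir -p /fastdfs/storage
    vi /etc/sysconfig/iptables        # add the 23000 ACCEPT rule shown above
    service iptables restart
    # start the storage daemon and check that it is up
    /etc/init.d/fdfs_storaged start
    ps -ef | grep fdfs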
  • Going into /fastdfs/storage, we find that logs and data directories have been created here as well.


  • Now shut everything down: tracker1, tracker2, and all of group1 and group2. Then start tracker2 (192.168.12.22) first. tracker2 tries to connect to tracker1, which of course is not running, so the connection fails and tracker2 becomes the tracker leader.


  • Start tracker1. Since tracker2 is already the leader at this point, the log prints "the tracker leader 192.168.12.22:22122".


  • Start 192.168.12.33. It connects to tracker1 and tracker2 successfully, with tracker2 as the tracker leader, and it also successfully connects to 192.168.12.44 in the same group.


  • Start 192.168.12.44; it connects to tracker1, tracker2 and 192.168.12.33 successfully.


  • Start 192.168.12.55 and 192.168.12.66; they connect to each other and to tracker1 and tracker2 successfully.


  • With that, the tracker and storage clusters are up, so we can test the tracker's high availability. The tracker leader is currently tracker2 (192.168.12.22); if we shut tracker2 down, tracker1 (192.168.12.11) becomes the tracker leader.


  • Picking any storage node from group1 or group2, we can also see in its log that the tracker leader has switched to tracker1. Sweet.


  • Then I started tracker2 again and found it still could not unseat tracker1 as the leader. The whole failover exercise is summarized in the sketch below.


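The failover exercise boils down to the start/stop order below; the grep is simply a convenient way to watch the leader-election messages that appear in the screenshots (log paths follow the base_path settings above):

    # 0) stop everything
    /etc/init.d/fdfs_trackerd stop      # on 192.168.12.11 and 192.168.12.22
    /etc/init.d/fdfs_storaged stop      # on all four storage nodes
    # 1) start tracker2 first; with tracker1 down it elects itself leader
    /etc/init.d/fdfs_trackerd start     # on 192.168.12.22
    # 2) then tracker1; it learns that 192.168.12.22:22122 is already the leader
    /etc/init.d/fdfs_trackerd start     # on 192.168.12.11
    # 3) start the storage nodes and see which tracker they report as leader
    /etc/init.d/fdfs_storaged start     # on .33, .44, .55 and .66
    grep "tracker leader" /fastdfs/storage/logs/storaged.log
    # 4) stop tracker2 and confirm the leader switches to 192.168.12.11
    /etc/init.d/fdfs_trackerd stop      # on 192.168.12.22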
  • Once all tracker and storage nodes have started successfully, we can check the storage cluster status from any storage node. I'll use 192.168.12.33 here; the command is: /usr/bin/fdfs_monitor
    /etc/fdfs/storage.conf

 [root@localhost logs]# /usr/bin/fdfs_monitor /etc/fdfs/storage.conf
[2017-12-27 16:26:30] DEBUG - base_path=/fastdfs/storage, connect_timeout=30, network_timeout=60, tracker_server_count=2, anti_steal_token=0, anti_steal_secret_key length=0, use_connection_pool=0, g_connection_pool_max_idle_time=3600s, use_storage_id=0, storage server id count: 0

server_count=2, server_index=0

tracker server is 192.168.12.11:22122

group count: 2

Group 1:
group name = group1
disk total space = 18414 MB
disk free space = 17044 MB
trunk free space = 0 MB
storage server count = 2
active server count = 2
storage server port = 23000
storage HTTP port = 8888
store path count = 1
subdir count per path = 256
current write server index = 0
current trunk file id = 0

    Storage 1:
        id = 192.168.12.33
        ip_addr = 192.168.12.33 (localhost.localdomain)  ACTIVE
        http domain = 
        version = 5.05
        join time = 2017-12-27 11:52:50
        up time = 2017-12-27 16:01:59
        total storage = 18414 MB
        free storage = 17151 MB
        upload priority = 10
        store_path_count = 1
        subdir_count_per_path = 256
        storage_port = 23000
        storage_http_port = 8888
        current_write_path = 0
        source storage id = 
        if_trunk_server = 0
        connection.alloc_count = 256
        connection.current_count = 1
        connection.max_count = 1
        total_upload_count = 0
        success_upload_count = 0
        total_append_count = 0
        success_append_count = 0
        total_modify_count = 0
        success_modify_count = 0
        total_truncate_count = 0
        success_truncate_count = 0
        total_set_meta_count = 0
        success_set_meta_count = 0
        total_delete_count = 0
        success_delete_count = 0
        total_download_count = 0
        success_download_count = 0
        total_get_meta_count = 0
        success_get_meta_count = 0
        total_create_link_count = 0
        success_create_link_count = 0
        total_delete_link_count = 0
        success_delete_link_count = 0
        total_upload_bytes = 0
        success_upload_bytes = 0
        total_append_bytes = 0
        success_append_bytes = 0
        total_modify_bytes = 0
        success_modify_bytes = 0
        stotal_download_bytes = 0
        success_download_bytes = 0
        total_sync_in_bytes = 0
        success_sync_in_bytes = 0
        total_sync_out_bytes = 0
        success_sync_out_bytes = 0
        total_file_open_count = 0
        success_file_open_count = 0
        total_file_read_count = 0
        success_file_read_count = 0
        total_file_write_count = 0
        success_file_write_count = 0
        last_heart_beat_time = 2017-12-27 16:25:58
        last_source_update = 1970-01-01 08:00:00
        last_sync_update = 1970-01-01 08:00:00
        last_synced_timestamp = 1970-01-01 08:00:00 
    Storage 2:
        id = 192.168.12.44
        ip_addr = 192.168.12.44  ACTIVE
        http domain = 
        version = 5.05
        join time = 2017-12-27 13:30:50
        up time = 2017-12-27 15:53:28
        total storage = 18414 MB
        free storage = 17044 MB
        upload priority = 10
        store_path_count = 1
        subdir_count_per_path = 256
        storage_port = 23000
        storage_http_port = 8888
        current_write_path = 0
        source storage id = 
        if_trunk_server = 0
        connection.alloc_count = 256
        connection.current_count = 1
        connection.max_count = 1
        total_upload_count = 0
        success_upload_count = 0
        total_append_count = 0
        success_append_count = 0
        total_modify_count = 0
        success_modify_count = 0
        total_truncate_count = 0
        success_truncate_count = 0
        total_set_meta_count = 0
        success_set_meta_count = 0
        total_delete_count = 0
        success_delete_count = 0
        total_download_count = 0
        success_download_count = 0
        total_get_meta_count = 0
        success_get_meta_count = 0
        total_create_link_count = 0
        success_create_link_count = 0
        total_delete_link_count = 0
        success_delete_link_count = 0
        total_upload_bytes = 0
        success_upload_bytes = 0
        total_append_bytes = 0
        success_append_bytes = 0
        total_modify_bytes = 0
        success_modify_bytes = 0
        stotal_download_bytes = 0
        success_download_bytes = 0
        total_sync_in_bytes = 0
        success_sync_in_bytes = 0
        total_sync_out_bytes = 0
        success_sync_out_bytes = 0
        total_file_open_count = 0
        success_file_open_count = 0
        total_file_read_count = 0
        success_file_read_count = 0
        total_file_write_count = 0
        success_file_write_count = 0
        last_heart_beat_time = 2017-12-27 16:26:27
        last_source_update = 1970-01-01 08:00:00
        last_sync_update = 1970-01-01 08:00:00
        last_synced_timestamp = 1970-01-01 08:00:00 

Group 2:
group name = group2
disk total space = 17394 MB
disk free space = 16128 MB
trunk free space = 0 MB
storage server count = 2
active server count = 2
storage server port = 23000
storage HTTP port = 8888
store path count = 1
subdir count per path = 256
current write server index = 0
current trunk file id = 0

    Storage 1:
        id = 192.168.12.55
        ip_addr = 192.168.12.55  ACTIVE
        http domain = 
        version = 5.05
        join time = 2017-12-27 13:35:06
        up time = 2017-12-27 15:56:07
        total storage = 17394 MB
        free storage = 16128 MB
        upload priority = 10
        store_path_count = 1
        subdir_count_per_path = 256
        storage_port = 23000
        storage_http_port = 8888
        current_write_path = 0
        source storage id = 
        if_trunk_server = 0
        connection.alloc_count = 256
        connection.current_count = 1
        connection.max_count = 1
        total_upload_count = 0
        success_upload_count = 0
        total_append_count = 0
        success_append_count = 0
        total_modify_count = 0
        success_modify_count = 0
        total_truncate_count = 0
        success_truncate_count = 0
        total_set_meta_count = 0
        success_set_meta_count = 0
        total_delete_count = 0
        success_delete_count = 0
        total_download_count = 0
        success_download_count = 0
        total_get_meta_count = 0
        success_get_meta_count = 0
        total_create_link_count = 0
        success_create_link_count = 0
        total_delete_link_count = 0
        success_delete_link_count = 0
        total_upload_bytes = 0
        success_upload_bytes = 0
        total_append_bytes = 0
        success_append_bytes = 0
        total_modify_bytes = 0
        success_modify_bytes = 0
        stotal_download_bytes = 0
        success_download_bytes = 0
        total_sync_in_bytes = 0
        success_sync_in_bytes = 0
        total_sync_out_bytes = 0
        success_sync_out_bytes = 0
        total_file_open_count = 0
        success_file_open_count = 0
        total_file_read_count = 0
        success_file_read_count = 0
        total_file_write_count = 0
        success_file_write_count = 0
        last_heart_beat_time = 2017-12-27 16:26:07
        last_source_update = 1970-01-01 08:00:00
        last_sync_update = 1970-01-01 08:00:00
        last_synced_timestamp = 1970-01-01 08:00:00 
    Storage 2:
        id = 192.168.12.66
        ip_addr = 192.168.12.66  ACTIVE
        http domain = 
        version = 5.05
        join time = 2017-12-27 13:36:48
        up time = 2017-12-27 15:56:54
        total storage = 18414 MB
        free storage = 17044 MB
        upload priority = 10
        store_path_count = 1
        subdir_count_per_path = 256
        storage_port = 23000
        storage_http_port = 8888
        current_write_path = 0
        source storage id = 192.168.12.55
        if_trunk_server = 0
        connection.alloc_count = 256
        connection.current_count = 1
        connection.max_count = 1
        total_upload_count = 0
        success_upload_count = 0
        total_append_count = 0
        success_append_count = 0
        total_modify_count = 0
        success_modify_count = 0
        total_truncate_count = 0
        success_truncate_count = 0
        total_set_meta_count = 0
        success_set_meta_count = 0
        total_delete_count = 0
        success_delete_count = 0
        total_download_count = 0
        success_download_count = 0
        total_get_meta_count = 0
        success_get_meta_count = 0
        total_create_link_count = 0
        success_create_link_count = 0
        total_delete_link_count = 0
        success_delete_link_count = 0
        total_upload_bytes = 0
        success_upload_bytes = 0
        total_append_bytes = 0
        success_append_bytes = 0
        total_modify_bytes = 0
        success_modify_bytes = 0
        stotal_download_bytes = 0
        success_download_bytes = 0
        total_sync_in_bytes = 0
        success_sync_in_bytes = 0
        total_sync_out_bytes = 0
        success_sync_out_bytes = 0
        total_file_open_count = 0
        success_file_open_count = 0
        total_file_read_count = 0
        success_file_read_count = 0
        total_file_write_count = 0
        success_file_write_count = 0
        last_heart_beat_time = 2017-12-27 16:26:24
        last_source_update = 1970-01-01 08:00:00
        last_sync_update = 1970-01-01 08:00:00
        last_synced_timestamp = 1970-01-01 08:00:00 
[root@localhost logs]# 
  • The output matches exactly the storage cluster we planned: there are 2 tracker servers and 2 groups, with group1 holding 192.168.12.33 and 192.168.12.44 and group2 holding 192.168.12.55 and 192.168.12.66.
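If you just want the group membership without reading the full dump, filtering on the fields shown above is enough:

    # one line per group plus one per storage node, including its ACTIVE state
    /usr/bin/fdfs_monitor /etc/fdfs/storage.conf | grep -E "group name|ip_addr"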

  • Use cd /usr/bin && ls |grep fdfs to list all the fdfs commands.


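Running it on one of the nodes should produce something like the binaries named in the comment; the exact set depends on the FastDFS version, so treat the list as indicative:

    # e.g. fdfs_trackerd, fdfs_storaged, fdfs_monitor, fdfs_upload_file,
    # fdfs_download_file, fdfs_delete_file, fdfs_file_info, fdfs_test ...
    cd /usr/bin && ls | grep fdfs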

Warning

See the next post.
