Cluster environment
Configure the base environment
Add the ceph.repo repository
wget -O /etc/yum.repos.d/ceph.repo https://raw.githubusercontent.com/aishangwei/ceph-demo/master/ceph-deploy/ceph.repo
yum makecache
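If that GitHub URL is unreachable, a repo file with the same effect can be written by hand. The sketch below is only an assumption about what the referenced file points at (Luminous packages for EL7 from download.ceph.com); adjust the baseurl to a local mirror if needed:

cat > /etc/yum.repos.d/ceph.repo <<EOF
[ceph]
name=Ceph x86_64 packages
baseurl=https://download.ceph.com/rpm-luminous/el7/x86_64/
enabled=1
gpgcheck=0
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-luminous/el7/noarch/
enabled=1
gpgcheck=0
EOF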
Configure NTP
yum -y install ntpdate ntp
ntpdate cn.ntp.org.cn
systemctl restart ntpd ntpdate
systemctl enable ntpd ntpdate
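To confirm time synchronisation is actually working on each node, ntpq can list the peers ntpd is tracking:

ntpq -p    # at least one peer line should show a non-zero "reach" value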
Create a user and set up passwordless SSH login
useradd ceph-admin
echo "ceph-admin" | passwd --stdin ceph-admin
echo "ceph-admin ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph-admin
sudo chmod 0440 /etc/sudoers.d/ceph-admin
Configure host resolution
cat >>/etc/hosts<<EOF
10.1.10.201 ceph01
10.1.10.202 ceph02
10.1.10.203 ceph03
EOF
Configure sudo to not require a tty
sed -i 's/Defaults requiretty/#Defaults requiretty/' /etc/sudoers
Deploy the cluster with ceph-deploy
Configure passwordless login
su - ceph-admin
ssh-keygen
ssh-copy-id ceph-admin@ceph01
ssh-copy-id ceph-admin@ceph02
ssh-copy-id ceph-admin@ceph03
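Optionally (not part of the original steps), an ~/.ssh/config entry saves specifying the ceph-admin user on every ceph-deploy call:

cat >> ~/.ssh/config <<EOF
Host ceph01 ceph02 ceph03
    User ceph-admin
EOF
chmod 600 ~/.ssh/config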
Install ceph-deploy
sudo yum install -y ceph-deploy python-pip
Create the cluster on the deploy node
mkdir my-cluster; cd my-cluster
ceph-deploy new ceph01 ceph02 ceph03
Edit the ceph.conf configuration file
cat >>/home/ceph-admin/my-cluster/ceph.conf<<EOF
public network = 10.1.10.0/16
cluster network = 10.1.10.0/16
EOF
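After ceph-deploy new plus the two appended lines, ceph.conf should look roughly like the sketch below; the fsid is generated per cluster, so the value shown is only a placeholder:

[global]
fsid = <generated-uuid>
mon_initial_members = ceph01, ceph02, ceph03
mon_host = 10.1.10.201,10.1.10.202,10.1.10.203
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public network = 10.1.10.0/16
cluster network = 10.1.10.0/16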
Install the Ceph packages (this replaces ceph-deploy install node1 node2; the command below must be run on every node)
sudo yum install -y ceph ceph-radosgw
Configure the initial monitor(s) and gather all the keys
ceph-deploy mon create-initial
ls -l *.keyring
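If the monitors formed quorum, the ls above should show the admin and bootstrap keyrings, typically:

ceph.client.admin.keyring
ceph.bootstrap-mgr.keyring
ceph.bootstrap-osd.keyring
ceph.bootstrap-mds.keyring
ceph.bootstrap-rgw.keyring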
Copy the configuration and admin key to each node
ceph-deploy admin ceph01 ceph02 ceph03
Configure the OSDs
su - ceph-admin
cd /home/ceph-admin/my-cluster
for dev in /dev/sdb /dev/sdc /dev/sdd
do
  ceph-deploy disk zap ceph01 $dev
  ceph-deploy osd create ceph01 --data $dev
  ceph-deploy disk zap ceph02 $dev
  ceph-deploy osd create ceph02 --data $dev
  ceph-deploy disk zap ceph03 $dev
  ceph-deploy osd create ceph03 --data $dev
done
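Once the loop finishes, the three nodes should contribute nine OSDs in total; a quick sanity check from the deploy node:

sudo ceph -s         # cluster status; full health/PG reporting also needs the mgr deployed in the next step
sudo ceph osd tree   # all nine OSDs should be listed as "up" under their hosts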
Deploy mgr daemons (required only from the Luminous release onward)
ceph-deploy mgr create ceph01 ceph02 ceph03
Enable the dashboard module
sudo chown -R ceph-admin /etc/ceph/
ceph mgr module enable dashboard
netstat -lntup | grep 7000
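If the dashboard does not come up on the expected address, the bind address and port can be set explicitly; the values below assume ceph01 hosts the active mgr and are otherwise just an illustration:

ceph config-key set mgr/dashboard/server_addr 10.1.10.201
ceph config-key set mgr/dashboard/server_port 7000
sudo systemctl restart ceph-mgr@ceph01    # restart the active mgr so the dashboard rebinds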
The dashboard is then reachable at http://10.1.10.201:7000
Configure Ceph block storage
Check whether the environment meets the requirements for block devices
uname -r
modprobe rbd
echo $?
Create a pool and a block device
ceph osd lspools
ceph osd pool create rbd 128
Choosing a pg_num value is mandatory because it cannot be calculated automatically; a few commonly used values are listed below (see the worked example after the list):
Fewer than 5 OSDs: set pg_num to 128
5 to 10 OSDs: set pg_num to 512
10 to 50 OSDs: set pg_num to 4096
More than 50 OSDs: understand the trade-offs and calculate the pg_num value yourself
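The common values above come from the usual rule of thumb: total PGs ≈ (number of OSDs × 100) / replica count, rounded up to the next power of two. Applied to the nine OSDs created earlier with the default replica size of 3, the arithmetic looks like this; the commands are only a sketch of how pg_num could be raised later, not a required step:

# (9 OSDs * 100) / 3 replicas = 300  → next power of two = 512
ceph osd pool set rbd pg_num 512
ceph osd pool set rbd pgp_num 512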
Create a block device on the client
rbd create rbd1 --size 1G --image-feature layering --name client.admin
Map the block device
rbd map --image rbd1 --name client.admin
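rbd map prints the device path; the mapping can also be confirmed afterwards:

rbd showmapped    # should list image rbd1 mapped to /dev/rbd0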
Create a filesystem and mount it
fdisk -l /dev/rbd0
mkfs.xfs /dev/rbd0
mkdir /mnt/ceph-disk1
mount /dev/rbd0 /mnt/ceph-disk1
df -h /mnt/ceph-disk1
Test writing data
dd if=/dev/zero of=/mnt/ceph-disk1/file1 count=100 bs=1M
Stress testing with fio
Install the fio benchmarking tool
yum install zlib-devel -y
yum install ceph-devel -y
git clone git://git.kernel.dk/fio.git
cd fio/
./configure
make; make install
Test disk performance (note: the write and random-write jobs below write directly to /dev/rbd0 and will destroy the XFS filesystem created above)
fio -direct=1 -iodepth=1 -rw=read -ioengine=libaio -bs=2k -size=100G -numjobs=128 -runtime=30 -group_reporting -filename=/dev/rbd0 -name=readiops
fio -direct=1 -iodepth=1 -rw=write -ioengine=libaio -bs=2k -size=100G -numjobs=128 -runtime=30 -group_reporting -filename=/dev/rbd0 -name=writeiops
fio -direct=1 -iodepth=1 -rw=randread -ioengine=libaio -bs=2k -size=100G -numjobs=128 -runtime=30 -group_reporting -filename=/dev/rbd0 -name=randreadiops
fio -direct=1 -iodepth=1 -rw=randwrite -ioengine=libaio -bs=2k -size=100G -numjobs=128 -runtime=30 -group_reporting -filename=/dev/rbd0 -name=randwriteiops