
OpenStack

Department of Computer Science and Information Engineering
Chaoyang University of Technology
Taichung, Taiwan, Republic of China

Instructor: De-Yu Wang (王德譽)
E-mail: dywang@csie.cyut.edu.tw
Homepage:
Phone: (04) ext 4538
Office: E738

March 9, 2019

Instructor: De-Yu Wang
Homepage:
Phone: (04) ext
Office: E738

References
1. OpenStack official website (Openstack 官網)
2. Install openstack on CentOS 6.4 using packstack
3. OpenStack Installation Guide for Red Hat Enterprise Linux, CentOS, and Fedora
4. OpenStack Running an Instance
5. virsh command
6. Openstack Install Guide


Contents

Part I: CentOS 7 + Queens

1 Environment Preparation
  1.1 Enable the OpenStack repository
  1.2 OpenStack packages
  1.3 SQL database
  1.4 Message queue
  1.5 Memcached
  1.6 Etcd

2 Keystone
  2.1 Pre-installation preparation
  2.2 Installation and configuration
  2.3 Configure the HTTP server
  2.4 Set the administrator variables
  2.5 Create domains, projects, users, roles
  2.6 Verification
  2.7 Environment scripts

3 Glance
  3.1 Database preparation
  3.2 Environment preparation
  3.3 Installation and configuration
  3.4 Verification - adding images

4 Nova
  4.1 Database preparation
  4.2 Environment preparation
  4.3 Controller node installation and configuration
  4.4 Compute node installation and configuration
  4.5 Add the compute node to the cell database
  4.6 Verification

5 Neutron
  5.1 Database preparation
  5.2 Environment preparation - controller node
  5.3 Provider networks
  5.4 Self-service network
  5.5 Configure the metadata agent
  5.6 Start the services
  5.7 Verification
  5.8 Reset subnet pools
  5.9 q-router troubleshooting

6 Virtual Network
  6.1 Provider network
  6.2 Self-service network
  6.3 Router
  6.4 Verification

7 Instance
  7.1 Add a flavor
  7.2 Login key pair
  7.3 Security group rules
  7.4 Checks before launching an instance
  7.5 Launch a cirros instance on the self-service network
  7.6 Access the cirros instance
  7.7 Launch a crtusb instance on the provider network
  7.8 Verification
  7.9 VNC client
  7.10 Snapshot
  7.11 Deleting images and instances

8 Dashboard
  8.1 Installation and configuration
  8.2 NOVNC settings
  8.3 Firewall
  8.4 Testing
  8.5 Debugging

9 Miscellaneous and Troubleshooting
  9.1 Problem: network cannot be deleted
  9.2 Delete a network
  9.3 Delete a keypair
  9.4 Instance creation ERROR
  9.5 AMQP ERROR
  9.6 Dracut ERROR
  9.7 Nested Virtualization

Part II: DYW Linux 6 + Grizzly

10 Introduction
  10.1 About OpenStack
  10.2 Environment description

11 Message Broker QPID
  11.1 About QPID
  11.2 QPID installation

12 Identity Service Keystone
  12.1 About Keystone
  12.2 Keystone installation
  12.3 Create the Keystone administrator
  12.4 Create the regular user myuser

13 Object Storage Swift
  13.1 About Swift
  13.2 Swift installation
  13.3 Create the Swift storage node
  13.4 Configure the Swift service ring
  13.5 Fix Swift ring configuration errors
  13.6 Swift debugging
  13.7 Configure the Swift Object Storage Proxy service
  13.8 Confirm the Swift Storage installation

14 Image Service Glance
  14.1 About Glance
  14.2 Glance installation
  14.3 Add operating system images to Glance

15 Block Storage Cinder
  15.1 About Cinder
  15.2 Cinder installation
  15.3 Start the Cinder services
  15.4 Create the cinder-volumes group
  15.5 Add an LVM Cinder volume

16 GlusterFS File System
  16.1 About GlusterFS
  16.2 Build a SAMBA server
  16.3 Build a GlusterFS server
  16.4 Gluster client
  16.5 Add a cinder volume backed by a GlusterFS volume

17 Quantum Networking Service
  17.1 About OpenStack Networking
  17.2 Create the Openstack-Quantum user
  17.3 Install Openstack-Quantum
  17.4 Configure openvswitch
  17.5 Configure tenant networks

18 Nova Compute and Controller
  18.1 About Nova
  18.2 Nova installation
  18.3 Create instances
  18.4 Test instances
  18.5 Virtual machine instance management
  18.6 Cinder space
  18.7 Custom virtual machine images

19 Dashboard
  19.1 About Dashboard
  19.2 Dashboard installation
  19.3 Using Dashboard
  19.4 KVM tuning
  19.5 Instance screen saver

20 Debugging
  20.1 Problem 1
  20.2 Problem 2
  20.3 Problem 3
  20.4 Problem 4
  20.5 Problem 5

Part I: CentOS 7 + Queens


Chapter 1  Environment Preparation

1.1 Enable the OpenStack repository

1. The examples in this document use CentOS 7. First install the openstack-queens repository with yum:

[root@ip112 ~]# yum install centos-release-openstack-queens

2. Update the packages:

[root@ip112 ~]# yum upgrade

3. If the update includes a new kernel, the host must be rebooted before the new kernel can be used:

[root@ip112 ~]# reboot

4. After rebooting, confirm that the new kernel is in use:

[root@ip112 ~]# uname -r
el7.x86_64
[root@ip112 ~]# rpm -qa | grep kernel
kernel el7.x86_64
kernel el7.x86_64
kernel-tools-libs el7.x86_64
kernel-tools el7.x86_64

1.2 OpenStack packages

1. Install the OpenStack client:

[root@ip112 ~]# yum install python-openstackclient

2. SELinux is enabled by default, so install openstack-selinux to manage the security policies of the OpenStack services:

[root@ip112 ~]# yum install openstack-selinux

1.3 SQL database

1. Install the MariaDB database packages:

[root@ip112 ~]# yum install mariadb mariadb-server python2-pymysql

2. Create and edit /etc/my.cnf.d/openstack.cnf and add the following settings in the [mysqld] section, where bind-address is the IP of the control node:

[root@ip112 ~]# vim /etc/my.cnf.d/openstack.cnf
[root@ip112 ~]# cat /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address =
default-storage-engine = innodb
innodb_file_per_table = on
max_connections =
collation-server = utf8_general_ci
character-set-server = utf8

3. Start the mariadb service and enable it at boot:

[root@ip112 ~]# systemctl enable mariadb.service
Created symlink from /etc/systemd/system/multi-user.target.wants/mariadb.service to /usr/lib/systemd/system/mariadb.service.
[root@ip112 ~]# systemctl start mariadb.service
[root@ip112 ~]# systemctl status mariadb.service
mariadb.service - MariaDB 10.1 database server
   Loaded: loaded (/usr/lib/systemd/system/mariadb.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed :35:37 CST; 10s ago
  Process: ExecStartPost=/usr/libexec/mysql-check-upgrade (code=exited, status=0/success)

  Process: ExecStartPre=/usr/libexec/mysql-prepare-db-dir %n (code=exited, status=0/success)
  Process: ExecStartPre=/usr/libexec/mysql-check-socket (code=exited, status=0/success)
 Main PID: (mysqld)
   Status: "Taking your SQL requests now..."
   CGroup: /system.slice/mariadb.service
           /usr/libexec/mysqld --basedir=/usr
May 23 16:35:37 ip112.csie.cyut.edu.tw systemd[1]: Started MariaDB 10.1 data...
Hint: Some lines were ellipsized, use -l to show in full.

4. Run the MySQL secure installation; the main purpose is to set the password for the MySQL administrator account root, and the remaining questions can be answered with Enter to accept the defaults:

[root@ip112 ~]# mysql_secure_installation

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user. If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none):
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n] Y
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
 ... Success!

By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them. This is intended only for testing, and to make the installation
go a bit smoother. You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n]
 ... Success!

Normally, root should only be allowed to connect from 'localhost'. This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n]
 ... Success!

By default, MariaDB comes with a database named 'test' that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n]
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n]
 ... Success!

Cleaning up...

All done! If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!

5. Using the password just set, confirm that you can log in to MySQL:

[root@ip112 ~]# mysql -uroot -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 10
Server version: MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> exit
Bye
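As a quick sanity check that is not part of the original transcript, you can confirm that MariaDB is listening on the configured bind-address so the other OpenStack services will be able to reach it over the network. This sketch assumes the controller hostname resolves to that address (it is added to /etc/hosts in the Keystone chapter):

# Hypothetical verification, not from the original notes
[root@ip112 ~]# ss -tlnp | grep 3306
[root@ip112 ~]# mysql -h controller -u root -p -e "SELECT VERSION();"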

1.4 Message queue

1. OpenStack uses a message queue to coordinate operations and status information among the services. OpenStack supports several message queue services, including RabbitMQ, Qpid and ZeroMQ; this document uses RabbitMQ. Install rabbitmq-server:

[root@ip112 ~]# yum install rabbitmq-server

2. Start the rabbitmq-server service and enable it at boot:

[root@ip112 ~]# systemctl enable rabbitmq-server.service
Created symlink from /etc/systemd/system/multi-user.target.wants/rabbitmq-server.service
 to /usr/lib/systemd/system/rabbitmq-server.service.
[root@ip112 ~]# systemctl start rabbitmq-server.service
[root@ip112 ~]# systemctl status rabbitmq-server.service
rabbitmq-server.service - RabbitMQ broker
   Loaded: loaded (/usr/lib/systemd/system/rabbitmq-server.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed :54:02 CST; 9s ago
 Main PID: (beam.smp)
   Status: "Initialized"
   CGroup: /system.slice/rabbitmq-server.service
           /usr/lib64/erlang/erts/bin/beam.smp -W w -A
           erl_child_setup
           inet_gethost
           inet_gethost 4
May 23 16:54:02 ip112.csie.cyut.edu.tw systemd[1]: Started RabbitMQ broker.
Hint: Some lines were ellipsized, use -l to show in full.

3. Add the openstack user to RabbitMQ; the last argument, RABBIT_PASS, is the password, so change it to a password of your own:

[root@ip112 ~]# rabbitmqctl add_user openstack RABBIT_PASS
Creating user "openstack"

4. Give the openstack user configure, write and read permissions in RabbitMQ:

[root@ip112 ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/"...
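A small additional check that is not in the original notes: the standard rabbitmqctl sub-commands below list the users and the per-vhost permissions, which should now include the openstack user with ".*" for configure, write and read:

[root@ip112 ~]# rabbitmqctl list_users
[root@ip112 ~]# rabbitmqctl list_permissions -p /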

1.5 Memcached

1. OpenStack uses Memcached to cache tokens for service authentication. The memcached service normally runs on the controller node. Install the memcached packages:

[root@ip112 ~]# yum install memcached python-memcached

2. Edit /etc/sysconfig/memcached and change the IP in the OPTIONS="-l ...,::1" line to the IP of the controller node so that the other nodes can reach it:

[root@ip112 ~]# vim /etc/sysconfig/memcached
[root@ip112 ~]# cat /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l ,::1"

3. Start memcached and enable it at boot:

[root@ip112 ~]# systemctl enable memcached.service
Created symlink from /etc/systemd/system/multi-user.target.wants/memcached.service
 to /usr/lib/systemd/system/memcached.service.
[root@ip112 ~]# systemctl start memcached.service
[root@ip112 ~]# systemctl status memcached.service
memcached.service - memcached daemon
   Loaded: loaded (/usr/lib/systemd/system/memcached.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed :15:58 CST; 6s ago
 Main PID: (memcached)
   CGroup: /system.slice/memcached.service
           /usr/bin/memcached -p -u memcached -m 64 -c -l
May 23 17:15:58 ip112.csie.cyut.edu.tw systemd[1]: Started memcached daemon.
May 23 17:15:58 ip112.csie.cyut.edu.tw systemd[1]: Starting memcached daemon...
Hint: Some lines were ellipsized, use -l to show in full.

1.6 Etcd

1. Etcd is a distributed key-value store used for distributed key locking; OpenStack can use Etcd to provide reliable shared data services across the cluster. Install etcd:

[root@ip112 ~]# yum install etcd

2. Edit the /etc/etcd/etcd.conf configuration file and set the following variables to the IP of the controller node:

[root@ip112 ~]# vim /etc/etcd/etcd.conf
[root@ip112 ~]# egrep '(#\[[MC]|^ETCD)' /etc/etcd/etcd.conf
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="
ETCD_LISTEN_CLIENT_URLS="
ETCD_NAME="controller"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="
ETCD_ADVERTISE_CLIENT_URLS="
ETCD_INITIAL_CLUSTER="default=
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"

3. Start the etcd service and enable it at boot:

[root@ip112 ~]# systemctl enable etcd.service
[root@ip112 ~]# systemctl start etcd.service
[root@ip112 ~]# systemctl status etcd.service
etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed :23:23 CST; 19s ago
 Main PID: (etcd)
   CGroup: /system.slice/etcd.service
           /usr/bin/etcd --name=controller --data-dir=/var/lib/etcd/default.etcd --listen-client-url...
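As an extra check that is not in the original notes, etcdctl can query the cluster state. This is only a sketch and assumes the client URL configured above (its value is elided in the transcription) is reachable as http://controller:2379, the usual default port:

[root@ip112 ~]# etcdctl --endpoints http://controller:2379 cluster-health
[root@ip112 ~]# etcdctl --endpoints http://controller:2379 member list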


Chapter 2  Keystone

2.1 Pre-installation preparation

1. Keystone is OpenStack's identity service on the controller node. For scalability, this configuration deploys Fernet tokens and the Apache HTTP server to handle requests.

2. The database must be created before installing keystone. Log in to MySQL:

[root@ip112 ~]# mysql -uroot -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 11
Server version: MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]>

3. Create the keystone database:

MariaDB [(none)]> CREATE DATABASE keystone;
Query OK, 1 row affected (0.00 sec)

4. Create the user keystone at localhost with password 123qwe and grant it full access to the keystone database:

MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
    IDENTIFIED BY '123qwe';
Query OK, 0 rows affected (0.00 sec)

5. Create the user keystone at any host ('%') with password 123qwe and grant it full access to the keystone database:

MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
    IDENTIFIED BY '123qwe';
Query OK, 0 rows affected (0.00 sec)

6. Exit MySQL:

MariaDB [(none)]> exit
Bye

2.2 Installation and configuration

1. Install keystone:

[root@ip112 ~]# yum install openstack-keystone httpd mod_wsgi

2. Edit the keystone.conf configuration file and set the database connection and the token provider; these two settings must match the database created in the previous section:

[root@ip112 ~]# vim /etc/keystone/keystone.conf
[root@ip112 ~]# egrep '^(\[token\]|\[database|connect|provider)' /etc/keystone/keystone.conf

[database]
connection = mysql+pymysql://keystone:123qwe@controller/keystone
[token]
provider = fernet

3. Confirm that the keystone account exists, then run keystone-manage db_sync as the keystone user to synchronise the database:

[root@ip112 ~]# id keystone
uid=163(keystone) gid=163(keystone) groups=163(keystone)

[root@ip112 ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone

4. Initialise the Fernet key repositories:

[root@ip112 ~]# keystone-manage fernet_setup --keystone-user \
    keystone --keystone-group keystone
[root@ip112 ~]# keystone-manage credential_setup --keystone-user \
    keystone --keystone-group keystone
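A quick check that is not in the original notes: the two setup commands above populate key repositories under /etc/keystone, so listing them confirms the Fernet and credential keys were generated and are owned by the keystone user:

[root@ip112 ~]# ls -l /etc/keystone/fernet-keys/ /etc/keystone/credential-keys/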

5. Bootstrap the keystone service; the host name controller is mapped directly in /etc/hosts:

[root@ip112 ~]# vim /etc/hosts
[root@ip112 ~]# grep controller /etc/hosts
 controller
[root@ip112 ~]# keystone-manage bootstrap --bootstrap-password 123qwe \
    --bootstrap-admin-url \
    --bootstrap-internal-url \
    --bootstrap-public-url \
    --bootstrap-region-id RegionOne

2.3 Configure the HTTP server

1. Set the HTTP server name to controller:

[root@ip112 ~]# vim /etc/httpd/conf/httpd.conf
[root@ip112 ~]# grep ^ServerName /etc/httpd/conf/httpd.conf
ServerName controller

2. Create the wsgi-keystone.conf link:

[root@ip112 ~]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

3. Start the httpd service and enable it at boot:

[root@ip112 ~]# systemctl enable httpd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service
 to /usr/lib/systemd/system/httpd.service.
[root@ip112 ~]# systemctl restart httpd.service
[root@ip112 ~]# systemctl status httpd.service
httpd.service - The Apache HTTP Server
   Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed :26:17 CST; 22s ago
     Docs: man:httpd(8)
           man:apachectl(8)
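Another check that is not part of the original transcript: once Apache is serving keystone, requesting the Identity API root should return a small JSON version document. This assumes the stock wsgi-keystone.conf shipped with the package, which listens on port 5000:

[root@ip112 ~]# curl http://controller:5000/v3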

2.4 Set the administrator variables

1. Edit admin.token and set the administrator token variables:

[root@ip112 ~]# vim admin.token
[root@ip112 ~]# cat admin.token
export OS_USERNAME=admin
export OS_PASSWORD=123qwe
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=
export OS_IDENTITY_API_VERSION=3

2. To operate as the administrator admin, load these environment variables:

[root@ip112 ~]# source admin.token

2.5 Create domains, projects, users, roles

1. The keystone identity service uses a combination of domains, projects, users and roles; keystone-manage has already created the default domain:

[root@ip112 ~]# source admin.token
[root@ip112 ~]# openstack domain list
ID       Name     Enabled  Description
default  Default  True     The default domain

2. To create your own domain, a domain named deyu can be created as follows:

[root@ip112 ~]# openstack domain create --description "An DE-YU Domain" deyu
Field        Value
description  An DE-YU Domain
enabled      True
id           f95fc426fb0d46f481f0e0fed290c2c8
name         deyu
tags         []

3. In the default domain, create a project named service:

[root@ip112 ~]# openstack project create --domain default \
    --description "Service Project" service
Field        Value
description  Service Project
domain_id    default
enabled      True
id           de4c047c39f6e22e c3
is_domain    False
name         service
parent_id    default
tags         []

4. Routine work should use a non-administrator account, so in the default domain create a project named demo for non-administrative work:

[root@ip112 ~]# openstack project create --domain default \
    --description "Demo Project" demo
Field        Value
description  Demo Project
domain_id    default
enabled      True
id           92d1ec3e04384ad599c1a8f5aed
is_domain    False
name         demo
parent_id    default
tags         []

5. In the default domain create a user named demo; it is created after the password is entered:

[root@ip112 ~]# openstack user create --domain default \
    --password-prompt demo
User Password:
Repeat User Password:
Field                Value
domain_id            default
enabled              True
id                   0d7bb abb97ec0b40c445b1
name                 demo
options              {}
password_expires_at  None

6. Create a role named user:

[root@ip112 ~]# openstack role create user
Field      Value
domain_id  None
id         2fae e4efe94a55bae18259edd
name       user

7. Add the user role to project demo and user demo:

[root@ip112 ~]# openstack role add --project demo --user demo user

2.6 Verification

1. First unset the OS_AUTH_URL and OS_PASSWORD variables:

[root@ip112 ~]# unset OS_AUTH_URL OS_PASSWORD

2. Request an authentication token as the admin user; the token is issued after the password is entered:

[root@ip112 ~]# openstack --os-auth-url \
    --os-project-domain-name Default --os-user-domain-name Default \
    --os-project-name admin --os-username admin token issue
Password:
Field       Value
expires     T01:06:
id          gaaaaabbbgirei4wvh...8cqomrgxjwapvudjix-l4
project_id  e28adc c96b faa08c
user_id     c5f79bab788049e7a59395b6f94a911f

3. Request an authentication token as the demo user; the token is issued after the password is entered:

[root@ip112 ~]# openstack --os-auth-url \
    --os-project-domain-name Default --os-user-domain-name Default \

    --os-project-name demo --os-username demo token issue
Password:
Field       Value
expires     T01:11:
id          gaaaaabbbgnachj1wmnwybt-7pcskoibj7pkzkm27ao3s8
project_id  92d1ec3e04384ad599c1a8f5aed73663
user_id     0d7bb abb97ec0b40c445b

2.7 Environment scripts

1. The admin and demo users have different permissions, so the environment variables must be adjusted when switching between them. For convenience, the variables are written into a script file:

[root@ip112 ~]# vim demo.token
[root@ip112 ~]# cat demo.token
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=DEMO_PASS
export OS_AUTH_URL=
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

2. When working as the demo user, load the demo.token environment variables:

[root@ip112 ~]# . demo.token

3. List the token information; the user_id is indeed that of the demo user:

[root@ip112 ~]# openstack token issue
Field       Value
expires     T01:18:
id          gaaaaabbbgtx8h2-kbbuqppyywe-cflot_mj1slbmq1xy
project_id  92d1ec3e04384ad599c1a8f5aed73663
user_id     0d7bb abb97ec0b40c445b
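A small habit worth adding here, though it is not in the original notes: after sourcing either script you can confirm which credentials are currently active before running further commands; sourcing the other file simply overwrites these values:

[root@ip112 ~]# env | grep ^OS_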


Chapter 3  Glance

3.1 Database preparation

1. Glance is the Image service. The database must be created before installing glance. Log in to MySQL:

[root@ip112 ~]# mysql -uroot -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 11
Server version: MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]>

2. Create the glance database:

MariaDB [(none)]> CREATE DATABASE glance;
Query OK, 1 row affected (0.00 sec)

3. Create the user glance at localhost with password 123qwe and grant it full access to the glance database:

MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
    IDENTIFIED BY '123qwe';
Query OK, 0 rows affected (0.00 sec)

4. Create the user glance at any host ('%') with password 123qwe and grant it full access to the glance database:

MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
    IDENTIFIED BY '123qwe';
Query OK, 0 rows affected (0.00 sec)

5. Exit MySQL:

MariaDB [(none)]> exit
Bye

3.2 Environment preparation

1. First load the OpenStack administrator (admin) environment variables:

[root@ip112 ~]# source admin.token

2. Create the glance user; it is created after the password is entered:

[root@ip112 ~]# openstack user create --domain default --password-prompt glance
User Password:
Repeat User Password:
Field                Value
domain_id            default
enabled              True
id                   a222f467cc614229b61cbd0d35745a9e
name                 glance
options              {}
password_expires_at  None

3. Add the admin role to the glance user in the service project:

[root@ip112 ~]# openstack role add --project service --user glance admin

4. Create the glance service:

[root@ip112 ~]# openstack service create --name glance \
    --description "OpenStack Image" image

Field        Value
description  OpenStack Image
enabled      True
id           e0d28b4eb10f49cab4c2826f15525ef3
name         glance
type         image

5. Create the Image service API endpoint with interface public:

[root@ip112 ~]# openstack endpoint create --region RegionOne \
    image public
Field         Value
enabled       True
id            b77ce68dd22f47b59df84f09cae14fac
interface     public
region        RegionOne
region_id     RegionOne
service_id    e0d28b4eb10f49cab4c2826f15525ef3
service_name  glance
service_type  image
url

6. Create the Image service API endpoint with interface internal:

[root@ip112 ~]# openstack endpoint create --region RegionOne \
    image internal
Field         Value
enabled       True
id            db7eb4eecb184a189216ce7690a5858f
interface     internal
region        RegionOne
region_id     RegionOne
service_id    e0d28b4eb10f49cab4c2826f15525ef3
service_name  glance
service_type  image
url

7. Create the Image service API endpoint with interface admin:

[root@ip112 ~]# openstack endpoint create --region RegionOne \
    image admin
Field         Value
enabled       True
id            a0a c55ebeab
interface     admin
region        RegionOne
region_id     RegionOne
service_id    e0d28b4eb10f49cab4c2826f15525ef3
service_name  glance
service_type  image
url

3.3 Installation and configuration

1. Install openstack-glance:

[root@ip112 ~]# yum install openstack-glance

2. Edit /etc/glance/glance-api.conf and set the following sections:

[root@ip112 ~]# vim /etc/glance/glance-api.conf
[root@ip112 ~]# egrep '^(\[database|\[keystone_auth|\[paste_deploy|\[glance_store|[a-z])' /etc/glance/glance-api.conf
[database]
connection = mysql+pymysql://glance:123qwe@controller/glance
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
[keystone_authtoken]
auth_uri =
auth_url =
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = 123qwe
[paste_deploy]
flavor = keystone

3. Edit /etc/glance/glance-registry.conf and set the following sections:

[root@ip112 ~]# vim /etc/glance/glance-registry.conf
[root@ip112 ~]# egrep '^(\[database|\[keystone_auth|\[paste_deploy|\[glance_store|[a-z])' /etc/glance/glance-registry.conf
[database]
connection = mysql+pymysql://glance:123qwe@controller/glance
[keystone_authtoken]
auth_uri =
auth_url =
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = 123qwe
[paste_deploy]
flavor = keystone

4. Run glance-manage as the glance user to synchronise the glance database:

[root@ip112 ~]# su -s /bin/sh -c "glance-manage db_sync" glance

5. Start the openstack-glance-api and openstack-glance-registry services and enable them at boot:

[root@ip112 ~]# systemctl enable openstack-glance-api.service \
    openstack-glance-registry.service
[root@ip112 ~]# systemctl start openstack-glance-api.service \
    openstack-glance-registry.service

3.4 Verification - adding images

1. Load the administrator environment variables:

[root@ip112 ~]# . admin.token

2. Obtain a VM image file; for a simple test you can download the cirros image:

[root@ip112 ~]# wget x86_64-disk.img

3. Obtain crtusb.qcow2:

[root@ip112 ~]# scp :/var/lib/libvirt/images/crtusb.qcow2 .
root@ 's password:
crtusb.qcow2    100% 3165MB 11.2MB/s 04:42

4. Upload crtusb.qcow2 to the Image service with disk format qcow2 and container format bare; --public lets all projects access it:

[root@ip112 ~]# openstack image create "crtusb" --file crtusb.qcow2 \
    --disk-format qcow2 --container-format bare --public
Field             Value
checksum          0c7385b6fabaf8b17cba5d0ede98abe2
container_format  bare
created_at        T13:55:20Z
disk_format       qcow2
file              /v2/images/e4bea3db-581a-472d-a0dd-37b1e7055bbf/file
id                e4bea3db-581a-472d-a0dd-37b1e7055bbf
min_disk          0
min_ram           0
name              crtusb
owner             e28adc c96b faa08c
protected         False
schema            /v2/schemas/image
size
status            active
tags
updated_at        T13:55:33Z
virtual_size      None
visibility        public

5. List the images and confirm that the uploaded crtusb image is available:

[root@ip112 ~]# openstack image list
ID                                    Name    Status
e4bea3db-581a-472d-a0dd-37b1e7055bbf  crtusb  active
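Two further checks, not part of the original notes, that can be useful at this point: qemu-img shows whether the local file really is in qcow2 format, and openstack image show displays the full record of the image that was just registered:

[root@ip112 ~]# qemu-img info crtusb.qcow2
[root@ip112 ~]# openstack image show crtusb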


Chapter 4  Nova

4.1 Database preparation

1. Nova is the Compute service. The databases must be created before installing nova. Log in to MySQL:

[root@ip112 ~]# mysql -uroot -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 11
Server version: MariaDB MariaDB Server
Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]>

2. Create the nova_api, nova and nova_cell0 databases:

MariaDB [(none)]> CREATE DATABASE nova_api;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> CREATE DATABASE nova;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> CREATE DATABASE nova_cell0;
Query OK, 1 row affected (0.00 sec)

3. Create the user nova at localhost with password 123qwe and grant it full access to the nova_api, nova and nova_cell0 databases:

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
    IDENTIFIED BY '123qwe';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
    IDENTIFIED BY '123qwe';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
    IDENTIFIED BY '123qwe';
Query OK, 0 rows affected (0.00 sec)

4. Create the user nova at any host ('%') with password 123qwe and grant it full access to the nova_api, nova and nova_cell0 databases:

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
    IDENTIFIED BY '123qwe';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
    IDENTIFIED BY '123qwe';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
    IDENTIFIED BY '123qwe';
Query OK, 0 rows affected (0.00 sec)

5. Exit MySQL:

MariaDB [(none)]> exit
Bye

4.2 Environment preparation

1. First load the OpenStack administrator (admin) environment variables:

[root@ip112 ~]# source admin.token

2. Create the nova user; it is created after the password is entered:

[root@ip112 ~]# openstack user create --domain default --password-prompt nova
User Password:
Repeat User Password:
Field                Value

domain_id            default
enabled              True
id                   97aea43e81b047b39a58af7c4d4d1b1e
name                 nova
options              {}
password_expires_at  None

3. Add the admin role to the nova user in the service project:

[root@ip112 ~]# openstack role add --project service --user nova admin

4. Create the nova service:

[root@ip112 ~]# openstack service create --name nova \
    --description "OpenStack Compute" compute
Field        Value
description  OpenStack Compute
enabled      True
id           ca566f55091a40d9b e6e1a13c
name         nova
type         compute

5. Create the Compute API service endpoint with interface public:

[root@ip112 ~]# openstack endpoint create --region RegionOne \
    compute public
Field         Value
enabled       True
id            f66129d889db4659aa9ae8731b7be7b9
interface     public
region        RegionOne
region_id     RegionOne
service_id    ca566f55091a40d9b e6e1a13c
service_name  nova
service_type  compute
url

6. Create the Compute API service endpoint with interface internal:

[root@ip112 ~]# openstack endpoint create --region RegionOne \
    compute internal
Field         Value
enabled       True
id            e2266bff21074f ffd761
interface     internal
region        RegionOne
region_id     RegionOne
service_id    ca566f55091a40d9b e6e1a13c
service_name  nova
service_type  compute
url

7. Create the Compute API service endpoint with interface admin:

[root@ip112 ~]# openstack endpoint create --region RegionOne \
    compute admin
Field         Value
enabled       True
id            3745bbc0c b17b567256b7d53c
interface     admin
region        RegionOne
region_id     RegionOne
service_id    ca566f55091a40d9b e6e1a13c
service_name  nova
service_type  compute
url

8. Create the Placement service user; it is created after the password 123qwe is entered:

[root@ip112 ~]# openstack user create --domain default --password-prompt placement
User Password:
Repeat User Password:
Field                Value
domain_id            default
enabled              True
id                   3c5acc18f314451eafd55a51ed595e62
name                 placement
options              {}

password_expires_at  None

9. Add the admin role to the placement user in the service project:

[root@ip112 ~]# openstack role add --project service --user placement admin

10. Create the Placement API service entry:

[root@ip112 ~]# openstack service create --name placement --description "Placement API" placement
Field        Value
description  Placement API
enabled      True
id           b28b45e816a74c3a89f91dd620fea022
name         placement
type         placement

11. Create the Placement API service endpoint with interface public:

[root@ip112 ~]# openstack endpoint create --region RegionOne placement public
Field         Value
enabled       True
id            033e4a2cfe554791be117b6e907f1328
interface     public
region        RegionOne
region_id     RegionOne
service_id    b28b45e816a74c3a89f91dd620fea022
service_name  placement
service_type  placement
url

12. Create the Placement API service endpoint with interface internal:

[root@ip112 ~]# openstack endpoint create --region RegionOne placement internal
Field         Value

enabled       True
id            e7c2f2d7b4d f6af4979c1c46b
interface     internal
region        RegionOne
region_id     RegionOne
service_id    b28b45e816a74c3a89f91dd620fea022
service_name  placement
service_type  placement
url

13. Create the Placement API service endpoint with interface admin:

[root@ip112 ~]# openstack endpoint create --region RegionOne placement admin
Field         Value
enabled       True
id            80829b253d9c4eacb f0c989d
interface     admin
region        RegionOne
region_id     RegionOne
service_id    b28b45e816a74c3a89f91dd620fea022
service_name  placement
service_type  placement
url

4.3 Controller node installation and configuration

1. Install the controller node openstack-nova-* packages:

[root@ip112 ~]# yum install openstack-nova-api openstack-nova-conductor \
    openstack-nova-console openstack-nova-novncproxy \
    openstack-nova-scheduler openstack-nova-placement-api

2. Edit /etc/nova/nova.conf and set the following sections:

[root@ip112 ~]# vim /etc/nova/nova.conf
[root@ip112 ~]# egrep '^(\[DEF|\[api_d|\[data|\[api|\[keyst|\[vnc|\[glance|\[oslo_c|\[place|[a-z])' /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123qwe@controller

my_ip =
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
auth_strategy = keystone
[api_database]
connection = mysql+pymysql://nova:123qwe@controller/nova_api
[database]
connection = mysql+pymysql://nova:123qwe@controller/nova
[glance]
api_servers =
[keystone]
[keystone_authtoken]
auth_url =
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 123qwe
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url =
username = placement
password = 123qwe
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip

3. Edit /etc/httpd/conf.d/00-nova-placement-api.conf and allow all IPs to access the /usr/bin directory:

[root@ip112 ~]# vim /etc/httpd/conf.d/00-nova-placement-api.conf
[root@ip112 ~]# cat /etc/httpd/conf.d/00-nova-placement-api.conf
Listen 8778
<VirtualHost *:8778>
  WSGIProcessGroup nova-placement-api
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
  WSGIDaemonProcess nova-placement-api processes=3 threads=1 user=nova group=nova
  WSGIScriptAlias / /usr/bin/nova-placement-api
  <IfVersion >= 2.4>
    ErrorLogFormat "%M"

  </IfVersion>
  ErrorLog /var/log/nova/nova-placement-api.log
  #SSLEngine On
  #SSLCertificateFile ...
  #SSLCertificateKeyFile ...
</VirtualHost>

Alias /nova-placement-api /usr/bin/nova-placement-api
<Location /nova-placement-api>
  SetHandler wsgi-script
  Options +ExecCGI
  WSGIProcessGroup nova-placement-api
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
</Location>
<Directory /usr/bin>
  <IfVersion >= 2.4>
    Require all granted
  </IfVersion>
  <IfVersion < 2.4>
    Order allow,deny
    Allow from all
  </IfVersion>
</Directory>

4. Restart the httpd service:

[root@ip112 ~]# systemctl restart httpd

5. Run nova-manage as the nova user to synchronise the nova-api database; the error messages produced by this step can all be ignored:

[root@ip112 ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332:
NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning

6. Run nova-manage as the nova user to create the cell0 database; the error messages produced by this step can all be ignored:

[root@ip112 ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332:
NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning

7. Run nova-manage as the nova user to create the cell1 cell; the error messages produced by this step can all be ignored:

[root@ip112 ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332:
NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
4b4e2923-ee b7d f313c2

8. Run nova-manage as the nova user to synchronise the nova database; the error messages produced by this step can all be ignored:

[root@ip112 ~]# su -s /bin/sh -c "nova-manage db sync" nova
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332:
NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
/usr/lib/python2.7/site-packages/pymysql/cursors.py:166: Warning: (1831,
u'Duplicate index block_device_mapping_instance_uuid_virtual_name_device_name_idx.
This is deprecated and will be disallowed in a future release.')
  result = self._query(query)
/usr/lib/python2.7/site-packages/pymysql/cursors.py:166: Warning: (1831,
u'Duplicate index uniq_instances0uuid. This is deprecated and will be disallowed in a future release.')
  result = self._query(query)

9. Confirm that the nova cell0 and cell1 cells are registered correctly; the error messages produced by this step can all be ignored:

[root@ip112 ~]# nova-manage cell_v2 list_cells
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332:
NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
Name   UUID
cell0
cell1  4b4e2923-ee b7d f313c2

Transport URL   Database Connection
none:/

10. Start the openstack-nova services on the controller node and enable them at boot:

[root@ip112 ~]# systemctl enable openstack-nova-api.service \
    openstack-nova-consoleauth.service openstack-nova-scheduler.service \
    openstack-nova-conductor.service openstack-nova-novncproxy.service
[root@ip112 ~]# systemctl start openstack-nova-api.service \
    openstack-nova-consoleauth.service openstack-nova-scheduler.service \
    openstack-nova-conductor.service openstack-nova-novncproxy.service

4.4 Compute node installation and configuration

1. Install the openstack-nova-compute package:

[root@ip112 ~]# yum install openstack-nova-compute

2. Edit /etc/nova/nova.conf and set the following sections:

[root@ip112 ~]# vim /etc/nova/nova.conf
[root@ip112 ~]# egrep '^(\[DEF|\[api_d|\[data|\[api|\[keyst|\[vnc|\[glance|\[oslo_c|\[place|[a-z])' /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123qwe@controller
my_ip =
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
auth_strategy = keystone
[api_database]
connection = mysql+pymysql://nova:123qwe@controller/nova_api
[database]
connection = mysql+pymysql://nova:123qwe@controller/nova

[glance]
api_servers =
[keystone]
[keystone_authtoken]
auth_url =
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 123qwe
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url =
username = placement
password = 123qwe
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip

3. Check whether the host supports hardware virtualization; a count of one or more means it is supported:

[root@ip112 ~]# egrep -c '(vmx|svm)' /proc/cpuinfo

4. Edit /etc/nova/nova.conf and, in the [libvirt] section, set the virtualization type to qemu (see the sketch after this section for how this choice relates to the check above):

[root@ip112 ~]# vim /etc/nova/nova.conf
[root@ip112 ~]# egrep '^(\[libvirt|v)' /etc/nova/nova.conf
[libvirt]
virt_type=qemu

5. Start the libvirtd and openstack-nova-compute services and enable them at boot:

[root@ip112 ~]# systemctl enable libvirtd.service openstack-nova-compute.service
[root@ip112 ~]# systemctl start libvirtd.service openstack-nova-compute.service
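The following sketch is not from the original notes; it simply ties steps 3 and 4 together. If the vmx/svm count is zero, the host has no hardware virtualization support and virt_type must stay qemu (full emulation); if it is one or more, virt_type = kvm would normally give much better performance:

# Hypothetical helper: suggest a virt_type value based on the CPU flags
count=$(egrep -c '(vmx|svm)' /proc/cpuinfo)
if [ "$count" -eq 0 ]; then
    echo "no hardware virtualization: keep virt_type = qemu"
else
    echo "hardware virtualization available: virt_type = kvm can be used"
fi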

4.5 Add the compute node to the cell database

1. Load the administrator environment variables:

[root@ip112 ~]# . admin.token

2. Confirm that the compute host is in the database:

[root@ip112 ~]# openstack compute service list --service nova-compute
ID  Binary        Host                    Zone  Status   State  Updated At
    nova-compute  ip112.csie.cyut.edu.tw  nova  enabled  up     T14:28:

3. Discover the compute hosts and register them in the database:

[root@ip112 ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332:
NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': 4b4e2923-ee b7d f313c2
Checking host mapping for compute host 'ip112.csie.cyut.edu.tw': 9784a642-7c16-473b-91ed-95ee3d
Creating host mapping for compute host 'ip112.csie.cyut.edu.tw': 9784a642-7c16-473b-91ed-95ee3d
Found 1 unmapped computes in cell: 4b4e2923-ee b7d f313c2

4. Whenever a new compute node is added, the previous command must be run again; alternatively, an automatic discovery interval can be set in the [scheduler] section of /etc/nova/nova.conf:

[root@ip112 ~]# egrep '^(\[scheduler|^discover)' /etc/nova/nova.conf
[scheduler]
discover_hosts_in_cells_interval = 300

4.6 Verification

1. Load the administrator environment variables:

[root@ip112 ~]# . admin.token

2. List the compute services and confirm that the following four are present:

[root@ip112 ~]# openstack compute service list
ID  Binary            Host                    Zone      Status   State  Updated At
    nova-consoleauth  ip112.csie.cyut.edu.tw  internal  enabled  up     T14:37:
    nova-conductor    ip112.csie.cyut.edu.tw  internal  enabled  up     T14:37:
    nova-scheduler    ip112.csie.cyut.edu.tw  internal  enabled  up     T14:37:
    nova-compute      ip112.csie.cyut.edu.tw  nova      enabled  up     T14:37:

3. List the API endpoints in the identity service to confirm connectivity:

[root@ip112 ~]# openstack catalog list
Name       Type       Endpoints
keystone   identity   RegionOne
                        internal:
                      RegionOne
                        public:
                      RegionOne
                        admin:
placement  placement  RegionOne
                        public:
                      RegionOne
                        admin:
                      RegionOne
                        internal:
nova       compute    RegionOne
                        admin:
                      RegionOne
                        internal:

                      RegionOne
                        public:
glance     image      RegionOne
                        admin:
                      RegionOne
                        public:
                      RegionOne
                        internal:

4. The crtusb.qcow2 image was previously uploaded to the Image service with disk format qcow2 and container format bare, and marked public so every project can access it.

5. List the images to confirm connectivity with the Image service:

[root@ip112 ~]# openstack image list
ID                                    Name    Status
e4bea3db-581a-472d-a0dd-37b1e7055bbf  crtusb  active

6. Confirm that the cells and the placement API are working properly:

[root@ip112 ~]# nova-status upgrade check
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332:
NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
Option "os_region_name" from group "placement" is deprecated.
Use option "region-name" from group "placement"

Upgrade Check Results

Check: Cells v2
Result: Success
Details: None

Check: Placement API
Result: Success
Details: None

Check: Resource Providers
Result: Success
Details: None

48 4.6. 驗證 CHAPTER 4. NOVA 20 Details: None De-Yu Wang CSIE CYUT 41

49 4.6. 驗證 CHAPTER 4. NOVA De-Yu Wang CSIE CYUT 42

50 CHAPTER 5. NEUTRON Chapter 5 Neutron 5.1 安裝前資料庫準備 1. Neutron 是 Openstack 的網路服務, 安裝 nova 前必須先產生資料庫, 登入 mysql: 1 [root@controller ~]# mysql -uroot -p Enter password: 3 Welcome to the MariaDB monitor. Commands end with ; or \g. Your MariaDB connection id is 11 5 Server version: MariaDB MariaDB Server 7 Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others. 9 Type help; or \h for help. Type \c to clear the current input statement. MariaDB [(none)]> 2. 產生資料庫 neutron MariaDB [(none)]> CREATE DATABASE neutron; 2 Query OK, 1 row affected (0.00 sec) 3. 產生用戶 nova 在 localhost, 密碼 123qwe, 對資料庫 neutron 有全部的存取權 MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO localhost \ 2 IDENTIFIED BY 123qwe ; Query OK, 0 rows affected (0.00 sec) 4. 產生用戶 nova 在 localhost, 密碼 123qwe, 對資料庫 neutron 有全部的存取權 1 MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO % \ De-Yu Wang CSIE CYUT 43

51 5.2. 安裝前環境準備 控制節點 CHAPTER 5. NEUTRON IDENTIFIED BY 123qwe ; 3 Query OK, 0 rows affected (0.00 sec) 5. 退出 mysql 1 MariaDB [(none)]> exit Bye 5.2 安裝前環境準備 控制節點 1. 先讀取 openstack 管理者 admin 環境變數 [root@controller ~]# source admin.token 2. 新增 neutron 用戶, 輸入密碼後產生 1 [root@controller ~]# openstack user create --domain default -- password-prompt neutron User Password: 3 Repeat User Password: Field Value domain_id default enabled True 9 id a8cdd73f42c145019b6d622a1a name neutron 11 options {} password_expires_at None 新增 admin role 到 neutron user 及 service project 1 [root@controller ~]# openstack role add --project service --user neutron admin 4. 新增 neutron service 1 [root@controller ~]# openstack service create --name neutron \ --description "OpenStack Networking" network Field Value De-Yu Wang CSIE CYUT 44

52 5.2. 安裝前環境準備 控制節點 CHAPTER 5. NEUTRON description OpenStack Networking 7 enabled True id 216edf9649f24fb3bb8a15f5c917e9b2 9 name neutron type network 新增 Networking API service endpoints,interface public 1 [root@controller ~]# openstack endpoint create --region RegionOne \ network public Field Value enabled True 7 id f4ffec0c55bd44018be966980ff0a772 interface public 9 region RegionOne region_id RegionOne 11 service_id 216edf9649f24fb3bb8a15f5c917e9b2 service_name neutron 13 service_type network url 新增 Networking API service endpoints,interface internal 1 [root@controller ~]# openstack endpoint create --region RegionOne \ network internal Field Value enabled True 7 id b946fece6fc5401eb29898c18b44e040 interface internal 9 region RegionOne region_id RegionOne 11 service_id 216edf9649f24fb3bb8a15f5c917e9b2 service_name neutron 13 service_type network url 新增 Networking API service endpoints,interface admin 1 [root@controller ~]# openstack endpoint create --region RegionOne \ network admin De-Yu Wang CSIE CYUT 45

53 5.3. PROVIDER NETWORKS CHAPTER 5. NEUTRON Field Value enabled True 7 id fa883698df874c6d8a22db2de9a221ee interface admin 9 region RegionOne region_id RegionOne 11 service_id 216edf9649f24fb3bb8a15f5c917e9b2 service_name neutron 13 service_type network url Provider Networks 1. 安裝 neutron 網路套件 1 [root@controller ~]# yum install openstack-neutron openstack-neutronml2 \ openstack-neutron-linuxbridge ebtables 2. 編輯 /etc/neutron/neutron.conf, 設定如下 : [root@controller ~]# vim /etc/neutron/neutron.conf 2 [root@controller ~]# egrep ^(\[data\[def\[keystone_a\[nova\[ oslo_c[a-z]) \ /etc/neutron/neutron.conf 4 [DEFAULT] 6 core_plugin = ml2 service_plugins = 8 transport_url = rabbit://openstack:123qwe@controller auth_strategy = keystone 10 notify_nova_on_port_status_changes = true notify_nova_on_port_data_changes = true 12 [database] 14 connection = mysql+pymysql://neutron:123qwe@controller/neutron 16 [keystone_authtoken] auth_uri = 18 auth_url = memcached_servers = controller: auth_type = password project_domain_name = default 22 user_domain_name = default project_name = service 24 username = neutron De-Yu Wang CSIE CYUT 46

54 5.3. PROVIDER NETWORKS CHAPTER 5. NEUTRON password = 123qwe 26 [nova] 28 auth_url = auth_type = password 30 project_domain_name = default user_domain_name = default 32 region_name = RegionOne project_name = service 34 username = nova password = 123qwe 36 [oslo_concurrency] 38 lock_path = /var/lib/neutron/tmp 3. 編輯 /etc/neutron/plugins/ml2/ml2_conf.ini, 設定 Modular Layer2 (ML2) 外掛 [root@controller ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini 2 [root@controller ~]# egrep ^(\[ml2\]\[ml2_type_flat\[sec[a-z]) \ /etc/neutron/plugins/ml2/ml2_conf.ini 4 [ml2] 6 type_drivers = flat,vlan tenant_network_types = 8 mechanism_drivers = linuxbridge extension_drivers = port_security 10 [ml2_type_flat] 12 flat_networks = provider 14 [securitygroup] enable_ipset = true 4. 編輯 /etc/neutron/plugins/ml2/linuxbridge_agent.ini, 設定 linux 橋接代理, 其中 physical_interface_mappings 設定 em2 為控制節點主機的網卡代號 1 [root@controller ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent. ini [root@controller ~]# egrep ^(\[linux_b\[vxlan\[sec[a-z]) \ 3 /etc/neutron/plugins/ml2/linuxbridge_agent.ini 5 [linux_bridge] physical_interface_mappings = provider:em2 7 [securitygroup] 9 enable_security_group = true firewall_driver = neutron.agent.linux.iptables_firewall. IptablesFirewallDriver De-Yu Wang CSIE CYUT 47

55 5.3. PROVIDER NETWORKS CHAPTER 5. NEUTRON 11 [vxlan] 13 enable_vxlan = false 5. 核心必須載入 br_netfilter 模組 1 [root@controller ~]# modprobe br_netfilter [root@controller ~]# lsmod grep br_net 3 br_netfilter bridge br_netfilter,ebtable_broute 6. 設定開機時核心載入 br_netfilter 模組 [root@controller ~]# echo "br_netfilter" > /etc/modules-load.d/ br_netfilter.conf 7. sysctl 設定 net.bridge.bridge-nf-call-iptables 開啟 1 [root@controller ~]# vim /etc/sysctl.d/k8s.conf [root@controller ~]# cat /etc/sysctl.d/k8s.conf 3 net.bridge.bridge-nf-call-ip6tables = 1 net.bridge.bridge-nf-call-iptables = 1 5 [root@controller ~]# /sbin/sysctl -p /etc/sysctl.d/k8s.conf net.bridge.bridge-nf-call-ip6tables = 1 7 net.bridge.bridge-nf-call-iptables = 1 [root@controller ~]# sysctl -a grep bridge 9 net.bridge.bridge-nf-call-arptables = 1 net.bridge.bridge-nf-call-ip6tables = 1 11 net.bridge.bridge-nf-call-iptables = 1 net.bridge.bridge-nf-filter-pppoe-tagged = 0 13 net.bridge.bridge-nf-filter-vlan-tagged = 0 net.bridge.bridge-nf-pass-vlan-input-dev = 0 15 sysctl: reading key "net.ipv6.conf.all.stable_secret" sysctl: reading key "net.ipv6.conf.default.stable_secret" 17 sysctl: reading key "net.ipv6.conf.em1.stable_secret" sysctl: reading key "net.ipv6.conf.em2.stable_secret" 19 sysctl: reading key "net.ipv6.conf.em3.stable_secret" sysctl: reading key "net.ipv6.conf.em4.stable_secret" 21 sysctl: reading key "net.ipv6.conf.lo.stable_secret" 8. 編輯 /etc/neutron/dhcp_agent.ini, 設定 DHCP 代理 1 [root@controller ~]# vim /etc/neutron/dhcp_agent.ini [root@controller ~]# egrep ^(\[DEF[a-z]) /etc/neutron/dhcp_agent. ini 3 De-Yu Wang CSIE CYUT 48

56 5.4. SELF-SERVICE NETWORK CHAPTER 5. NEUTRON [DEFAULT] 5 interface_driver = linuxbridge dhcp_driver = neutron.agent.linux.dhcp.dnsmasq 7 enable_isolated_metadata = true 5.4 Self-service network 1. 編輯 /etc/neutron/neutron.conf, 增加 service_plugins, allow_overlapping_ips 兩項設定 1 [root@controller ~]# vim /etc/neutron/neutron.conf [root@controller ~]# egrep ^(\[data\[def\[keystone_a\[nova\[ oslo_c[a-z]) \ 3 /etc/neutron/neutron.conf 5 [DEFAULT] core_plugin = ml2 7 service_plugins = router ## allow_overlapping_ips = true ## 9 transport_url = rabbit://openstack:123qwe@controller auth_strategy = keystone 11 notify_nova_on_port_status_changes = true notify_nova_on_port_data_changes = true 13 [database] 15 connection = mysql+pymysql://neutron:123qwe@controller/neutron 17 [keystone_authtoken] auth_uri = 19 auth_url = memcached_servers = controller: auth_type = password project_domain_name = default 23 user_domain_name = default project_name = service 25 username = neutron password = 123qwe 27 [nova] 29 auth_url = auth_type = password 31 project_domain_name = default user_domain_name = default 33 region_name = RegionOne project_name = service 35 username = nova password = 123qwe 37 [oslo_concurrency] 39 lock_path = /var/lib/neutron/tmp De-Yu Wang CSIE CYUT 49

57 5.4. SELF-SERVICE NETWORK CHAPTER 5. NEUTRON 2. 編輯 /etc/neutron/plugins/ml2/ml2_conf.ini, 設定 Modular Layer2 (ML2) 外掛 type_drivers 增加 vxlan,tenant_network_types 設為 vxlan,mechanism_drivers 增加 l2population 1 [root@controller ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini [root@controller ~]# egrep ^(\[ml2\]\[ml2_type_flat\[sec[a-z]) \ 3 /etc/neutron/plugins/ml2/ml2_conf.ini 5 [ml2] type_drivers = flat,vlan,vxlan ## 7 tenant_network_types = vxlan ## mechanism_drivers = linuxbridge,l2population ## 9 extension_drivers = port_security 11 [ml2_type_flat] flat_networks = provider 13 vni_ranges = 1:1000 ## 15 [securitygroup] enable_ipset = true 3. 編輯 /etc/neutron/plugins/ml2/linuxbridge_agent.ini, 設定 linux 橋接代理, 其中 physical_interface_mappings 設定 em2 為控制節點主機的網卡代號 啟動 vxlan, 設定 local_ip 為控制主機的 IP 1 [root@controller ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent. ini [root@controller ~]# egrep ^(\[linux_b\[vxlan\[sec[a-z]) \ 3 /etc/neutron/plugins/ml2/linuxbridge_agent.ini 5 [linux_bridge] physical_interface_mappings = provider:em2 7 [securitygroup] 9 enable_security_group = true firewall_driver = neutron.agent.linux.iptables_firewall. IptablesFirewallDriver 11 [vxlan] 13 enable_vxlan = true ## local_ip = ## 15 l2_population = true ## 4. 編輯 /etc/neutron/dhcp_agent.ini, 設定 DHCP 代理 1 [root@controller ~]# vim /etc/neutron/dhcp_agent.ini [root@controller ~]# egrep ^(\[DEF[a-z]) /etc/neutron/dhcp_agent. ini 3 [DEFAULT] De-Yu Wang CSIE CYUT 50

58 5.5. 設定 METADATA AGENT CHAPTER 5. NEUTRON 5 interface_driver = linuxbridge dhcp_driver = neutron.agent.linux.dhcp.dnsmasq 7 enable_isolated_metadata = true 5.5 設定 metadata agent 1. 編輯 /etc/neutron/metadata_agent.ini, 設定 [DEFAULT] 章節如下 : 1 [root@controller ~]# vim /etc/neutron/metadata_agent.ini [root@controller ~]# egrep ^(\[DEF[a-z]) /etc/neutron/ metadata_agent.ini 3 [DEFAULT] nova_metadata_host = controller 5 metadata_proxy_shared_secret = METADATA_SECRET 2. 編輯 /etc/nova/nova.conf, 確認 [neutron] 章節有如下設定 : 1 [root@controller ~]# vim /etc/nova/nova.conf [root@controller ~]# egrep ^(\[neutron) -A20 /etc/nova/nova.conf grep -v ^# 3 [neutron] 5 url = auth_url = 7 auth_type = password project_domain_name = default 9 user_domain_name = default region_name = RegionOne 11 project_name = service username = neutron 13 password = 123qwe service_metadata_proxy = true 15 metadata_proxy_shared_secret = METADATA_SECRET 5.6 服務啟動 1. 設定 /etc/neutron/plugin.ini 連結到 /etc/neutron/plugins/ml2/ml2_conf.ini 1 [root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc /neutron/plugin.ini 2. 以編輯好的設定檔, 更新資料庫 De-Yu Wang CSIE CYUT 51

59 5.7. 驗證 CHAPTER 5. NEUTRON 1 [root@controller ~]# su -s /bin/sh -c "neutron-db-manage \ --config-file /etc/neutron/neutron.conf \ 3 --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron... 5 OK 3. 重新啟動 compute API 服務 1 [root@controller ~]# systemctl restart openstack-nova-api.service 4. 啟動 neutron 相關服務, 並設定開機啟動 1 [root@controller ~]# systemctl enable neutron-server.service \ neutron-linuxbridge-agent.service neutron-dhcp-agent.service \ 3 neutron-metadata-agent.service [root@controller ~]# systemctl start neutron-server.service \ 5 neutron-linuxbridge-agent.service neutron-dhcp-agent.service \ neutron-metadata-agent.service 7 [root@controller ~]# systemctl enable neutron-l3-agent.service [root@controller ~]# systemctl start neutron-l3-agent.service 5. 如果同時有 provider 及 self-service networks, 重新啟動以下 neutron 相關服務, 並設定開機啟動 [root@controller ~]# systemctl restart neutron-server.service \ 2 neutron-linuxbridge-agent.service \ neutron-dhcp-agent.service \ 4 neutron-metadata-agent.service \ neutron-l3-agent.service 6 [root@controller ~]# systemctl enable neutron-server.service \ neutron-linuxbridge-agent.service \ 8 neutron-dhcp-agent.service \ neutron-metadata-agent.service \ 10 neutron-l3-agent.service 5.7 驗證 1. 載入管理者環境變變數 [root@controller ~]#. admin.token De-Yu Wang CSIE CYUT 52

60 5.7. 驗證 CHAPTER 5. NEUTRON 2. 列出 agent, 確認可以使用 neutron agent, 如果只有 provider network, 會有以下三種 agent 1 [root@controller ~]# openstack network agent list ID Agent Type Host eb768a6-2f98-4c71-9c92-a964677c58d8 DHCP agent controller 532abb67-c6eb e Linux bridge agent controller 7 c4d53419-f a4eb-3dfe45a4f063 Metadata agent controller Availability Zone Alive State Binary nova :-) UP neutron-dhcp-agent 13 None :-) UP neutron-linuxbridge-agent None :-) UP neutron-metadata-agent 列出 agent, 確認可以使用 neutron agent, 如果也有 self-service network, 則會多一項 L3 agent 1 [root@controller ~]#. admin.token [root@controller ~]# openstack network agent list ID Agent Type Host c0fc32-63ff-44cd-8464-f756dfac0c8e DHCP agent controller 7 87c9eccb-24f bcf c88767 Linux bridge agent controller b5276b97-37a4-4a0e-828d-c0f5a6b1c947 Metadata agent controller 9 f0c9588f-161e-4b8a-9ed5-bdfe3203e4eb L3 agent controller Availability Zone Alive State Binary nova :-) UP neutron-dhcp-agent 15 None :-) UP neutron-linuxbridge-agent None :-) UP neutron-metadata-agent De-Yu Wang CSIE CYUT 53

61 5.8. 重設 SUBNET POOLS CHAPTER 5. NEUTRON 17 nova :-) UP neutron-l3-agent 重設 subnet pools 1. 載入管理者環境變變數 ~]#. admin.token 2. 列出 network,provider network 的 Subnets 是 2a7287d7-4a4f-43cc-beb2- bc9c023af929 1 [root@controller ~]# openstack network list ID Name Subnets b0ebcee-057b-4798-bf73-5dd449e7879a provider 2a7287d7-4a4f -43cc-beb2-bc9c023af929 37d0dee7-d573-4f3b-863d-c92c05b6dbbb selfservice 36c a28-4cdf-bfee-9c65f subnet show 查看 2a7287d7-4a4f-43cc-beb2-bc9c023af929 的 IP 範圍為 至 [root@controller ~]# openstack subnet show 2a7287d7-4a4f-43cc-beb2- bc9c023af Field Value allocation_pools cidr /24 7 created_at T07:32:01Z description 9 dns_nameservers enable_dhcp True 11 gateway_ip host_routes 13 id 2a7287d7-4a4f-43cc-beb2-bc9c023af929 ip_version 4 15 ipv6_address_mode None ipv6_ra_mode None De-Yu Wang CSIE CYUT 54

62 5.8. 重設 SUBNET POOLS CHAPTER 5. NEUTRON 17 name provider network_id 0b0ebcee-057b-4798-bf73-5dd449e7879a 19 project_id e28adc c96b faa08c revision_number 0 21 segment_id None service_types 23 subnetpool_id None tags 25 updated_at T07:32:01Z unset subnet 2a7287d7-4a4f-43cc-beb2-bc9c023af929 的 allocation_pools [root@controller ~]# openstack subnet unset --allocation-pool \ 2 start= ,end= a7287d7-4a4f-43cc-beb2- bc9c023af 再設定 subnet 2a7287d7-4a4f-43cc-beb2-bc9c023af929 的 IP 範圍為 至 [root@controller ~]# openstack subnet set --allocation-pool start = ,end= a7287d7-4a4f-43cc-beb2- bc9c023af subnet show 再查看 2a7287d7-4a4f-43cc-beb2-bc9c023af929 的 IP 範圍為 至 [root@controller ~]# openstack subnet show 2a7287d7-4a4f-43cc-beb2- bc9c023af Field Value allocation_pools cidr /24 7 created_at T07:32:01Z description 9 dns_nameservers enable_dhcp True 11 gateway_ip host_routes 13 id 2a7287d7-4a4f-43cc-beb2-bc9c023af929 ip_version 4 15 ipv6_address_mode None ipv6_ra_mode None 17 name provider network_id 0b0ebcee-057b-4798-bf73-5dd449e7879a 19 project_id e28adc c96b faa08c revision_number 2 De-Yu Wang CSIE CYUT 55

63 5.9. Q-ROUTER 問題解決 CHAPTER 5. NEUTRON 21 segment_id None service_types 23 subnetpool_id None tags 25 updated_at T08:26:49Z q-router 問題解決 1. 載入管理者環境變變數 ~]#. admin.token 2. 問題 :ip netns 列出 namspace, 只有 qdhcp, 沒有 qrouter 1 [root@ip110 ~]# ip netns list qdhcp-6809ccd aede-0261ec25c159 (id: 1) 3 qdhcp-1ca dec-4e35-be69-725fb35d2b86 (id: 0) 3. 查詢 neutron 相關的服務狀態, 發現 neutron-linuxbridge-agent 無法啟動 1 [root@ip110 ~]# systemctl status neutron-linuxbridge-agent.service neutron-linuxbridge-agent.service - OpenStack Neutron Linux Bridge Agent 3 Loaded: loaded (/usr/lib/systemd/system/neutron-linuxbridge-agent. service; enabled; vendor preset: disabled) 5 Active: failed (Result: start-limit) since Wed :00:33 CST; 10s ago Process: ExecStart=/usr/bin/neutron-linuxbridge-agent -- config-file 7 /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/ neutron.conf --config-file /etc/neutron/plugins/ml2/linuxbridge_agent.ini --config -dir 9 /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutronlinuxbridge-agent --log-file /var/log/neutron/linuxbridge-agent.log (code=exited, status=1/failure) 11 Process: ExecStartPre=/usr/bin/neutron-enable-bridgefirewall.sh (code=exited, status=0/success) Main PID: (code=exited, status=1/failure) 13 Dec 12 17:00:32 ip110 systemd[1]: Unit neutron-linuxbridge-agent. service entered failed state. 15 Dec 12 17:00:32 ip110 systemd[1]: neutron-linuxbridge-agent.service failed. De-Yu Wang CSIE CYUT 56

64 5.9. Q-ROUTER 問題解決 CHAPTER 5. NEUTRON Dec 12 17:00:33 ip110 systemd[1]: neutron-linuxbridge-agent.service holdoff time over, scheduling restart. 17 Dec 12 17:00:33 ip110 systemd[1]: start request repeated too quickly for neutron-linuxbridge-agent.service Dec 12 17:00:33 ip110 systemd[1]: Failed to start OpenStack Neutron Linux Bridge Agent. 19 Dec 12 17:00:33 ip110 systemd[1]: Unit neutron-linuxbridge-agent. service entered failed state. Dec 12 17:00:33 ip110 systemd[1]: neutron-linuxbridge-agent.service failed. 4. 解決方式 : 降低 neutron 相關套件版本為 [root@ip110 ~]# yum downgrade python-neutron openstack-neutron-* 2 [root@ip110 ~]# rpm -qa grep neutron openstack-neutron el7.noarch 4 python-neutron el7.noarch python2-neutronclient el7.noarch 6 openstack-neutron-common el7.noarch python2-neutron-lib el7.noarch 8 openstack-neutron-linuxbridge el7.noarch openstack-neutron-ml el7.noarch 5. 編輯 /usr/lib/python2.7/site-packages/tenacity/ init.py, 如下註解 elif tornado... ( 無效再改回 ) 1 [root@ip110 ~]# vim /usr/lib/python2.7/site-packages/tenacity/ init.py [root@ip110 ~]# grep tornado.gen -A5 -B3 /usr/lib/python2.7/sitepackages/tenacity/ init.py 3 def wrap(f): if asyncio and asyncio.iscoroutinefunction(f): 5 r = AsyncRetrying(*dargs, **dkw) #elif tornado and tornado.gen.is_coroutine_function(f): 7 r = TornadoRetrying(*dargs, **dkw) else: 9 r = Retrying(*dargs, **dkw) 11 return r.wraps(f) 6. 再重啟 neutron-linuxbridge-agent 服務, 正常啟動 1 [root@ip110 ~]# systemctl restart neutron-linuxbridge-agent.service [root@ip110 ~]# systemctl status neutron-linuxbridge-agent.service 3 neutron-linuxbridge-agent.service - OpenStack Neutron Linux Bridge Agent Loaded: loaded (/usr/lib/systemd/system/neutron-linuxbridge-agent. service; enabled; vendor preset: disabled) De-Yu Wang CSIE CYUT 57

65 5.9. Q-ROUTER 問題解決 CHAPTER 5. NEUTRON 5 Active: active (running) since Wed :22:00 CST; 16min ago Process: ExecStartPre=/usr/bin/neutron-enable-bridgefirewall.sh (code=exited, status=0/success) 7 Main PID: (neutron-linuxbr) Tasks: 5 9 CGroup: /system.slice/neutron-linuxbridge-agent.service /usr/bin/python2 /usr/bin/neutron-linuxbridge-agent --config-file /usr/share/neutron/neutron-dist.conf sudo neutron-rootwrap-daemon /etc/neutron/rootwrap. conf /usr/bin/python2 /usr/bin/neutron-rootwrap-daemon / etc/neutron/rootwrap.conf 13 Dec 12 17:21:59 ip110 systemd[1]: Starting OpenStack Neutron Linux Bridge Agent Dec 12 17:21:59 ip110 neutron-enable-bridge-firewall.sh[159291]: net. bridge.bridge-nf-call-iptables = 1 Dec 12 17:21:59 ip110 neutron-enable-bridge-firewall.sh[159291]: net. bridge.bridge-nf-call-ip6tables = 1 17 Dec 12 17:22:00 ip110 systemd[1]: Started OpenStack Neutron Linux Bridge Agent. Dec 12 17:22:02 ip110 sudo[159320]: neutron : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/neutron-rootwrap-daemo...p.conf 19 Hint: Some lines were ellipsized, use -l to show in full. 7. ip netns 列出 namspace, 已出現 qrouter 1 [root@ip110 ~]# ip netns list qrouter-6c759b42-53f7-4ca7-bd8f-a861870b0f8c (id: 2) 3 qdhcp-6809ccd aede-0261ec25c159 (id: 1) qdhcp-1ca dec-4e35-be69-725fb35d2b86 (id: 0) De-Yu Wang CSIE CYUT 58
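One caveat with the package downgrade above: the next routine yum upgrade will pull the broken 12.0.5 builds back in. A minimal sketch of pinning the working versions, assuming the standard yum tooling on CentOS 7 (the exclude line is the blunt option; the versionlock plugin is finer-grained):

# Option A: keep yum from touching neutron packages at all
# (remove the line again once a fixed build is published)
echo "exclude=*neutron*" >> /etc/yum.conf
grep ^exclude /etc/yum.conf

# Option B: lock the exact installed versions with the versionlock plugin
yum install -y yum-plugin-versionlock
yum versionlock add openstack-neutron openstack-neutron-ml2 \
    openstack-neutron-linuxbridge python-neutron
yum versionlock list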

66 CHAPTER 6. VIRTUAL NETWORK Chapter 6 Virtual Network 6.1 Proivder Network 1. 要開始 instance 前, 必須先產生 virtual networks, 載入 openstack 管理者 admin 環境變數 [root@ip112 ~]# source admin.token 2. 產生新的網路, share 允許所有 projects 使用, external 設定為對外網路 1 [root@ip112 ~]# openstack network create --share --external \ --provider-physical-network provider \ 3 --provider-network-type flat provider Field Value admin_state_up UP availability_zone_hints 9 availability_zones created_at T07:25:41Z 11 description dns_domain None 13 id 0b0ebcee-057b-4798-bf73-5dd449e7879a ipv4_address_scope None 15 ipv6_address_scope None is_default False 17 is_vlan_transparent None mtu name provider port_security_enabled True 21 project_id e28adc c96b faa08c provider:network_type flat 23 provider:physical_network provider provider:segmentation_id None 25 qos_policy_id None revision_number 5 27 router:external External segments None De-Yu Wang CSIE CYUT 59

67 6.1. PROIVDER NETWORK CHAPTER 6. VIRTUAL NETWORK 29 shared True status ACTIVE 31 subnets tags 33 updated_at T07:25:41Z 查詢 network, 出現 provider ~]# openstack network list ID Name Subnets b0ebcee-057b-4798-bf73-5dd449e7879a provider 產生子網域 /24, 此網段為 provider 橋接對外網卡 em2 的網段 [root@controller ~]# grep provider:em2 -R /etc/neutron/ 2 /etc/neutron/plugins/ml2/linuxbridge_agent.ini: physical_interface_mappings = provider:em2 4 [root@ip112 ~]# openstack subnet create --network provider \ --allocation-pool start= ,end= \ 6 --dns-nameserver gateway \ --subnet-range /24 provider Field Value allocation_pools cidr /24 created_at T07:32:01Z 14 description dns_nameservers enable_dhcp True gateway_ip host_routes id 2a7287d7-4a4f-43cc-beb2-bc9c023af ip_version 4 ipv6_address_mode None 22 ipv6_ra_mode None name provider 24 network_id 0b0ebcee-057b-4798-bf73-5dd449e7879a project_id e28adc c96b faa08c 26 revision_number 0 segment_id None 28 service_types subnetpool_id None 30 tags updated_at T07:32:01Z De-Yu Wang CSIE CYUT 60

68 6.1. PROIVDER NETWORK CHAPTER 6. VIRTUAL NETWORK 5. 查詢網路 provider 出現 subnet ~]# openstack network list ID Name Subnets b0ebcee-057b-4798-bf73-5dd449e7879a provider 2a7287d7-4a4f-43 cc-beb2-bc9c023af 查詢網路 subnet,id 為 2a7287d7-4a4f-43cc-beb2-bc9c023af929, 出現 subnet 網段為 /24 [root@controller ~]# openstack subnet list ID Name Network Subnet a7287d7-4a4f-43cc-beb2-bc9c023af929 provider 0b0ebcee-057b bf73-5dd449e7879a / 查詢 port,subnet 2a7287d7-4a4f-43cc-beb2-bc9c023af929 有一個 port,ip 為 為產生網路時設定的 IP 起點 [root@controller ~]# openstack port list ID Name MAC Address Fixed IP Addresses e5cf9d2-eed5-41e0-978b-f816ba0e85d6 fa:16:3e:d3:7a:11 ip_address= , Status De-Yu Wang CSIE CYUT 61

69 6.2. SELF-SERVICE NETWORK CHAPTER 6. VIRTUAL NETWORK 10 subnet_id= 2a7287d7-4a4f-43cc-beb2-bc9c023af929 ACTIVE ping port, 成功回覆 1 [root@controller ~]# ping -c PING ( ) 56(84) bytes of data bytes from : icmp_seq=1 ttl=64 time=0.055 ms 64 bytes from : icmp_seq=2 ttl=64 time=0.043 ms 5 64 bytes from : icmp_seq=3 ttl=64 time=0.042 ms ping statistics packets transmitted, 3 received, 0% packet loss, time 2000ms 9 rtt min/avg/max/mdev = 0.042/0.046/0.055/0.009 ms 6.2 Self-service Network 1. Self-service network 為私有內網, 以一般用戶產生, 載入一般用戶 demo 環境變變數 1 [root@controller ~]#. demo.token 2. 新增網路 selfservice 1 [root@controller ~]# openstack network create selfservice Field Value admin_state_up UP availability_zone_hints 7 availability_zones created_at T07:45:07Z 9 description dns_domain None 11 id 37d0dee7-d573-4f3b-863d-c92c05b6dbbb ipv4_address_scope None 13 ipv6_address_scope None is_default False 15 is_vlan_transparent None mtu name selfservice port_security_enabled True 19 project_id 92d1ec3e04384ad599c1a8f5aed73663 provider:network_type None 21 provider:physical_network None provider:segmentation_id None 23 qos_policy_id None De-Yu Wang CSIE CYUT 62

70 6.2. SELF-SERVICE NETWORK CHAPTER 6. VIRTUAL NETWORK revision_number 2 25 router:external Internal segments None 27 shared False status ACTIVE 29 subnets tags 31 updated_at T07:45:07Z 查詢網路, 出現 selfservice, 但其還沒有 subnet [root@controller ~]# openstack network list ID Name Subnets b0ebcee-057b-4798-bf73-5dd449e7879a provider 2a7287d7-4a4f -43cc-beb2-bc9c023af d0dee7-d573-4f3b-863d-c92c05b6dbbb selfservice 網路 selfservice 新增 subnet, 網段為 /24,gateway 為 [root@controller ~]# openstack subnet create --network selfservice \ --dns-nameserver gateway \ 3 --subnet-range /24 selfservice Field Value allocation_pools cidr /24 9 created_at T07:48:45Z description 11 dns_nameservers enable_dhcp True 13 gateway_ip host_routes 15 id 36c a28-4cdf-bfee-9c65f ip_version 4 17 ipv6_address_mode None ipv6_ra_mode None 19 name selfservice network_id 37d0dee7-d573-4f3b-863d-c92c05b6dbbb 21 project_id 92d1ec3e04384ad599c1a8f5aed73663 revision_number 0 23 segment_id None De-Yu Wang CSIE CYUT 63

71 6.3. ROUTER CHAPTER 6. VIRTUAL NETWORK service_types 25 subnetpool_id None tags 27 updated_at T07:48:45Z 查詢 subnet,selfservice 出現網段為 /24 的 subnet ~]# openstack subnet list ID Name Network Subnet a7287d7-4a4f-43cc-beb2-bc9c023af929 provider 0b0ebcee-057b bf73-5dd449e7879a / c a28-4cdf-bfee-9c65f selfservice 37d0dee7-d573-4f3b-863d-c92c05b6dbbb / Router 1. Self-service network 以 virtual router 連線到對外的 provider network,provider network 欄位 router:external 必須是 external, 才可以讓 selfservice network 經由 provider network 對外, 這由產生 provider network 時以參數 external 達成 2. 以一般用戶 demo 產生 router, 載入一般用戶 demo 環境變變數 1 [root@controller ~]#. demo.token 3. 新增 router 1 [root@controller ~]# openstack router create router Field Value admin_state_up UP availability_zone_hints 7 availability_zones created_at T08:07:54Z 9 description distributed False 11 external_gateway_info None flavor_id None De-Yu Wang CSIE CYUT 64

72 6.4. 驗證 CHAPTER 6. VIRTUAL NETWORK 13 ha False id 3ce2307f-d558-4aa3-b723-8e71ed name router project_id 92d1ec3e04384ad599c1a8f5aed revision_number 1 routes 19 status ACTIVE tags 21 updated_at T08:07:54Z 查詢 router, 新增的為 3ce2307f-d558-4aa3-b723-8e71ed [root@controller ~]# openstack router list ID Name Status State ce2307f-d558-4aa3-b723-8e71ed router ACTIVE UP 6 53bc7576-bcee-4ad4-ae06-d18d9be372a2 router ACTIVE UP Distributed HA Project False False 92d1ec3e04384ad599c1a8f5aed False False 92d1ec3e04384ad599c1a8f5aed 新增 selfservice 的 subnet 到 router 3ce2307f-d558-4aa3-b723-8e71ed [root@controller ~]# openstack router add subnet \ 3ce2307f-d558-4aa3-b723-8e71ed selfservice 6. 設定 router 3ce2307f-d558-4aa3-b723-8e71ed 對外 gateway 為 provider network [root@controller ~]# openstack router set \ 2 3ce2307f-d558-4aa3-b723-8e71ed external-gateway provider 6.4 驗證 1. 載入管理者 admin 環境變變數 [root@controller ~]#. admin.token De-Yu Wang CSIE CYUT 65

73 6.4. 驗證 CHAPTER 6. VIRTUAL NETWORK 2. 查詢 namespace,qrouter 3ce2307f-d558-4aa3-b723-8e71ed 會有兩個 qdhcp 1 [root@controller ~]#. admin.token [root@controller ~]# ip netns 3 qrouter-3ce2307f-d558-4aa3-b723-8e71ed (id: 3) qdhcp-37d0dee7-d573-4f3b-863d-c92c05b6dbbb (id: 1) 5 qdhcp-0b0ebcee-057b-4798-bf73-5dd449e7879a (id: 0) qrouter-53bc7576-bcee-4ad4-ae06-d18d9be372a2 (id: 2) 3. 查詢 port, 多出 IP 為 的 port, 這為 selfservice 對外的 gateway [root@controller ~]# openstack port list ID Name MAC Address Fixed IP Add e5cf9d2-eed5-41e0-978b-f816ba0e85d6 fa:16:3e:d3:7a:11 ip_address= 6 62d8f5fd-4d fce7172d fa:16:3e:bd:be:2d ip_address= 90ab060a-c f-9fb4-0b17dac6b68a fa:16:3e:95:ab:92 ip_address= 8 bbec6143-2d2a-48ec-afa1-c39733ff6e73 fa:16:3e:7b:6c:92 ip_address= resses Status , subnet_id= 2a7287d7-4a4f-43cc-beb2-bc9c023af929 ACTIVE , subnet_id= 36c a28-4cdf-bfee-9c65f ACTIVE , subnet_id= 2a7287d7-4a4f-43cc-beb2-bc9c023af929 ACTIVE , subnet_id= 36c a28-4cdf-bfee-9c65f ACTIVE ping , 成功回覆 1 [root@controller ~]# ping -c PING ( ) 56(84) bytes of data. De-Yu Wang CSIE CYUT 66

74 6.4. 驗證 CHAPTER 6. VIRTUAL NETWORK 3 64 bytes from : icmp_seq=1 ttl=64 time=0.105 ms 64 bytes from : icmp_seq=2 ttl=64 time=0.047 ms 5 64 bytes from : icmp_seq=3 ttl=64 time=0.041 ms ping statistics packets transmitted, 3 received, 0% packet loss, time 2000ms 9 rtt min/avg/max/mdev = 0.041/0.064/0.105/0.029 ms De-Yu Wang CSIE CYUT 67
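For later rebuilds, the whole virtual-network setup of this chapter can be replayed as one short script. This is only a sketch: it assumes the same object names (provider, selfservice, router) used above, and the address ranges, DNS server and gateways are placeholders to be replaced with site-specific values.

# Provider (external) network and subnet -- run as admin
. admin.token
openstack network create --share --external \
    --provider-physical-network provider \
    --provider-network-type flat provider
openstack subnet create --network provider \
    --allocation-pool start=<POOL_START>,end=<POOL_END> \
    --dns-nameserver <DNS_IP> --gateway <PROVIDER_GW> \
    --subnet-range <PROVIDER_CIDR> provider

# Self-service network, subnet and router -- run as the demo user
. demo.token
openstack network create selfservice
openstack subnet create --network selfservice \
    --dns-nameserver <DNS_IP> --gateway <SELFSERVICE_GW> \
    --subnet-range <SELFSERVICE_CIDR> selfservice
openstack router create router
openstack router add subnet router selfservice
openstack router set --external-gateway provider router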


76 CHAPTER 7. INSTANCE Chapter 7 Instance 7.1 新增 flavor 1. 載入管理者 admin 環境變變數 1 [root@controller ~]#. admin.token 2. 新增名為 m1.nano flavor,64m ram 1 [root@controller ~]# openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano Field Value OS-FLV-DISABLED:disabled False OS-FLV-EXT-DATA:ephemeral 0 7 disk 1 id 0 9 name m1.nano os-flavor-access:is_public True 11 properties ram rxtx_factor 1.0 swap 15 vcpus 新增名為 m1.crt flavor,4096m ram [root@controller ~]# openstack flavor create --id 1 --vcpus 2 --ram disk 16 m1.crt Field Value OS-FLV-DISABLED:disabled False 6 OS-FLV-EXT-DATA:ephemeral 0 De-Yu Wang CSIE CYUT 69

77 7.2. 登入金鑰對 CHAPTER 7. INSTANCE disk 16 8 id 1 name m1.crt 10 os-flavor-access:is_public True properties 12 ram 4096 rxtx_factor swap vcpus 查詢 flavor, 有 m1.nano 及 m1.crt [root@controller ~]# openstack flavor list ID Name RAM Disk Ephemeral VCPUs Is Public m1.nano True 6 1 m1.crt True 登入金鑰對 1. 載入用戶 demo 的環境變數 1 [root@controller ~]#. demo.token 2. 如果.ssh 目錄沒有金鑰對, 就先執行 ssh-keygen 產生 1 [root@controller ~]# ssh-keygen -q -N "" Enter file in which to save the key (/root/.ssh/id_rsa): ^C 3 [root@controller ~]# ll.ssh/ total r root root 788 May 22 17:50 authorized_keys -rw root root 1675 May 22 17:50 id_rsa 7 -rw-r--r--. 1 root root 392 May 22 17:50 id_rsa.pub -rw-r--r--. 1 root root 1139 May 22 19:25 known_hosts 3. 將 public key 加到 openstack, 設定名為 mykey [root@controller ~]# openstack keypair create --public-key ~/.ssh/ id_rsa.pub mykey Field Value De-Yu Wang CSIE CYUT 70

78 7.3. SECURITY GROUP RULES CHAPTER 7. INSTANCE fingerprint 2d:0a:a2:58:e1:e7:f6:bd:55:c8:6f:27:9d:64:be:d9 6 name mykey user_id 0d7bb abb97ec0b40c445b 確認 openstack 有 mykey 的金鑰對 [root@controller ~]# openstack keypair list Name Fingerprint mykey 2d:0a:a2:58:e1:e7:f6:bd:55:c8:6f:27:9d:64:be:d Security Group Rules 1. 載入用戶 demo 的環境變數 [root@controller ~]#. demo.token 2. 預設的 default security group 防火牆不允許遠端存取 instance, 新增 ICMP(ping) 1 [root@controller ~]# openstack security group list ID Name Description Project f c2-43b5-a3b3-89eb153c9249 default Default security group 92d1ec3e04384ad599c1a8f5aed default security group 防火牆新增允許 ICMP(ping) 規則 [root@controller ~]# openstack security group rule create --proto icmp default Field Value created_at T09:44:27Z De-Yu Wang CSIE CYUT 71

79 7.3. SECURITY GROUP RULES CHAPTER 7. INSTANCE 6 description direction ingress 8 ether_type IPv4 id 0e52cde4-637b-4c35-be1e-cc1b4a1c name None port_range_max None 12 port_range_min None project_id 92d1ec3e04384ad599c1a8f5aed protocol icmp remote_group_id None 16 remote_ip_prefix /0 revision_number 0 18 security_group_id 1f c2-43b5-a3b3-89eb153c9249 updated_at T09:44:27Z default security group 防火牆新增允許 SSH 規則 [root@controller ~]# openstack security group rule create --proto tcp --dst-port 22 default Field Value created_at T09:45:17Z 6 description direction ingress 8 ether_type IPv4 id bc34846d c7c-a250-33d37f name None port_range_max port_range_min 22 project_id 92d1ec3e04384ad599c1a8f5aed protocol tcp remote_group_id None 16 remote_ip_prefix /0 revision_number 0 18 security_group_id 1f c2-43b5-a3b3-89eb153c9249 updated_at T09:45:17Z 列出 default security group, 防火牆新有允許 ICMP 及 SSH 規則 [root@controller ~]# openstack security group show default Field Value created_at T11:36:54Z 6 description Default security group id 1f c2-43b5-a3b3-89eb153c9249 De-Yu Wang CSIE CYUT 72

80 7.3. SECURITY GROUP RULES CHAPTER 7. INSTANCE 8 name default project_id 92d1ec3e04384ad599c1a8f5aed revision_number 6 rules created_at= T09:44:27Z, direction= ingress, ethertype= IPv4, 12 created_at= T11:36:54Z, direction= egress, ethertype= IPv4, created_at= T11:36:54Z, direction= ingress, ethertype= IPv4, 14 created_at= T11:36:54Z, direction= egress, ethertype= IPv6, created_at= T09:45:17Z, direction= ingress, ethertype= IPv4, 16 created_at= T11:36:54Z, direction= ingress, ethertype= IPv6, updated_at T09:45:17Z id= 0e52cde4-637b-4c35-be1e-cc1b4a1c0518, protocol= icmp, remote_ip_prefix= /0, up id= 83bd79ae-99a dde-835bbb8233cd, updated_at= T11 :36:54Z 30 id= 987bfd05-3b42-4ae7-a50e-58bb8ec7c6a1, remote_group_id= 1f c2-43b5-a3b3-89eb153c id= b84190e5-a ee-d08eeb797d6a, updated_at= T11 :36:54Z 32 id= bc34846d c7c-a250-33d37f363242, port_range_max= 22, port_range_min= 22, protoco id= fc9d ac-4d18-b4c0-b6dc1f197d08, remote_group_id= 1f c2-43b5-a3b3-89eb153c De-Yu Wang CSIE CYUT 73
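The same command pattern covers any other service the instances should expose; for example (hypothetical, not part of the original setup), allowing HTTP and a custom TCP port range on the default security group would look like this:

. demo.token
# Allow inbound HTTP
openstack security group rule create --proto tcp --dst-port 80 default
# Allow a TCP port range, e.g. 8000-8080
openstack security group rule create --proto tcp --dst-port 8000:8080 default
# Review the resulting rule set
openstack security group show default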

81 7.4. 新增 INSTANCE 前確認 CHAPTER 7. INSTANCE dated_at= T09:44:27Z , updated_at= T11:36:54Z 48 l= tcp, remote_ip_prefix= /0, updated_at= T09 :45:17Z , updated_at= T11:36:54Z 新增 Instance 前確認 1. 載入用戶 demo 的環境變數 ~]#. demo.token 2. 列出 falvor 1 [root@controller ~]# openstack flavor list ID Name RAM Disk Ephemeral VCPUs Is Public m1.nano True 1 m1.crt True 列出 image 1 [root@controller ~]# openstack image list ID Name Status bb44bde-d1e b0fc-d3cdab6929c5 cirros active De-Yu Wang CSIE CYUT 74

82 7.5. 使用 SELFSERVICE NETWORK 新增 INSTANCE CIRROS CHAPTER 7. INSTANCE e4bea3db-581a-472d-a0dd-37b1e7055bbf crtusb active 列出 network 1 [root@controller ~]# openstack network list ID Name Subnets b0ebcee-057b-4798-bf73-5dd449e7879a provider 2a7287d7-4a4f -43cc-beb2-bc9c023af929 37d0dee7-d573-4f3b-863d-c92c05b6dbbb selfservice 36c a28-4cdf-bfee-9c65f 列出 security group 1 [root@controller ~]# openstack security group list ID Name Description Project f c2-43b5-a3b3-89eb153c9249 default Default security group 92d1ec3e04384ad599c1a8f5aed 確認 project 92d1ec3e04384ad599c1a8f5aed73663 是 demo, 不是管理者 admin [root@controller ~]# openstack project list ID Name d1ec3e04384ad599c1a8f5aed73663 demo 使用 Selfservice network 新增 Instance cirros 1. 載入用戶 demo 的環境變數 De-Yu Wang CSIE CYUT 75

83 7.5. 使用 SELFSERVICE NETWORK 新增 INSTANCE CIRROS CHAPTER 7. ~]#. demo.token INSTANCE 2. 以 flavor m1.nano, image cirros, network selfservice, security group default 新增 server 1 [root@controller ~]# openstack server create --flavor m1.nano --image cirros \ --nic net-id=37d0dee7-d573-4f3b-863d-c92c05b6dbbb \ 3 --security-group default \ --key-name mykey cirros Field Value OS-DCF:diskConfig MANUAL 9 OS-EXT-AZ:availability_zone OS-EXT-STS:power_state NOSTATE 11 OS-EXT-STS:task_state scheduling OS-EXT-STS:vm_state building 13 OS-SRV-USG:launched_at None OS-SRV-USG:terminated_at None 15 accessipv4 accessipv6 17 addresses adminpass KN4nA7xQgkYx 19 config_drive created T13:32:56Z 21 flavor m1.nano (0) hostid 23 id bfaefe bbea-e02e1a4a482f image cirros (5bb44bde-d1e b0fcd3cdab6929c5) 25 key_name mykey De-Yu Wang CSIE CYUT 76

84 7.6. 存取 INSTANCE CIRROS CHAPTER 7. INSTANCE name cirros1 27 progress 0 project_id 92d1ec3e04384ad599c1a8f5aed properties security_groups name= 1f c2-43b5-a3b3-89 eb153c status BUILD updated T13:32:56Z 33 user_id 0d7bb abb97ec0b40c445b1 volumes_attached 列出 instance cirros1, 狀態是 ACTIVE 1 [root@controller ~]# openstack server list ID Name Status Networks Image Flavor bfaefe bbea-e02e1a4a482f cirros1 ACTIVE selfservice= cirros m1.nano 存取 Instance cirros 1. 載入用戶 demo 的環境變數 [root@controller ~]#. demo.token 2. 取得 VNC session URL, 可以使用網頁開啟終端機, 但這是在本機 主機上才能開啟 1 [root@controller ~]# openstack console url show cirros De-Yu Wang CSIE CYUT 77
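When only the URL itself is wanted (for pasting into a browser on another machine), the client's output options can strip the table; a small sketch using the cirros1 instance from above. Note that the URL embeds the controller address, so the browser host must be able to reach it.

. demo.token
# Print just the noVNC URL for the instance
openstack console url show cirros1 -f value -c url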

85 7.7. 使用 PROVIDER NETWORK 新增 INSTANCE CRTUSB CHAPTER 7. INSTANCE 3 Field Value type novnc url b-95d7-8e666af 列出 instance crt1-instance 及 cirros1-instance, 但狀態都是 ERROR ID Name Status Networks Image Flavor fec7b0e-dabb-47da-bb41-f8ade99b3a5d cirros1-instance ERROR cirros m1.crt 5 df02f9e2-833e-44d6-b25e d0a crt1-instance ERROR crtusb m1.crt 4. 刪除 instance crt1-instance 及 cirros1-instance [root@controller ~]# openstack server delete 3fec7b0e-dabb-47da-bb41- f8ade99b3a5d 2 [root@controller ~]# openstack server delete df02f9e2-833e-44d6-b25e d0a [root@controller ~]# openstack server list 7.7 使用 provider network 新增 Instance crtusb 1. 載入用戶 demo 的環境變數 1 [root@controller ~]#. demo.token 2. 再列一次必要環境 De-Yu Wang CSIE CYUT 78

86 7.7. 使用 PROVIDER NETWORK 新增 INSTANCE CRTUSB CHAPTER 7. INSTANCE 1 [root@controller ~]# openstack flavor list ID Name RAM Disk Ephemeral VCPUs Is Public m1.nano True 1 m1.crt True [root@controller ~]# openstack image list ID Name Status bb44bde-d1e b0fc-d3cdab6929c5 cirros active 13 e4bea3db-581a-472d-a0dd-37b1e7055bbf crtusb active [root@controller ~]# openstack network list ID Name Subnets b0ebcee-057b-4798-bf73-5dd449e7879a provider 2a7287d7-4a4f -43cc-beb2-bc9c023af929 37d0dee7-d573-4f3b-863d-c92c05b6dbbb selfservice 36c a28-4cdf-bfee-9c65f [root@controller ~]# openstack security group list ID Name Description Project f c2-43b5-a3b3-89eb153c9249 default Default security group 92d1ec3e04384ad599c1a8f5aed 以 flavor m1.crt, image crtusb, network selfservice, security group default 新增名為 crt1 的 server 1 [root@controller ~]# openstack server create --flavor m1.crt --image crtusb \ --nic net-id=0b0ebcee-057b-4798-bf73-5dd449e7879a --security-group default \ 3 --key-name mykey crt Field Value De-Yu Wang CSIE CYUT 79

87 7.7. 使用 PROVIDER NETWORK 新增 INSTANCE CRTUSB CHAPTER 7. INSTANCE 7 OS-DCF:diskConfig MANUAL OS-EXT-AZ:availability_zone 9 OS-EXT-STS:power_state NOSTATE OS-EXT-STS:task_state scheduling 11 OS-EXT-STS:vm_state building OS-SRV-USG:launched_at None 13 OS-SRV-USG:terminated_at None accessipv4 15 accessipv6 addresses 17 adminpass YURzGpQ8sK2r config_drive 19 created T08:51:58Z flavor m1.crt (1) 21 hostid id 60da86d9-bb99-4a20-80cd-6c62581d image crtusb (e4bea3db-581a-472d-a0dd-37 b1e7055bbf) key_name mykey 25 name crt1 progress 0 27 project_id 92d1ec3e04384ad599c1a8f5aed73663 properties 29 security_groups name= 1f c2-43b5-a3b3-89 eb153c9249 status BUILD 31 updated T08:51:58Z user_id 0d7bb abb97ec0b40c445b1 33 volumes_attached De-Yu Wang CSIE CYUT 80
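Instance creation is asynchronous, so the status normally goes through BUILD before settling on ACTIVE (or ERROR). A minimal polling sketch for the crt1 server created above:

# Poll until the instance leaves the BUILD state
while true; do
    status=$(openstack server show crt1 -f value -c status)
    echo "crt1 status: $status"
    [ "$status" != "BUILD" ] && break
    sleep 5
done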

88 7.7. 使用 PROVIDER NETWORK 新增 INSTANCE CRTUSB CHAPTER 7. INSTANCE 列出 server, 目前有兩台 VM, 一台使用 selfservice network 的 cirros1,ip 為 , 另一台使用 provider network 剛新增的 crt1, 自動取得 IP 為 ~]# openstack server list ID Name Status Networks Image Flavor da86d9-bb99-4a20-80cd-6c62581d6276 crt1 ACTIVE provider = crtusb m1.crt 6 bfaefe bbea-e02e1a4a482f cirros1 ACTIVE selfservice= cirros m1.nano 成功 ping [root@controller ~]# ping -c PING ( ) 56(84) bytes of data bytes from : icmp_seq=1 ttl=64 time=1.65 ms 64 bytes from : icmp_seq=2 ttl=64 time=1.64 ms 5 64 bytes from : icmp_seq=3 ttl=64 time=7.86 ms ping statistics packets transmitted, 3 received, 0% packet loss, time 2003ms 9 rtt min/avg/max/mdev = 1.640/3.721/7.868/2.932 ms 6. SSH 成功連線 [root@controller ~]# ssh The authenticity of host ( ) can t be established. 3 RSA key fingerprint is SHA256: bvyrbjlstxva7sleyd3adbhysnziuioyxrybckrckjo. RSA key fingerprint is MD5:9e:5d:fa:d4:3c:ea:66:e4:2c:4e:46:99:ca:14: c7:07. 5 Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added (RSA) to the list of known hosts. 7 root@ s password: [root@deyu ~]# exit 9 logout De-Yu Wang CSIE CYUT 81

89 7.8. 驗證 CHAPTER 7. INSTANCE Connection to closed. 11 ~]# 7.8 驗證 1. 載入管理者環境變變數 1 [root@controller ~]#. admin.token 2. 列出 agent, 確認可以使用 neutron agent 1 [root@controller ~]# openstack network agent list ID Agent Type Host eb768a6-2f98-4c71-9c92-a964677c58d8 DHCP agent controller 532abb67-c6eb e Linux bridge agent controller 7 c4d53419-f a4eb-3dfe45a4f063 Metadata agent controller Availability Zone Alive State Binary nova :-) UP neutron-dhcp-agent 13 None :-) UP neutron-linuxbridge-agent None :-) UP neutron-metadata-agent VNC Client 1. 查詢目前 instance 狀態, 只有 crt2 active 1 [root@controller ~]#. demo.token [root@controller ~]# openstack server list ID Name Status Networks Image Flavor De-Yu Wang CSIE CYUT 82

90 7.10. SNAPSHOT CHAPTER 7. INSTANCE 4c06d1cd-4dbf-43de-a5e3-3c5945f61ff1 fsa1 SHUTOFF provider = fsa m2.crt 7 c92c1859-ec57-46f8-a13f-6f23edbfc761 esm1 SHUTOFF provider = esm m1.esm 84c92c8d-0b a711-ea0a06e2f6c7 crt2 ACTIVE provider = crtusb m2.crt 9 bfaefe bbea-e02e1a4a482f cirros1 SHUTOFF selfservice= cirros m1.nano 取得 crt2 的 novnc 連線網址, 使用瀏覽器輸入此網址即可進入操作 console [root@controller ~]# openstack console url show 84c92c8d-0b a711-ea0a06e2f6c Field Value type novnc url -4ff9-a03c-c356e238dda3 3. 每個 active 的 instance 都可以使用 vncviewer 連線 IP:PORT, 其中 IP 為 openstack host IP 此例為 ,PORT 預設從 5900 開始 1 [root@dyw219 ~]# vncviewer : Snapshot 1. 取得 demo 的令牌 1 [root@controller ~]# source demo.token 2. 查看 instance De-Yu Wang CSIE CYUT 83

91 7.10. SNAPSHOT CHAPTER 7. INSTANCE 1 [root@controller ~]# openstack server list ID Name Status Networks Image Flavor cefeb1d-e5aa bf4-25ab f gitlab ACTIVE provider= kvm7-14g m1.crt e-e2ae-4daa-9c d0b7fd9b kvm7-centos7 ACTIVE provider= kvm7 m1.crt 7 4c06d1cd-4dbf-43de-a5e3-3c5945f61ff1 fsa1 SHUTOFF provider= fsa m2.crt c92c1859-ec57-46f8-a13f-6f23edbfc761 esm1 SHUTOFF provider= esm m1.esm 9 84c92c8d-0b a711-ea0a06e2f6c7 crt2 SHUTOFF provider= crtusb m2.crt bfaefe bbea-e02e1a4a482f cirros1 SHUTOFF selfservice= cirros m1.nano 產生 gitlab 的 snapshot 1 [root@controller ~]# openstack server image create --name gitlab qcow2 gitlab Field Value checksum None container_format None 7 created_at T08:58:12Z disk_format None 9 file /v2/images/2e3d968a-57b a95f-66a09cdc4c02/ file id 2e3d968a-57b a95f-66a09cdc4c02 11 min_disk 16 min_ram 0 13 name gitlab qcow2 owner 92d1ec3e04384ad599c1a8f5aed73663 De-Yu Wang CSIE CYUT 84

92 7.10. SNAPSHOT CHAPTER 7. INSTANCE 15 properties base_image_ref= c6-4b30-91db-0 f7d05f4d695, protected False 17 schema /v2/schemas/image size None 19 status queued tags 21 updated_at T08:58:12Z virtual_size None 23 visibility private 列出 image, 出現新建立的 gitlab qcow2 ~]# openstack image list ID Name Status bb44bde-d1e b0fc-d3cdab6929c5 cirros active 6 ba1684d2-def a2a4-ab7689baf0ab crt active e4bea3db-581a-472d-a0dd-37b1e7055bbf crtusb active 8 be9ab170-d0b4-46ec-8eda-e073b03f00a0 esm active 5c7046c a7e3-50acaf32d41f fsa active 10 2e3d968a-57b a95f-66a09cdc4c02 gitlab qcow2 active b b4732c893bc kvm7 active c6-4b30-91db-0f7d05f4d695 kvm7-14g active 刪除新建立的 gitlab qcow2 De-Yu Wang CSIE CYUT 85

93 7.10. SNAPSHOT CHAPTER 7. INSTANCE 1 [root@controller ~]# openstack image delete 2e3d968a-57b a95f -66a09cdc4c02 [root@controller ~]# openstack image list ID Name Status bb44bde-d1e b0fc-d3cdab6929c5 cirros active 7 ba1684d2-def a2a4-ab7689baf0ab crt active e4bea3db-581a-472d-a0dd-37b1e7055bbf crtusb active 9 be9ab170-d0b4-46ec-8eda-e073b03f00a0 esm active 5c7046c a7e3-50acaf32d41f fsa active b b4732c893bc kvm7 active c6-4b30-91db-0f7d05f4d695 kvm7-14g active 重新產生 gitlab 的 snapshot gitlab [root@controller ~]# openstack server image create --name gitlab gitlab Field Value checksum None container_format None 7 created_at T09:16:30Z disk_format None 9 file /v2/images/0e608ed1-70ac-4f39-a b9eb181fa/ file id dc6f8495-0f f8e6b1bdf77 11 min_disk 16 min_ram 0 13 name gitlab owner 92d1ec3e04384ad599c1a8f5aed properties base_image_ref= c6-4b30-91db-0 f7d05f4d695, protected False 17 schema /v2/schemas/image size None De-Yu Wang CSIE CYUT 86

94 7.10. SNAPSHOT CHAPTER 7. INSTANCE 19 status queued tags 21 updated_at T09:16:30Z virtual_size None 23 visibility private 列出 image, 出現新建立的 gitlab , 狀態 queued ~]# openstack image list ID Name Status bb44bde-d1e b0fc-d3cdab6929c5 cirros active 6 ba1684d2-def a2a4-ab7689baf0ab crt active e4bea3db-581a-472d-a0dd-37b1e7055bbf crtusb active 8 be9ab170-d0b4-46ec-8eda-e073b03f00a0 esm active 5c7046c a7e3-50acaf32d41f fsa active 10 dc6f8495-0f f8e6b1bdf77 gitlab queued b b4732c893bc kvm7 active c6-4b30-91db-0f7d05f4d695 kvm7-14g active 稍後再列出 image, 出現新建立的 gitlab , 狀態 active 1 [root@controller ~]# openstack image list ID Name Status bb44bde-d1e b0fc-d3cdab6929c5 cirros active ba1684d2-def a2a4-ab7689baf0ab crt active 7 e4bea3db-581a-472d-a0dd-37b1e7055bbf crtusb active be9ab170-d0b4-46ec-8eda-e073b03f00a0 esm active 9 5c7046c a7e3-50acaf32d41f fsa active dc6f8495-0f f8e6b1bdf77 gitlab active b b4732c893bc kvm7 active c6-4b30-91db-0f7d05f4d695 kvm7-14g active 將新建立的 gitlab image 存檔 /usr/src/gitlab img, 大小 7.3G De-Yu Wang CSIE CYUT 87

95 7.11. IMAGE 及 INSTANCE 刪除 CHAPTER 7. INSTANCE 1 [root@controller ~]# openstack image save --file /usr/src/gitlab img gitlab [root@controller ~]# ll -h /usr/src/gitlab img 3 -rw-r--r--. 1 root root 7.3G Nov 21 17:32 /usr/src/gitlab img 7.11 Image 及 instance 刪除 1. 取得 demo 的令牌 1 [root@controller ~]# source demo.token 2. 查看 instance 1 [root@controller ~]# openstack server list ID Name Status Networks Image Flavor b50da3e d-b9aa-325c win7-test1 SHUTOFF selfservice= win7 m1.crt bfaefe bbea-e02e1a4a482f cirros1 SHUTOFF selfservice= cirros m1.nano 刪除 instance win7-test1 1 [root@controller ~]# openstack server delete win7-test1 [root@controller ~]# openstack server list ID Name Status Networks Image Flavor bfaefe bbea-e02e1a4a482f cirros1 SHUTOFF selfservice= cirros m1.nano 取得 admin 的令牌 De-Yu Wang CSIE CYUT 88

96 7.11. IMAGE 及 INSTANCE 刪除 CHAPTER 7. INSTANCE 1 [root@controller ~]# source admin.token 5. 列出 images 1 [root@controller ~]# openstack image list ID Name Status bb44bde-d1e b0fc-d3cdab6929c5 cirros active ba1684d2-def a2a4-ab7689baf0ab crt active 7 e4bea3db-581a-472d-a0dd-37b1e7055bbf crtusb active be9ab170-d0b4-46ec-8eda-e073b03f00a0 esm active 9 5c7046c a7e3-50acaf32d41f fsa active dc6f8495-0f f8e6b1bdf77 gitlab active b b4732c893bc kvm7 active c6-4b30-91db-0f7d05f4d695 kvm7-14g active 13 c2e0bf58-c d-bfb2-4d16c7d9567e win7 active 刪除 image win7 [root@controller ~]# openstack image delete win7 7. 列出所有 images,win7 已刪除 1 [root@controller ~]# openstack image list ID Name Status bb44bde-d1e b0fc-d3cdab6929c5 cirros active ba1684d2-def a2a4-ab7689baf0ab crt active 7 e4bea3db-581a-472d-a0dd-37b1e7055bbf crtusb active be9ab170-d0b4-46ec-8eda-e073b03f00a0 esm active 9 5c7046c a7e3-50acaf32d41f fsa active dc6f8495-0f f8e6b1bdf77 gitlab active b b4732c893bc kvm7 active c6-4b30-91db-0f7d05f4d695 kvm7-14g active De-Yu Wang CSIE CYUT 89
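A saved image file such as the one produced with openstack image save in section 7.10 can later be uploaded again (for example after a reinstall or on another controller). A sketch, assuming a qcow2 file and a hypothetical image name gitlab-restored; the file path is a placeholder for whatever image save wrote out:

. admin.token
# Re-upload a previously saved image file into Glance
openstack image create --disk-format qcow2 --container-format bare \
    --file <SAVED_IMAGE_FILE> --private gitlab-restored
openstack image list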


Chapter 8 Dashboard

8.1 安裝與設定

1. 安裝 openstack-dashboard 套件

[root@controller ~]# yum install openstack-dashboard

2. 編輯設定檔 /etc/openstack-dashboard/local_settings

[root@controller ~]# vim /etc/openstack-dashboard/local_settings

3. 設定允許存取 dashboard 的主機 (ALLOWED_HOSTS),dashboard 主機在 controller, 清單中列出允許使用的主機名稱或 IP, 其中 controller 本身的 IP 一定要加入, 否則將無法存取

[root@controller ~]# egrep '^(OPENSTACK_HOST|ALLOWED_HOSTS)' /etc/openstack-dashboard/local_settings
ALLOWED_HOSTS = [.cyut.edu.tw, localhost ]
ALLOWED_HOSTS = [ , , localhost, ]
OPENSTACK_HOST = "controller"

4. 設定 memcached session 儲存服務

[root@controller ~]# egrep -A6 ^SESSION_ENGINE /etc/openstack-dashboard/local_settings
SESSION_ENGINE = django.contrib.sessions.backends.cache
CACHES = {
default : {
BACKEND : django.core.cache.backends.memcached.MemcachedCache,
LOCATION : controller:11211,

99 8.1. 安裝與設定 CHAPTER 8. DASHBOARD 7 }, } 5. 啟動 Identity API version 3, 支援 multidomain, 預設 doamin 為 default, 預設 role 為 user [root@controller ~]# egrep ^OPENSTACK_KEYSTONE_ /etc/openstackdashboard/local_settings 2 OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = Default 4 OPENSTACK_KEYSTONE_URL = " % OPENSTACK_HOST OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user" 6 OPENSTACK_KEYSTONE_BACKEND = { 6. API 版本 [root@controller ~]# egrep -A4 ^OPENSTACK_API_VERSIONS openstack-dashboard/local_settings 2 OPENSTACK_API_VERSIONS = { "identity": 3, 4 "image": 2, "volume": 2, 6 } /etc/ 7. 網路設定 [root@controller ~]# egrep -A8 ^OPENSTACK_NEUTRON_NETWORK openstack-dashboard/local_settings 2 OPENSTACK_NEUTRON_NETWORK = { enable_router : False, 4 enable_quotas : False, enable_distributed_router : False, 6 enable_ha_router : False, enable_lb : False, 8 enable_firewall : False, enable_vpn : False, 10 enable_fip_topology_check : False, /etc/ 8. 設定時區 Asia/Taipei [root@controller ~]# egrep ^TIME_ZONE local_settings 2 TIME_ZONE = "Asia/Taipei" /etc/openstack-dashboard/ De-Yu Wang CSIE CYUT 92

100 8.2. NOVNC 設定 CHAPTER 8. DASHBOARD 9. 重新啟動 httpd 及 memcached 服務 ~]# systemctl restart httpd.service memcached. service 8.2 NOVNC 設定 1. 設定 novncproxy url 1 [root@controller ~]# vim /etc/nova/nova.conf [root@controller ~]# grep ^novncproxy /etc/nova/nova.conf 3 novncproxy_base_url= 2. 重新啟動服務 1 [root@controller ~]# systemctl restart openstack-nova-novncproxy. service \ openstack-nova-compute.service openstack-nova-console.service 8.3 Firewall 1. Dashboard vnc console 的 6080/tcp 必須開啟 [root@controller ~]# firewall-cmd --permanent --add-port=6080/tcp 2 success [root@controller ~]# firewall-cmd --permanent --add-port=5900/tcp 4 success [root@controller ~]# firewall-cmd --reload 6 success 2. Dashboard 開啟 instance console 進行 SSH 連線, 所以必須確認防火牆有開啟 ssh 服務 [root@controller ~]# firewall-cmd --add-service=ssh 2 success [root@controller ~]# firewall-cmd --add-service=ssh --permanent 4 success 3. 查詢目前的 firewall De-Yu Wang CSIE CYUT 93

101 8.4. 測試 CHAPTER 8. DASHBOARD ~]# firewall-cmd --list-all 2 public (active) target: default 4 icmp-block-inversion: no interfaces: em2 6 sources: services: dhcpv6-client https http ssh 8 ports: 6080/tcp 5900/tcp protocols: 10 masquerade: no forward-ports: 12 source-ports: icmp-blocks: 14 rich rules: 8.4 測試 1. 使用瀏覽器連到 出現畫面如下 : 2. 管理者帳號為 admin, 密碼是 keystone 安裝時設定 3. 登入後即可進行配置 De-Yu Wang CSIE CYUT 94
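If the login page does not appear, a quick check from the controller itself helps separate an Apache/Horizon problem from a network or ALLOWED_HOSTS problem; a small sketch:

# A 200 or a redirect to the login page means Horizon itself is serving
curl -I http://controller/dashboard/
# Apache should be listening on port 80
ss -tlnp | grep ':80 '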

102 8.5. 除錯 CHAPTER 8. DASHBOARD 8.5 除錯 1. dashboard 登入畫面沒有 domain 可選, 無法登入, 檢查 memcached 服務啟動失敗 ~]# systemctl status memcached.service 2 memcached.service - memcached daemon Loaded: loaded (/usr/lib/systemd/system/memcached.service; enabled ; vendor preset: disabled) 4 Active: failed (Result: exit-code) since Tue :34:49 CST; 5s ago Process: ExecStart=/usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} De-Yu Wang CSIE CYUT 95

103 8.5. 除錯 CHAPTER 8. DASHBOARD 6 -c ${MAXCONN} $OPTIONS (code=exited, status=71) Main PID: (code=exited, status=71) 8 Sep 18 19:34:49 controller systemd[1]: Started memcached daemon. 10 Sep 18 19:34:49 controller systemd[1]: Starting memcached daemon... Sep 18 19:34:49 controller memcached[177450]: bind(): Cannot assign requested address 12 Sep 18 19:34:49 controller memcached[177450]: failed to listen on TCP port 11211: Cannot assign requested address 14 Sep 18 19:34:49 controller systemd[1]: memcached.service: main process exited, code=exited, status=71/n/a 16 Sep 18 19:34:49 controller systemd[1]: Unit memcached.service entered failed state. Sep 18 19:34:49 controller systemd[1]: memcached.service failed. 2. /usr/bin/memcached 執行命令的參數在 /etc/sysconfig/memcached 1 [root@controller ~]# cat /etc/sysconfig/memcached PORT="11211" 3 USER="memcached" MAXCONN="1024" 5 CACHESIZE="64" OPTIONS="-l ,::1" 3. 問題出在 /etc/sysconfig/memcached 中 OPTIONS 參數不對, 修改如下 : [root@controller ~]# grep OPTION /etc/sysconfig/memcached 2 OPTIONS="-l " 4. 重新啟動 memcached 服務, 狀態正常 [root@controller ~]# systemctl restart memcached.service 2 [root@controller ~]# systemctl status memcached.service memcached.service - memcached daemon 4 Loaded: loaded (/usr/lib/systemd/system/memcached.service; enabled ; vendor preset: disabled) Active: active (running) since Tue :40:44 CST; 4s ago 6 Main PID: (memcached) Tasks: 10 8 CGroup: /system.slice/memcached.service /usr/bin/memcached -p u memcached -m 64 -c l Sep 18 19:40:44 controller systemd[1]: Started memcached daemon. 12 Sep 18 19:40:44 controller systemd[1]: Starting memcached daemon... De-Yu Wang CSIE CYUT 96
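After fixing OPTIONS it is worth confirming that memcached really answers on the address the dashboard was configured with (controller:11211 in local_settings); a small sketch, assuming nc (nmap-ncat) is installed:

# The daemon should be listening on the controller address
ss -tlnp | grep 11211
# A stream of STAT lines means the cache is reachable
printf 'stats\nquit\n' | nc controller 11211 | head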

104 CHAPTER 9. 其他及問題解決 Chapter 9 其他及問題解決 9.1 問題 : 無法刪除 network 1. 使用 openstack network delete network-id 無法刪除 network, 訊息如下 : i[root@controller ~]# openstack network delete 21cfc7b1-5a8e dd-83b872b46cab 2 Failed to delete network with name or ID 21cfc7b1-5a8e dd-83 b872b46cab : Unable to delete Network for openstack.network.v2.network.network( provider:physical_network=none, 4 ipv6_address_scope=none, revision_number=3, port_security_enabled= True, mtu=1450, id=21cfc7b1-5a8e dd-83b872b46cab, router:external=false, availability_zone_hints=[], 6 availability_zones=[u nova ], ipv4_address_scope=none, shared=false, project_id=92d1ec3e04384ad599c1a8f5aed73663, status=active, 8 subnets=[u 538a1fa a5-8be33f9d02e0 ], description=, tags =[], updated_at= t12:28:23z, provider:segmentation_id=56, name= selfservice, 10 admin_state_up=true, created_at= t12:19:45z, provider: network_type=vxlan) 1 of 1 networks failed to delete. 2. 原因為 network 下有 subnet,subnet 下有 port, 其中 route-gateway 是必要的 port, 但這個 port 無法使用 openstack port delete port-id 刪除, 必須以 clean 清除 解決步驟如下 : (a) 查詢 port, 要移除的是 53bc7576-bcee-4ad4-ae06-d18d9be372a2 1 [root@controller ~]# neutron router-list neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead id name tenant_id De-Yu Wang CSIE CYUT 97

105 9.1. 問題 : 無法刪除 NETWORK CHAPTER 9. 其他及問題解決 bc7576-bcee-4ad4-ae06-d18d9be372a2 router 92 d1ec3e04384ad599c1a8f5aed (b) 使用 neutron route-gateway-clear 移除 53bc7576-bcee-4ad4-ae06- d18d9be372a2 1 [root@controller ~]# neutron router-gateway-clear 53bc7576-bcee -4ad4-ae06-d18d9be372a2 neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead. 3 Removed gateway from router 53bc7576-bcee-4ad4-ae06-d18d9be372a2 (c) 查詢 subnet id 是 538a1fa a5-8be33f9d02e0 1 [root@controller ~]# openstack subnet list ID Name adab37-d f-bdfa-66c2a4d13203 selfservice 538a1fa a5-8be33f9d02e0 provider Network Subnet d2fe8f c83f597cda /24 21cfc7b1-5a8e dd-83b872b46cab / (d) 刪除 subnet 538a1fa a5-8be33f9d02e0 1 [root@controller ~]# openstack subnet delete 538a1fa a5-8be33f9d02e0 (e) 查詢要刪除的 network provider 已無 subnet, 其 id 是 21cfc7b1-5a8e dd-83b872b46cab 1 [root@controller ~]# openstack network list ID Name Subnets De-Yu Wang CSIE CYUT 98

106 9.2. 刪除 NETWORK CHAPTER 9. 其他及問題解決 5 21cfc7b1-5a8e dd-83b872b46cab provider 3d2fe8f c83f597cda1 selfservice 37adab37- d f-bdfa-66c2a4d (f) 刪除 network 21cfc7b1-5a8e dd-83b872b46cab 1 [root@controller ~]# openstack network delete 21cfc7b1-5a8e dd-83b872b46cab (g) 再列出 network,provider 已刪除 1 [root@controller ~]# openstack network list ID Name Subnets d2fe8f c83f597cda1 selfservice 37adab37- d f-bdfa-66c2a4d 刪除 network 1. 要刪除網路 selfservice 是用戶 demo 創建, 所以先載入用戶 demo 的環境變數 [root@controller ~]#. demo.token 2. 要刪除網路是 selfservice 1 [root@controller ~]# openstack network list ID Name Subnets d2fe8f c83f597cda1 selfservice 37adab37-d f-bdfa-66c2a4d De-Yu Wang CSIE CYUT 99

3. 列出 port, 有兩個 port 屬於 subnet 37adab37-d f-bdfa-66c2a4d13203, 這是 selfservice 的 subnet

[root@controller ~]# openstack port list
ID                                    MAC Address        Fixed IP Addresses                                      Status
92485f50-e5c0-471a-9f24-ba1355d0f36c  fa:16:3e:de:db:25  ip_address=, subnet_id= 37adab37-d f-bdfa-66c2a4d13203  ACTIVE
b1ba39ec b6-850c-13ab5cfd9c3f         fa:16:3e:ea:66:23  ip_address=, subnet_id= 37adab37-d f-bdfa-66c2a4d13203  ACTIVE

4. 以 port delete 刪除 port, 其中 port b1ba39ec b6-850c-13ab5cfd9c3f 無法刪除, 原因是其為 router 的 gateway

[root@controller ~]# openstack port delete 92485f50-e5c0-471a-9f24-ba1355d0f36c
[root@controller ~]# openstack port delete b1ba39ec b6-850c-13ab5cfd9c3f
Failed to delete port with name or ID b1ba39ec b6-850c-13ab5cfd9c3f :
Unable to delete Port for openstack.network.v2.port.port(status=BUILD,
created_at= T12:29:00Z, description=, allowed_address_pairs=[], tags=[],
network_id=3d2fe8f c83f597cda1, tenant_id=92d1ec3e04384ad599c1a8f5aed73663,
extra_dhcp_opts=[], admin_state_up=True, updated_at= T04:23:50Z, name=,
device_owner=network:router_interface, revision_number=15,
mac_address=fa:16:3e:ea:66:23, port_security_enabled=False,
binding:vnic_type=normal,
fixed_ips=[{u subnet_id : u 37adab37-d f-bdfa-66c2a4d13203 , u ip_address : u }],
id=b1ba39ec b6-850c-13ab5cfd9c3f, security_groups=[],
device_id=53bc7576-bcee-4ad4-ae06-d18d9be372a2)
1 of 1 ports failed to delete.

5. 以 router list 列出 router, 但 router 一樣無法直接刪除

[root@controller ~]# openstack router list
ID                                    Name    Status  State  Distributed  HA     Project
53bc7576-bcee-4ad4-ae06-d18d9be372a2  router  ACTIVE  UP     False        False  92d1ec3e04384ad599c1a8f5aed
[root@controller ~]# openstack router delete 53bc7576-bcee-4ad4-ae06-d18d9be372a2
Failed to delete router with name or ID 53bc7576-bcee-4ad4-ae06-d18d9be372a2 :
Unable to delete Router for openstack.network.v2.router.router(status=ACTIVE,
external_gateway_info=None, availability_zone_hints=[], availability_zones=[u nova ],
description=, tags=[], tenant_id=92d1ec3e04384ad599c1a8f5aed73663,
created_at= T12:28:46Z, admin_state_up=True, updated_at= T01:37:32Z,
flavor_id=None, routes=[], revision=7,
id=53bc7576-bcee-4ad4-ae06-d18d9be372a2, name=router)
1 of 1 routers failed to delete.

6. 先將 router 的 external gateway 取消 (unset)

[root@controller ~]# openstack router unset --external-gateway 53bc7576-bcee-4ad4-ae06-d18d9be372a2

7. 再查一次 port, 除了原本無法刪除的 port b1ba39ec b6-850c-13ab5cfd9c3f, 又自動生成 port e34033f0-dfef-40d5-933d-b7a5584cca94

[root@controller ~]# openstack port list
ID                                    MAC Address        Fixed IP Addresses                                      Status
b1ba39ec b6-850c-13ab5cfd9c3f         fa:16:3e:ea:66:23  ip_address=, subnet_id= 37adab37-d f-bdfa-66c2a4d13203  BUILD
e34033f0-dfef-40d5-933d-b7a5584cca94  fa:16:3e:68:a3:d2  ip_address=, subnet_id= 37adab37-d f-bdfa-66c2a4d13203  BUILD

8. 將 port b1ba39ec b6-850c-13ab5cfd9c3f 從 router 53bc7576-bcee-4ad4-ae06-d18d9be372a2 中移除

[root@controller ~]# openstack router remove port 53bc7576-bcee-4ad4-ae06-d18d9be372a2 b1ba39ec b6-850c-13ab5cfd9c3f

9. 再查一次 port, 只剩剛剛自動生成的 port e34033f0-dfef-40d5-933d-b7a5584cca94

[root@controller ~]# openstack port list
ID                                    MAC Address        Fixed IP Addresses                                      Status
e34033f0-dfef-40d5-933d-b7a5584cca94  fa:16:3e:68:a3:d2  ip_address=, subnet_id= 37adab37-d f-bdfa-66c2a4d13203  BUILD

10. port e34033f0-dfef-40d5-933d-b7a5584cca94 不是 gateway, 直接以 port delete 刪除, 直到沒有列出任何 port

[root@controller ~]# openstack port delete e34033f0-dfef-40d5-933d-b7a5584cca94
[root@controller ~]# openstack port list

11. 查詢 subnet, selfservice 的 subnet id 是 37adab37-d f-bdfa-66c2a4d13203

[root@controller ~]# openstack subnet list
ID                                   Name         Network                      Subnet
37adab37-d f-bdfa-66c2a4d13203       selfservice  3d2fe8f c83f597cda1          /24

12. 刪除 subnet 37adab37-d f-bdfa-66c2a4d13203

[root@controller ~]# openstack subnet delete 37adab37-d f-bdfa-66c2a4d13203

13. 查詢 network, selfservice 的 network id 是 3d2fe8f c83f597cda1, 已無 subnet

[root@controller ~]# openstack network list
ID                           Name         Subnets
3d2fe8f c83f597cda1          selfservice

14. 刪除 network 3d2fe8f c83f597cda1

[root@controller ~]# openstack network delete 3d2fe8f c83f597cda1

15. 再查 network, 已是空的

[root@controller ~]# openstack network list

9.3 刪除 keypair

1. 載入管理者 admin 環境變數

[root@controller ~]# . admin.token

2. openstack 有 mykey 的金鑰對

[root@controller ~]# openstack keypair list
Name   Fingerprint
mykey  2d:0a:a2:58:e1:e7:f6:bd:55:c8:6f:27:9d:64:be:d9

3. 刪除 keypair mykey

[root@controller ~]# openstack keypair delete mykey

4. 沒有任何 keypair

[root@controller ~]# openstack keypair list

9.4 新增 Instance ERROR

1. 以 flavor m1.crt, image crtusb, network provider 新增 server, 因為 security group 有三個 default, 所以使用 id

111 9.4. 新增 INSTANCE ERROR CHAPTER 9. 其他及問題解決 ~]# openstack server create --flavor m1.crt --image crtusb \ 2 --nic net-id=0b0ebcee-057b-4798-bf73-5dd449e7879a \ --security-group 03924e93-f88e-4000-acb4-2c06922dae59 \ 4 --key-name mykey crt1-instance Field Value 8 OS-DCF:diskConfig MANUAL OS-EXT-AZ:availability_zone 10 OS-EXT-SRV-ATTR:host None OS-EXT-SRV-ATTR:hypervisor_hostname None 12 OS-EXT-SRV-ATTR:instance_name OS-EXT-STS:power_state NOSTATE 14 OS-EXT-STS:task_state scheduling OS-EXT-STS:vm_state building 16 OS-SRV-USG:launched_at None OS-SRV-USG:terminated_at None 18 accessipv4 accessipv6 20 addresses adminpass srxshjy3rhlg 22 config_drive created T09:09:58Z 24 flavor m1.crt (0) hostid 26 id df02f9e2-833e-44d6-b25e d0a image crtusb (e4bea3db-581a-472da0dd-37b1e7055bbf) 28 key_name mykey name crt1-instance De-Yu Wang CSIE CYUT 104

112 9.4. 新增 INSTANCE ERROR CHAPTER 9. 其他及問題解決 30 progress 0 project_id e28adc c96b faa08c 32 properties security_groups name= 03924e93-f88e-4000-acb4-2c06922dae59 34 status BUILD updated T09:09:58Z 36 user_id c5f79bab788049e7a59395b6f94a911f volumes_attached 列出 instance crt1-instance, 但狀態是 ERROR [root@controller ~]# openstack server list ID Name Status Networks Image Flavor df02f9e2-833e-44d6-b25e d0a crt1-instance ERROR crtusb m1.crt 以 flavor m1.crt, image crtusb, network provider 新增 server, 因為 security group 有三個 default, 所以使用 id [root@controller ~]# openstack server create --flavor m1.crt --image cirros \ 2 --nic net-id=0b0ebcee-057b-4798-bf73-5dd449e7879a \ --security-group 03924e93-f88e-4000-acb4-2c06922dae59 \ 4 --key-name mykey cirros1-instance Field Value 8 OS-DCF:diskConfig MANUAL De-Yu Wang CSIE CYUT 105

113 9.4. 新增 INSTANCE ERROR CHAPTER 9. 其他及問題解決 OS-EXT-AZ:availability_zone 10 OS-EXT-SRV-ATTR:host None OS-EXT-SRV-ATTR:hypervisor_hostname None 12 OS-EXT-SRV-ATTR:instance_name OS-EXT-STS:power_state NOSTATE 14 OS-EXT-STS:task_state scheduling OS-EXT-STS:vm_state building 16 OS-SRV-USG:launched_at None OS-SRV-USG:terminated_at None 18 accessipv4 accessipv6 20 addresses adminpass L46SXegpeSji 22 config_drive created T09:15:16Z 24 flavor m1.crt (0) hostid 26 id 3fec7b0e-dabb-47da-bb41- f8ade99b3a5d image cirros (5bb44bde-d1e b0fc-d3cdab6929c5) 28 key_name mykey name cirros1-instance 30 progress 0 project_id e28adc c96b faa08c 32 properties security_groups name= 03924e93-f88e-4000-acb4-2c06922dae59 34 status BUILD updated T09:15:16Z 36 user_id De-Yu Wang CSIE CYUT 106

114 9.5. AMQP ERROR CHAPTER 9. 其他及問題解決 c5f79bab788049e7a59395b6f94a911f volumes_attached 列出 instance crt1-instance 及 cirros1-instance, 但狀態都是 ERROR ~]# openstack server list ID Name Status Networks Image Flavor fec7b0e-dabb-47da-bb41-f8ade99b3a5d cirros1-instance ERROR cirros m1.crt 6 df02f9e2-833e-44d6-b25e d0a crt1-instance ERROR crtusb m1.crt 5. 原因 :nova-compute 服務啟動不正常, 先刪除 instance crt1-instance 及 cirros1- instance 解決方式如下節 AMQP ERROR 1 [root@controller ~]# openstack server delete 3fec7b0e-dabb-47da-bb41- f8ade99b3a5d [root@controller ~]# openstack server delete df02f9e2-833e-44d6-b25e d0a 3 [root@controller ~]# openstack server list 9.5 AMQP ERROR 1. AMQP server 無法連線, 錯誤訊息中 user openstack 無法登入 1 [root@controller ~]# grep -i error /var/log/nova/nova-scheduler.log 3 MessageDeliveryFailure: Unable to connect to AMQP server on controller:5672 after None tries: (0, 0): (403) ACCESS_REFUSED - Login was refused using authentication 5 mechanism AMQPLAIN. For details see the broker logfile. 7 [root@controller ~]# tail /var/log/rabbitmq/rabbit@controller.log 9 =INFO REPORT==== 26-May-2018::20:18:00 === De-Yu Wang CSIE CYUT 107

115 9.5. AMQP ERROR CHAPTER 9. 其他及問題解決 Connection < > ( : > :5672) has a client-provided 11 name: nova-compute:99477:d58489ed-f453-4fc6-b3bb-60334e49cb87 13 =ERROR REPORT==== 26-May-2018::20:18:00 === Error on AMQP connection < > ( : > :5672, state: starting): 15 AMQPLAIN login refused: user openstack - invalid credentials 17 =INFO REPORT==== 26-May-2018::20:18:00 === closing AMQP connection < > ( : > : nova-compute:99477:d58489ed-f453-4fc6-b3bb-60334e49cb87) 2. rabbitmqctl 列出 user, 只有 guest, 沒有 openstack 1 [root@controller ~]# rabbitmqctl list_users Listing users... 3 guest [administrator] 3. rabbitmqctl 新增 user openstack 1 [root@controller ~]# rabbitmqctl add_user openstack 123qwe Creating user "openstack"... 3 [root@controller ~]# rabbitmq list_users -bash: rabbitmq: command not found 5 [root@controller ~]# rabbitmqctl list_users Listing users... 7 openstack [] guest [administrator] 4. 重新啟動 neutron-l3-agent, 再查看 log, 訊息是 user openstack 拒存取 [root@controller ~]# systemctl restart neutron-l3-agent.service 2 [root@controller ~]# tail /var/log/rabbitmq/rabbit@controller.log 4 =ERROR REPORT==== 26-May-2018::20:23:06 === Error on AMQP connection < > ( : > : nova-conductor:104133:d627cecc-52f8-47b8-a30c-41d37a474ebd, user: openstack, state: opening): access to vhost / refused for user openstack 8 [root@controller ~]# systemctl restart neutron-l3-agent.service [root@controller ~]# tail /var/log/rabbitmq/rabbit@controller.log 10 =INFO REPORT==== 26-May-2018::20:23:06 === 12 Connection < > ( : > :5672) has a client-provided De-Yu Wang CSIE CYUT 108

116 9.6. DRACUT ERROR CHAPTER 9. 其他及問題解決 name: nova-conductor:104133:d627cecc-52f8-47b8-a30c-41d37a474ebd 14 =ERROR REPORT==== 26-May-2018::20:23:06 === 16 Error on AMQP connection < > ( : > : nova-conductor:104133:d627cecc-52f8-47b8-a30c-41d37a474ebd, user: openstack, state: opening): 18 access to vhost / refused for user openstack 20 =INFO REPORT==== 26-May-2018::20:23:06 === closing AMQP connection < > ( : > : nova-conductor:104133:d627cecc-52f8-47b8-a30c-41d37a474ebd) 5. rabbitmqctl 新增 user openstack 存取權 [root@controller ~]#. admin.token 2 [root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*" Setting permissions for user "openstack" in vhost "/" 重新啟動 neutron-l3-agent, 再查看 log, 已正常連線 1 [root@controller ~]# systemctl restart neutron-l3-agent.service [root@controller ~]# tail /var/log/rabbitmq/rabbit@controller.log 3 accepting AMQP connection < > ( : > :5672) 5 =INFO REPORT==== 26-May-2018::20:25:39 === Connection < > ( : > :5672) has a client-provided 7 name: nova-conductor:106769:310eab3b-768d-43df-981c-6af4d8286e88 9 =INFO REPORT==== 26-May-2018::20:25:40 === accepting AMQP connection < > ( : > :5672) 11 =INFO REPORT==== 26-May-2018::20:25:40 === 13 Connection < > ( : > :5672) has a client-provided name: nova-conductor:106771:046fa1d3-4c17-486c-8aed-5b26d84109ed 9.6 Dracut ERROR 1. CentOS 7 的 Instance 啟動時出現如下訊息, 無法開機 [...] Warning: dracut-initqueue timeout - starting timeout scripts De-Yu Wang CSIE CYUT 109

117 9.6. DRACUT ERROR CHAPTER 9. 其他及問題解決 2. 原因是 Instance 使用的原始 Image 是 Xen (libvirt) Host, 要使用 KVM 開機時,initramfs 檔中沒有 virt 相關的驅動程式, 所以無法開機 解決方式 : 回到原始來源 Image 的系統, 將 virt 相關的驅動程式加入 initramfs 3. 找到目前版本核心中的 virt 相關的模組 1 [root@esm ~]# find /lib/modules/ el7.x86_64/\ kernel/drivers/ -name *virt*.ko 3 /lib/modules/ el7.x86_64/kernel/drivers/block/virtio_blk.ko /lib/modules/ el7.x86_64/kernel/drivers/char/hw_random/ virtio-rng.ko 5 /lib/modules/ el7.x86_64/kernel/drivers/char/virtio_console.ko /lib/modules/ el7.x86_64/kernel/drivers/dma/virt-dma.ko 7 /lib/modules/ el7.x86_64/kernel/drivers/gpu/drm/virtio/ virtio-gpu.ko /lib/modules/ el7.x86_64/kernel/drivers/net/virtio_net.ko 9 /lib/modules/ el7.x86_64/kernel/drivers/scsi/virtio_scsi.ko /lib/modules/ el7.x86_64/kernel/drivers/virtio/virtio.ko 11 /lib/modules/ el7.x86_64/kernel/drivers/virtio/ virtio_balloon.ko /lib/modules/ el7.x86_64/kernel/drivers/virtio/virtio_input.ko 13 /lib/modules/ el7.x86_64/kernel/drivers/virtio/virtio_pci. ko /lib/modules/ el7.x86_64/kernel/drivers/virtio/virtio_ring. ko 4. 加入找到 virt 相關模組重新產生並覆蓋原來的 initramfs [root@esm ~]# dracut --add-drivers \ 2 " find /lib/modules/ el7.x86_64/kernel/drivers/ \ -name *virt*.ko sed s.*/g " --force 5. 檢查新產生的 initramfs 已包含 virt 相關模組 1 [root@esm ~]# zcat /boot/initramfs el7.x86_64.img cpio - t grep virt usr/bin/systemd-detect-virt 3 usr/lib/modules/ el7.x86_64/kernel/drivers/virtio usr/lib/modules/ el7.x86_64/kernel/drivers/virtio/virtio.ko 5 usr/lib/modules/ el7.x86_64/kernel/drivers/virtio/ virtio_ring.ko usr/lib/modules/ el7.x86_64/kernel/drivers/virtio/ virtio_balloon.ko De-Yu Wang CSIE CYUT 110

118 9.7. NESTED VIRTUALIZATION CHAPTER 9. 其他及問題解決 7 usr/lib/modules/ el7.x86_64/kernel/drivers/virtio/ virtio_input.ko usr/lib/modules/ el7.x86_64/kernel/drivers/virtio/ virtio_pci.ko 9 usr/lib/modules/ el7.x86_64/kernel/drivers/block/virtio_blk.ko usr/lib/modules/ el7.x86_64/kernel/drivers/char/hw_random/ virtio-rng.ko 11 usr/lib/modules/ el7.x86_64/kernel/drivers/char/ virtio_console.ko usr/lib/modules/ el7.x86_64/kernel/drivers/dma/virt-dma.ko 13 usr/lib/modules/ el7.x86_64/kernel/drivers/gpu/drm/virtio usr/lib/modules/ el7.x86_64/kernel/drivers/gpu/drm/virtio/ virtio-gpu.ko 15 usr/lib/modules/ el7.x86_64/kernel/drivers/net/virtio_net. ko usr/lib/modules/ el7.x86_64/kernel/drivers/scsi/virtio_scsi.ko blocks 9.7 Nested Virtualization 1. 先確定 Openstack 主機 nested kvm ENABLE 1 [root@controller ~]# cat /sys/module/kvm_intel/parameters/nested Y 2. nova.conf 的 virt_type=kvm 而且 cpu_mode=host-passthrough [root@controller ~]# vim /etc/nova/nova.conf 2 [root@controller ~]# egrep ^(virtcpu)_ /etc/nova/nova.conf virt_type=kvm 4 cpu_mode=host-passthrough 3. 重新啟動 openstack-nova 相關服務 [root@controller ~]# systemctl restart openstack-nova-* 4. instance crt2 先關機 1 [root@controller ~]# openstack server stop crt2 De-Yu Wang CSIE CYUT 111

119 9.7. NESTED VIRTUALIZATION CHAPTER 9. 其他及問題解決 5. instance crt2 安裝時使用的 cpu mode 是 host-modle, 將其修改為 hostpassthrough 1 [root@controller ~]# vim /etc/libvirt/qemu/instance xml [root@controller ~]# grep cpu mode /etc/libvirt/qemu/instance xml 3 <cpu mode= host-passthrough check= none > 6. instance crt2 開機 1 [root@controller ~]# openstack server start crt2 7. 登入 instance crt2, 查看 CPU 訊息,Virtualization: VT-x 表示支援虛擬化技術 1 [root@dywmac ~]# ssh -X xx.xxx [root@deyu ~]# lscpu 3 Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit 5 Byte Order: Little Endian CPU(s): 4 7 On-line CPU(s) list: 0-3 Thread(s) per core: 1 9 Core(s) per socket: 1 Socket(s): 4 11 NUMA node(s): 1 Vendor ID: GenuineIntel 13 CPU family: 6 Model: Stepping: 2 CPU MHz: BogoMIPS: Virtualization: VT-x 19 Hypervisor vendor: KVM Virtualization type: full 21 L1d cache: 32K L1i cache: 32K 23 L2 cache: 4096K L3 cache: 16384K 25 NUMA node0 CPU(s): 0-3 De-Yu Wang CSIE CYUT 112
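The nested flag read in step 1 is a kernel module parameter, so on some hosts it is lost after a reboot unless it is set in the module options. A sketch of making it persistent on an Intel host (an AMD host would use kvm_amd instead of kvm_intel):

# Persist nested virtualization for kvm_intel
cat > /etc/modprobe.d/kvm-nested.conf << 'EOF'
options kvm_intel nested=1
EOF

# Reload the module (only when no VMs are running) and verify
modprobe -r kvm_intel && modprobe kvm_intel
cat /sys/module/kvm_intel/parameters/nested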

120 Part II DYW Linux 6 + Grizzly De-Yu Wang CSIE CYUT 113


122 CHAPTER 10. 前言 Chapter 10 前言 10.1 認識 Openstack 1. OpenStack 由好幾個不同的功能的雲端服務套件所構成的開源雲端軟體 2. 目前共有 7 個功能不同的套件 : (a) 身分識別套件 Keystone: 具有中央目錄, 能查看哪位使用者可存取哪些服務, 並且, 提供了多種驗證方式 (b) 物件儲存套件 Swift: 可擴展的分布式儲存平臺, 以防止單點故障的情況產生, 可存放非結構化的資料 (c) 映象檔管理套件 Glance: 硬碟或伺服器的映象檔尋找 註冊以及服務交付等功能 (d) 區塊儲存套件 Cinder: 整合了運算套件, 可讓 IT 人員查看儲存設備的容量使用狀態, 具有快照功能 (e) 網通套件 Quantum: 可擴展 隨插即用, 透過 API 來管理的網路架構系統, 以確保 IT 人員在部署雲端服務時, 網路服務不會出現瓶頸, 或是成為無法部署的因素之一 (f) 運算套件 Nova: 部署與管理虛擬機器的功能 (g) 提供管理介面的儀表板套件 Horizon: 圖形化的網頁介面, 讓 IT 人員可以綜觀雲端服務目前的規模與狀態, 並能夠統一存取 部署與管理所有雲端服務所使用到的資源 3. 可依照自己的需求安裝套件, 並非每個套件皆要安裝 4. OpenStack 每個版本都有不同的名稱 : (a) 2010 年 10 月,OpenStack 第一版誕生, 名為 Austin, 這個版本僅有運算套件 Nova 與物件儲存套件 Swift (b) 2013 年 4 月釋出安全性支援的穩定版 Grizzly (c) 2013 年 12 月版本 Havana, 為目前最新的穩定版本 (d) 2014 年發展版為 Icehouse 5. 本文件實作以 Grizzly 版本進行雲端架設 De-Yu Wang CSIE CYUT 115

123 10.2. 環境說明 CHAPTER 10. 前言 10.2 環境說明 1. 作業系統 :DYW Linux ( 以 CentOS 為基礎 ) 2. Grizzly 套件已納入 DWY Linux YUM server 設定 YUM 如下 : 1 [root@kvm7 ~]# vim /etc/yum.repos.d/dywang.repo [dywang] 3 name=de-yu WANG baseurl= 5 gpgcheck=0 enabled=1 3. 下載 DYW Linux, 安裝在 64 G 以上的硬碟 4. Openstack 各個服務可獨立安裝在不同的電腦 5. 若作業系統無其他用途, 可不使用虛擬機架設 Openstack 6. 本文件實作環境, 使用 KVM 虛擬機, 且所有服務皆安裝此虛擬機 (a) 虛擬機安裝硬碟 32G (b) 增加 /dev/vdb /dev/vdc 各 1G 的硬碟 (c) 單一網卡, 實際使用時建議使用 2 張網卡 (d) 記憶體 RAM 最少 3G (e) CPU CORE 數最少 2, 這裡用 4 7. 虛擬機安裝指令如下 : virt-install --accelerate --name osusb --ram vcpus 4\ 2 --disk path=/var/lib/libvirt/images/osusb.qcow2,format=qcow2,size=32, bus=virtio \ --extra "ks=ftp:// /pub/centos6/add/os-ks.cfg" \ 4 --location=ftp:// /pub/centos6/ \ --disk path=/var/lib/libvirt/images/vdb.ovl,format=qcow2,size=1,bus= virtio \ 6 --disk path=/var/lib/libvirt/images/vdc.ovl,format=qcow2,size=1,bus= virtio De-Yu Wang CSIE CYUT 116
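Before running virt-install it is worth confirming that the physical host actually exposes hardware virtualization, otherwise the nested OpenStack compute services will fall back to slow qemu emulation; a quick sketch:

# A non-zero count means VT-x/AMD-V is advertised by the CPU
egrep -c '(vmx|svm)' /proc/cpuinfo
# The kvm modules should be loaded on the virtualization host
lsmod | grep kvm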

124 CHAPTER 11. 訊息代理 QPID Chapter 11 訊息代理 QPID 11.1 認識 QPID 1. 訊息安全有兩個方式 : (a) 使用者帳號及密碼認證, 才能使用 Openstack 的服務 (b) 使用 SSL 加密通訊資料, 以增加安全性 2. Openstack 服務使用 Qpid 訊息系統進行安全的資料通訊 11.2 QPID 安裝 1. 安裝套件 [root@kvm4 ~]# yum install -y qpid-cpp-server qpid-cpp-server-ssl cyrus-sasl-md5 nss-tools 2. 產生 Simple Authentication and Security Layer(SASL) 使用者帳號及密碼 1 [root@kvm4 ~]# saslpasswd2 -f /var/lib/qpidd/qpidd.sasldb -u QPID qpidauth Password: 123qwe 3 Again (for verification): 123qwe 3. 確認 SASL 使用者帳號 1 [root@kvm4 ~]# sasldblistusers2 -f /var/lib/qpidd/qpidd.sasldb qpidauth@qpid: userpassword De-Yu Wang CSIE CYUT 117

125 11.2. QPID 安裝 CHAPTER 11. 訊息代理 QPID 4. 設定使用者 qpidauth 認證 ~]# echo acl allow all all > /etc/qpid/ qpidauth.acl 2 [root@kvm4 ~]# echo "QPIDD_OPTIONS= --acl-file /etc/qpid/qpidauth.acl " >> /etc/sysconfig/qpidd [root@kvm4 ~]# chown qpidd /etc/qpid/qpidauth.acl 4 [root@kvm4 ~]# chmod 600 /etc/qpid/qpidauth.acl 5. 不允許匿名者連線 [root@kvm4 ~]# sed -i s/\(^cluster.* \)\(ANONYMOUS\)/\1#\2/ /etc/ qpidd.conf 2 [root@kvm4 ~]# grep ANONYMOUS -A1 /etc/qpidd.conf cluster-mechanism=digest-md5 #ANONYMOUS 4 auth=yes 6. 建立 Qpid 認證目錄 [root@kvm4 ~]# mkdir /etc/pki/tls/qpid 2 [root@kvm4 ~]# chmod 700 /etc/pki/tls/qpid [root@kvm4 ~]# chown qpidd /etc/pki/tls/qpid 7. 產生 Qpid 認證密碼 1 [root@kvm4 ~]# echo 123qwe > /etc/qpid/qpid.pass [root@kvm4 ~]# chmod 600 /etc/qpid/qpid.pass 3 [root@kvm4 ~]# chown qpidd /etc/qpid/qpid.pass 8. 產生 Qpid 認證資料庫, 其中命令選項 : -N 產生新的認證, 後接 -d 指定目錄 -f 指定密碼檔案 -S 建立認證並加入資料庫 1 [root@kvm4 ~]# echo $HOSTNAME kvm4.deyu.wang 3 [root@kvm4 ~]# certutil -N -d /etc/pki/tls/qpid/ -f /etc/qpid/qpid. pass [root@kvm4 ~]# certutil -S -d /etc/pki/tls/qpid/ -n $HOSTNAME \ 5 -s "CN=$HOSTNAME" -t "CT,," -x -f /etc/qpid/qpid.pass -z /usr/bin/ certutil De-Yu Wang CSIE CYUT 118

126 11.2. QPID 安裝 CHAPTER 11. 訊息代理 QPID 7 Generating key. This may take a few moments 確認使用者 qpidd 可使用 Qpid 認證目錄 1 [root@kvm4 ~]# chown -R qpidd /etc/pki/tls/qpid/ 10. 修改 /etc/qpidd.conf 設定檔 1 [root@kvm4 ~]# chown -R qpidd /etc/pki/tls/qpid/ [root@kvm4 ~]# cat >> /etc/qpidd.conf << EOF 3 > ssl-cert-db=/etc/pki/tls/qpid/ > ssl-cert-name=$hostname 5 > ssl-cert-password-file=/etc/qpid/qpid.pass > require-encryption=yes 7 > EOF 11. 啟動並設定開機啟動 qpidd 服務 1 [root@kvm4 ~]# /etc/init.d/qpidd start Starting Qpid AMQP daemon: [ OK ] 3 [root@kvm4 ~]# chkconfig qpidd on De-Yu Wang CSIE CYUT 119
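Once qpidd is up, a quick sanity check that the broker is listening and that the SASL account created earlier is in its database saves debugging later; a small sketch (5672 is the plain AMQP port, and 5671 is the usual SSL port unless ssl-port is set otherwise):

# qpidd should be listening on its AMQP ports
netstat -tlnp | grep qpidd
# The QPID realm user added with saslpasswd2 should be listed
sasldblistusers2 -f /var/lib/qpidd/qpidd.sasldb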


Chapter 12 身份識別 Keystone

12.1 認識 Keystone

1. Keystone 服務提供身份識別、服務規則及服務令牌 (Token)
2. Keystone 是整個 Openstack 的註冊表, 其他服務經由 keystone 來註冊其 Endpoint( 服務存取的 URL)
3. 任何服務之間相互作用, 都需要經由 Keystone 的身份驗證, 以獲得目標服務的 Endpoint
4. Keystone 基本概念
(a) User, 用戶 : 可以經由 keystone 認證, 進而存取各服務的用戶或程序
(b) Tenant, 租戶 : 各個服務中一些可以存取的資源集合 例如, 在 Nova 中一個 tenant 可以是一些電腦, 在 Swift 和 Glance 中一個 tenant 可以是一些儲存裝置, 在 Quantum 中一個 tenant 可以是一些網路資源
(c) Role, 角色 : 用戶可以存取的資源權限 例如 Nova 中的虛擬機 Glance 中的儲存裝置 用戶可以被添加到任意一個全域或租戶內的角色中 在全域的角色中, 用戶的角色權限作用於所有的租戶, 可以對所有的租戶執行角色規定的權限 ; 在租戶的角色中, 用戶僅能在目前租戶內執行角色規定的權限
(d) Service, 服務 : 如 Nova Glance Swift 根據 User,Tenant 和 Role 三個概念, 一個服務可以確認目前用戶是否具有存取其資源的權限
(e) Endpoint, 端點 : 存取一個服務的 URL

12.2 Keystone 安裝

1. 安裝套件 : openstack-selinux 提供 Openstack SELinux 策略

[root@kvm4 ~]# yum install -y openstack-keystone openstack-selinux

2. 安裝 openstack 工具, 並利用其建立 MySQL 資料庫

129 12.2. KEYSTONE 安裝 CHAPTER 12. 身份識別 KEYSTONE 1 [root@kvm4 ~]# yum install -y openstack-utils [root@kvm4 ~]# openstack-db --init --service keystone 3 mysql-server is not installed. Would you like to install it now? (y/ n): y Loaded plugins: fastestmirror 5 Loading mirror speeds from cached hostfile Setting up Install Process 7 Resolving Dependencies... 9 Transaction Summary ==================================================================== 11 Install 4 Package(s) Upgrade 0 Package(s) 13 Total download size: 10 M 15 Installed size: 29 M Is this ok [y/n]: y 17 Downloading Packages: Complete! mysqld is not running. Would you like to start it now? (y/n): y Please report any problems with the /usr/bin/mysqlbug script! 23 [ OK ] 25 Starting mysqld: [ OK ] Since this is a fresh installation of MySQL, please set a password for the root mysql user. 27 Enter new password for root mysql user: 123qwe Enter new password again: 123qwe 29 Verified connectivity to MySQL. Creating keystone database. 31 Initializing the keystone database, please wait... Complete! 3. 設定 keystone 的 PKI(Public Key Infrastructure), 公開金鑰基礎設施 [root@kvm4 ~]# keystone-manage pki_setup --keystone-user keystone -- keystone-group keystone 2 Generating RSA private key, 1024 bit long modulus e is (0x10001) 6 Generating RSA private key, 1024 bit long modulus e is (0x10001) 10 Using configuration from /etc/keystone/ssl/certs/openssl.conf Check that the request matches the signature 12 Signature ok The Subject s Distinguished Name is as follows De-Yu Wang CSIE CYUT 122

130 12.2. KEYSTONE 安裝 CHAPTER 12. 身份識別 KEYSTONE 14 countryname :PRINTABLE: US stateorprovincename :PRINTABLE: Unset 16 localityname :PRINTABLE: Unset organizationname :PRINTABLE: Unset 18 commonname :PRINTABLE: Certificate is to be certified until Jan 24 11:38: GMT (365 days) 20 Write out database with 1 new entries 22 Data Base Updated 4. 設定 /etc/keystone/ssl 的用戶及群組為 keystone [root@kvm4 ~]# chown -R keystone:keystone /etc/keystone/ssl/ 5. 必須指定環境變數 SERVICE_TOKEN 及 SERVICE_ENDPOINT 才能管理 keystone 服務 1 [root@kvm4 ~]# export SERVICE_TOKEN=$(openssl rand -hex 10) [root@kvm4 ~]# export SERVICE_ENDPOINT= 3 [root@kvm4 ~]# echo $SERVICE_TOKEN > /root/ks_admin_token [root@kvm4 ~]# cat /root/ks_admin_token 5 cce1ab806ec c 6. 在 /etc/keystone/keystone.conf 中設定管理的 TOKEN 為剛剛產生的 SERVICE_TOKEN cce1ab806ec c 1 [root@kvm4 ~]# openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $SERVICE_TOKEN [root@kvm4 ~]# grep admin_token /etc/keystone/keystone.conf 3 admin_token = cce1ab806ec c 7. 啟動並設定開機啟動 openstack-keystone 服務 1 [root@kvm4 ~]# /etc/init.d/openstack-keystone start Starting keystone: [ OK ] 3 [root@kvm4 ~]# chkconfig openstack-keystone on De-Yu Wang CSIE CYUT 123

131 12.3. 建立 KEYSTONE 管理者 CHAPTER 12. 身份識別 KEYSTONE 8. 確認 keystone-all 程序執行中並檢查有無錯誤? 1 [root@kvm4 ~]# ps -ef grep keystone-all keystone :49? 00:00:00 /usr/bin/python /usr/bin/ keystone-all \ 3 --config-file /usr/share/keystone/keystone-dist.conf \ --config-file /etc/keystone/keystone.conf 5 root :52 pts/0 00:00:00 grep keystone-all [root@kvm4 ~]# grep ERROR /var/log/keystone/keystone.log 9. 產生 keystone 服務, 並記下其 id 作為建立 endpoint 連結服務使用 [root@kvm4 ~]# keystone service-create --name=keystone \ 2 --type=identity --description="keystone Identity Service" Property Value description Keystone Identity Service id 487d5875de1a fb7b68c718e8 8 name keystone type identity 建立 keystone 服務的 endpoint [root@kvm4 ~]# keystone endpoint-create \ 2 --service-id 487d5875de1a fb7b68c718e8 \ --publicurl \ 4 --adminurl \ --internalurl Property Value adminurl 10 id 4b065578e3d74a3c9c9268b01f5a244a internalurl 12 publicurl region regionone 14 service_id 487d5875de1a fb7b68c718e 建立 Keystone 管理者 1. 產生管理用戶 admin 及設定密碼 De-Yu Wang CSIE CYUT 124
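As an optional check, the newly registered service and endpoint can be listed back while the SERVICE_TOKEN and SERVICE_ENDPOINT variables exported earlier are still in effect; the curl line assumes Keystone's default public API port 5000 on the local host and should return version metadata:

keystone service-list
keystone endpoint-list
# the public API answers at its root path with a list of supported versions
curl -s http://127.0.0.1:5000/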

132 12.3. 建立 KEYSTONE 管理者 CHAPTER 12. 身份識別 KEYSTONE 1 [root@kvm4 ~]# keystone user-create --name admin --pass 123qwe Property Value enabled True 7 id 14613b36f501472abd93caa822477e90 name admin 9 tenantid 產生管理者角色 admin [root@kvm4 ~]# keystone role-create --name admin Property Value id 91f1e2429c614bac8efe19cef39e8e7d 6 name admin 產生管理者租戶 admin 1 [root@kvm4 ~]# keystone tenant-create --name admin Property Value description enabled True 7 id a57da7d8a5f944c9b6258d0d91b8dee2 name admin 增加用戶 admin 為角色 admin, 租戶為 admin 1 [root@kvm4 ~]# keystone user-role-add --user admin --role admin -- tenant admin 5. 要存取 openstack 的各項服務, 必須先確認身份, 為變換身份方便起見, 每一身份皆必須建立自己的身份變數腳本 以下為管理者的身份變數腳本 : De-Yu Wang CSIE CYUT 125

133 12.4. 建立一般使用者 MYUSER CHAPTER 12. 身份識別 KEYSTONE 1 [root@kvm4 ~]# cat > /root/keystonerc_admin << EOF > if [ "\$1" == "" -o "\$1" == export ];then 3 > export OS_USERNAME=admin > export OS_TENANT_NAME=admin 5 > export OS_PASSWORD=123qwe > export OS_AUTH_URL= 7 > export PS1= [\u@\h \W(keystone_admin)]\\$ > else 9 > unset OS_USERNAME > unset OS_TENANT_NAME 11 > unset OS_PASSWORD > unset OS_AUTH_URL 13 > export PS1= [\u@\h \W]\\$ > fi 15 > EOF 6. 清除 SERVICE_TOKEN 及 SERVICE_ENDPOINT 變數後, 沒有 token, 無法以 keystone 執行任何命令 1 [root@kvm4 ~]# unset SERVICE_TOKEN [root@kvm4 ~]# unset SERVICE_ENDPOINT 3 [root@kvm4 ~]# keystone user-list Expecting authentication method via 5 either a service token, --os-token or env[os_service_token], or credentials, --os-username or env[os_username]. 7. 執行管理者身份變數腳本, 以管理者身份列出目前使用者 [root@kvm4 ~]# source /root/keystonerc_admin 2 [root@kvm4 ~(keystone_admin)]# keystone user-list id name enabled b36f501472abd93caa822477e90 admin True 建立一般使用者 myuser 1. 載入 keystone 管理者 admin 環境變數 1 [root@kvm4 ~]# source keystonerc_admin [root@kvm4 ~(keystone_admin)]# De-Yu Wang CSIE CYUT 126

134 12.4. 建立一般使用者 MYUSER CHAPTER 12. 身份識別 KEYSTONE 2. 產生一般使用者 myuser 及設定密碼 ~(keystone_admin)]# keystone user-create --name myuser -- pass 123qwe Property Value enabled True id 9904d73f9a1841fa89381fccd2bc59df 8 name myuser tenantid 產生 member 角色 [root@kvm4 ~(keystone_admin)]# keystone role-create --name member Property Value id 1b5a478342bc4f67a83484ce60e0f322 6 name member 產生租戶 mytenant 1 [root@kvm4 ~(keystone_admin)]# keystone tenant-create --name mytenant Property Value description enabled True 7 id 6c26d132e38f4b83a1d9f5a777beeb52 name mytenant 增加用戶 myuser 在租戶 mytenant 的角色為 member 1 [root@kvm4 ~(keystone_admin)]# keystone user-role-add --user myuser --role member --tenant mytenant De-Yu Wang CSIE CYUT 127
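Optionally, the assignment can be verified before writing the credential script in the next step; depending on the keystoneclient version, --user/--tenant may need to be given as --user-id/--tenant-id with the IDs shown in the earlier output:

# the member role should now be listed for myuser inside mytenant
keystone user-role-list --user myuser --tenant mytenant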

135 12.4. 建立一般使用者 MYUSER CHAPTER 12. 身份識別 KEYSTONE 6. 建立 myuser 環境變數匯入腳本 1 [root@kvm4 ~(keystone_admin)]# cat > /root/keystonerc_myuser << EOF > if [ "\$1" == "" -o "\$1" == export ];then 3 > export OS_USERNAME=myuser > export OS_TENANT_NAME=mytenant 5 > export OS_PASSWORD=123qwe > export OS_AUTH_URL= 7 > export PS1= [\u@\h \W(keystone_myuser)]\\$ > else 9 > unset OS_USERNAME > unset OS_TENANT_NAME 11 > unset OS_PASSWORD > unset OS_AUTH_URL 13 > export PS1= [\u@\h \W]\\$ > fi 15 > EOF De-Yu Wang CSIE CYUT 128
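A minimal way to confirm that the new credentials authenticate, assuming the scripts were written exactly as above: source the myuser script and request a token. An ordinary member user can obtain a token even though admin-only commands such as keystone user-list will be rejected.

source /root/keystonerc_myuser
# a valid token table proves the user/tenant/password combination works
keystone token-get
# switch back to the admin identity afterwards
source /root/keystonerc_admin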

136 CHAPTER 13. 物件儲存 SWIFT Chapter 13 物件儲存 Swift 13.1 認識 Swift 1. Swift 服務提供用戶諸存物件於虛擬容器中 2. Swift 支援水平擴展 資料除錯, 適用多租用戶配置 3. Swift 基本概念 (a) Storage replicas, 儲存副本 : 為保護儲存的物件, 每個物件最好設定複製三個以上副本 (b) Storage zones, 儲存區 : 為確保每個物件副件是分開儲存, 一個 zone 代表獨立的硬碟 陣列 主機或整個資料中心 (c) Storage regions, 儲存區域 : 一組儲存區 4. Swift 組件 (a) openstack-swift-proxy, 代理服務器 : 負責將 Swift 架構其餘的部分整合起來 查閱每一個請求位於 ring 上的 account, container, 或者 object, 然後對應這個請求的路由 (b) openstack-swift-object, 物件服務器 : 用來儲存 檢索和刪除儲存在本地設備上的物件 物件以二進制文件的形式儲存在系統上, 數據儲存在文件的擴展屬性 (xattrs) 中, 物件服務器的文件系統支持 xattrs, 一些文件系統, 例如 ext3 預設 xattrs 是關閉的 每個物件使用物件名稱的 hash 值及操作時間組成的路徑來儲存 (c) openstack-swift-container, 容器服務器 : 物件儲存位置列表 這個列表以 sqlite 資料庫形式儲存 (d) openstack-swift-account, 帳號服務器 : 與容器服務類似, 只是負責列表為容器, 而不是物件 5. Ring files: 包含所有儲存的細節 ring files 產生工具 swift-ring-builder,swift 需要 account, container 和 object 三個 ring files 6. Ring files 產生必須有三個參數 : (a) partition power: 計算 partition 的數量用,partitions 的數量 = 2 ^ partition power, 實作時 partition 的數量設置成磁碟數 De-Yu Wang CSIE CYUT 129

137 13.2. SWIFT 安裝 CHAPTER 13. 物件儲存 SWIFT 的 100 倍, 會有比較好的效率 例如 : 預計整個系統不會使用超過 5 個磁碟,partition 數為 500, 則設定 partition power 為 9 ( 2^9=512) (b) replica count: 資料在叢集複製的數量, 主要是為了除錯 副本數愈大, 用來儲存資料的 partitions 就愈少, 同一個 partition, 不同的副本, 存在不同的 zone 預設副本數為 3 (c) min_part_hours: 最小移動間隔 partition 會因某些原因移動數據 為了避免網路擁塞,partition 不會頻繁的移動 預設最小移動間隔為 1 小時 7. Object storage, 物件儲存 : 檔案系統格式必須是 ext4, 或是 XFS 掛載點在 /srv/node 13.2 Swift 安裝 1. 安裝套件 1 [root@kvm4 ~]# yum install -y openstack-swift openstack-swift-proxy \ openstack-swift-object openstack-swift-container \ 3 openstack-swift-account memcached 2. 載入管理者環境變數 1 [root@kvm4 ~]# source keystonerc_admin [root@kvm4 ~(keystone_admin)]# 3. 產生用戶 swift [root@kvm4 ~(keystone_admin)]# keystone user-create --name swift -- pass 123qwe Property Value enabled True id a863ddc fa48e2a35981c7e1f 8 name swift tenantid 確認管理者角色存在? De-Yu Wang CSIE CYUT 130

138 13.2. SWIFT 安裝 CHAPTER 13. 物件儲存 SWIFT ~(keystone_admin)]# keystone role-list grep admin 2 91f1e2429c614bac8efe19cef39e8e7d admin 5. 如果不存在, 則必須產生管理者角色 [root@kvm4 ~]# keystone role-create --name admin 6. 確認租戶 services 存在? 1 [root@kvm4 ~(keystone_admin)]# keystone tenant-list grep services [root@kvm4 ~(keystone_admin)]# 7. 如果不存在, 必須產生租戶 services [root@kvm4 ~(keystone_admin)]# keystone tenant-create --name services Property Value description 6 enabled True id da7fe21aa92743f9baa51fd4368e name services 以管理者 admin 角色新增用戶 swift 到 services 租戶 1 [root@kvm4 ~(keystone_admin)]# keystone user-role-add \ --role admin --tenant services --user swift 9. 確認 object store 服務是否存在? [root@kvm4 ~(keystone_admin)]# keystone service-list id name type description De-Yu Wang CSIE CYUT 131

139 13.3. 建立 SWIFT STORAGE NODE CHAPTER 13. 物件儲存 SWIFT d5875de1a fb7b68c718e8 keystone identity Keystone Identity Service 如果 object store 服務不存在, 則必須產生並記下其 id, 做為建立 swift endpoint 用 [root@kvm4 ~(keystone_admin)]# keystone service-create \ 2 --name swift --type object-store --description "Swift Storage Service " Property Value description Swift Storage Service id 8133b2b0b179450c9e177f617a228f7a 8 name swift type object-store 建立 swift 服務的 endpoint [root@kvm4 ~(keystone_admin)]# keystone endpoint-create \ 2 --service-id 8133b2b0b179450c9e177f617a228f7a \ --publicurl " \ 4 --adminurl " \ --internalurl " Property Value adminurl 10 id 7fd689cdf42d4dbda513f1fd3ca25244 internalurl 12 publicurl region regionone 14 service_id 8133b2b0b179450c9e177f617a228f7a 建立 Swift Storage Node 1. 查看 /dev/vdb 及 /dev/vdc 各有 104 M, 並已切割好 /dev/vdb1 及 /dev/vdc1 如果沒切割好分的分割區, 就先執行 fdisk -uc 進入分割 De-Yu Wang CSIE CYUT 132

140 13.3. 建立 SWIFT STORAGE NODE CHAPTER 13. 物件儲存 SWIFT 1 [root@kvm4 ~]# fdisk -luc /dev/vdb /dev/vdc 3 Disk /dev/vdb: 104 MB, bytes 15 heads, 14 sectors/track, 975 cylinders, total sectors 5 Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes 7 I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x9d217a56 9 Device Boot Start End Blocks Id System 11 /dev/vdb Linux 13 Disk /dev/vdc: 104 MB, bytes 15 heads, 14 sectors/track, 975 cylinders, total sectors 15 Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes 17 I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x1eb6d3a5 19 Device Boot Start End Blocks Id System 21 /dev/vdc Linux 2. 將 /dev/vdb1 格式成 ext4 1 [root@kvm4 ~]# mkfs.ext4 /dev/vdb1 mke2fs (17-May-2010) 3 Filesystem label= OS type: Linux 5 Block size=1024 (log=0) Fragment size=1024 (log=0) 7 Stride=0 blocks, Stripe width=0 blocks inodes, blocks blocks (5.00%) reserved for the super user First data block=1 11 Maximum filesystem blocks= block groups blocks per group, 8192 fragments per group 1920 inodes per group 15 Superblock backups stored on blocks: 8193, 24577, 40961, 57345, Writing inode tables: done 19 Creating journal (4096 blocks): done Writing superblocks and filesystem accounting information: done 21 This filesystem will be automatically checked every 33 mounts or days, whichever comes first. Use tune2fs -c or -i to override. 3. 將 /dev/vdc1 格式成 ext4 De-Yu Wang CSIE CYUT 133

141 13.3. 建立 SWIFT STORAGE NODE CHAPTER 13. 物件儲存 SWIFT 1 [root@kvm4 ~]# mkfs.ext4 /dev/vdc1 mke2fs (17-May-2010) 3 Filesystem label= OS type: Linux 5 Block size=1024 (log=0) Fragment size=1024 (log=0) 7 Stride=0 blocks, Stripe width=0 blocks inodes, blocks blocks (5.00%) reserved for the super user First data block=1 11 Maximum filesystem blocks= block groups blocks per group, 8192 fragments per group 1920 inodes per group 15 Superblock backups stored on blocks: 8193, 24577, 40961, 57345, Writing inode tables: done 19 Creating journal (4096 blocks): done Writing superblocks and filesystem accounting information: done 21 This filesystem will be automatically checked every 25 mounts or days, whichever comes first. Use tune2fs -c or -i to override. 4. 建立掛載點, 並設定開機自動掛載, 注意掛載參數必須包含 acl 及 user_xattr 1 [root@kvm4 ~]# mkdir -p /srv/node/z{1,2}d1 [root@kvm4 ~]# cp /etc/fstab /etc/fstab.orig 3 [root@kvm4 ~]# echo "/dev/vdb1 /srv/node/z1d1 ext4 acl,user_xattr 0 0" >> /etc/fstab [root@kvm4 ~]# echo "/dev/vdc1 /srv/node/z2d1 ext4 acl,user_xattr 0 0" >> /etc/fstab 5. 重新掛載 /etc/fstab 中的掛載點, 沒有訊息表示一切正常 [root@kvm4 ~]# mount -a 2 [root@kvm4 ~]# 6. 改變目錄 /srv/node 的擁有者及群組皆為 swift [root@kvm4 ~]# chown -R swift:swift /srv/node/ 2 [root@kvm4 ~]# ll -d /srv/node/ De-Yu Wang CSIE CYUT 134

142 13.3. 建立 SWIFT STORAGE NODE CHAPTER 13. 物件儲存 SWIFT drwxr-xr-x. 4 swift swift 4096 Jan 24 22:51 /srv/node/ 7. 重置目錄 /srv 的 SELinux context 1 [root@kvm4 ~]# restorecon -Rv /srv/ restorecon reset /srv/node context unconfined_u:object_r:var_t:s0-> system_u:object_r:swift_data_t:s0 3 restorecon reset /srv/node/z1d1 context system_u:object_r:file_t:s0-> system_u:object_r:swift_data_t:s0 restorecon reset /srv/node/z1d1/lost+found context system_u:object_r: file_t:s0->system_u:object_r:swift_data_t:s0 5 restorecon reset /srv/node/z2d1 context system_u:object_r:file_t:s0-> system_u:object_r:swift_data_t:s0 restorecon reset /srv/node/z2d1/lost+found context system_u:object_r: file_t:s0->system_u:object_r:swift_data_t:s0 8. 先備份 swift 主設定檔及服務 account, container 及 object 設定檔 [root@kvm4 ~]# cp /etc/swift/swift.conf /etc/swift/swift.conf.orig 2 [root@kvm4 ~]# cp /etc/swift/account-server.conf /etc/swift/accountserver.conf.orig [root@kvm4 ~]# cp /etc/swift/container-server.conf /etc/swift/ container-server.conf.orig 4 [root@kvm4 ~]# cp /etc/swift/object-server.conf /etc/swift/objectserver.conf.orig 9. 設定 swift 主設定檔 /etc/swift/swift.conf, 檔案存放位置的字首 (prefix) 及字尾 (suffix) 的 hash [root@kvm4 ~]# openstack-config --set /etc/swift/swift.conf swifthash swift_hash_path_suffix $(openssl rand -hex 10) 2 [root@kvm4 ~]# openstack-config --set /etc/swift/swift.conf swifthash swift_hash_path_prefix $(openssl rand -hex 10) 10. 設定 account, container 及 object 連結 ip [root@kvm4 ~]# openstack-config --set /etc/swift/account-server.conf DEFAULT bind_ip [root@kvm4 ~]# openstack-config --set /etc/swift/container-server. conf DEFAULT bind_ip De-Yu Wang CSIE CYUT 135

143 13.4. 設定 SWIFT SERVICE RING CHAPTER 13. 物件儲存 SWIFT ~]# openstack-config --set /etc/swift/object-server.conf DEFAULT bind_ip 啟動 account, container 及 object 服務並設定開機啟動 1 [root@kvm4 ~]# /etc/init.d/openstack-swift-account start Starting openstack-swift-account: [ OK ] 3 [root@kvm4 ~]# chkconfig openstack-swift-account on [root@kvm4 ~]# /etc/init.d/openstack-swift-container start 5 Starting openstack-swift-container: [ OK ] [root@kvm4 ~]# chkconfig openstack-swift-container on 7 [root@kvm4 ~]# /etc/init.d/openstack-swift-object start Starting openstack-swift-object: [ OK ] 9 [root@kvm4 ~]# chkconfig openstack-swift-object on 12. 若變更設定後要重新啟動 account, container 及 object 服務, 可使用 swift-init 一次重新啟動所有服務 1 [root@kvm4 ~(keystone_admin)]$ swift-init all restart 13.4 設定 Swift Service Ring 1. 載入 keystone 管理者 admin 環境變數 1 [root@kvm4 ~]# source keystonerc_admin [root@kvm4 ~(keystone_admin)]# 2. 使用 swift-ring-builder 工具建立每個服務的 ring,partitions=2^9=512, replicas=2, min_part_hours=1 執行工具就會出現參數說明 [root@kvm4 ~(keystone_admin)]# swift-ring-builder 2 swift-ring-builder <builder_file> create <part_power> <replicas> < min_part_hours> [root@kvm4 ~(keystone_admin)]# swift-ring-builder /etc/swift/account. builder create [root@kvm4 ~(keystone_admin)]# swift-ring-builder /etc/swift/ container.builder create De-Yu Wang CSIE CYUT 136

144 13.4. 設定 SWIFT SERVICE RING CHAPTER 13. 物件儲存 SWIFT ~(keystone_admin)]# swift-ring-builder /etc/swift/object. builder create 將裝置加入到 account 服務, 如果做錯了要修改, 可參考下節 Swift Ring 配置錯誤修正 13.5 執行命令就會出現參數說明 1 [root@kvm4 ~(keystone_admin)]# swift-ring-builder swift-ring-builder <builder_file> add 3 [r<region>]z<zone>-<ip>:<port>/<device_name>_<meta> <weight> 5 Adds devices to the ring with the given information. No partitions will be assigned to the new device until after running rebalance. This is so you 7 can make multiple device changes and rebalance them all just once. 9 [root@kvm4 ~(keystone_admin)]# for i in 1 2; do > swift-ring-builder /etc/swift/account.builder add z${i } :6002/z${i}d > done WARNING: No region specified for z :6002/z1d1. Defaulting to region Device r1z :6002/z1d1_"" with weight got id 0 WARNING: No region specified for z :6002/z2d1. Defaulting to region Device r1z :6002/z2d1_"" with weight got id 4. 將裝置加入到 container 服務 1 [root@kvm4 ~(keystone_admin)]# for i in 1 2; do swift-ring-builder / etc/swift/container.builder add z${i} :6001/z${i}d1 100; done WARNING: No region specified for z :6001/z1d1. Defaulting to region 1. 3 Device r1z :6001/z1d1_"" with weight got id 0 WARNING: No region specified for z :6001/z2d1. Defaulting to region 1. 5 Device r1z :6001/z2d1_"" with weight got id 1 5. 將裝置加入到 object 服務 1 [root@kvm4 ~(keystone_admin)]# for i in 1 2; do swift-ring-builder / etc/swift/object.builder add z${i} :6000/z${i}d1 100; done De-Yu Wang CSIE CYUT 137

145 13.5. SWIFT RING 配置錯誤修正 CHAPTER 13. 物件儲存 SWIFT WARNING: No region specified for z :6000/z1d1. Defaulting to region 1. 3 Device r1z :6000/z1d1_"" with weight got id 0 WARNING: No region specified for z :6000/z2d1. Defaulting to region 1. 5 Device r1z :6000/z2d1_"" with weight got id 1 6. rebalance 剛剛產生的 rings 1 [root@kvm4 ~(keystone_admin)]# swift-ring-builder /etc/swift/account. builder rebalance Reassigned 512 (100.00%) partitions. Balance is now [root@kvm4 ~(keystone_admin)]# swift-ring-builder /etc/swift/ container.builder rebalance Reassigned 512 (100.00%) partitions. Balance is now [root@kvm4 ~(keystone_admin)]# swift-ring-builder /etc/swift/object. builder rebalance Reassigned 512 (100.00%) partitions. Balance is now 確認 ring files 是否成功產生? [root@kvm4 ~(keystone_admin)]# ll /etc/swift/*gz 2 -rw-r--r--. 1 root root 445 Jan 25 00:35 /etc/swift/account.ring.gz -rw-r--r--. 1 root root 448 Jan 25 00:35 /etc/swift/container.ring.gz 4 -rw-r--r--. 1 root root 447 Jan 25 00:35 /etc/swift/object.ring.gz 8. 改變目錄 /etc/swift 的群組為 swift [root@kvm4 ~(keystone_admin)]# chown -R root:swift /etc/swift 2 [root@kvm4 ~(keystone_admin)]# ll -d /etc/swift/ drwxr-xr-x. 7 root swift 4096 Jan 25 00:35 /etc/swift/ 13.5 Swift Ring 配置錯誤修正 1. 查看 container.builder, 其中前二項 ip address 錯誤為 [root@kvm4 ~(keystone_admin)]# swift-ring-builder /etc/swift/ container.builder /etc/swift/container.builder, build version 4 De-Yu Wang CSIE CYUT 138

146 13.5. SWIFT RING 配置錯誤修正 CHAPTER 13. 物件儲存 SWIFT partitions, replicas, 1 regions, 2 zones, 4 devices, balance The minimum number of hours before a partition can be reassigned is 1 5 Devices: id region zone ip address port name weight partitions balance meta z1d z2d z1d z2d 刪除錯誤的兩項, 移除時系統提醒要做 rebalance 才生效 1 [root@kvm4 ~(keystone_admin)]# swift-ring-builder /etc/swift/ container.builder remove /z1d1 d0r1z :6001/z1d1_"" marked for removal and will be removed next rebalance. 3 [root@kvm4 ~(keystone_admin)]# swift-ring-builder /etc/swift/ container.builder remove /z2d1 d1r1z :6001/z2d1_"" marked for removal and will be removed next rebalance. 3. 進行 container.builde 的 rebalance [root@kvm4 ~(keystone_admin)]# swift-ring-builder /etc/swift/ container.builder rebalance 2 Reassigned 512 (100.00%) partitions. Balance is now 再次查看 container.builde, 已剩下正確的兩項, 且 partitions 也修正 [root@kvm4 ~(keystone_admin)]# swift-ring-builder /etc/swift/ container.builder 2 /etc/swift/container.builder, build version partitions, replicas, 1 regions, 2 zones, 2 devices, 0.00 balance 4 The minimum number of hours before a partition can be reassigned is 1 Devices: id region zone ip address port name weight partitions balance meta z1d z2d De-Yu Wang CSIE CYUT 139

147 13.6. SWIFT 除錯 CHAPTER 13. 物件儲存 SWIFT 5. 使用以下方式直接刪除 account 的相關檔案, 就可以重新再做 1 [root@kvm4 ~(keystone_admin)]# rm /etc/swift/account.builder [root@kvm4 ~(keystone_admin)]# rm /etc/swift/account.ring.gz 3 [root@kvm4 ~(keystone_admin)]# rm /etc/swift/backups/*account.builder [root@kvm4 ~(keystone_admin)]# rm /etc/swift/backups/*account.ring.gz 13.6 Swift 除錯 1. swift list 出現以下錯誤訊息 : [root@kvm4 ~(keystone_admin)]$ swift list 2 Account GET failed: AUTH_bfa67652fc31431ab574a09f3db9a852?format=json \ 503 Internal Server Error [first 60 chars of response] \ 4 <html><h1>service Unavailable</h1><p>The server is currently 2. 到 /var/log/message 查詢與 swift 要存取的目錄 srv 有關的訊息, 發現拒絕存取 [root@kvm4 ~(keystone_admin)]$ grep srv /var/log/messages 2... [Errno 13] Permission denied: /srv/node/z2d1/objects 3. 查看此目錄的用戶為 root [root@kvm4 ~(keystone_admin)]$ ll -d /srv/node/z1d1/ 2 drwxr-xr-x. 6 root root 1024 Apr 11 11:15 /srv/node/z1d1/ 4. 改變整目錄以下的用戶及群組為 swift [root@kvm4 ~(keystone_admin)]$ chown swift.swift -R /srv/node 5. 再查看此目錄的用戶及群組為 swift De-Yu Wang CSIE CYUT 140

148 13.7. 配置 SWIFT OBJECT STORAGE PROXY 服務 CHAPTER 13. 物件儲存 SWIFT 1 [root@kvm4 ~(keystone_admin)]$ ll -d /srv/node/z1d1/ drwxr-xr-x. 6 swift swift 1024 Apr 11 11:15 /srv/node/z1d1/ 6. swift list 正常輸出, 目前為空的 [root@kvm4 ~(keystone_admin)]$ swift list 2 [root@kvm4 ~(keystone_admin)]$ 13.7 配置 Swift Object Storage Proxy 服務 1. 設定檔更新 [root@kvm4 ~(keystone_admin)]# openstack-config --set /etc/swift/ proxy-server.conf \ 2 filter:authtoken admin_tenant_name services [root@kvm4 ~(keystone_admin)]# openstack-config --set /etc/swift/ proxy-server.conf \ 4 filter:authtoken admin_host [root@kvm4 ~(keystone_admin)]# openstack-config --set /etc/swift/ proxy-server.conf \ 6 filter:authtoken admin_user swift [root@kvm4 ~(keystone_admin)]# openstack-config --set /etc/swift/ proxy-server.conf \ 8 filter:authtoken admin_password 123qwe 2. 啟動服務並設定開機自動啟動 [root@kvm4 ~(keystone_admin)]# /etc/init.d/memcached start 2 Starting memcached: [ OK ] [root@kvm4 ~(keystone_admin)]# chkconfig memcached on 4 [root@kvm4 ~(keystone_admin)]# /etc/init.d/openstack-swift-proxy start Starting openstack-swift-proxy: [ OK ] 6 [root@kvm4 ~(keystone_admin)]# chkconfig openstack-swift-proxy on 13.8 確認 Swift Storage 安裝成功 1. 列出目前 swift 是空的 De-Yu Wang CSIE CYUT 141

149 13.8. 確認 SWIFT STORAGE 安裝成功 CHAPTER 13. 物件儲存 SWIFT ~(keystone_admin)]# swift list 2 [root@kvm4 ~(keystone_admin)]# 2. 產生 3 個 512 個隨機字元的檔案 [root@kvm4 ~(keystone_admin)]# head -c 512 /dev/urandom > data1.file 2 [root@kvm4 ~(keystone_admin)]# head -c 512 /dev/urandom > data2.file [root@kvm4 ~(keystone_admin)]# head -c 512 /dev/urandom > data3.file 3. 上傳檔案至 containers 1 [root@kvm4 ~(keystone_admin)]# swift --help Usage: swift <command> [options] [args] 3 Commands: 5 stat [container] [object] Displays information for the account, container, or object depending on the 7 args given (if any). list [options] [container] 9 Lists the containers for the account or the objects for a container. -p or --prefix is an option that will only list items beginning with that prefix. 11 -d or --delimiter is option (for container listings only) that will roll up items with the given delimiter (see Cloud Files general documentation for 13 what this means). upload [options] container file_or_directory [file_or_directory] [...] 15 [root@kvm4 ~(keystone_admin)]# swift upload c1 data1.file 17 data1.file [root@kvm4 ~(keystone_admin)]# swift upload c1 data2.file 19 data2.file [root@kvm4 ~(keystone_admin)]# swift upload c2 data3.file 21 data3.file 4. 再列出目前 swift 有兩個 containers c1 及 c2 1 [root@kvm4 ~(keystone_admin)]# swift list c1 3 c2 De-Yu Wang CSIE CYUT 142
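Besides list, the stat sub-command shown in the help output above reports object counts and byte totals, for example:

# per-container statistics
swift stat c1
swift stat c2
# account-level totals
swift stat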

150 13.8. 確認 SWIFT STORAGE 安裝成功 CHAPTER 13. 物件儲存 SWIFT 5. 列出 containers 中的內容, 是剛剛上傳的檔案 1 [root@kvm4 ~(keystone_admin)]# swift list c1 data1.file 3 data2.file [root@kvm4 ~(keystone_admin)]# swift list c2 5 data3.file 6. 找出上傳檔案儲存狀況, 發現每個檔案各有一個副本, 且存在不同的 partition, 與先前設定的副本數一致 1 [root@kvm4 ~(keystone_admin)]# find /srv/node/ -type f -name "*data" /srv/node/z1d1/objects/100/f0e/323b355ca91af8f729f7dc88e779cf0e / data 3 /srv/node/z1d1/objects/45/a62/16a1247b40242e3c4e4bb5b39072fa62 / data /srv/node/z1d1/objects/74/485/253ba7238aa b73f00cb30485 / data 5 /srv/node/z2d1/objects/45/a62/16a1247b40242e3c4e4bb5b39072fa62 / data /srv/node/z2d1/objects/74/485/253ba7238aa b73f00cb30485 / data 7 /srv/node/z2d1/objects/100/f0e/323b355ca91af8f729f7dc88e779cf0e / data De-Yu Wang CSIE CYUT 143
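A full round-trip can also be tested by downloading an object and comparing checksums, assuming the original files were created under /root as the prompts suggest:

mkdir -p /tmp/swift-check && cd /tmp/swift-check
swift download c1 data1.file
# the two checksums should be identical
md5sum /tmp/swift-check/data1.file /root/data1.file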


152 CHAPTER 14. IMAGE 服務 GLANCE Chapter 14 Image 服務 Glance 14.1 認識 Glance 1. Glance 服務提供虛擬機映像檔使用 註冊及取得服務 2. 使用 MySQL 儲存 images 的 metadata 資訊 3. Glance 支援的 images 格式有 : (a) raw (b) vhd (c) vdi (d) iso (e) qcow2 (f) aki,ari and ami 4. Glance 支援的 container 格式有 : (a) bare (b) ovf (c) aki, ari and ami 14.2 Glance 安裝 1. 安裝套件 1 [root@kvm4 ~]# yum install -y openstack-glance 2. 使用 glance 原始設定檔 De-Yu Wang CSIE CYUT 145

153 14.2. GLANCE 安裝 CHAPTER 14. IMAGE 服務 GLANCE 1 [root@kvm4 ~]# cp /usr/share/glance/glance-registry-dist.conf /etc/ glance/glance-registry.conf cp: overwrite /etc/glance/glance-registry.conf? y 3. 初使化 glance 資料庫 [root@kvm4 ~]# openstack-db --init --service glance --password 123qwe --rootpw 123qwe 2 Creating glance database. Updating glance database password in /etc/glance/glance-registry. conf /etc/glance/glance-api.conf 4 Initializing the glance database, please wait... Complete! 4. 使用 openstack-config 工具設定 glance-api.conf 1 [root@kvm4 ~]# openstack-config --set /etc/glance/glance-api.conf \ paste_deploy flavor keystone 3 [root@kvm4 ~]# openstack-config --set /etc/glance/glance-api.conf \ keystone_authtoken admin_tenant_name admin 5 [root@kvm4 ~]# openstack-config --set /etc/glance/glance-api.conf \ keystone_authtoken admin_user admin 7 [root@kvm4 ~]# openstack-config --set /etc/glance/glance-api.conf \ keystone_authtoken admin_password 123qwe 5. 使用 openstack-config 工具設定 glance-registry.conf [root@kvm4 ~]# openstack-config --set /etc/glance/glance-registry. conf \ 2 paste_deploy flavor keystone [root@kvm4 ~]# openstack-config --set /etc/glance/glance-registry. conf \ 4 keystone_authtoken admin_tenant_name admin [root@kvm4 ~]# openstack-config --set /etc/glance/glance-registry. conf \ 6 keystone_authtoken admin_user admin [root@kvm4 ~]# openstack-config --set /etc/glance/glance-registry. conf \ 8 keystone_authtoken admin_password 123qwe 6. 啟動並設定開機自動啟動 openstack-glance-registry openstack-glance-api 服務 De-Yu Wang CSIE CYUT 146

154 14.2. GLANCE 安裝 CHAPTER 14. IMAGE 服務 GLANCE ~]# /etc/init.d/openstack-glance-registry start 2 Starting openstack-glance-registry: [ OK ] [root@kvm4 ~]# chkconfig openstack-glance-registry on 4 [root@kvm4 ~]# /etc/init.d/openstack-glance-api start Starting openstack-glance-api: [ OK ] 6 [root@kvm4 ~]# chkconfig openstack-glance-api on 7. 查看 glance 紀錄檔, 錯誤訊息不影響往後實作, 原因待說明 [root@kvm4 ~]# tail /var/log/glance/* 2 ==> /var/log/glance/api.log <== :03: WARNING glance.store.base [-] \ 4 Failed to configure store correctly: Store s3 could not be configured correctly. \ Reason: Could not find s3_store_host in configuration options. Disabling add method :03: ERROR glance.store.swift [-] \ Could not find swift_store_auth_address in configuration options :03: WARNING glance.store.base [-] \ Failed to configure store correctly: Store swift could not be configured correctly. \ 10 Reason: Could not find swift_store_auth_address in configuration options. Disabling add method. 12 ==> /var/log/glance/registry.log <== 8. 導入 admin 環境變數 [root@kvm4 ~]# source keystonerc_admin 2 [root@kvm4 ~(keystone_admin)]$ 9. 產生 glance 用戶, 並新增其在租用戶 services 為 admin 角色 [root@kvm4 ~(keystone_admin)]# keystone user-create --name glance -- pass 123qwe Property Value enabled True id d019862a71594e47805ae5cc3752f346 8 name glance tenantid De-Yu Wang CSIE CYUT 147

155 14.3. 新增作業系統 IMAGE 到 GLANCE CHAPTER 14. IMAGE 服務 GLANCE 10. 設定用戶 glance 在租戶 services 的角色為 admin ~(keystone_admin)]# keystone user-role-add --user glance --role admin --tenant services 11. 產生 glance 服務, 並記下其 id, 以建立 endpoint 1 [root@kvm4 ~(keystone_admin)]# keystone service-create \ --name glance --type image --description "Glance Image Service" Property Value description Glance Image Service 7 id 67b9bbfd101a43f7843d337ba6dbb585 name glance 9 type image 建立 glance 的 endpoint [root@kvm4 ~(keystone_admin)]# keystone endpoint-create \ 2 --service-id 67b9bbfd101a43f7843d337ba6dbb585 \ --publicurl \ 4 --adminurl \ --internalurl Property Value adminurl 10 id ab01fca343c541bcb4ec4e6b0ab3f51c internalurl 12 publicurl region regionone 14 service_id 67b9bbfd101a43f7843d337ba6dbb 新增作業系統 image 到 Glance 1. 導入 admin 環境變數 De-Yu Wang CSIE CYUT 148

156 14.3. 新增作業系統 IMAGE 到 GLANCE CHAPTER 14. IMAGE 服務 GLANCE 1 [root@kvm4 ~]# source keystonerc_admin [root@kvm4 ~(keystone_admin)]# 2. 上傳作業系統 image, 目前 glance 只接受 http, 不支援 ftp [root@kvm4 ~(keystone_admin)]# glance image-create \ 2 --name "minkvm" \ --is-public True \ 4 --disk-format qcow2 \ --container-format bare \ 6 --copy-from Property Value checksum None container_format bare 12 created_at T19:29:18 deleted False 14 deleted_at None disk_format qcow2 16 id 7d4b5a3d-2ce5-4d6d-b35d-c a95 is_public True 18 min_disk 0 min_ram 0 20 name minkvm owner a57da7d8a5f944c9b6258d0d91b8dee2 22 protected False size 0 24 status queued updated_at T19:29: 列出可用的 image, 其狀態為存檔中 [root@kvm4 ~(keystone_admin)]# glance image-list ID Name Disk Format Container Format Size Status e088a9-0d8c-40fc-b5d cc891d2 minkvm qcow2 bare saving De-Yu Wang CSIE CYUT 149

157 14.3. 新增作業系統 IMAGE 到 GLANCE CHAPTER 14. IMAGE 服務 GLANCE 4. 列出可用的 image minkvm 細部資訊 ~(keystone_admin)]# glance image-show minkvm Property Value checksum b01710b20d305795d3a834cd3e01d62b 6 container_format bare created_at T19:31:49 8 deleted False disk_format qcow2 10 id 79e088a9-0d8c-40fc-b5d cc891d2 is_public True 12 min_disk 0 min_ram 0 14 name minkvm owner a57da7d8a5f944c9b6258d0d91b8dee2 16 protected False size status active updated_at T19:32: 經過一段時間, 再列出 image minkvm, 狀態已變為 active [root@kvm4 ~(keystone_admin)]# glance image-list ID Name Disk Format Container Format Size Status e088a9-0d8c-40fc-b5d cc891d2 minkvm qcow2 bare active De-Yu Wang CSIE CYUT 150
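The image above was pulled over HTTP with --copy-from (Glance here accepts http, not ftp). When the image file already exists on the local disk, it can be uploaded directly with --file instead; the name and path in this sketch are placeholders, not values from the installation:

glance image-create \
  --name "minkvm-local" \
  --is-public True \
  --disk-format qcow2 \
  --container-format bare \
  --file /var/tmp/minkvm.qcow2   # placeholder path to a local qcow2 image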

158 CHAPTER 15. 區塊儲存 CINDER Chapter 15 區塊儲存 Cinder 15.1 認識 Cinder 1. Cinder 提供虛擬機資料儲存空間 2. Cinder 的 Block storage 區塊儲存功能有三種服務 : (a) openstack-cinder-api,api 服務 : 提供 http endpoint 並處理區塊儲存的請求 (b) openstack-cinder-scheduler,scheduler 服務 : 從訊息排隊列 (queue) 讀取服務請求, 並處理區塊儲存的的請求 (c) openstack-cinder-volume,volume 服務 : 處理來自 scheduler 服務的請求,volume 服務依據請求進行 volume 的產生 刪除及變更 15.2 Cinder 安裝 1. 安裝套件 [root@kvm4 ~]# yum install -y openstack-cinder 2. 使用 cinder 原始設定檔 1 [root@kvm4 ~]# cp /usr/share/cinder/cinder-dist.conf /etc/cinder/ cinder.conf cp: overwrite /etc/cinder/cinder.conf? y 3. 載入管理者環境變數 [root@kvm4 ~]# source keystonerc_admin 2 [root@kvm4 ~(keystone_admin)]# De-Yu Wang CSIE CYUT 151

159 15.2. CINDER 安裝 CHAPTER 15. 區塊儲存 CINDER 4. 初使化 cinder 資料庫 ~(keystone_admin)]# openstack-db --init \ 2 --service cinder --password 123qwe --rootpw 123qwe Verified connectivity to MySQL. 4 Creating cinder database. Updating cinder database password in /etc/cinder/cinder.conf 6 Initializing the cinder database, please wait... Complete! 5. 產生用戶 cinder 1 [root@kvm4 ~(keystone_admin)]# keystone user-create --name cinder -- pass 123qwe Property Value enabled True 7 id eb9ca7e7be3e42e081c e1ada5 name cinder 9 tenantid 增加用戶 cinder 在租戶 services 的角色為 admin 前, 注意 role admin, tenant services 都必須存在, 否則當然無法成功增加 role, 可先使用 keystone role-list, keystone tenant-list 檢查是否存在, 尤其是 tenant service 是在第 13 章建立物件儲存 Swift 時新增的, 若沒有做 swift, 此時就必須先產生 如果不存在, 必須產生租戶 services [root@kvm4 ~(keystone_admin)]# keystone tenant-create --name services Property Value description 6 enabled True id da7fe21aa92743f9baa51fd4368e name services De-Yu Wang CSIE CYUT 152

160 15.2. CINDER 安裝 CHAPTER 15. 區塊儲存 CINDER 7. 增加用戶 cinder 在租戶 services 的角色為 admin 1 [root@kvm4 ~(keystone_admin)]# keystone user-role-add \ --user cinder --role admin --tenant services 8. 增加 cinder service, 並記其 id, 以建立此服務的 endpoint [root@kvm4 ~(keystone_admin)]# keystone service-create \ 2 --name=cinder --type=volume --description="openstack Block Storage Service" Property Value description Openstack Block Storage Service id 3b211e63665d432ca5c5bf488b0ae6fb 8 name cinder type volume 建立 cinder service 的 endpoint [root@kvm4 ~(keystone_admin)]# keystone endpoint-create \ 2 --service-id 3b211e63665d432ca5c5bf488b0ae6fb \ --publicurl \ 4 --adminurl \ --internalurl Property Value adminurl 10 id b5430f44463b4d99b7760df709c53c59 internalurl 12 publicurl region regionone 14 service_id 3b211e63665d432ca5c5bf488b0ae6fb 使用 openstack-config 工具設定 ciner.conf 1 [root@kvm4 ~(keystone_admin)]# openstack-config --set /etc/cinder/ cinder.conf \ keystone_authtoken admin_tenant_name services 3 [root@kvm4 ~(keystone_admin)]# openstack-config --set /etc/cinder/ cinder.conf \ De-Yu Wang CSIE CYUT 153

161 15.3. CINDER 服務啟動 CHAPTER 15. 區塊儲存 CINDER keystone_authtoken admin_user cinder 5 [root@kvm4 ~(keystone_admin)]# openstack-config --set /etc/cinder/ cinder.conf \ keystone_authtoken admin_password 123qwe 7 [root@kvm4 ~(keystone_admin)]# openstack-config --set /etc/cinder/ cinder.conf \ DEFAULT qpid_username qpidauth 9 [root@kvm4 ~(keystone_admin)]# openstack-config --set /etc/cinder/ cinder.conf \ DEFAULT qpid_password 123qwe 11 [root@kvm4 ~(keystone_admin)]# openstack-config --set /etc/cinder/ cinder.conf \ DEFAULT qpid_protocol ssl 15.3 Cinder 服務啟動 1. 啟動並設定開機自動啟動 openstack-cinder-scheduler, openstack-cinder-api 及 openstack-cinder-volume 服務 [root@kvm4 ~(keystone_admin)]# /etc/init.d/openstack-cinder-scheduler start 2 Starting openstack-cinder-scheduler: [ OK ] [root@kvm4 ~(keystone_admin)]# chkconfig openstack-cinder-scheduler on 4 [root@kvm4 ~(keystone_admin)]# /etc/init.d/openstack-cinder-api start Starting openstack-cinder-api: [ OK ] 6 [root@kvm4 ~(keystone_admin)]# chkconfig openstack-cinder-api on [root@kvm4 ~(keystone_admin)]# /etc/init.d/openstack-cinder-volume start 8 Starting openstack-cinder-volume: [ OK ] [root@kvm4 ~(keystone_admin)]# chkconfig openstack-cinder-volume on 2. 查看 cinder 服務的錯誤訊息 1 [root@kvm4 ~(keystone_admin)]# tail /var/log/cinder/* ==> /var/log/cinder/api.log <== 3 ==> /var/log/cinder/cinder-manage.log <== 5 ==> /var/log/cinder/scheduler.log <== 7 ==> /var/log/cinder/volume.log <== 9 self.driver.check_for_setup_error() File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/lvm.py ", line 76, in check_for_setup_error 11 run_as_root=true) De-Yu Wang CSIE CYUT 154

162 15.3. CINDER 服務啟動 CHAPTER 15. 區塊儲存 CINDER File "/usr/lib/python2.6/site-packages/cinder/utils.py", line 190, in execute 13 cmd=.join(cmd)) ProcessExecutionError: Unexpected error while running command. 15 Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf vgs -- noheadings -o name Exit code: 1 17 Stdout: Stderr: sudo: sorry, you must have a tty to run sudo\n 3. 以上錯誤訊息為用戶 cinder 無法使用 sudo 執行 cinder-rootwrap, 必須 visudo 增加其權限 [root@kvm4 ~(keystone_admin)]# visudo 2 Defaults:cinder!requiretty cinder ALL = (root) NOPASSWD: /usr/bin/cinder-rootwrap /etc/cinder/ rootwrap.conf * 4. 其實 cinder, nova, quantum 等套件安裝時, 已將必須以 sudo 執行的命令權限寫在 /etc/sudoers.d 目錄下, 但因 /etc/sudoers 沒有將此目錄包含進去, 造成執行失敗 執行 visudo 在最後面加入 #includedir /etc/sudoers.d, 注意最前面的符號 # 不是註解, 一定要存在 1 [root@kvm4 ~(keystone_admin)]# ll /etc/sudoers.d/ total r--r root root 111 Jan 25 15:36 cinder 5 [root@kvm4 ~(keystone_admin)]# visudo [root@kvm4 ~(keystone_admin)]# grep includedir /etc/sudoers 7 #includedir /etc/sudoers.d 5. 刪除紀錄檔, 再重新啟動 openstack-cinder-scheduler, openstack-cinder-api 及 openstack-cinder-volume 服務 1 [root@kvm4 ~(keystone_admin)]# rm -f /var/log/cinder/* [root@kvm4 ~(keystone_admin)]# /etc/init.d/openstack-cinder-scheduler restart 3 Stopping openstack-cinder-scheduler: [ OK ] Starting openstack-cinder-scheduler: [ OK ] 5 [root@kvm4 ~(keystone_admin)]# /etc/init.d/openstack-cinder-api restart Stopping openstack-cinder-api: [ OK ] 7 Starting openstack-cinder-api: [ OK ] De-Yu Wang CSIE CYUT 155

163 15.3. CINDER 服務啟動 CHAPTER 15. 區塊儲存 CINDER ~(keystone_admin)]# /etc/init.d/openstack-cinder-volume restart 9 Stopping openstack-cinder-volume: [FAILED] Starting openstack-cinder-volume: [ OK ] 6. 再查看 cinder 服務的紀錄, 已無錯誤訊息 [root@kvm4 ~(keystone_admin)]# tail /var/log/cinder/* 2 ==> /var/log/cinder/api.log <== 4 ==> /var/log/cinder/scheduler.log <== :07:01 CRITICAL [cinder] need more than 0 values to unpack 6 ==> /var/log/cinder/volume.log <== 7. 編輯 /etc/tgt/targets.conf, 設定 ISCSI 包含 cinder volumes 1 [root@kvm4 ~(keystone_admin)]# echo include /etc/cinder/volumes/* >> /etc/tgt/targets.conf 8. 啟動 tgtd 服務, 並設定開機自動啟動 1 [root@kvm4 ~(keystone_admin)]# /etc/init.d/tgtd start Starting SCSI target daemon: [ OK ] 3 [root@kvm4 ~(keystone_admin)]# chkconfig tgtd on 9. 檢查是否有錯? 1 [root@kvm4 ~(keystone_admin)]# tail /var/log/messages Jan 25 06:53:36 kvm4 tgtd: semkey 0x6102fbaf 3 Jan 25 06:53:36 kvm4 tgtd: tgtd daemon started, pid:24650 Jan 25 06:53:36 kvm4 tgtd: tgtd logger started, pid:24653 debug:0 5 Jan 25 06:53:36 kvm4 tgtd: work_timer_start(146) use timer_fd based scheduler Jan 25 06:53:36 kvm4 tgtd: bs_init(313) use signalfd notification 10. 檢查 openstack 服務的狀態 De-Yu Wang CSIE CYUT 156

164 15.4. 建立 CINDER-VOLUMES GROUP CHAPTER 15. 區塊儲存 CINDER ~(keystone_admin)]# openstack-status 2 == Glance services == openstack-glance-api: active 4 openstack-glance-registry: active == Keystone service == 6 openstack-keystone: active == Swift services == 8 openstack-swift-proxy: active openstack-swift-account: active 10 openstack-swift-container: active openstack-swift-object: active 12 == Cinder services == openstack-cinder-api: active 14 openstack-cinder-scheduler: active openstack-cinder-volume: active 16 == Support services == mysqld: active 18 tgtd: active qpidd: active 20 memcached: active 15.4 建立 cinder-volumes group 1. 啟動 openstack-cinder-volume 服務, 應該會自動產生 cinder-volumes 的 vg [root@deyu ~]# source keystonerc_myuser 2 [root@kvm4 ~(keystone_myuser)]# vgscan Reading all physical volumes. This may take a while... 4 Found volume group "cinder-volumes" using metadata type lvm2 Found volume group "vg_os" using metadata type lvm2 2. 如果但查詢紀錄檔出現以下錯誤訊息, 表示 vg cinder-volumes 不存在, 必須自行產生 1 [root@kvm4 ~(keystone_myuser)]# /etc/init.d/openstack-cinder-volume restart [root@deyu ~(keystone_myuser)]# tail /var/log/cinder/* 3 ==> /var/log/cinder/api.log <== 5 ==> /var/log/cinder/scheduler.log <== 7 ==> /var/log/cinder/volume.log <== launcher.run_server(server) 9 File "/usr/lib/python2.6/site-packages/cinder/service.py", line 95, in run_server server.start() De-Yu Wang CSIE CYUT 157

165 15.4. 建立 CINDER-VOLUMES GROUP CHAPTER 15. 區塊儲存 CINDER 11 File "/usr/lib/python2.6/site-packages/cinder/service.py", line 355, in start self.manager.init_host() 13 File "/usr/lib/python2.6/site-packages/cinder/volume/manager.py", line 143, in init_host self.driver.check_for_setup_error() 15 File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/lvm.py ", line 81, in check_for_setup_error raise exception.volumebackendapiexception(data=exception_message) 17 VolumeBackendAPIException: Bad or unexpected response from the storage volume backend API: volume group cinder-volumes doesn t exist 3. 新增分割區,type 設為 LVM 1 [root@deyu ~(keystone_myuser)]# fdisk -uc /dev/vda 3 Command (m for help): p 5 Disk /dev/vda: GB, bytes 255 heads, 63 sectors/track, cylinders, total sectors 7 Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes 9 I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000b Device Boot Start End Blocks Id System 13 /dev/vda1 * Linux /dev/vda e Linux LVM 15 Command (m for help): n 17 Command action e extended 19 p primary partition (1-4) p 21 Partition number (1-4): 3 First sector ( , default ): 23 Using default value Last sector, +sectors or +size{k,m,g} ( , default ): +5G 25 Command (m for help): t 27 Partition number (1-4): 3 Hex code (type L to list codes): 8e 29 Changed system type of partition 3 to 8e (Linux LVM) 31 Command (m for help): p 33 Disk /dev/vda: GB, bytes 255 heads, 63 sectors/track, cylinders, total sectors 35 Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes De-Yu Wang CSIE CYUT 158

166 15.4. 建立 CINDER-VOLUMES GROUP CHAPTER 15. 區塊儲存 CINDER 37 I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000b Device Boot Start End Blocks Id System 41 /dev/vda1 * Linux /dev/vda e Linux LVM 43 /dev/vda e Linux LVM 45 Command (m for help): w The partition table has been altered! 47 Calling ioctl() to re-read partition table. 49 WARNING: Re-reading the partition table failed with error 16: Device or resource busy. 51 The kernel still uses the old table. The new table will be used at the next reboot or after you run partprobe(8) or kpartx(8) 53 Syncing disks. 4. 加入新分割區的 mapping 1 [root@deyu ~(keystone_myuser)]# partx -va /dev/vda device /dev/vda: 3 start 0 size gpt: 0 slices 5 dos: 4 slices # 1: ( sectors, 524 MB) 7 # 2: ( sectors, MB) # 3: ( sectors, 5368 MB) 9 # 4: ( 0 sectors, 0 MB) BLKPG: Device or resource busy 11 error adding partition 1 BLKPG: Device or resource busy 13 error adding partition 2 BLKPG: Device or resource busy 15 error adding partition 3 5. 建立 pv 1 [root@deyu ~(keystone_myuser)]# pvcreate /dev/vda3 Physical volume "/dev/vda3" successfully created 6. 建立 vg cinder-volumes De-Yu Wang CSIE CYUT 159

167 15.5. 新增 LVM CINDER VOLUME CHAPTER 15. 區塊儲存 CINDER ~(keystone_myuser)]# vgcreate -s 32M cinder-volumes /dev/ vda3 2 Volume group "cinder-volumes" successfully created 7. 刪除紀錄檔, 再重新啟動 openstack-cinder-scheduler, openstack-cinder-api 及 openstack-cinder-volume 服務 [root@kvm4 ~(keystone_admin)]# rm -f /var/log/cinder/* 2 [root@kvm4 ~(keystone_admin)]# /etc/init.d/openstack-cinder-scheduler restart Stopping openstack-cinder-scheduler: [ OK ] 4 Starting openstack-cinder-scheduler: [ OK ] [root@kvm4 ~(keystone_admin)]# /etc/init.d/openstack-cinder-api restart 6 Stopping openstack-cinder-api: [ OK ] Starting openstack-cinder-api: [ OK ] 8 [root@kvm4 ~(keystone_admin)]# /etc/init.d/openstack-cinder-volume restart Stopping openstack-cinder-volume: [FAILED] 10 Starting openstack-cinder-volume: [ OK ] 8. 再查看 cinder 服務的紀錄, 已無錯誤訊息 [root@kvm4 ~(keystone_admin)]# tail /var/log/cinder/* 2 ==> /var/log/cinder/api.log <== 4 ==> /var/log/cinder/scheduler.log <== :07:01 CRITICAL [cinder] need more than 0 values to unpack 6 ==> /var/log/cinder/volume.log <== 15.5 新增 LVM Cinder volume 1. 備份 /etc/cinder/cinder.conf 1 [root@kvm4 ~]# cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf. orig 2. 設定 /etc/cinder/cinder.conf De-Yu Wang CSIE CYUT 160

168 15.5. 新增 LVM CINDER VOLUME CHAPTER 15. 區塊儲存 CINDER 1 [root@kvm4 ~]# openstack-config --set /etc/cinder/cinder.conf \ DEFAULT enabled_backends glusterfs,lvm 3 [root@kvm4 ~]# openstack-config --set /etc/cinder/cinder.conf \ lvm volume_group cinder-volumes 5 [root@kvm4 ~]# openstack-config --set /etc/cinder/cinder.conf \ lvm volume_driver cinder.volume.drivers.lvm.lvmiscsidriver 7 [root@kvm4 ~]# openstack-config --set /etc/cinder/cinder.conf \ lvm volume_backend_name LVM 3. 編輯 /etc/tgt/targets.conf, 設定 ISCSI 包含 cinder volumes [root@kvm4 ~(keystone_admin)]# echo include /etc/cinder/volumes/* >> /etc/tgt/targets.conf 4. cinder 產生 lvm 的 type 1 [root@kvm4 ~(keystone_admin)]# cinder type-create lvm ID Name e55b910-bba5-416c-af84-d419f4d60b97 lvm 設定 type lvm 的後台名稱為 LVM [root@kvm4 ~(keystone_admin)]# cinder type-key lvm set volume_backend_name=lvm 6. 改變環境變數為一般使用者 myuser 1 [root@kvm4 ~(keystone_admin)]# source keystonerc_myuser [root@kvm4 ~(keystone_myuser)]# 7. 產生一個 1G 名為 vol1 的 volume [root@kvm4 ~(keystone_myuser)]# cinder create --volume-type lvm -- display-name vol1 1 De-Yu Wang CSIE CYUT 161

169 15.5. 新增 LVM CINDER VOLUME CHAPTER 15. 區塊儲存 CINDER Property Value attachments [] 6 availability_zone nova bootable false 8 created_at T23:35: display_description None 10 display_name vol1 id 6b8c7db a57-a c76a50d0 12 metadata {} size 1 14 snapshot_id None source_volid None 16 status creating volume_type lvm 查看產生的 volume vol1 [root@kvm4 ~(keystone_myuser)]# cinder list ID Status Display Name Size Volume Type Bootable Attached to b8c7db a57-a c76a50d0 available vol1 1 lvm false 使用 LVM 命令 vgs 查看,VG 必須有 cinder-volumes [root@kvm4 ~(keystone_myuser)]# vgs 2 VG #PV #LV #SN Attr VSize VFree cinder-volumes wz--n- 4.88g 3.88g 4 vg_os wz--n g 3.86g 10. 使用 LVM 命令 lvs 查看,VG cinder-volumes 中有一 LV, 大小為 1G [root@kvm4 ~(keystone_myuser)]# lvs 2 LV VG Attr LSize Origin Snap% Move Log Copy% Convert De-Yu Wang CSIE CYUT 162

170 15.5. 新增 LVM CINDER VOLUME CHAPTER 15. 區塊儲存 CINDER volume-6b8c7db a57-a c76a50d0 cinder-volumes -wi-ao 1.00g 4 root vg_os -wi-ao 3.91g swap vg_os -wi-ao 2.00g 6 var vg_os -wi-ao 9.77g 11. 如果要刪除 vol1, 使用下列指令 [root@kvm4 ~(keystone_myuser)]# cinder delete vol1 12. 再次使用 LVM 命令 lvs 查看, 已無 VG cinder-volumes 的 LV 1 [root@kvm4 ~(keystone_myuser)]# lvs LV VG Attr LSize Origin Snap% Move Log Copy% Convert 3 root vg_os -wi-ao 3.91g swap vg_os -wi-ao 2.00g 5 var vg_os -wi-ao 9.77g De-Yu Wang CSIE CYUT 163
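If a test volume is recreated the same way as vol1, snapshots can be exercised as well. The sketch below uses placeholders for the IDs reported by cinder list, since older cinderclient releases expect IDs rather than display names:

cinder create --volume-type lvm --display-name vol1 1
cinder list                                   # note the ID of vol1
cinder snapshot-create --display-name vol1-snap <volume-id-from-cinder-list>
cinder snapshot-list
lvs cinder-volumes                            # the snapshot appears as an extra LV
# clean up: remove the snapshot before the volume can be deleted
cinder snapshot-delete <snapshot-id-from-snapshot-list>
cinder delete vol1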


172 CHAPTER 16. GLUSTERFS 檔案系統 Chapter 16 Glusterfs 檔案系統 16.1 認識 Glusterfs 1. Glusterfs 為分散式檔案系統 2. glusterfs 可以將許多不同的儲存空間整合在一起, 變成一個分散式的虛擬儲存空間 3. glusterfs 提供複製功能, 可將同一個檔案存放在兩個以上的儲存空間 4. glusterfs 提供分條功能, 可將一個檔案打散在多個不同的儲存空間 5. 架設一台 glusterfs 的服務端, 另外一台則作為 glusterfs 的用戶端 但本實作只為實現其功能, 伺服器及用戶端都架設在同一台主機 16.2 建置 SAMBA Server 1. 安裝 samba server 套件 1 [root@kvm4 ~]# yum install -y samba 2. 建立 samba 分享目錄 1 [root@kvm4 ~]# mkdir -p /mnt/samba/volume4 3. 設定 samba 分享目錄 1 [root@kvm4 ~]# cat >> /etc/samba/smb.conf << EOF > [gluster-volume4] 3 > comment=for samba export of volume volume4 > path=/mnt/samba/volume4 De-Yu Wang CSIE CYUT 165

173 16.3. 建置 GLUSTERFS SERVER CHAPTER 16. GLUSTERFS 檔案系統 5 > read only=no > guest ok=yes 7 > EOF 9 [root@deyu ~(keystone_admin)]# grep ^\\[gluster /etc/samba/smb.conf - A5 [gluster-volume4] 11 comment=for samba export of volume volume4 path=/mnt/samba/volume4 13 read only=no guest ok=yes 4. 啟動 smb 服務並設定開機自動啟動 [root@kvm4 ~]# /etc/init.d/smb start 2 Starting SMB services: [ OK ] [root@kvm4 ~]# chkconfig smb on 16.3 建置 Glusterfs Server 1. 安裝 gluster server 套件 1 [root@kvm4 ~]# yum install -y glusterfs-server 2. 啟動 glusterd 服務並設定開機自動啟動 1 [root@kvm4 ~]# /etc/init.d/glusterd start Starting glusterd: [ OK ] 3 [root@kvm4 ~]# chkconfig glusterd on 3. 使用上節分享的 samba 目錄, 建立 gluster volume 1 [root@kvm4 ~]# gluster volume create volume4 kvm4.deyu.wang:/mnt/ samba/volume4 Creation of volume volume4 has been successful. Please start the volume to access data. De-Yu Wang CSIE CYUT 166
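Before starting the volume in the next step, gluster volume info can confirm the brick and the Created state:

# volume4 should list a single brick on kvm4.deyu.wang and show Status: Created
gluster volume info volume4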

174 16.4. GLUSTER CLIENT 新增 CINDER VOLUME CHAPTER 16. GLUSTERFS 檔案系統 4. 啟動 gluster volume volume4 ~]# gluster volume start volume4 2 Starting volume volume4 has been successful 16.4 Gluster Client 新增 cinder volume 1. 安裝 gluster-fuse 套件 [root@kvm4 ~]# yum install -y glusterfs-fuse 2. 備份 /etc/cinder/cinder.conf 1 [root@kvm4 ~]# cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf. orig 3. 設定 /etc/cinder/cinder.conf 1 [root@kvm4 ~]# openstack-config --set /etc/cinder/cinder.conf \ DEFAULT enabled_backends glusterfs,lvm 3 [root@kvm4 ~]# openstack-config --set /etc/cinder/cinder.conf \ lvm volume_group cinder-volumes 5 [root@kvm4 ~]# openstack-config --set /etc/cinder/cinder.conf \ lvm volume_driver cinder.volume.drivers.lvm.lvmiscsidriver 7 [root@kvm4 ~]# openstack-config --set /etc/cinder/cinder.conf \ lvm volume_backend_name LVM 9 [root@kvm4 ~]# openstack-config --set /etc/cinder/cinder.conf \ glusterfs volume_driver cinder.volume.drivers.glusterfs. GlusterfsDriver 11 [root@kvm4 ~]# openstack-config --set /etc/cinder/cinder.conf \ glusterfs glusterfs_shares_config /etc/cinder/shares.conf 13 [root@kvm4 ~]# openstack-config --set /etc/cinder/cinder.conf \ glusterfs glusterfs_sparsed_volumes false 15 [root@kvm4 ~]# openstack-config --set /etc/cinder/cinder.conf \ glusterfs volume_backend_name RHS 4. 建立分享 volume 設定檔, 加入 volume4 De-Yu Wang CSIE CYUT 167

175 16.5. 使用 GLUSTERFS VOLUME CHAPTER 16. GLUSTERFS 檔案系統 ~]# echo "kvm4.deyu.wang:volume4" >> /etc/cinder/shares. conf 5. 重新啟動 openstack-cinder-schduler 及 openstack-cinder-volume 服務 1 [root@kvm4 ~]# /etc/init.d/openstack-cinder-scheduler restart Stopping openstack-cinder-scheduler: [ OK ] 3 Starting openstack-cinder-scheduler: [ OK ] [root@kvm4 ~]# /etc/init.d/openstack-cinder-volume restart 5 Stopping openstack-cinder-volume: [ OK ] Starting openstack-cinder-volume: [ OK ] 6. 查看啟動情況, 沒訊息表示啟動正常 [root@kvm4 ~]# tail /var/log/cinder/volume.log 7. 查看掛載狀況,volume4 已掛載 1 [root@kvm4 ~]# df Filesystem 1K-blocks Used Available Use% Mounted on 3 /dev/mapper/vg_os-root % / 5 tmpfs % /dev/shm /dev/vda % /boot 7 /dev/mapper/vg_os-var % /var 9 /dev/vdb % /srv/node/ z1d1 /dev/vdc % /srv/node/ z2d1 11 kvm4.deyu.wang:volume % /var/lib/ cinder/mnt/528 c07a5e99cd72c63f4bdf80da8dcde 16.5 使用 Glusterfs volume 1. 載入管理者環境變數 De-Yu Wang CSIE CYUT 168

176 16.5. 使用 GLUSTERFS VOLUME CHAPTER 16. GLUSTERFS 檔案系統 ~]# source keystonerc_admin 2 [root@kvm4 ~(keystone_admin)]# 2. 查看 cinder type [root@kvm4 ~(keystone_admin)]# cinder type-list ID Name d6b9ab23-819f-41fd b1a c lvm 如果 lvm 不存在, 就產生 cinder volume 儲存後端,type 為 lvm [root@kvm4 ~(keystone_admin)]# cinder type-create lvm ID Name d6b9ab23-819f-41fd b1a c lvm 設定 type lvm 的後台名稱為 LVM [root@kvm4 ~(keystone_admin)]# cinder type-key lvm set volume_backend_name=lvm 5. 產生 cinder volume 儲存後端,type 為 glusterfs 1 [root@kvm4 ~(keystone_admin)]# cinder type-create glusterfs ID Name b2ca-9bed-4aee-860b-856a3a366f00 glusterfs 設定 type glusterfs 的後台名稱為 RHS De-Yu Wang CSIE CYUT 169

177 16.5. 使用 GLUSTERFS VOLUME CHAPTER 16. GLUSTERFS 檔案系統 ~(keystone_admin)]# cinder type-key glusterfs set volume_backend_name=rhs 7. 確認 cinder type 包含 lvm 及 glusterfs 1 [root@kvm4 ~(keystone_admin)]# cinder type-list ID Name b2ca-9bed-4aee-860b-856a3a366f00 glusterfs d6b9ab23-819f-41fd b1a c lvm 在 lvm 儲存後端, 產生 1G 空間, 名稱 vol2 1 [root@kvm4 ~(keystone_admin)]# cinder create --volume-type lvm -- display-name vol Property Value attachments [] availability_zone nova 7 bootable false created_at T02:12: display_description None display_name vol2 11 id bd1a2e7f-ca81-46ae-b8c4-e20269b53370 metadata {} 13 size 1 snapshot_id None 15 source_volid None status creating 17 volume_type lvm 確認 vol2 [root@kvm4 ~(keystone_admin)]# cinder list ID Status Display Name Size Volume Type Bootable Attached to De-Yu Wang CSIE CYUT 170

178 16.5. 使用 GLUSTERFS VOLUME CHAPTER 16. GLUSTERFS 檔案系統 bd1a2e7f-ca81-46ae-b8c4-e20269b53370 available vol2 1 lvm false 在 glusterfs 儲存後端, 產生 1G 空間, 名稱 vol3 [root@kvm4 ~(keystone_admin)]# cinder create --volume-type glusterfs --display-name vol Property Value attachments [] 6 availability_zone nova bootable false 8 created_at T02:14: display_description None 10 display_name vol3 id a5f7637f-780d e2-4cfa2c396c35 12 metadata {} size 1 14 snapshot_id None source_volid None 16 status creating volume_type glusterfs 確認 vol3 [root@kvm4 ~(keystone_admin)]# cinder list ID Status Display Name Size Volume Type Bootable Attached to a5f7637f-780d e2-4cfa2c396c35 available vol3 1 glusterfs false 6 bd1a2e7f-ca81-46ae-b8c4-e20269b53370 available vol2 1 lvm false 移除 vol2 及 vol3 De-Yu Wang CSIE CYUT 171

179 16.5. 使用 GLUSTERFS VOLUME CHAPTER 16. GLUSTERFS 檔案系統 1 [root@kvm4 ~(keystone_admin)]# cinder delete vol2 [root@kvm4 ~(keystone_admin)]# cinder delete vol3 13. 列出 cinder 為空 [root@kvm4 ~(keystone_admin)]# cinder list 2 [root@kvm4 ~(keystone_admin)]# De-Yu Wang CSIE CYUT 172
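To see where each backend physically keeps its data, recreate one volume of each type before deleting it and compare the LVM view with the GlusterFS mount; the wildcard below stands in for the hashed mount directory shown in the earlier df output:

# LVM-backed volumes are logical volumes inside the cinder-volumes VG
lvs cinder-volumes
# GlusterFS-backed volumes are plain files named volume-<uuid> under the fuse mount
ls -lh /var/lib/cinder/mnt/*/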

180 CHAPTER 17. QUANTUM 網路服務 Chapter 17 Quantum 網路服務 17.1 認識 Openstack Networking 1. Openstack Networking 服務是一個虛擬的網路服務 2. 可以設定豐富的網路拓樸 (topologies), 建構網路及子網路 3. 做為 Openstack 各服務間的連結 4. Openstack Networking 允許租戶設定自己的私有網路 5. Openstack 網路套件 Quantum 17.2 建立 Openstack-Quantum 用戶 1. 開始網路服務架設與設定前, 請先安裝 kernel openstack, 否則網路會有很多問題 ~]# vim /etc/yum.repo.d/openstack.repo 2 [openstack] name=openstack 4 baseurl= gpgcheck=0 6 enabled=1 8 [root@kvm4 ~]# yum install -y kernel iproute --enablerepo=openstack [root@kvm4 ~]# reboot 2. 載入管理者環境變數 1 [root@kvm4 ~]# source keystonerc_admin [root@kvm4 ~(keystone_admin)]# De-Yu Wang CSIE CYUT 173

181 17.2. 建立 OPENSTACK-QUANTUM 用戶 CHAPTER 17. QUANTUM 網路服務 3. 在 keystone 產生名稱為 quantum 的網路服務, 並記下其 id, 以建立此服務的 endpoint ~(keystone_admin)]# keystone service-create \ 2 --name quantum \ --type network \ 4 --description Openstack Networking Service Property Value description Openstack Networking Service id eeac3353b8a44562b e271aece 10 name quantum type network 建立 quantum 的 endpoint [root@kvm4 ~(keystone_admin)]# keystone endpoint-create \ 2 --service-id eeac3353b8a44562b e271aece \ --publicurl \ 4 --adminurl \ --internalurl Property Value adminurl 10 id b86b7c0c90d1445d96c5b1bbc54f6c9c internalurl 12 publicurl region regionone 14 service_id eeac3353b8a44562b e271aece 產生網路服務用戶, 名稱 quantum, 密碼 123qwe 1 [root@kvm4 ~(keystone_admin)]# keystone user-create \ --name quantum --pass 123qwe Property Value enabled True id 3ba95de7669c4979acbe7b5b6eb4f651 9 name quantum tenantid De-Yu Wang CSIE CYUT 174

182 17.3. 安裝 OPENSTACK-QUANTUM CHAPTER 17. QUANTUM 網路服務 6. 設定用戶 quantum 在租戶 services 的角色為 admin 1 [root@kvm4 ~(keystone_admin)]# keystone user-role-add \ --user quantum --role admin --tenant services 7. 列出用戶角色, 發現 user_id 並不是 quantum, 原因是目前的環境變數為 admin [root@kvm4 ~(keystone_admin)]# keystone user-role-list id name user_id tenant_id f1e2429c614bac8efe19cef39e8e7d admin b36f501472abd93caa822477e90 a57da7d8a5f944c9b6258d0d91b8dee 使用 quantum 環境變數列出角色, 比對 user_id, 這才是 quantum 的角色 [root@kvm4 ~(keystone_admin)]# keystone --os-username quantum \ 2 --os-password 123qwe \ --os-tenant-name services user-role-list id name user_id tenant_id f1e2429c614bac8efe19cef39e8e7d admin 3 ba95de7669c4979acbe7b5b6eb4f651 da7fe21aa92743f9baa51fd4368e 安裝 Openstack-Quantum 1. 安裝 quantum 套件 De-Yu Wang CSIE CYUT 175

183 17.3. 安裝 OPENSTACK-QUANTUM CHAPTER 17. QUANTUM 網路服務 ~(keystone_admin)]# yum install -y openstack-quantum openstack-quantum-openvswitch 2. Openstack Networking 服務必須連結 qpidd, 先確認此服務正常運作 1 [root@kvm4 ~(keystone_admin)]# /etc/init.d/qpidd status qpidd (pid 17255) is running 設定 Openstack Networking [root@kvm4 ~(keystone_admin)]# openstack-config --set /etc/quantum/ quantum.conf \ 2 DEFAULT rpc_backend quantum.openstack.common.rpc.impl_qpid [root@kvm4 ~(keystone_admin)]# openstack-config --set /etc/quantum/ quantum.conf \ 4 DEFAULT qpid_hostname [root@kvm4 ~(keystone_admin)]# openstack-config --set /etc/quantum/ quantum.conf \ 6 DEFAULT qpid_username qpidauth [root@kvm4 ~(keystone_admin)]# openstack-config --set /etc/quantum/ quantum.conf \ 8 DEFAULT qpid_password 123qwe [root@kvm4 ~(keystone_admin)]# openstack-config --set /etc/quantum/ quantum.conf \ 10 DEFAULT qpid_protocol ssl [root@kvm4 ~(keystone_admin)]# openstack-config --set /etc/quantum/ quantum.conf \ 12 keystone_authtoken admin_tenant_name services [root@kvm4 ~(keystone_admin)]# openstack-config --set /etc/quantum/ quantum.conf \ 14 keystone_authtoken admin_user quantum [root@kvm4 ~(keystone_admin)]# openstack-config --set /etc/quantum/ quantum.conf \ 16 keystone_authtoken admin_password 123qwe 4. 建立用戶 quantum 環境變數載入腳本 [root@kvm4 ~(keystone_admin)]# cat > keystonerc_quantum << EOF 2 > if [ "\$1" == "" -o "\$1" == export ];then > export OS_USERNAME=quantum 4 > export OS_TENANT_NAME=services > export OS_PASSWORD=123qwe De-Yu Wang CSIE CYUT 176

184 17.3. 安裝 OPENSTACK-QUANTUM CHAPTER 17. QUANTUM 網路服務 6 > export OS_AUTH_URL= > export PS1= [\u@\h \W(keystone_quantum)]\\$ 8 > else > unset OS_USERNAME 10 > unset OS_TENANT_NAME > unset OS_PASSWORD 12 > unset OS_AUTH_URL > export PS1= [\u@\h \W]\\$ 14 > fi > EOF 5. 變換環境變數為用戶 quantum 1 [root@kvm4 ~(keystone_admin)]# source keystonerc_quantum [root@kvm4 ~(keystone_quantum)]# 6. 網路設定前一定要先確定本機的主機名稱及對應 ip [root@kvm4 ~(keystone_quantum)]# hostname 2 kvm4.deyu.wang [root@kvm4 ~(keystone_quantum)]# host kvm4.deyu.wang 4 kvm4.deyu.wang has address Openstack Networking 設定會改變設定檔 /etc/nova/nova.conf, 故先安裝包含此設定檔的套件 openstack-nova-common, 其設定及使用請參考第?? [root@kvm4 ~(keystone_quantum)]# yum install -y openstack-nova-common 8. 設定 openstack networking 使用外掛 openvswitch 1 [root@kvm4 ~(keystone_quantum)]# quantum-server-setup --yes --rootpw 123qwe --plugin openvswitch Quantum plugin: openvswitch 3 Plugin: openvswitch => Database: ovs_quantum Verified connectivity to MySQL. 5 Configuration updates complete! 9. 啟動 quantum-server, 並設定開機自動啟動 De-Yu Wang CSIE CYUT 177

185 17.4. 設定 OPENVSWITCH CHAPTER 17. QUANTUM 網路服務 1 [root@kvm4 ~(keystone_quantum)]# /etc/init.d/quantum-server start Starting quantum-server: [ OK ] 3 [root@kvm4 ~(keystone_quantum)]# chkconfig quantum-server on 10. 查看啟動紀錄, 是否有錯誤? 沒有訊息表示正常 1 [root@kvm4 ~(keystone_quantum)]# egrep ERRORCRITICAL /var/log/ quantum/server.log 11. 查看 openstack 狀況, 注意 quantum, 目前只有 quantum-server 啟動 1 [root@kvm4 ~(keystone_quantum)]# openstack-status... 3 == Quantum services == quantum-server: active 5 quantum-dhcp-agent: inactive (disabled on boot) quantum-l3-agent: inactive (disabled on boot) 7 quantum-linuxbridge-agent: dead (disabled on boot) quantum-openvswitch-agent: inactive (disabled on boot) 9 openvswitch: dead 設定 openvswitch 1. 設定 openvswitch 主機 ip [root@kvm4 ~(keystone_quantum)]# quantum-node-setup --plugin openvswitch --qhost Quantum plugin: openvswitch Would you like to update the nova configuration files? (y/n): 4 y Configuration updates complete! 2. 啟動 openvswitch, 並設定開機自動啟動 1 [root@kvm4 ~(keystone_quantum)]# /etc/init.d/openvswitch start /etc/openvswitch/conf.db does not exist... (warning). 3 Creating empty database /etc/openvswitch/conf.db [ OK ] Starting ovsdb-server [ OK ] De-Yu Wang CSIE CYUT 178

186 17.4. 設定 OPENVSWITCH CHAPTER 17. QUANTUM 網路服務 5 Configuring Open vswitch system IDs [ OK ] Inserting openvswitch module [ OK ] 7 Starting ovs-vswitchd [ OK ] [root@kvm4 ~(keystone_quantum)]# chkconfig openvswitch on 3. 檢查啟動是否正常? 沒有訊息表示正常 [root@kvm4 ~(keystone_quantum)]# egrep ERRORCRITICAL /var/log/ openvswitch/* 4. 產生名為 br-int 的 OpenvSwitch 橋接器, 指定 instance ( 也就是虛擬機 ) 的 interface 1 [root@kvm4 ~(keystone_quantum)]# ovs-vsctl add-br br-int [root@kvm4 ~(keystone_quantum)]# ovs-vsctl show 3 1ff32c6e-03e fe-e8a726830ff0 Bridge br-int 5 Port br-int Interface br-int 7 type: internal ovs_version: "1.9.0" 5. 設定 br-int 為 integration bridge [root@kvm4 ~(keystone_quantum)]# openstack-config --set \ 2 /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini OVS integration_bridge br-int 6. 啟動 quantum-openvswitch-agent, 並設定開機自動啟動 [root@kvm4 ~(keystone_quantum)]# /etc/init.d/quantum-openvswitchagent start 2 Starting quantum-openvswitch-agent: [ OK ] [root@kvm4 ~(keystone_quantum)]# chkconfig quantum-openvswitch-agent on 7. 檢查啟動是否正常? 出現無法執行指令的訊息, 此為用戶 quantum 無法使用 sudo De-Yu Wang CSIE CYUT 179

187 17.4. 設定 OPENVSWITCH CHAPTER 17. QUANTUM 網路服務 1 [root@kvm4 ~(keystone_quantum)]# egrep ERRORCRITICAL /var/log/ quantum/openvswitch-agent.log :10:58 ERROR [quantum.agent.linux.ovs_lib] \ 3 Unable to execute [ ovs-vsctl, --timeout=2, --, --if-exists, \ del-port, br-int, patch-tun ]. Exception: :10:58 ERROR [quantum.agent.linux.ovs_lib] \ Unable to execute [ ovs-ofctl, del-flows, br-int ]. Exception: :10:58 ERROR [quantum.agent.linux.ovs_lib] \ Unable to execute [ ovs-ofctl, add-flow, br-int, \ 9 hard_timeout=0,idle_timeout=0,priority=1,actions=normal ]. Exception : :10:58 ERROR [quantum.agent.linux.ovs_lib] \ 11 Unable to execute [ ovs-vsctl, --timeout=2, list-ports, br-int ]. Exception: 8. visudo 加入用戶 quantum 使用指令權限 1 [root@kvm4 ~(keystone_quantum)]# cat >> /etc/sudoers << EOF > Defaults:quantum!requiretty 3 > quantum ALL = (root) NOPASSWD: /usr/bin/quantum-rootwrap > EOF 9. 刪除 /var/log/quantum/openvswitch-agent.log, 重新啟動 quantum-openvswitchagent, 再檢查已正常 [root@kvm4 ~(keystone_quantum)]# rm -f /var/log/quantum/openvswitchagent.log 2 [root@kvm4 ~(keystone_quantum)]# /etc/init.d/quantum-openvswitchagent restart Stopping quantum-openvswitch-agent: [ OK ] 4 Starting quantum-openvswitch-agent: [ OK ] [root@kvm4 ~(keystone_quantum)]# egrep ERRORCRITICAL /var/log/ quantum/openvswitch-agent.log 10. 設定開機啟動 quantum-ovs-cleanup, 以保證 Openstack Networking agents 可以完全控制網路設備 1 [root@kvm4 ~(keystone_quantum)]# chkconfig quantum-ovs-cleanup on 11. 設定 Openstack Networking DHCP 主機 ip De-Yu Wang CSIE CYUT 180

188 17.4. 設定 OPENVSWITCH CHAPTER 17. QUANTUM 網路服務 1 [root@kvm4 ~(keystone_quantum)]# quantum-dhcp-setup --plugin openvswitch --qhost Quantum plugin: openvswitch 3 Configuration updates complete! 12. 啟動 quantum-dhcp-agent, 並設定開機啟動 1 [root@kvm4 ~(keystone_quantum)]# /etc/init.d/quantum-dhcp-agent start Starting quantum-dhcp-agent: [ OK ] 3 [root@kvm4 ~(keystone_quantum)]# chkconfig quantum-dhcp-agent on 13. 檢查啟動是否正常?quantum-dhcp 啟動失敗 1 [root@kvm4 ~(keystone_quantum)]# /etc/init.d/quantum-dhcp-agent restart Stopping quantum-dhcp-agent: [ OK ] 3 Starting quantum-dhcp-agent: [ OK ] [root@kvm4 ~(keystone_quantum)]# egrep ERRORCRITICAL /var/log/ quantum/dhcp-agent.log :35:07 ERROR [quantum.agent.dhcp_agent] Unable to enable dhcp :35:15 ERROR [quantum.agent.dhcp_agent] Unable to enable dhcp. 14. 解決方式 : 安裝 kernel openstack, 並重新開機 [root@kvm4 ~(keystone_quantum)]# yum install -y kernel iproute \ 2 --enablerepo=openstack && reboot 15. 再啟動 quantum-dhcp-agent, 並檢查啟動是否正常? 沒有訊息表示正常 [root@kvm4 ~]# source keystonerc_quantum 2 [root@kvm4 ~(keystone_quantum)]# rm /var/log/quantum/dhcp-agent.log rm: remove regular file /var/log/quantum/dhcp-agent.log? y 4 [root@kvm4 ~(keystone_quantum)]# /etc/init.d/quantum-dhcp-agent restart Stopping quantum-dhcp-agent: [ OK ] 6 Starting quantum-dhcp-agent: [ OK ] [root@kvm4 ~(keystone_quantum)]# egrep ERRORCRITICAL /var/log/ quantum/dhcp-agent.log De-Yu Wang CSIE CYUT 181
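As an optional cross-check at this point, the agents that have successfully registered with quantum-server can be listed from the CLI. This is only a sketch: it assumes the installed python-quantumclient provides the agent-list subcommand and that the keystone_quantum credentials are still loaded; if the subcommand is unavailable, the openstack-status output shown earlier gives the same information.
[root@kvm4 ~(keystone_quantum)]# quantum agent-list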

189 17.4. 設定 OPENVSWITCH CHAPTER 17. QUANTUM 網路服務 8 [root@kvm4 ~(keystone_quantum)]# 16. 產生橋接器 br-ex, 作為連結外部網路用 [root@kvm4 ~(keystone_quantum)]# ovs-vsctl add-br br-ex 17. 備份並複製網卡 eth0 為 br-ex 1 [root@kvm4 ~(keystone_quantum)]# cp /etc/sysconfig/network-scripts/ ifcfg-eth0 /root/ [root@kvm4 ~(keystone_quantum)]# cp /etc/sysconfig/network-scripts/ ifcfg-eth0 \ 3 /etc/sysconfig/network-scripts/ifcfg-br-ex 18. 刪除網卡 eth0 中的設定, 保留以下三行 : 1 [root@kvm4 ~(keystone_quantum)]# vim /etc/sysconfig/network-scripts/ ifcfg-eth0 DEVICE="eth0" 3 HWADDR="52:54:00:0F:C9:09" ONBOOT="yes" 19. 橋接器 br-ex 為對外網路設定, 必須保留原有對外網卡的設定, 只需要修改 DEVICE 為 br-ex 即可 [root@kvm4 ~(keystone_quantum)]# vim /etc/sysconfig/network-scripts/ ifcfg-br-ex 2 DEVICE="br-ex" BOOTPROTO="static" 4 DNS1=" " GATEWAY=" " 6 IPADDR=" " NETMASK=" " 8 ONBOOT="yes" 20. 增加網卡 eth0 到橋接器 br-ex, 並重新啟動網路 De-Yu Wang CSIE CYUT 182

190 17.4. 設定 OPENVSWITCH CHAPTER 17. QUANTUM 網路服務 ~(keystone_quantum)]# ovs-vsctl add-port br-ex eth0; /etc/ init.d/network restart 2 Shutting down interface eth0: [ OK ] Shutting down loopback interface: [ OK ] 4 Bringing up loopback interface: [ OK ] Bringing up interface br-ex: [ OK ] 6 Bringing up interface eth0: [ OK ] 21. 確認網路橋接器 br-ex 的 port 是否為 eth0? [root@kvm4 ~(keystone_quantum)]# ovs-vsctl show 2 1ff32c6e-03e fe-e8a726830ff0 Bridge br-int 4 Port br-int Interface br-int 6 type: internal Bridge br-ex 8 Port br-ex Interface br-ex 10 type: internal Port "eth0" 12 Interface "eth0" ovs_version: "1.9.0" 22. 設定 Openstack Networking L3 Agent 主機 ip 1 [root@kvm4 ~(keystone_quantum)]# quantum-l3-setup --plugin openvswitch --qhost Quantum plugin: openvswitch 3 Configuration updates complete! 23. 啟動 quantum-l3-agent, 並設定開機啟動 1 [root@kvm4 ~(keystone_quantum)]# /etc/init.d/quantum-l3-agent start Starting quantum-l3-agent: [ OK ] 3 [root@kvm4 ~(keystone_quantum)]# chkconfig quantum-l3-agent on 24. 檢查啟動是否正常? 出現 CRITICAL 訊息, 查看紀錄檔 /var/log/quantum/l3- agnet.log, 發現 ip 指定沒有物件 netns, 也就是網路 namespace, 由於 openstack 虛擬網路各裝置皆使用的 UUID, 目前 ifconfig 及 ip 指令不認得這些裝置, 必須使用支援 openstack 的 kernel, 且套件 iproute 也必須更新 De-Yu Wang CSIE CYUT 183

191 17.4. 設定 OPENVSWITCH CHAPTER 17. QUANTUM 網路服務 1 [root@kvm4 ~(keystone_quantum)]# egrep ERRORCRITICAL /var/log/ quantum/l3-agent.log :51:31 CRITICAL [quantum] 3 [root@kvm4 ~(keystone_quantum)]# tail /var/log/quantum/l3-agent.log 5 output = cls._execute(, netns, ( list,), root_helper= root_helper) File "/usr/lib/python2.6/site-packages/quantum/agent/linux/ip_lib. py", line 58, in _execute 7 root_helper=root_helper) File "/usr/lib/python2.6/site-packages/quantum/agent/linux/utils.py ", line 61, in execute 9 raise RuntimeError(m) RuntimeError: 11 Command: [ sudo, quantum-rootwrap, /etc/quantum/rootwrap.conf, ip, netns, list ] Exit code: Stdout: Stderr: Object "netns" is unknown, try "ip help".\n 15 [root@kvm4 ~(keystone_quantum)]# ip netns 17 Object "netns" is unknown, try "ip help". 25. 安裝支援 openstack 網路的 kernel 及 iproute, 並重新啟動 1 [root@kvm4 ~(keystone_quantum)]# yum install -y kernel openstack.el6.x86_64 iproute [root@kvm4 ~(keystone_quantum)]# rpm -qa grep iproute 3 iproute el6ost.netns.2.x86_64 [root@kvm4 ~(keystone_quantum)]# reboot 26. 重新啟動後, 再導入用戶 quantum 環境變數, 檢查 quantum-l3-agent 啟動是否正常? 沒有訊息表示正常 [root@kvm4 ~]# source keystonerc_quantum 2 [root@kvm4 ~(keystone_quantum)]# rm -f /var/log/quantum/l3-agent.log [root@kvm4 ~(keystone_quantum)]# /etc/init.d/quantum-l3-agent restart 4 Stopping quantum-l3-agent: [ OK ] Starting quantum-l3-agent: [ OK ] 6 [root@kvm4 ~(keystone_quantum)]# egrep ERRORCRITICAL /var/log/ quantum/l3-agent.log 27. 確認 Openstack Networking 狀態, 先前設定的 quantum 相關服務已正常啟動 De-Yu Wang CSIE CYUT 184

192 17.5. 設定用戶網路 CHAPTER 17. QUANTUM 網路服務 ~(keystone_quantum)]# openstack-status 2... == Quantum services == 4 quantum-server: active quantum-dhcp-agent: active 6 quantum-l3-agent: active quantum-linuxbridge-agent: dead (disabled on boot) 8 quantum-openvswitch-agent: active openvswitch: active 17.5 設定用戶網路 1. 使用一般用戶 myuser, 設定私有網路 [root@kvm4 ~(keystone_quantum)]# source keystonerc_myuser 2 [root@kvm4 ~(keystone_myuser)]# 2. 產生名為 router1 的路由 [root@kvm4 ~(keystone_myuser)]# quantum router-create router1 2 Created a new router: Field Value admin_state_up True external_gateway_info 8 id 8ab14c8e-c533-47f6-ab63-fc277fada633 name router1 10 status ACTIVE tenant_id 6c26d132e38f4b83a1d9f5a777beeb 產生名為 private 的網路 [root@kvm4 ~(keystone_myuser)]# quantum net-create private 2 Created a new network: Field Value admin_state_up True id 48e9c08e-158d-4c6a-9623-b4eeaea5716c 8 name private De-Yu Wang CSIE CYUT 185

193 17.5. 設定用戶網路 CHAPTER 17. QUANTUM 網路服務 router:external False 10 shared False status ACTIVE 12 subnets tenant_id 6c26d132e38f4b83a1d9f5a777beeb 產生網路 private 的子網路, 名為 subpriv, 網域為 /24 ~(keystone_myuser)]# quantum subnet-create --name subpriv private /24 2 Created a new subnet: Field Value allocation_pools {"start": " ", "end": " "} cidr /24 8 dns_nameservers enable_dhcp True 10 gateway_ip host_routes 12 id af75410d-6cf3-457c-a e563db ip_version 4 14 name subpriv network_id 48e9c08e-158d-4c6a-9623-b4eeaea5716c 16 tenant_id 6c26d132e38f4b83a1d9f5a777beeb 設定子網路 subpriv 的路由為 router1 1 [root@kvm4 ~(keystone_myuser)]# quantum router-interface-add router1 subpriv Added interface to router router1 De-Yu Wang CSIE CYUT 186
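Before moving on, the interface added in the previous step can also be confirmed from the router side. A minimal sketch, assuming the router and subnet names used above; the UUIDs will differ on every installation, and the port-list in the next step gives a tenant-wide view of the same information.
[root@kvm4 ~(keystone_myuser)]# quantum router-port-list router1
[root@kvm4 ~(keystone_myuser)]# quantum subnet-show subpriv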

194 17.5. 設定用戶網路 CHAPTER 17. QUANTUM 網路服務 6. 確認 interface ~(keystone_myuser)]# quantum port-list id name mac_address fixed_ips dc85-3b21-4aec-b88f-b4865fab4b4d fa:16:3e:b4:ca:2d { "subnet_id": "af75410d-6cf3-457c-a e563db", "ip_address" : " "} 為產生對外的公開網路, 先匯入管理者 admin 的環境變數 [root@kvm4 ~(keystone_myuser)]# source keystonerc_admin 2 [root@kvm4 ~(keystone_admin)]# 8. 產生名為 public 的公開網路 [root@kvm4 ~(keystone_admin)]# quantum net-create \ 2 --tenant-id services public -- --router:external=true Created a new network: Field Value admin_state_up True 8 id f b-4a9c-b dc9a5a9b4 name public 10 provider:network_type local provider:physical_network 12 provider:segmentation_id router:external True 14 shared False status ACTIVE 16 subnets tenant_id services De-Yu Wang CSIE CYUT 187
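To double-check that the external flag really took effect, the network can be shown again; the router:external field should read True. Sketch only, using the network name created above.
[root@kvm4 ~(keystone_admin)]# quantum net-show public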

195 17.5. 設定用戶網路 CHAPTER 17. QUANTUM 網路服務 9. 產生網路 public 的子網路, 名為 subpub, 網域必須為目前對外網域, 故為 /24,gateway 要設定正確, 否則無法連接外部網路 目前 /24 網域已存在 dhcp, 所以此子網路關閉 dhcp, 且 ip 開始至結束的範圍也要避開已正在使用的 ip ~(keystone_admin)]# quantum subnet-create \ 2 --tenant-id services \ --allocation-pool start= ,end= \ 4 --gateway \ --disable-dhcp \ 6 --name subpub public /24 Created a new subnet: Field Value allocation_pools {"start": " ", "end": " "} 12 cidr /24 dns_nameservers 14 enable_dhcp False gateway_ip host_routes id 76055bb3-ebb3-4e d90c5d8af6 18 ip_version 4 name subpub 20 network_id f b-4a9c-b dc9a5a9b4 tenant_id services 設定網路 public, 作為路由 router1 的 gateway [root@kvm4 ~(keystone_admin)]# quantum router-gateway-set router1 public 2 Set gateway for router router1 De-Yu Wang CSIE CYUT 188
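Once the gateway is set, the router record should carry external_gateway_info, and an l3-agent namespace named qrouter-<router id> should appear on the host. A sketch assuming the namespace-aware iproute package installed earlier; the namespace name contains the router UUID and is installation specific.
[root@kvm4 ~(keystone_admin)]# quantum router-show router1
[root@kvm4 ~(keystone_admin)]# ip netns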

196 17.5. 設定用戶網路 CHAPTER 17. QUANTUM 網路服務 11. 列出 port, 發現網路 public 的路由已被指定 ip 為 ~(keystone_admin)]# quantum port-list id name mac_address fixed_ips e1526f-523b-41ba-bc9b-e8d0804bacd7 fa:16:3e:97:53:5a { "subnet_id": "69336db ef-a2c8-acb4a29ed224", "ip_address" : " "} 6 c66c16a b-b ae7b fa:16:3e:d5:97:7c { "subnet_id": "6973d14c-ae1a-4759-af7a-3ee3c4eb63c2", "ip_address" : " "} 匯入一般使用者 myuser 環境變數 1 [root@kvm4 ~(keystone_admin)]# source keystonerc_myuser [root@kvm4 ~(keystone_myuser)]# 13. 在網路 public 中產生浮動 ip [root@kvm4 ~(keystone_myuser)]# quantum floatingip-create public 2 Created a new floatingip: Field Value fixed_ip_address floating_ip_address floating_network_id f b-4a9c-b dc9a5a9b4 id 9a01e37b b6e-9615df28eb0f 10 port_id router_id 12 tenant_id 6c26d132e38f4b83a1d9f5a777beeb 列出浮動 ip 為 De-Yu Wang CSIE CYUT 189

197 17.5. 設定用戶網路 CHAPTER 17. QUANTUM 網路服務 1 [root@kvm4 ~(keystone_myuser)]# quantum floatingip-list id fixed_ip_address floating_ip_address port_id 5 9a01e37b b6e-9615df28eb0f De-Yu Wang CSIE CYUT 190
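Before starting on Nova, it can be useful to take a final inventory of the virtual network objects created in this chapter. These list commands have no side effects; output is omitted here because the UUIDs are installation specific.
[root@kvm4 ~(keystone_myuser)]# quantum net-list
[root@kvm4 ~(keystone_myuser)]# quantum subnet-list
[root@kvm4 ~(keystone_myuser)]# quantum router-list
[root@kvm4 ~(keystone_myuser)]# quantum floatingip-list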

198 CHAPTER 18. NOVA COMPUTE AND CONTROLLER
Chapter 18 Nova Compute and Controller
18.1 認識 Nova
1. Nova 又稱 OpenStack Compute, 提供雲端運算
2. Nova Compute node 執行虛擬化軟體, 以開啟並管理 instances
3. instances 是 openstack 中的虛擬機, 可指定浮動 ip, 與外部網路連結
18.2 Nova 安裝
1. 安裝 nova 套件
[root@kvm4 ~]# yum install -y openstack-nova openstack-nova-novncproxy
2. 載入管理者環境變數
[root@kvm4 ~]# source keystonerc_admin
[root@kvm4 ~(keystone_admin)]#
3. 改變 nova 管理紀錄檔的用戶與群組為 nova
[root@kvm4 ~(keystone_admin)]# chown nova:nova /var/log/nova/nova-manage.log
4. 初始化 nova 資料庫
De-Yu Wang CSIE CYUT 191

199 18.2. NOVA 安裝 CHAPTER 18. NOVA COMPUTE AND CONTROLLER 1 [root@kvm4 ~(keystone_admin)]# openstack-db --init --service nova \ --password 123qwe --rootpw 123qwe 3 Verified connectivity to MySQL. Creating nova database. 5 Updating nova database password in /etc/nova/nova.conf Initializing the nova database, please wait... 7 Complete! 5. 產生用戶 nova 1 [root@kvm4 ~(keystone_admin)]# keystone user-create --name nova -- pass 123qwe Property Value enabled True 7 id 490e3d3086dd4b628ca c3face name nova 9 tenantid 增加用戶 nova 在租戶 services 的角色為 admin [root@kvm4 ~(keystone_admin)]# keystone user-role-add \ 2 --user nova --role admin --tenant services 7. 增加 nova service, 並記其 id, 以建立此服務的 endpoint [root@kvm4 ~(keystone_admin)]# keystone service-create \ 2 --name nova --type compute --description "Openstack Compute Service" Property Value description Openstack Compute Service id fa2efe6481eb45898f3b0ce dd 8 name nova type compute 建立 nova service 的 endpoint De-Yu Wang CSIE CYUT 192

200 18.2. NOVA 安裝 CHAPTER 18. NOVA COMPUTE AND CONTROLLER ~(keystone_admin)]# keystone endpoint-create \ 2 --service-id fa2efe6481eb45898f3b0ce dd \ --publicurl \ 4 --adminurl \ --internalurl Property Value adminurl 10 id b460744fb0ba494e8408b5a5bf301f5d internalurl 12 publicurl region regionone 14 service_id fa2efe6481eb45898f3b0ce dd 使用 openstack-config 工具設定 nova 1 [root@kvm4 ~(keystone_admin)]# openstack-config --set /etc/nova/apipaste.ini \ filter:authtoken admin_tenant_name services 3 [root@kvm4 ~(keystone_admin)]# openstack-config --set /etc/nova/apipaste.ini \ filter:authtoken admin_user nova 5 [root@kvm4 ~(keystone_admin)]# openstack-config --set /etc/nova/apipaste.ini \ filter:authtoken admin_password 123qwe 7 [root@kvm4 ~(keystone_admin)]# openstack-config --set /etc/nova/apipaste.ini \ filter:authtoken admin_host [root@kvm4 ~(keystone_admin)]# openstack-config --set /etc/nova/nova. conf \ DEFAULT qpid_username qpidauth 11 [root@kvm4 ~(keystone_admin)]# openstack-config --set /etc/nova/nova. conf \ DEFAULT qpid_password 123qwe 13 [root@kvm4 ~(keystone_admin)]# openstack-config --set /etc/nova/nova. conf \ DEFAULT qpid_protocol ssl 15 [root@kvm4 ~(keystone_admin)]# openstack-config --set /etc/nova/nova. conf \ DEFAULT novncproxy_base_url 17 [root@kvm4 ~(keystone_admin)]# openstack-config --set /etc/nova/nova. conf \ DEFAULT vncserver_listen [root@kvm4 ~(keystone_admin)]# openstack-config --set /etc/nova/nova. conf \ DEFAULT vncserver_proxyclient_address [root@kvm4 ~(keystone_admin)]# openstack-config --set /etc/nova/nova. conf \ DEFAULT libvirt_vif_driver nova.virt.libvirt.vif. De-Yu Wang CSIE CYUT 193

201 18.2. NOVA 安裝 CHAPTER 18. NOVA COMPUTE AND CONTROLLER LibvirtHybridOVSBridgeDriver 23 ~(keystone_admin)]# openstack-config --set /etc/nova/nova. conf \ DEFAULT auth_strategy keystone 25 [root@kvm4 ~(keystone_admin)]# openstack-config --set /etc/nova/nova. conf \ DEFAULT libvirt_type qemu 27 [root@kvm4 ~(keystone_admin)]# openstack-config --set /etc/nova/nova. conf \ DEFAULT libvirt_cpu_mode none 29 [root@kvm4 ~(keystone_admin)]# openstack-config --set /etc/nova/nova. conf \ DEFAULT verbose true 10. 啟動並設定開機啟動 nova 相關的服務 [root@kvm4 ~(keystone_admin)]# /etc/init.d/libvirtd start 2 Starting libvirtd daemon: [ OK ] [root@kvm4 ~(keystone_admin)]# /etc/init.d/openstack-nova-scheduler start 4 Starting openstack-nova-scheduler: [ OK ] [root@kvm4 ~(keystone_admin)]# /etc/init.d/openstack-nova-api start 6 Starting openstack-nova-api: [ OK ] [root@kvm4 ~(keystone_admin)]# /etc/init.d/openstack-nova-compute start 8 Starting openstack-nova-compute: [ OK ] [root@kvm4 ~(keystone_admin)]# /etc/init.d/openstack-nova-conductor start 10 Starting openstack-nova-conductor: [ OK ] [root@kvm4 ~(keystone_admin)]# /etc/init.d/openstack-nova-consoleauth start 12 Starting openstack-nova-consoleauth: [ OK ] [root@kvm4 ~(keystone_admin)]# /etc/init.d/openstack-nova-novncproxy start 14 Starting openstack-nova-novncproxy: [ OK ] [root@kvm4 ~(keystone_admin)]# chkconfig openstack-nova-scheduler on 16 [root@kvm4 ~(keystone_admin)]# chkconfig openstack-nova-api on [root@kvm4 ~(keystone_admin)]# chkconfig openstack-nova-compute on 18 [root@kvm4 ~(keystone_admin)]# chkconfig openstack-nova-conductor on [root@kvm4 ~(keystone_admin)]# chkconfig openstack-nova-consoleauth on 20 [root@kvm4 ~(keystone_admin)]# chkconfig openstack-nova-novncproxy on 11. 查看 nova 相關服務的錯誤訊息, 發現 nova 無法 sudo 執行 nova-rootwrap 指令 [root@kvm4 ~(keystone_admin)]# tail /var/log/nova/* 2 ==> /var/log/nova/api.log <== :24: TRACE nova.service \ De-Yu Wang CSIE CYUT 194

202 18.2. NOVA 安裝 CHAPTER 18. NOVA COMPUTE AND CONTROLLER 4 cmd=.join(cmd)) :24: TRACE nova.service \ 6 ProcessExecutionError: Unexpected error while running command :24: TRACE nova.service \ 8 Command: sudo nova-rootwrap /etc/nova/rootwrap.conf iptables-save -c :24: TRACE nova.service Exit code: :24: TRACE nova.service Stdout: :24: TRACE nova.service \ 12 Stderr: sudo: sorry, you must have a tty to run sudo\n :24: TRACE nova.service :24: INFO nova.wsgi [-] \ Stopping WSGI server :24: INFO nova.service [-] \ Child exited with status :24: INFO nova.service [-] \ Started child ==> /var/log/nova/compute.log <== :23: AUDIT nova.compute.resource_tracker \ [-] Free ram (MB): :23: AUDIT nova.compute.resource_tracker \ [-] Free disk (GB): :23: AUDIT nova.compute.resource_tracker \ [-] Free VCPUS: :23: INFO nova.compute.resource_tracker \ [-] Compute_service record created for kvm4.deyu.wang:kvm4.deyu.wang :24: AUDIT nova.compute.resource_tracker \ [req f1-0cce-4fe4-a55d-f8e08881f5fc None None] \ 32 Auditing locally available compute resources :24: AUDIT nova.compute.resource_tracker \ 34 [req f1-0cce-4fe4-a55d-f8e08881f5fc None None] Free ram (MB): :24: AUDIT nova.compute.resource_tracker \ 36 [req f1-0cce-4fe4-a55d-f8e08881f5fc None None] Free disk (GB): :24: AUDIT nova.compute.resource_tracker \ 38 [req f1-0cce-4fe4-a55d-f8e08881f5fc None None] Free VCPUS: :24: INFO nova.compute.resource_tracker \ 40 [req f1-0cce-4fe4-a55d-f8e08881f5fc None None] \ Compute_service record updated for kvm4.deyu.wang:kvm4.deyu.wang :24: INFO nova.compute.manager \ [req f1-0cce-4fe4-a55d-f8e08881f5fc None None] \ 44 Updating bandwidth usage cache 46 ==> /var/log/nova/conductor.log <== :23: AUDIT nova.service [-] \ 48 Starting conductor node (version el6ost) :23: INFO nova.openstack.common.rpc.impl_qpid \ 50 [req e5-2ea3-488c-bfbf-14b5f0cfeaae None None] \ Connected to AMQP server on localhost: :23: INFO nova.openstack.common.rpc.impl_qpid \ [req-54f88c9d-cfcd-4c77-8aa1-d06f5ddf0e04 None None] \ 54 Connected to AMQP server on localhost: :24: INFO nova.openstack.common.rpc.impl_qpid \ 56 [req-6c8e9e6f-5dc0-4b0e-a0f4-60bd87234d9a None None] \ De-Yu Wang CSIE CYUT 195

203 18.2. NOVA 安裝 CHAPTER 18. NOVA COMPUTE AND CONTROLLER Connected to AMQP server on localhost: :24: INFO nova.openstack.common.rpc.impl_qpid \ [req f1-0cce-4fe4-a55d-f8e08881f5fc None None] \ 60 Connected to AMQP server on localhost: ==> /var/log/nova/consoleauth.log <== :23: AUDIT nova.service [-] \ 64 Starting consoleauth node (version el6ost) :23: INFO nova.openstack.common.rpc.impl_qpid \ 66 [req-4d503f6c-8e85-4f70-9acd-77cea9b42e8d None None] \ Connected to AMQP server on localhost: ==> /var/log/nova/nova-manage.log <== :53: CRITICAL nova \ [req-cad77826-aa a5c7-b23dac8866c9 None None] ( OperationalError) \ 72 (1045, "Access denied for user localhost (using password: YES)") None None :14: CRITICAL nova \ 74 [req-bdea1867-ecb2-4ddc-93ad-61e82feabb31 None None] ( OperationalError) \ (1045, "Access denied for user localhost (using password: YES)") None None 76 ==> /var/log/nova/scheduler.log <== :23: AUDIT nova.service [-] \ Starting scheduler node (version el6ost) :23: INFO nova.openstack.common.rpc.impl_qpid \ [req-d0bd6e4c-1a3e-4cc8-abcc-68bbb4c14661 None None] \ 82 Connected to AMQP server on localhost: :23: INFO nova.openstack.common.rpc.impl_qpid \ 84 [req-d0bd6e4c-1a3e-4cc8-abcc-68bbb4c14661 None None] \ Connected to AMQP server on localhost: 以上錯誤訊息為用戶 cinder 無法使用 sudo 執行 cinder-rootwrap, 必須 visudo 增加其權限 1 [root@kvm4 ~(keystone_admin)]# visudo Defaults:cinder!requiretty 3 cinder ALL = (root) NOPASSWD: /usr/bin/cinder-rootwrap /etc/cinder/ rootwrap.conf * 13. cinder, nova, quantum 等套件安裝時, 已將必須以 sudo 執行的命令權限寫在 /etc/sudoers.d 目錄下, 但因 /etc/sudoers 沒有將此目錄包含進去, 造成執行失敗 執行 visudo 加入 #includedir /etc/sudoers.d, 注意最前面的符號 # 不是註解, 一定要存在 1 [root@kvm4 ~(keystone_admin)]# ll /etc/sudoers.d/ De-Yu Wang CSIE CYUT 196

204 18.2. NOVA 安裝 CHAPTER 18. NOVA COMPUTE AND CONTROLLER total r--r root root 111 Jan 25 15:36 cinder -r--r root root 103 Jan 25 15:39 nova 5 -r--r root root 95 Jul quantum 7 [root@kvm4 ~(keystone_admin)]# visudo [root@kvm4 ~(keystone_admin)]# grep includedir /etc/sudoers 9 #includedir /etc/sudoers.d 14. 刪除紀錄檔, 再重新啟動 nova 相關服務 1 [root@kvm4 ~(keystone_admin)]# rm /var/log/nova/* -f [root@kvm4 ~(keystone_admin)]# /etc/init.d/openstack-nova-scheduler restart 3 Stopping openstack-nova-scheduler: [ OK ] Starting openstack-nova-scheduler: [ OK ] 5 [root@kvm4 ~(keystone_admin)]# /etc/init.d/openstack-nova-api restart Stopping openstack-nova-api: [ OK ] 7 Starting openstack-nova-api: [ OK ] [root@kvm4 ~(keystone_admin)]# /etc/init.d/openstack-nova-compute restart 9 Stopping openstack-nova-compute: [ OK ] Starting openstack-nova-compute: [ OK ] 11 [root@kvm4 ~(keystone_admin)]# /etc/init.d/openstack-nova-conductor restart Stopping openstack-nova-conductor: [ OK ] 13 Starting openstack-nova-conductor: [ OK ] [root@kvm4 ~(keystone_admin)]# /etc/init.d/openstack-nova-consoleauth restart 15 Stopping openstack-nova-consoleauth: [ OK ] Starting openstack-nova-consoleauth: [ OK ] 17 [root@kvm4 ~(keystone_admin)]# /etc/init.d/openstack-nova-novncproxy restart Stopping openstack-nova-novncproxy: [ OK ] 19 Starting openstack-nova-novncproxy: [ OK ] 15. 再查看 cinder 服務的紀錄, 已無錯誤訊息 1 [root@kvm4 ~(keystone_admin)]# tail /var/log/nova/* ==> /var/log/nova/api.log <== :56: INFO nova.service [-] Starting 1 workers :56: INFO nova.service [-] Started child :56: INFO nova.network.driver \ [-] Loading network driver nova.network.linux_net :56: INFO nova.osapi_compute.wsgi.server \ [-] (6988) wsgi starting up on 9 ==> /var/log/nova/compute.log <== De-Yu Wang CSIE CYUT 197

205 18.2. NOVA 安裝 CHAPTER 18. NOVA COMPUTE AND CONTROLLER :56: INFO nova.service \ [-] Caught SIGTERM, exiting :56: INFO nova.manager \ [-] Skipping periodic task _periodic_update_dns because its interval is negative :56: INFO nova.virt.driver \ [-] Loading compute driver libvirt.libvirtdriver :56: INFO nova.openstack.common.rpc.impl_qpid \ [req-9df6f81e-414c-4943-b8cb-6cbf8f5fe73c None None] \ 19 Connected to AMQP server on localhost: :56: INFO nova.openstack.common.rpc.impl_qpid \ 21 [req-9df6f81e-414c-4943-b8cb-6cbf8f5fe73c None None] \ Connected to AMQP server on localhost: ==> /var/log/nova/conductor.log <== :56: INFO nova.service \ [-] Caught SIGTERM, exiting :56: AUDIT nova.service \ [-] Starting conductor node (version el6ost) :56: INFO nova.openstack.common.rpc.impl_qpid \ [req-b127355e f-a851-20fd40e9bc7a None None] \ 31 Connected to AMQP server on localhost: ==> /var/log/nova/consoleauth.log <== :56: INFO nova.service \ 35 [-] Caught SIGTERM, exiting :56: AUDIT nova.service \ 37 [-] Starting consoleauth node (version el6ost) :56: INFO nova.openstack.common.rpc.impl_qpid \ 39 [req-17daa1bf-db06-49a0-84f9-c208dbc9a5f7 None None] \ Connected to AMQP server on localhost: ==> /var/log/nova/scheduler.log <== :56: INFO nova.service \ [-] Caught SIGTERM, exiting :56: AUDIT nova.service \ [-] Starting scheduler node (version el6ost) :56: INFO nova.openstack.common.rpc.impl_qpid \ [req-07ec083e b a6a3f49 None None] \ 49 Connected to AMQP server on localhost: :56: INFO nova.openstack.common.rpc.impl_qpid \ 51 [req-07ec083e b a6a3f49 None None] \ Connected to AMQP server on localhost: 檢查 openstack 中 nova 相關服務的狀態 [root@kvm4 ~(keystone_admin)]# openstack-status 2 == Nova services == openstack-nova-api: active 4 openstack-nova-cert: inactive (disabled on boot) openstack-nova-compute: active 6 openstack-nova-network: inactive (disabled on boot) De-Yu Wang CSIE CYUT 198

206 18.3. 建立 INSTANCES CHAPTER 18. NOVA COMPUTE AND CONTROLLER openstack-nova-scheduler: active 8 openstack-nova-volume: dead (disabled on boot) openstack-nova-conductor: active == Support services == 12 mysqld: active libvirtd: active 14 tgtd: active qpidd: active 16 memcached: active 18.3 建立 Instances 1. 改變環境變數為一般使用者 myuser [root@kvm4 ~(keystone_admin)]# source keystonerc_myuser 2 [root@kvm4 ~(keystone_myuser)]# 2. 複製終端機訊息提示檔至 /tmp 目錄, 並加入一行提示此 instance 是由 nova 命令安裝 [root@kvm4 ~(keystone_myuser)]# cp /etc/issue /tmp/ 2 [root@kvm4 ~(keystone_myuser)]# echo "Installed using nova commanline" >> /tmp/issue 3. 產生一組新的 SSH 登入鑰匙, 名為 key1, 存於 /root 目錄下 [root@kvm4 ~(keystone_myuser)]# nova keypair-add key1 > /root/key1. pem 2 [root@kvm4 ~(keystone_myuser)]# ll /root/key1.pem -rw-r--r--. 1 root root 1676 Jan 25 16:11 /root/key1.pem 4. 產生新的 security group, 名為 mysecgroup 1 [root@kvm4 ~(keystone_myuser)]# nova secgroup-create mysecgroup "SSH" Name Description mysecgroup SSH De-Yu Wang CSIE CYUT 199
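The key pair and security group just created can be verified with the corresponding list commands; output is omitted here. Note that nova keypair-add refuses an existing name, so if key1 ever has to be regenerated it must first be removed with nova keypair-delete key1.
[root@kvm4 ~(keystone_myuser)]# nova keypair-list
[root@kvm4 ~(keystone_myuser)]# nova secgroup-list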

207 18.3. 建立 INSTANCES CHAPTER 18. NOVA COMPUTE AND CONTROLLER 5. mysecgroup 允許 /0 以 TCP/22 port 連結, 也就是所有 ip 皆可以 TCP 協定連接 22 port ~(keystone_myuser)]# nova secgroup-add-rule mysecgroup tcp / IP Protocol From Port To Port IP Range Source Group tcp / 建立 instance 必須知道使用的 image, flavor, net 等 id 或名稱 查詢 image id [root@kvm4 ~(keystone_myuser)]# nova image-list ID Name Status Server e088a9-0d8c-40fc-b5d cc891d2 minkvm ACTIVE 查詢 flavor 名稱 [root@kvm4 ~(keystone_myuser)]# nova flavor-list ID Name Memory_MB Disk Ephemeral Swap VCPUs RXTX_Factor Is_Public extra_specs m1.tiny True {} 6 2 m1.small True {} 3 m1.medium True {} 8 4 m1.large True {} 5 m1.xlarge True {} De-Yu Wang CSIE CYUT 200

208 18.3. 建立 INSTANCES CHAPTER 18. NOVA COMPUTE AND CONTROLLER 8. 查詢網路 id ~(keystone_myuser)]# quantum net-list id name subnets e9c08e-158d-4c6a-9623-b4eeaea5716c private af75410d-6cf3-457c -a e563db /24 6 f b-4a9c-b dc9a5a9b4 public 76055bb3-ebb3-4e d90c5d8af 使用 m1.tiny flavor, minkvm image, key1 key, mysecgroup security group, private network ID and /tmp/issue file 產生 instance, 名為 minkvm 1 [root@kvm4 ~(keystone_myuser)]# nova boot --flavor m1.tiny \ --image minkvm --key-name key1 --security-group mysecgroup \ 3 --nic net-id=48e9c08e-158d-4c6a-9623-b4eeaea5716c \ --file /etc/issue=/tmp/issue minkvm Property Value status BUILD 9 updated T23:00:51Z OS-EXT-STS:task_state scheduling 11 key_name key1 image minkvm 13 hostid OS-EXT-STS:vm_state building 15 flavor m1.tiny id 97b5d1bd b2a9-bd16d5373cb7 17 security_groups [{u name : u mysecgroup }] user_id 9904d73f9a1841fa89381fccd2bc59df De-Yu Wang CSIE CYUT 201

209 18.3. 建立 INSTANCES CHAPTER 18. NOVA COMPUTE AND CONTROLLER 19 name minkvm adminpass u7fqwfdcudk5 21 tenant_id 6c26d132e38f4b83a1d9f5a777beeb52 created T23:00:50Z 23 OS-DCF:diskConfig MANUAL metadata {} 25 accessipv4 accessipv6 27 progress 0 OS-EXT-STS:power_state 0 29 OS-EXT-AZ:availability_zone nova config_drive 使用 nova list 查看 instance 狀態, 目前為建置中 (BUILD) 1 [root@kvm4 ~(keystone_myuser)]# nova list ID Name Status Networks b5d1bd b2a9-bd16d5373cb7 minkvm BUILD 經過一段時間再查看 instance 已建置完成, 狀態為啟動 (ACTIVE) [root@kvm4 ~(keystone_myuser)]# nova list ID Name Status Networks b5d1bd b2a9-bd16d5373cb7 minkvm ACTIVE private = De-Yu Wang CSIE CYUT 202
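If the instance stays in BUILD unusually long, or becomes ACTIVE but cannot be reached, two quick checks are the instance details (which include the task state and, on failure, a fault message) and the guest's boot console. A sketch assuming the instance name used above.
[root@kvm4 ~(keystone_myuser)]# nova show minkvm
[root@kvm4 ~(keystone_myuser)]# nova console-log minkvm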

210 18.3. 建立 INSTANCES CHAPTER 18. NOVA COMPUTE AND CONTROLLER 若 instance 狀態為錯誤 (ERROR), 必須查看建置過程紀錄檔 /var/log/nova/- compute.log, 錯誤的可能原因 : (a) 自行產生的 image 需要較大的 RAM, 但指定使用的 flavor 不大 (b) 自行設計的作業系統, 也就是匯入的 image 硬碟分割使用兩個 vg, 但其中一個無法使用 (c) 並不是任意的 image 檔都可使用, 詳細請參考 自訂虛擬機 Images instance minkvm 指定的 ip 為 , 使用 quantum port-list 找到連結此 ip 的 port ID [root@kvm4 ~(keystone_myuser)]# quantum port-list id name mac_address fixed_ips dd275e-7ac9-4e1c-82f6-f49d9409d014 fa:16:3e:93:3b:37 { "subnet_id": "af75410d-6cf3-457c-a e563db", "ip_address" : " "} 6 8febcb98-132d-4bf dcf7184a15de fa:16:3e:91:dc:9a { "subnet_id": "af75410d-6cf3-457c-a e563db", "ip_address" : " "} 找到浮動 ip 的 ID 1 [root@kvm4 ~(keystone_myuser)]# quantum floatingip-list id fixed_ip_address floating_ip_address port_id 5 9a01e37b b6e-9615df28eb0f De-Yu Wang CSIE CYUT 203

211 18.4. 測試 INSTANCES 虛擬機 CHAPTER 18. NOVA COMPUTE AND CONTROLLER 15. 連結 的 port 到指定的浮動 ip, 讓 instance 可以連接外部網路 ~(keystone_myuser)]# quantum floatingip-associate \ 2 9a01e37b b6e-9615df28eb0f 8febcb98-132d-4bf dcf7184a15de Associated floatingip 9a01e37b b6e-9615df28eb0f 16. 再次查看浮動 ip 已連結到 的 port 1 [root@kvm4 ~(keystone_myuser)]# quantum floatingip-list id fixed_ip_address floating_ip_address port_id a01e37b b6e-9615df28eb0f febcb98-132d-4bf dcf7184a15de 以 ssh 使用 key1.pem 登入 由於虛擬機開機較慢且自行製作的 image, 有可能開機有問題, 因而無法登入 建議先使用下一章的 dashboard 開啟網頁中的虛擬機 console, 先看是否正常開機, 且網路沒問題 [root@kvm4 ~(keystone_myuser)]# chmod 600 /root/key1.pem 2 [root@kvm4 ~(keystone_myuser)]# ssh -i /root/key1.pem root@ The authenticity of host ( ) can t be established. 4 RSA key fingerprint is 14:9f:cd:45:1e:fe:cd:15:98:57:76:c9 :18:99:77:49. Are you sure you want to continue connecting (yes/no)? yes 6 Warning: Permanently added (RSA) to the list of known hosts. Last login: Sun Jan 26 21:28: [root@kvm3 ~]# hostname kvm3.deyu.wang 10 [root@kvm3 ~]# 18.4 測試 Instances 虛擬機 1. 遠端登入虛擬機 De-Yu Wang CSIE CYUT 204

212 18.4. 測試 INSTANCES 虛擬機 CHAPTER 18. NOVA COMPUTE AND CONTROLLER ~]# ssh The authenticity of host ( ) can t be established. RSA key fingerprint is 14:9f:cd:45:1e:fe:cd:15:98:57:76:c9 :18:99:77:49. 4 Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added (RSA) to the list of known hosts. 6 root@ s password: Warning: No xauth data; using fake authentication data for X11 forwarding. 8 Last login: Mon Jan 27 07:22: from [root@kvm3 ~]# 2. 虛擬機可連上 internet 1 [root@kvm3 ~]# ping -c PING ( ) 56(84) bytes of data bytes from : icmp_seq=1 ttl=53 time=140 ms ping statistics packets transmitted, 1 received, 66% packet loss, time 3005ms 7 rtt min/avg/max/mdev = / / /0.000 ms 3. 設定 DNS 1 [root@kvm3 ~]# vi /etc/resolv.conf ; generated by /sbin/dhclient-script 3 search openstacklocal nameserver nameserver 虛擬機的安全性由 openstack 主機控制, 故關閉本身防火牆 1 [root@kvm3 ~]# chkconfig iptables off [root@kvm3 ~]# /etc/init.d/iptables stop 3 iptables: Flushing firewall rules: [ OK ] iptables: Setting chains to policy ACCEPT: filter [ OK ] 5 iptables: Unloading modules: 5. 安裝 httpd De-Yu Wang CSIE CYUT 205

213 18.4. 測試 INSTANCES 虛擬機 CHAPTER 18. NOVA COMPUTE AND CONTROLLER 1 [root@kvm3 ~]# yum -y install httpd 6. 啟動網頁服務 httpd, 並設定開機啟動 1 [root@kvm3 ~]# /etc/init.d/httpd start Starting httpd: [ OK ] 3 [root@kvm3 ~]# chkconfig httpd on 7. 產生 index.html 1 [root@kvm3 ~]# echo Instance kvm3 web test. >> /var/www/html/index. html 8. 到 openstack 主機, 改變環境變數為一般使用者 myuser 1 [root@kvm4 ~(keystone_admin)]# source keystonerc_myuser [root@kvm4 ~(keystone_myuser)]# 9. 增加 mysecgroup icmp port, 讓遠端可以 ping 虛擬機 [root@kvm4 ~(keystone_myuser)]# nova secgroup-add-rule mysecgroup icmp / IP Protocol From Port To Port IP Range Source Group icmp / 增加 mysecgroup 80 port [root@kvm4 ~(keystone_myuser)]# nova secgroup-add-rule mysecgroup tcp / IP Protocol From Port To Port IP Range Source Group tcp / De-Yu Wang CSIE CYUT 206

214 18.4. 測試 INSTANCES 虛擬機 CHAPTER 18. NOVA COMPUTE AND CONTROLLER 11. 列出 mysecgroup 開放的 ports ~(keystone_myuser)]# nova secgroup-list-rules mysecgroup IP Protocol From Port To Port IP Range Source Group icmp /0 6 tcp /0 tcp / ping 虛擬機 [root@kvm4 ~(keystone_myuser)]# ping -c PING ( ) 56(84) bytes of data. 64 bytes from : icmp_seq=1 ttl=63 time=2.24 ms 4 64 bytes from : icmp_seq=2 ttl=63 time=0.348 ms 64 bytes from : icmp_seq=3 ttl=63 time=0.393 ms ping statistics packets transmitted, 3 received, 0% packet loss, time 2002ms rtt min/avg/max/mdev = 0.348/0.993/2.240/0.882 ms 13. 測試虛擬機網頁 1 [root@kvm4 ~(keystone_myuser)]# wget :44: Connecting to :80... connected. HTTP request sent, awaiting response OK 5 Length: 24 [text/html] Saving to: \index."html 7 100%[======================================>] K/s in 0s :44:44 (5.53 MB/s) - \index."html saved [24/24] 11 [root@kvm4 ~(keystone_myuser)]# cat index.html 13 Instance kvm3 web test. De-Yu Wang CSIE CYUT 207
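The same page should now also be reachable from any other machine on the external network through the instance's floating IP, since the rules added above allow TCP port 80 from any source. <floating-ip> below is a placeholder for the address shown by quantum floatingip-list.
curl http://<floating-ip>/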

215 18.5. INSTANCE 管理 CINDER 空間 CHAPTER 18. NOVA COMPUTE AND CONTROLLER
18.5 Instance 管理 cinder 空間
1. 載入一般使用者 myuser 環境變數
[root@kvm4 ~]# source keystonerc_myuser
[root@kvm4 ~(keystone_myuser)]#
2. 產生一個 1G, type 為 lvm, 名為 vol1 的空間
[root@kvm4 ~(keystone_myuser)]# cinder create --volume-type lvm --display-name vol1 1
Property Value
attachments []
availability_zone nova
bootable false
created_at T11:43:
display_description None
display_name vol1
id b8e18b4a f5e-9aa7-a bc39
metadata {}
size 1
snapshot_id None
source_volid None
status creating
volume_type lvm
3. 列出 cinder, 有一個名為 vol1 的 1G 空間可用, 記下其 id
[root@kvm4 ~(keystone_myuser)]# cinder list
ID Status Display Name Size Volume Type Bootable Attached to
b8e18b4a f5e-9aa7-a bc39 available vol1 1 lvm false
4. 列出虛擬機, 有一台名為 minkvm 可用
[root@kvm4 ~(keystone_myuser)]# nova list
De-Yu Wang CSIE CYUT 208

216 18.5. INSTANCE 管理 CINDER CHAPTER 空間 18. NOVA COMPUTE AND CONTROLLER ID Name Status Networks b5d1bd b2a9-bd16d5373cb7 minkvm ACTIVE private = , 附上 vol1 的空間給虛擬機 minkvm [root@kvm4 ~(keystone_myuser)]# nova volume-attach minkvm b8e18b4a f5e-9aa7-a bc39 auto Property Value device /dev/vdb 6 serverid 97b5d1bd b2a9-bd16d5373cb7 id b8e18b4a f5e-9aa7-a bc39 8 volumeid b8e18b4a f5e-9aa7-a bc 列出 cinder, 可以看到 vol1 正在加入 instance minkvm 中 1 [root@kvm4 ~(keystone_myuser)]# cinder list ID Status Display Name Size Volume Type Bootable Attached to b8e18b4a f5e-9aa7-a bc39 attaching vol1 1 None false 過一段時間再次列出 cinder, 可以看到 vol1 已加入 instance minkvm 中 [root@kvm4 ~(keystone_myuser)]# cinder list ID Status Display Name Size Volume Type Bootable Attached to De-Yu Wang CSIE CYUT 209

217 18.6. 自訂虛擬機 IMAGES CHAPTER 18. NOVA COMPUTE AND CONTROLLER b8e18b4a f5e-9aa7-a bc39 in-use vol1 1 lvm false 97b5d1bd b2a9-bd16d5373cb 登入 instance minkvm [root@kvm4 ~(keystone_myuser)]# ssh -i /root/key1.pem root@ Last login: Mon Jan 27 07:51: from [root@kvm3 ~]# 8. 看到有一顆 1G 的硬碟 /dev/vdb 可用 1 [root@kvm3 ~]# fdisk -luc /dev/vdb 3 Disk /dev/vdb: 1073 MB, bytes 16 heads, 63 sectors/track, 2080 cylinders, total sectors 5 Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes 7 I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x 自訂虛擬機 Images 1. 虛擬機 images 的產生有一定的要求, 詳如 Openstack docs 2. 為確認 openstack nova 沒有問題, 可先下載 cirros-xxx.img 或 small.img 3. 虛擬機 image 製作重點 : (a) 不需要分割 /boot 及 swap (b) 先只建立一個 partition 安裝系統,lvm 或 linux 皆可 實作時曾分割一個 partition /boot 及兩個 lvm vg, nova boot 時出現 error 是因其中有一個 vg 的 lv 無法掛載 這部分可參考 Openstack docs, 做進一步的測試 (c) 開機關閉 iptables 防火牆服務 chkconfig iptables off De-Yu Wang CSIE CYUT 210

218 18.6. 自訂虛擬機 IMAGES CHAPTER 18. NOVA COMPUTE AND CONTROLLER
(d) 記得開機啟動 sshd 服務, 否則無法連線
chkconfig sshd on
(e) 由於建立 instance 虛擬機, 網卡硬體位址會不同, 一定要清除 eth0 網卡中的硬體位址 HWADDR, 且清除網卡產生檔案, 否則虛擬機開機無法啟動網路, 也就無法連線 建立虛擬機時清除網卡指令如下, 不要使用連結 ln -sf /dev/null 方式, 否則雲端虛擬機無法開機
sed -i 's/^HWADDR=.*$//' /etc/sysconfig/network-scripts/ifcfg-eth0
echo > /etc/udev/rules.d/70-persistent-net.rules
echo > /lib/udev/rules.d/75-persistent-net-generator.rules
4. 產生本實作中使用的 image minkvm.qcow2, 使用的自動安裝 kickstart 檔 minkvm-ks.cfg
5. 以 virt-install 安裝完成的 Image 檔 minkvm.qcow2 下載
De-Yu Wang CSIE CYUT 211
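After minkvm.qcow2 has been built with virt-install, it still has to be registered in Glance before nova boot can find it; the minkvm image listed by nova image-list earlier was added this way. A minimal sketch, assuming the Grizzly-era glance client, admin credentials, and that the qcow2 file is in the current directory.
[root@kvm4 ~(keystone_admin)]# glance image-create --name minkvm --disk-format qcow2 --container-format bare --is-public True < minkvm.qcow2
[root@kvm4 ~(keystone_admin)]# glance image-list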


220 CHAPTER 19. DASHBOARD
Chapter 19 Dashboard
19.1 認識 Dashboard
1. Dashboard 是 OpenStack 中的網頁管理系統
2. Dashboard 使用 Django 設計, 程式語言為 python
3. Dashboard 幫助管理者管理 openstack, 就如 phpmyadmin 幫忙管理者管理 MySQL
19.2 Dashboard 安裝
1. 安裝 Dashboard 套件
[root@kvm7 ~]# yum install -y mod_wsgi httpd mod_ssl openstack-dashboard memcached python-memcached
2. 設定 dashboard
[root@kvm7 ~]# vim /etc/openstack-dashboard/local_settings
OPENSTACK_HOST=" "
CACHE_BACKEND = /memcached:// :11211/
3. Dashboard 需要名為 member 的 keystone 角色, 所以導入 admin 環境變數, 查看有沒有 member 角色, 如果沒有增加這個角色
[root@kvm7 ~]# source keystonerc_admin
[root@kvm7 ~(keystone_admin)]# keystone role-list
id name
De-Yu Wang CSIE CYUT 213

221 19.2. DASHBOARD 安裝 CHAPTER 19. DASHBOARD 9fe2ff9ee4384b1894a90878d3e92bab _member_ 7 91f1e2429c614bac8efe19cef39e8e7d admin 1b5a478342bc4f67a83484ce60e0f322 member [root@kvm7 ~(keystone_admin)]# 11 [root@kvm7 ~(keystone_admin)]# keystone role-create --name member 4. 修改 SELinux 政策, 允許從網頁連結 openstack 1 [root@kvm7 ~(keystone_admin)]# setsebool -P httpd_can_network_connect on 5. 啟動 httpd 服務, 並設定開機啟動 1 [root@kvm7 ~(keystone_admin)]# /etc/init.d/httpd start Starting httpd: [ OK ] 3 [root@kvm7 ~(keystone_admin)]# chkconfig httpd on 6. iptables 暫時開啟 https 443 port 1 [root@kvm7 ~(keystone_myuser)]# iptables -I INPUT -p tcp \ -m state --state NEW -m tcp --dport 443 -j ACCEPT 7. iptables 永久開啟 https 443 port, 先將目前的防火牆設定寫入 /etc/sysconfig/iptables [root@kvm7 ~]# /etc/init.d/iptables save 2 iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ] 8. iptables 開啟 https 443 port 的規則寫入 /etc/sysconfig/iptables [root@kvm7 ~]# vim /etc/sysconfig/iptables 2 [root@kvm7 ~]# grep 443 -C2 /etc/sysconfig/iptables -A INPUT -i lo -j ACCEPT 4 -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT -A INPUT -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT 6 -A INPUT -j REJECT --reject-with icmp-host-prohibited De-Yu Wang CSIE CYUT 214

222 19.3. DASHBOARD 使用 CHAPTER 19. DASHBOARD -A FORWARD -j quantum-filter-top 9. 重新啟動 iptables 1 [root@kvm7 ~]# /etc/init.d/iptables restart iptables: Flushing firewall rules: [ OK ] 3 iptables: Setting chains to policy ACCEPT: mangle nat filte[ OK ] iptables: Unloading modules: iptable_nat iptable_filter ip[failed]t iptable_filter ip_tables 5 iptables: Applying firewall rules: [ OK ] 19.3 Dashboard 使用 1. 此節畫面為 kvm7.deyu.wang, 與本文的 kvm4.deyu.wang 有出入, 但不影響成果展示 2. 連結 出現登入畫面 3. admin 管理者登入畫面 De-Yu Wang CSIE CYUT 215

223 19.3. DASHBOARD 使用 CHAPTER 19. DASHBOARD 4. 以帳號 myuser, 密碼 123qwe 登入, 點選 instances -> minkvm -> console 若出現無法連線, 應該是防火牆阻擋 5. 防火牆開啟 6080 port 1 [root@kvm7 ~(keystone_myuser)]# iptables -I INPUT -p tcp \ -m multiport --dports m comment \ 3 --comment "console incoming" -j ACCEPT 6. 以帳號 myuser, 密碼 123qwe 登入, 再點選 instances -> minkvm -> console 已可操作此虛擬機 De-Yu Wang CSIE CYUT 216
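The rule inserted above only lasts until the next reboot. To make the console port permanent, the same approach used for port 443 earlier in this chapter can be repeated: save the running rules and confirm the 6080 line ended up in /etc/sysconfig/iptables.
[root@kvm7 ~(keystone_myuser)]# /etc/init.d/iptables save
[root@kvm7 ~(keystone_myuser)]# grep 6080 /etc/sysconfig/iptables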

224 19.3. DASHBOARD 使用 CHAPTER 19. DASHBOARD 7. Nova instance 狀態為 SHUTOFF, 可以使用 Soft Reboot Instance 或 Hard Reboot Instance 重新啟動 8. Nova instance 狀態為 SHUTOFF, 也可以使用文字界面指令,Soft Reboot Instance 或 Hard Reboot Instance 重新啟動 1 [root@kvm7 ~(keystone_myuser)]# nova list ID Name Status Networks b5d1bd b2a9-bd16d5373cb7 minkvm ACTIVE private = , De-Yu Wang CSIE CYUT 217

225 19.4. KVM 調整 CHAPTER 19. DASHBOARD [root@kvm7 ~(keystone_myuser)]# nova reboot --hard minkvm 19.4 KVM 調整 1. 因本實作環境 Openstack 架在 KVM 虛擬機上, 虛擬機裡還要執行虛擬機, 如果 RAM 不夠大或 CPU CORE 數太小, 虛擬機內的擬虛機, 也就是所謂的 instance, 開機會很慢或開機開到當掉, 尤其是 CPU core 數太小, 在執行 instance 時 CPU 使用率接近 100%, 速度非常慢 查看目前虛擬機 osusb 的 CPU core 數量為 2, 必須加大 1 [root@kvm7 ~]# nproc 2 2. 以 virsh edit 指令編輯 osusb 虛擬機的 XML 設定檔, 找到 vcpu 參數, 改成 4, 找到 memory 改成 , 也就是 3G [root@dywh ~]# virsh edit osusb 2 Domain osusb XML configuration edited. 4 [root@dywh ~]# grep vcpu /etc/libvirt/qemu/osusb.xml <vcpu placement= static >4</vcpu> 6 [root@dywh ~]# grep memory /etc/libvirt/qemu/osusb.xml 8 <memory unit= KiB > </memory> 3. 關閉 osusb 虛擬機 [root@dywh ~]# virsh destroy osusb 2 Domain osusb destroyed 4. 啟動 osusb 虛擬機 [root@dywh ~]# virsh start osusb 2 Domain osusb started De-Yu Wang CSIE CYUT 218
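Before logging back in, the new values can also be confirmed from the physical host: virsh dominfo prints the vCPU count and memory of the running domain.
[root@dywh ~]# virsh dominfo osusb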

226 19.5. INSTANCE 螢幕保護 CHAPTER 19. DASHBOARD 5. 再重新登入 osusb 虛擬機, 檢查 CPU core 數量已變為 4 [root@dywh ~]# ssh kvm7.deyu.wang 2 root@kvm7.deyu.wang s password: Last login: Wed Dec 31 09:55: from [root@kvm7 ~]# nproc 4 6. 修改後以指令 nova reboot 重新啟動 openstack 內的虛擬機 minkvm, 並以 top 觀察 CPU 使用率仍然接近 100%, 但 minkvm 重開機速度已加快許多 1 [root@kvm7 ~]# source keystonerc_myuser [root@kvm7 ~(keystone_myuser)]# nova reboot --hard minkvm 19.5 Instance 螢幕保護 1. Openstack 進入 instance console 畫面, 在此畫面 console 不接受鍵盤訊號, 也就是無法操作 2. 要操作 instance, 必須點選 Click here to show only console 進入只有 console 的頁面才可以 De-Yu Wang CSIE CYUT 219

227 19.5. INSTANCE 螢幕保護 CHAPTER 19. DASHBOARD 3. 當一段時間沒動作, 遠端機器會進入螢幕保護狀態, 文字界面的機器出現全黑畫面, 必須按任何鍵喚醒 建議開閉 instance 的螢幕保護 4. 查詢目前的 instance, 待機 600 秒主機會進入螢幕保謢狀態, 必須將其改成 0, 也就是關閉 ~]# cat /sys/module/kernel/parameters/consoleblank 修改開機選單, 在 kernel 參數最後加入 consoleblank=0 [root@kvm3 ~]# vim /boot/grub/grub.conf 2 [root@kvm3 ~]# grep blank /boot/grub/grub.conf kernel /boot/vmlinuz openstack.el6.x86_64 ro \ 4 root=uuid=28ce3d82-17eb-4b70-a878-feb970e09f7d \ rd_no_luks rd_no_lvm rd_no_md rd_no_dm \ 6 LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 \ KEYBOARDTYPE=pc KEYTABLE=us crashkernel=auto \ 8 consoleblank=0 6. 不要在 instance 內下 reboot 指令重新啟動, 只會關機卻不會開機 [root@kvm3 ~]# reboot 7. 先登出回到 kvm7 主機, 以 myuser 環境重新啟動 instance De-Yu Wang CSIE CYUT 220
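The restart itself is the same nova reboot used in section 19.4. A sketch of the intended commands; a soft reboot is normally sufficient here, and --hard is only needed if the guest hangs.
[root@kvm7 ~]# source keystonerc_myuser
[root@kvm7 ~(keystone_myuser)]# nova reboot minkvm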


More information

穨IC-1000

穨IC-1000 IC-1000 LEDOMARS Information Coporation :(02)27913828 :(02)27945895 (04)2610628 (04)2650852 (07)3897016 (07)3897165 http://www.ledomars.com.tw 1 1. IC-1000 2. IC-1000 LED : ERROR LNK/ACT PWR TEST PWR(Power)

More information

NT 4

NT 4 NT 4.0 Windows 2003 : Microsoft Windows NT Server 4.0 2004 12 31 Microsoft Windows 2003 Microsoft Windows Server 2003 : 1. 2. 3. 4. Total Cost of Ownership 5. 6. 7. XML Web Services Microsoft Certified

More information

目 錄 第 一 章 weberp 簡 介... 6 第 一 節 概 述... 6 第 二 節 安 全 性... 7 第 三 節 功 能... 7 一 銷 售 及 訂 單... 7 二 稅... 8 三 應 收 帳 款... 8 四 存 貨... 8 五 購 買... 9 六 應 付 帳 款... 9

目 錄 第 一 章 weberp 簡 介... 6 第 一 節 概 述... 6 第 二 節 安 全 性... 7 第 三 節 功 能... 7 一 銷 售 及 訂 單... 7 二 稅... 8 三 應 收 帳 款... 8 四 存 貨... 8 五 購 買... 9 六 應 付 帳 款... 9 東 吳 大 學 企 研 所 資 訊 管 理 期 末 報 告 weberp 使 用 說 明 書 指 導 教 授 : 尚 榮 安 教 授 第 一 組 童 偉 哲 01353025 劉 彥 澧 01353028 史 璦 禎 01353031 吳 采 紋 98153143 1 目 錄 第 一 章 weberp 簡 介... 6 第 一 節 概 述... 6 第 二 節 安 全 性... 7 第 三 節 功

More information

IP Access Lists IP Access Lists IP Access Lists

IP Access Lists IP Access Lists IP Access Lists Chapter 10 Access Lists IP Access Lists IP Access Lists IP Access Lists Security) IP Access Lists Access Lists (Network router For example, RouterA can use an access list to deny access from Network 4

More information

Microsoft Word - SupplyIT manual 3_cn_david.doc

Microsoft Word - SupplyIT manual 3_cn_david.doc MR PRICE Supply IT Lynette Rajiah 1 3 2 4 3 5 4 7 4.1 8 4.2 8 4.3 8 5 9 6 10 6.1 16 6.2 17 6.3 18 7 21 7.1 24 7.2 25 7.3 26 7.4 27 7.5 28 7.6 29 7.7 30 7.8 31 7.9 32 7.10 32 7.11 33 7.12 34 1 7.13 35 7.14

More information

P4VM800_BIOS_CN.p65

P4VM800_BIOS_CN.p65 1 Main H/W Monitor Boot Security Exit System Overview System Time System Date [ 17:00:09] [Fri 02/25/2005] BIOS Version : P4VM800 BIOS P1.00 Processor Type : Intel (R) Pentium (R) 4 CPU 2.40 GHz Processor

More information

HOL-CHG-1695

HOL-CHG-1695 Table of Contents 练 习 概 述 - - vsphere 挑 战 练 习... 2 练 习 指 导... 3 第 1 单 元 : 在 实 践 中 学 习 (15 分 钟 )... 5 剪 贴 板 复 制 和 粘 贴 功 能 无 法 使 用?... 6 虚 拟 机 性 能 不 佳... 17 第 2 单 元 : 基 本 运 维 挑 战 (30 分 钟 )... 32 无 法 登 录

More information

untitled

untitled ArcSDE ESRI ( ) High availability Backup & recovery Clustering Replication Mirroring Standby servers ArcSDE % 95% 99.9% 99.99% 99.999% 99.9999% 18.25 / 8.7 / 52.5 / 5.25 / 31.8 / Spatial Geodatabase

More information

untitled

untitled V3041A-J/V3042A-J IP-SAN/NAS Infinova Infinova Infinova Infinova www.infinova.com.cn Infinova Infinova Infinova 1 2 1 2 V3041A-16R-J V3041A-24R-J V3042A-16R-J V3042A-24R-J V3049-EXD-R16 V3049-EXD-R24 ...

More information

ch08.PDF

ch08.PDF 8-1 CCNA 8.1 CLI 8.1.1 8-2 8-3 8.1.21600 2500 1600 2500 / IOS 8-4 8.2 8.2.1 A 5 IP CLI 1600 2500 8-5 8.1.2-15 Windows 9598NT 2000 HyperTerminal Hilgraeve Microsoft Cisco HyperTerminal Private Edition (PE)

More information

自由軟體教學平台

自由軟體教學平台 NCHC Opensource task force Steven Shiau steven@nchc.gov.tw National Center for High-Performance Computing Sep 10, 2002 1 Outline 1. 2. 3. Service DHCP, TFTP, NFS, NIS 4. 5. 2 DRBL (diskless remote boot

More information

ebook 145-6

ebook 145-6 6 6.1 Jim Lockhart Windows 2000 0 C S D Wo r m. E x p l o r e Z i p z i p p e d _ f i l e s. e x e Wo r m. E x p l o r e Z i p H i Recipient Name! I received your email and I shall send you a reply ASAP.

More information

2004 Sun Microsystems, Inc Network Circle, Santa Clara, CA U.S.A. Sun Sun Berkeley BSD UNIX X/Open Company, Ltd. / SunSun MicrosystemsSun

2004 Sun Microsystems, Inc Network Circle, Santa Clara, CA U.S.A. Sun Sun Berkeley BSD UNIX X/Open Company, Ltd. / SunSun MicrosystemsSun SAP livecache Sun Cluster Solaris OS SPARC Sun Microsystems, Inc. 4150 Network Circle Santa Clara, CA 95054 U.S.A. : 817 7374 10 2004 4 A 2004 Sun Microsystems, Inc. 4150 Network Circle, Santa Clara, CA

More information

775i65PE_BIOS_CN.p65

775i65PE_BIOS_CN.p65 1 Main H/W Monitor Boot Security Exit System Overview System Time System Date [ 14:00:09] [Wed 10/20/2004] BIOS Version : 775i65PE BIOS P1.00 Processor Type : Intel (R) CPU 3.20 GHz Processor Speed : 3200

More information

OOAD PowerDesigner OOAD Applying PowerDesigner CASE Tool in OOAD PowerDesigner CASE Tool PowerDesigner PowerDesigner CASE To

OOAD PowerDesigner OOAD Applying PowerDesigner CASE Tool in OOAD PowerDesigner CASE Tool PowerDesigner PowerDesigner CASE To PowerDesigner Applying PowerDesigner CASE Tool in OOAD albertchung@mpinfo.com.tw PowerDesigner CASE Tool PowerDesigner PowerDesigner CASE Tool PowerDesigner CASE Tool CASE Tool PowerDesignerUnified ProcessUMLing

More information

Value Chain ~ (E-Business RD / Pre-Sales / Consultant) APS, Advanc

Value Chain ~ (E-Business RD / Pre-Sales / Consultant) APS, Advanc Key @ Value Chain fanchihmin@yahoo.com.tw 1 Key@ValueChain 1994.6 1996.6 2000.6 2000.10 ~ 2004.10 (E- RD / Pre-Sales / Consultant) APS, Advanced Planning & Scheduling CDP, Collaborative Demand Planning

More information

发行说明, 7.0.1 版

发行说明, 7.0.1 版 发 行 说 明 Websense Web Security Websense Web Filter 7.0.1 版 本 版 本 的 新 特 点 Websense Web Security 和 Websense Web Filter 的 7.0.1 版 本 均 已 本 地 化 为 以 下 语 言 : 法 语 德 语 意 大 利 语 日 语 葡 萄 牙 语 简 体 中 文 西 班 牙 语 繁 体 中 文

More information

ASP.NET MVC Visual Studio MVC MVC 範例 1-1 建立第一個 MVC 專案 Visual Studio MVC step 01 Visual Studio Web ASP.NET Web (.NET Framework) step 02 C:\M

ASP.NET MVC Visual Studio MVC MVC 範例 1-1 建立第一個 MVC 專案 Visual Studio MVC step 01 Visual Studio Web ASP.NET Web (.NET Framework) step 02 C:\M ASP.NET MVC Visual Studio 2017 1 1-4 MVC MVC 範例 1-1 建立第一個 MVC 專案 Visual Studio MVC step 01 Visual Studio Web ASP.NET Web (.NET Framework) step 02 C:\MvcExamples firstmvc MVC 1-7 ASP.NET MVC 1-9 ASP.NET

More information

Junos Pulse Mobile Security R1 2012, Juniper Networks, Inc.

Junos Pulse Mobile Security R1 2012, Juniper Networks, Inc. Junos Pulse Mobile Security 4.0 2012 6 R1 2012, Juniper Networks, Inc. Junos Pulse Mobile Security Juniper Networks, Inc. 1194 North Mathilda Avenue Sunnyvale, California 94089 408-745-2000 www.juniper.net

More information

A API Application Programming Interface 见 应 用 程 序 编 程 接 口 ARP Address Resolution Protocol 地 址 解 析 协 议 为 IP 地 址 到 对 应 的 硬 件 地 址 之 间 提 供 动 态 映 射 阿 里 云 内

A API Application Programming Interface 见 应 用 程 序 编 程 接 口 ARP Address Resolution Protocol 地 址 解 析 协 议 为 IP 地 址 到 对 应 的 硬 件 地 址 之 间 提 供 动 态 映 射 阿 里 云 内 A API Application Programming Interface 见 应 用 程 序 编 程 接 口 ARP Address Resolution Protocol 地 址 解 析 协 议 为 IP 地 址 到 对 应 的 硬 件 地 址 之 间 提 供 动 态 映 射 阿 里 云 内 容 分 发 网 络 Alibaba Cloud Content Delivery Network 一

More information

SyncMail AJAX Manual

SyncMail AJAX Manual HKBN Cloud Mail on Mobile How to setup POP3 and IMAP (Version 1.1) 1 Table of Contents HKBN Cloud Mail 用戶設定 Android 手冊 HKBN Cloud Mail Android Setup... 3 Android 2.X... 3 Android 3.x - 4.X... 6 HKBN Cloud

More information

Domain Management产品文档

Domain Management产品文档 腾讯云Content Delivery Network Domain Management 产品文档 版权声明 2015-2016 腾讯云版权所有 本文档著作权归腾讯云单独所有 未经腾讯云事先书面许可 任何主体不得以任何形式复制 修改 抄袭 传 播全部或部分本文档内容 商标声明 及其它腾讯云服务相关的商标均为腾讯云计算 北京 有限责任公司及其关联公司所有 本文档涉及的第三方 主体的商标 依法由权利人所有

More information

RAID RAID 0 RAID 1 RAID 5 RAID * (-1)* (/ 2)* No Yes Yes Yes SATA A. B. BIOS SATA C. RAID BIOS RAID ( ) D. RAID/AHCI ( ) S ATA S S D ( ) (

RAID RAID 0 RAID 1 RAID 5 RAID * (-1)* (/ 2)* No Yes Yes Yes SATA A. B. BIOS SATA C. RAID BIOS RAID ( ) D. RAID/AHCI ( ) S ATA S S D ( ) ( SATA... 2 RAID/AHCI... 16 Intel Optane... 19 Intel Virtual RAID on CPU (Intel VROC)... 21 RAID RAID 0 RAID 1 RAID 5 RAID 10 2 2 3 4 * (-1)* (/ 2)* No Yes Yes Yes SATA A. B. BIOS SATA C. RAID BIOS RAID

More information

资源管理软件TORQUE与作业调度软件Maui的安装、设置及使用

资源管理软件TORQUE与作业调度软件Maui的安装、设置及使用 TORQUE Maui hmli@ustc.edu.cn 2008 1 1 TORQUE 2 1.1 TORQUE........................... 2 1.2 TORQUE...................... 2 1.3 TORQUE.......................... 4 1.4 TORQUE........................... 4

More information

Some experiences in working with Madagascar: installa7on & development Tengfei Wang, Peng Zou Tongji university

Some experiences in working with Madagascar: installa7on & development Tengfei Wang, Peng Zou Tongji university Some experiences in working with Madagascar: installa7on & development Tengfei Wang, Peng Zou Tongji university Map data @ Google Reproducible research in Madagascar How to conduct a successful installation

More information

NAT环境下采用飞塔NGFW

NAT环境下采用飞塔NGFW 版本 V1.0 时间 2016 年 6 月 版本 作者 王祥 状态 反馈 support_cn@fortinet.com 目录 1 OpenStack 简介... 5 2 版本说明... 6 3 部署架构... 7 4 部署 OpenStack... 8 4.1 基本环境... 8 4.1.1 密码设定... 8 4.1.2 防火墙... 9 4.1.3 主机名和 hosts 文件... 9 4.1.4

More information

C3_ppt.PDF

C3_ppt.PDF C03-101 1 , 2 (Packet-filtering Firewall) (stateful Inspection Firewall) (Proxy) (Circuit Level gateway) (application-level gateway) (Hybrid Firewall) 2 IP TCP 10.0.0.x TCP Any High Any 80 80 10.0.0.x

More information

P4Dual-915GL_BIOS_CN.p65

P4Dual-915GL_BIOS_CN.p65 1 Main H/W Monitor Boot Security Exit System Overview System Time System Date Total Memory DIMM 1 DIMM 2 [ 14:00:09] [Wed 01/05/2005] BIOS Version : P4Dual-915GL BIOS P1.00 Processor Type : Intel (R) Pentium

More information