OpenStack Kilo manual deployment notes



cloud_ruiy 2015-07-03 16:55:00

I. Network planning and configuration for the Neutron hosts (controller node, network node, compute node)

1. controller.cc node

(network configuration screenshot)

2. network.cc node network configuration (screenshot)

3. compute.cc node network configuration (screenshot)

4. Local name resolution configuration on each node

Neutron initial network IP pool ranges:

External network ip pool: 192.168.199.170 ~ 192.168.199.220

Tenant network ip pool: 192.168.254.0/24
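The local name resolution configuration (item 4 above) can be sketched as an /etc/hosts fragment. Only the controller's address (192.168.199.153, which reappears in the nova.conf later in these notes) comes from the document; the network and compute addresses are hypothetical placeholders on the same management network:

```
# /etc/hosts on every node
192.168.199.153   controller.cc
192.168.199.154   network.cc    # hypothetical address
192.168.199.155   compute.cc    # hypothetical address
```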

II. Installation procedure

1. NTP between the nodes; the NTP server runs on controller.cc

On the NTP server, change ntp_server to asia.pool.ntp.org

On the other nodes, simply point server at controller.cc

Simple NTP verification
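A minimal sketch of the two NTP configurations described above, assuming the classic ntp package on Ubuntu (file /etc/ntp.conf):

```
# /etc/ntp.conf on controller.cc
server asia.pool.ntp.org iburst

# /etc/ntp.conf on network.cc and compute.cc
server controller.cc iburst
```

A quick check on any node is `ntpq -c peers`, which should list the configured upstream server.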

2. OpenStack packages: enable the OpenStack online repository

Run the following on all nodes

Simple verification
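On Ubuntu 14.04 (trusty), enabling the Kilo cloud archive on every node typically looks like the following sketch; verify the repository line against the Canonical cloud archive for your release:

```shell
apt-get install -y ubuntu-cloud-keyring
echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu" \
     "trusty-updates/kilo main" \
     > /etc/apt/sources.list.d/cloudarchive-kilo.list
apt-get update && apt-get dist-upgrade -y
```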

3. SQL database

libvirt provides:

1. Remote management using TLS encryption and x509 certificates
2. Remote management authenticating with Kerberos and SASL
3. Local access control using PolicyKit
4. Zero-conf discovery using Avahi multicast-DNS
5. Management of virtual machines, virtual networks and storage
6. Portable client API for Linux, Solaris and Windows

Install mariadb-server and python-mysqldb, then edit the MariaDB configuration:
[mysqld]
bind-address     = controller.cc
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
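Each OpenStack service also needs its database created and its user granted access before the service is installed. A keystone example, assuming the same 321 password used for the other services in these notes:

```sql
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '321';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '321';
```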

Message queue
Install: apt-get install rabbitmq-server
To configure the message queue service, add the openstack user and grant it permissions:
rabbitmqctl add_user openstack 321
rabbitmqctl set_permissions openstack ".*" ".*" ".*"

Install and configure the OpenStack Identity service
Services on controller.cc: NTP, SQL, message queue, and the Identity service (keystone)
For performance, this configuration deploys the Apache HTTP server to handle requests and Memcached to store tokens instead of a SQL database

Admin token: bafe8e2376ec5fe211c6
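A sketch of the matching /etc/keystone/keystone.conf fragment for the Apache + Memcached deployment described above; the memcache driver paths follow the Kilo layout and should be treated as assumptions to verify:

```ini
[DEFAULT]
admin_token = bafe8e2376ec5fe211c6

[memcache]
servers = localhost:11211

[token]
provider = keystone.token.providers.uuid.Provider
driver = keystone.token.persistence.backends.memcache.Token
```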

Create projects, users, and roles
The Identity service provides authentication services for each OpenStack service. The authentication service uses a combination of domains, projects (tenants), users, and roles.

Create an administrative project, user, and role for administrative operations in your environment.

This guide uses a service project that contains a unique user for each service that you add to your environment.

A user can belong to multiple projects and roles:
openstack role add --project admin --user {user} {role}

For security reasons, disable the temporary authentication token mechanism.

Associate the qinRui user with the admin project and the admin role.

Associate the rui user with the rui project and the admin role.
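The two associations above can be expressed with the openstack CLI (Kilo syntax; passwords are prompted for rather than assumed):

```shell
openstack project create --description "Admin Project" admin
openstack user create --password-prompt qinRui
openstack role create admin
openstack role add --project admin --user qinRui admin

openstack project create rui
openstack user create --password-prompt rui
openstack role add --project rui --user rui admin
```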

The OpenStack Image service (glance) enables users to discover, register, and retrieve virtual machine images. It offers a REST API that lets you query virtual machine image metadata and retrieve an actual image. You can store virtual machine images made available through the Image service in a variety of locations, from simple file systems to object-storage systems such as OpenStack Object Storage.

/var/lib/glance/images/

The OpenStack Image service includes the following components:
glance-api:    image discovery, retrieval, and storage
glance-registry:    stores, processes, and retrieves metadata about images. Metadata includes items such as size and type.

Security note: the registry is a private internal service meant for use by the OpenStack Image service.
database:    stores image metadata; you can choose your database depending on your preference
storage repository for image files:    various repository types are supported

On the controller.cc node, for simplicity, this configuration stores images on the local file system.

Git usage test

Associate the newly created glance user with the service project and the admin role

Create the service entity and API endpoint
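For glance this step typically looks like the following (Kilo identity v2 endpoint syntax; RegionOne is an assumed region name):

```shell
openstack service create --name glance \
  --description "OpenStack Image service" image
openstack endpoint create \
  --publicurl http://controller.cc:9292 \
  --internalurl http://controller.cc:9292 \
  --adminurl http://controller.cc:9292 \
  --region RegionOne \
  image
```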

/etc/glance/glance-api.conf
default_store = file
filesystem_store_datadir = /var/lib/glance/images

The lesson from the screenshot above: no line in this configuration file may begin with whitespace.
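To verify the Image service end to end, a common smoke test is uploading a CirrOS image and confirming it lands under /var/lib/glance/images/ (the CirrOS version here is an example):

```shell
wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
glance image-create --name "cirros-0.3.4-x86_64" \
  --file cirros-0.3.4-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --visibility public --progress
glance image-list
ls /var/lib/glance/images/
```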

libvirt development environment

Another task: on Windows, bind an additional IP (10.0.1.115) to a single NIC and test whether it can ping a bridged virtual machine at 10.0.1.91.

Problems found and resolved

Add the Compute service
OpenStack Compute
Install and configure the controller node
Install and configure a compute node

Another work task is to deploy an OpenStack production environment using Fuel.

Compute core: the nova-compute service is a worker daemon that creates and terminates virtual machine instances through hypervisor APIs. Processing is fairly complex; basically, the daemon accepts actions from the queue and performs a series of system commands, such as launching a KVM instance, while updating its state in the database.

nova-scheduler takes a virtual machine instance request from the queue and determines on which compute server host it runs.

Core compute, networking for VMs: the nova-network worker daemon is similar to the nova-compute service. It accepts networking tasks from the queue and manipulates the network, performing tasks such as setting up bridging interfaces or changing iptables rules.

nova-consoleauth daemon: authorizes tokens for users that console proxies provide (see nova-novncproxy and nova-xvpvncproxy). This service must be running for console proxies to work; you can run proxies of either type.

The nova-novncproxy daemon provides a proxy for accessing running instances through a VNC connection and supports browser-based noVNC clients.

noVNC client: browser-based HTML5 client

The nova-xvpvncproxy daemon provides a proxy for accessing running instances through a VNC connection and supports an OpenStack-specific Java client.

Populate the Compute database
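Assuming the [database] connection string shown in the nova.conf in these notes is in place, populating the database is a single command run as the nova user:

```shell
su -s /bin/sh -c "nova-manage db sync" nova
```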

 

Complete nova configuration file for the controller.cc node

[DEFAULT]
verbose = True
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
libvirt_use_virtio_for_bridges=True
verbose=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
enabled_apis=ec2,osapi_compute,metadata

rpc_backend = rabbit
auth_strategy = keystone
my_ip = 192.168.199.153
vncserver_listen = 192.168.199.153
vncserver_proxyclient_address = 192.168.199.153

[database]
connection = mysql://nova:321@controller.cc/nova

[oslo_messaging_rabbit]
rabbit_host = controller.cc
rabbit_userid = openstack
rabbit_password = 321

[keystone_authtoken]
auth_uri = http://controller.cc:5000
auth_url = http://controller.cc:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = 321

[glance]
host = controller.cc

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

install and configure a compute node

The server component listens on all IP addresses, while the proxy component listens only on the management interface IP address of the compute node. The base URL indicates the location where you can use a web browser to access remote consoles of instances on this compute node.

To finalize the installation:
Determine whether your compute node supports hardware acceleration for virtual machines.

If it does not, configure libvirt to use QEMU instead of KVM.
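The check and the QEMU fallback can be sketched as:

```shell
# count CPU flags for Intel VT-x (vmx) or AMD-V (svm)
egrep -c '(vmx|svm)' /proc/cpuinfo
# if the result is 0, the node lacks hardware acceleration:
# set virt_type = qemu in /etc/nova/nova-compute.conf
#
#   [libvirt]
#   virt_type = qemu
#
# then restart the service
service nova-compute restart
```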

 

List the service components to verify successful launch and registration of each process.

This output should indicate four service components enabled on the controller node and one (nova-compute) enabled on the compute node.

List API endpoints in the Identity service to verify connectivity with the Identity service.
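Both verification steps, run from the controller with the admin credentials sourced:

```shell
nova service-list   # expect nova-cert, nova-consoleauth, nova-scheduler and
                    # nova-conductor on controller.cc, nova-compute on compute.cc
nova endpoints      # lists the API endpoints known to the Identity service
```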