I was finally able to successfully deploy and run OpenStack on a single physical Ubuntu server for testing. It was a somewhat painful process, as so many things can go wrong - there are just so many moving parts. I was lucky enough to get help on OpenStack from my best buddy and roommate Prashant, which really helped.
This post will demonstrate how to install and configure OpenStack on a single node. At the end you'll be able to set up networking and block storage and create VMs.
Just as a brief overview of OpenStack here are all the parts that I've used:
Object Store (codenamed "Swift") provides object storage. It allows you to store or retrieve files (but not mount directories like a fileserver). I won't be using it in this tutorial, but I'll write a separate post dealing only with Swift, as it's a beast on its own.
Image (codenamed "Glance") provides a catalog and repository for virtual disk images. These disk images are most commonly used in OpenStack Compute.
Compute (codenamed "Nova") provides virtual servers upon demand - KVM, XEN, LXC, etc.
Identity (codenamed "Keystone") provides authentication and authorization for all the OpenStack services. It also provides a catalog of the services within a particular OpenStack cloud.
Network (codenamed "Quantum") provides "network connectivity as a service" between interface devices managed by other OpenStack services (most likely Nova). The service works by allowing users to create their own networks and then attach interfaces to them. Quantum has a pluggable architecture to support many popular networking vendors and technologies. One example is Open vSwitch, which I'll use in this setup.
Block Storage (codenamed "Cinder") provides persistent block storage to guest VMs. This project was born from code originally in Nova (the nova-volume service). In the Ubuntu release, both the nova-volume service and the separate volume service are available. I'll use iSCSI over LVM to export a block device.
Dashboard (codenamed "Horizon") provides a modular web-based user interface for all the OpenStack services written in Django. With this web GUI, you can perform some operations on your cloud like launching an instance, assigning IP addresses and setting access controls.
Here's a conceptual diagram for OpenStack Ubuntu and how all the pieces fit together:
And here's the logical architecture:
For this example deployment I'll be using a single physical Ubuntu 12.04 server with HVM (hardware virtualization) support enabled in the BIOS.
1. Prerequisites
Make sure you have the correct repository from which to download all OpenStack components:
As root run:
[root@Ubuntu:~]# apt-get install ubuntu-cloud-keyring
[root@Ubuntu:~]# echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/folsom main" >> /etc/apt/sources.list
[root@Ubuntu:~]# apt-get update && apt-get upgrade
[root@Ubuntu:~]# reboot
When the server comes back online execute (replace MY_IP with your IP address):
[root@Ubuntu:~]# useradd -s /bin/bash -m openstack
[root@Ubuntu:~]# echo "%openstack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
[root@Ubuntu:~]# su - openstack
[openstack@Ubuntu:~]$ export MY_IP=10.177.129.121
Preseed the MySQL install (Ubuntu 12.04 ships MySQL 5.5, so preseed that package)
[openstack@Ubuntu:~]$ cat <<EOF | sudo debconf-set-selections
mysql-server-5.5 mysql-server/root_password password notmysql
mysql-server-5.5 mysql-server/root_password_again password notmysql
mysql-server-5.5 mysql-server/start_on_boot boolean true
EOF
Install packages and dependencies
[openstack@Ubuntu:~]$ sudo apt-get install -y rabbitmq-server mysql-server python-mysqldb
Configure MySQL to listen on all interfaces
[openstack@Ubuntu:~]$ sudo sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf
[openstack@Ubuntu:~]$ sudo service mysql restart
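The sed one-liner above is blunt: it rewrites every occurrence of 127.0.0.1 in my.cnf, which in a stock config is just the bind-address line. A minimal sketch of what it does, run against a throwaway copy rather than the real file:

```shell
# Mock my.cnf fragment; the real file lives at /etc/mysql/my.cnf.
printf 'bind-address = 127.0.0.1\n' > /tmp/my.cnf.test
# Same substitution the tutorial applies, minus sudo.
sed -i 's/127.0.0.1/0.0.0.0/g' /tmp/my.cnf.test
cat /tmp/my.cnf.test
```

Keep in mind that binding to 0.0.0.0 exposes MySQL on every interface; on anything but a throwaway lab box, restrict it to $MY_IP or firewall port 3306.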
Synchronize date
[openstack@Ubuntu:~]$ sudo ntpdate -u ntp.ubuntu.com
2. Installing the identity service - Keystone
[openstack@Ubuntu:~]$ sudo apt-get install -y keystone
Create a database for keystone
[openstack@Ubuntu:~]$ mysql -u root -pnotmysql -e "CREATE DATABASE keystone;"
[openstack@Ubuntu:~]$ mysql -u root -pnotmysql -e "GRANT ALL ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'notkeystone';"
[openstack@Ubuntu:~]$ mysql -u root -pnotmysql -e "GRANT ALL ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'notkeystone';"
Configure keystone to use MySQL
[openstack@Ubuntu:~]$ sudo sed -i "s|connection = sqlite:////var/lib/keystone/keystone.db|connection = mysql://keystone:notkeystone@$MY_IP/keystone|g" /etc/keystone/keystone.conf
Restart keystone service
[openstack@Ubuntu:~]$ sudo service keystone restart
Verify keystone service successfully restarted
[openstack@Ubuntu:~]$ pgrep -l keystone
Initialize the database schema
[openstack@Ubuntu:~]$ sudo -u keystone keystone-manage db_sync
Add the 'keystone admin' credentials to .bashrc
[openstack@Ubuntu:~]$ cat >> ~/.bashrc <<EOF
export SERVICE_TOKEN=ADMIN
export SERVICE_ENDPOINT=http://$MY_IP:35357/v2.0
EOF
Use the 'keystone admin' credentials
[openstack@Ubuntu:~]$ source ~/.bashrc
Create new tenants (The services tenant will be used later when configuring services to use keystone)
[openstack@Ubuntu:~]$ TENANT_ID=`keystone tenant-create --name MyProject | awk '/ id / { print $4 }'`
[openstack@Ubuntu:~]$ SERVICE_TENANT_ID=`keystone tenant-create --name Services | awk '/ id / { print $4 }'`
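The backticked awk idiom used throughout this post deserves a word: the CLI prints a bordered ASCII table, and `awk '/ id / { print $4 }'` grabs the row whose property column reads `id` and prints the fourth whitespace-separated field, i.e. the value cell. A self-contained sketch against mock `tenant-create` output (the UUID here is made up; only the table shape matters):

```shell
# Mock of the table keystone tenant-create prints.
MOCK_TABLE='+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |                                  |
|     id      | 0b1d1e0ac9944f3d9e4f4a0d1e3a8a0f |
|     name    |            MyProject             |
+-------------+----------------------------------+'
# / id / (with surrounding spaces) matches only the id row, not "description".
TENANT_ID=$(echo "$MOCK_TABLE" | awk '/ id / { print $4 }')
echo "$TENANT_ID"
```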
Create new roles
[openstack@Ubuntu:~]$ MEMBER_ROLE_ID=`keystone role-create --name member | awk '/ id / { print $4 }'`
[openstack@Ubuntu:~]$ ADMIN_ROLE_ID=`keystone role-create --name admin | awk '/ id / { print $4 }'`
Create new users
[openstack@Ubuntu:~]$ MEMBER_USER_ID=`keystone user-create --tenant-id $TENANT_ID --name myuser --pass mypassword | awk '/ id / { print $4 }'`
[openstack@Ubuntu:~]$ ADMIN_USER_ID=`keystone user-create --tenant-id $TENANT_ID --name myadmin --pass mypassword | awk '/ id / { print $4 }'`
Grant roles to users
[openstack@Ubuntu:~]$ keystone user-role-add --user-id $MEMBER_USER_ID --tenant-id $TENANT_ID --role-id $MEMBER_ROLE_ID
[openstack@Ubuntu:~]$ keystone user-role-add --user-id $ADMIN_USER_ID --tenant-id $TENANT_ID --role-id $ADMIN_ROLE_ID
List the new tenant, users, roles, and role assignments
[openstack@Ubuntu:~]$ keystone tenant-list
[openstack@Ubuntu:~]$ keystone role-list
[openstack@Ubuntu:~]$ keystone user-list --tenant-id $TENANT_ID
[openstack@Ubuntu:~]$ keystone user-role-list --tenant-id $TENANT_ID --user-id $MEMBER_USER_ID
[openstack@Ubuntu:~]$ keystone user-role-list --tenant-id $TENANT_ID --user-id $ADMIN_USER_ID
Populate the services in the service catalog
[openstack@Ubuntu:~]$ KEYSTONE_SVC_ID=`keystone service-create --name=keystone --type=identity --description="Keystone Identity Service" | awk '/ id / { print $4 }'`
[openstack@Ubuntu:~]$ GLANCE_SVC_ID=`keystone service-create --name=glance --type=image --description="Glance Image Service" | awk '/ id / { print $4 }'`
[openstack@Ubuntu:~]$ QUANTUM_SVC_ID=`keystone service-create --name=quantum --type=network --description="Quantum Network Service" | awk '/ id / { print $4 }'`
[openstack@Ubuntu:~]$ NOVA_SVC_ID=`keystone service-create --name=nova --type=compute --description="Nova Compute Service" | awk '/ id / { print $4 }'`
[openstack@Ubuntu:~]$ CINDER_SVC_ID=`keystone service-create --name=cinder --type=volume --description="Cinder Volume Service" | awk '/ id / { print $4 }'`
List the new services
[openstack@Ubuntu:~]$ keystone service-list
Populate the endpoints in the service catalog
[openstack@Ubuntu:~]$ keystone endpoint-create --region RegionOne --service-id=$KEYSTONE_SVC_ID --publicurl=http://$MY_IP:5000/v2.0 --internalurl=http://$MY_IP:5000/v2.0 --adminurl=http://$MY_IP:35357/v2.0
[openstack@Ubuntu:~]$ keystone endpoint-create --region RegionOne --service-id=$GLANCE_SVC_ID --publicurl=http://$MY_IP:9292/v1 --internalurl=http://$MY_IP:9292/v1 --adminurl=http://$MY_IP:9292/v1
[openstack@Ubuntu:~]$ keystone endpoint-create --region RegionOne --service-id=$QUANTUM_SVC_ID --publicurl=http://$MY_IP:9696/ --internalurl=http://$MY_IP:9696/ --adminurl=http://$MY_IP:9696/
[openstack@Ubuntu:~]$ keystone endpoint-create --region RegionOne --service-id=$NOVA_SVC_ID --publicurl="http://$MY_IP:8774/v2/%(tenant_id)s" --internalurl="http://$MY_IP:8774/v2/%(tenant_id)s" --adminurl="http://$MY_IP:8774/v2/%(tenant_id)s"
[openstack@Ubuntu:~]$ keystone endpoint-create --region RegionOne --service-id=$CINDER_SVC_ID --publicurl="http://$MY_IP:8776/v1/%(tenant_id)s" --internalurl="http://$MY_IP:8776/v1/%(tenant_id)s" --adminurl="http://$MY_IP:8776/v1/%(tenant_id)s"
List the new endpoints
[openstack@Ubuntu:~]$ keystone endpoint-list
Verify identity service is functioning
[openstack@Ubuntu:~]$ curl -d '{"auth": {"tenantName": "MyProject", "passwordCredentials": {"username": "myuser", "password": "mypassword"}}}' -H "Content-type: application/json" http://$MY_IP:5000/v2.0/tokens | python -m json.tool
[openstack@Ubuntu:~]$ curl -d '{"auth": {"tenantName": "MyProject", "passwordCredentials": {"username": "myadmin", "password": "mypassword"}}}' -H "Content-type: application/json" http://$MY_IP:5000/v2.0/tokens | python -m json.tool
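If the call succeeds you get a JSON body whose `access.token.id` is the token; `python -m json.tool` only pretty-prints it. When you need the token itself in a script, a short Python parse does the job. Sketched here against a mock response (field names follow the v2.0 tokens API; the token value is invented):

```shell
# Mock of the interesting part of a POST /v2.0/tokens response.
MOCK_RESPONSE='{"access": {"token": {"id": "abc123tokenid", "expires": "2013-01-01T00:00:00Z"}}}'
# In practice, pipe the curl output here instead of the mock.
TOKEN_ID=$(echo "$MOCK_RESPONSE" | python3 -c 'import sys, json; print(json.load(sys.stdin)["access"]["token"]["id"])')
echo "$TOKEN_ID"
```

python3 is used here for convenience; the same one-liner works with the Python 2 on a 12.04 box.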
Create the 'user' and 'admin' credentials
[openstack@Ubuntu:~]$ mkdir ~/credentials
[openstack@Ubuntu:~]$ cat >> ~/credentials/user <<EOF
export OS_USERNAME=myuser
export OS_PASSWORD=mypassword
export OS_TENANT_NAME=MyProject
export OS_AUTH_URL=http://$MY_IP:5000/v2.0/
export OS_REGION_NAME=RegionOne
export OS_NO_CACHE=1
EOF
[openstack@Ubuntu:~]$ cat >> ~/credentials/admin <<EOF
export OS_USERNAME=myadmin
export OS_PASSWORD=mypassword
export OS_TENANT_NAME=MyProject
export OS_AUTH_URL=http://$MY_IP:5000/v2.0/
export OS_REGION_NAME=RegionOne
export OS_NO_CACHE=1
EOF
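A quick way to sanity-check a credentials file before trusting it: source it in a subshell and echo the variables back, so your current environment stays untouched. A sketch using a throwaway copy under /tmp:

```shell
# Throwaway credentials file mirroring ~/credentials/user.
cat > /tmp/credentials-check <<EOF
export OS_USERNAME=myuser
export OS_PASSWORD=mypassword
export OS_TENANT_NAME=MyProject
EOF
# Parentheses run a subshell, so the exports don't leak out.
( . /tmp/credentials-check && echo "$OS_USERNAME/$OS_TENANT_NAME" )
```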
Use the 'user' credentials
[openstack@Ubuntu:~]$ source ~/credentials/user
3. Install the image service - Glance
[openstack@Ubuntu:~]$ sudo apt-get install -y glance
Create glance service user in the services tenant
[openstack@Ubuntu:~]$ GLANCE_USER_ID=`keystone user-create --tenant-id $SERVICE_TENANT_ID --name glance --pass notglance | awk '/ id / { print $4 }'`
Grant admin role to glance service user
[openstack@Ubuntu:~]$ keystone user-role-add --user-id $GLANCE_USER_ID --tenant-id $SERVICE_TENANT_ID --role-id $ADMIN_ROLE_ID
List the new user and role assignment
[openstack@Ubuntu:~]$ keystone user-list --tenant-id $SERVICE_TENANT_ID
[openstack@Ubuntu:~]$ keystone user-role-list --tenant-id $SERVICE_TENANT_ID --user-id $GLANCE_USER_ID
Create a database for glance
[openstack@Ubuntu:~]$ mysql -u root -pnotmysql -e "CREATE DATABASE glance;"
[openstack@Ubuntu:~]$ mysql -u root -pnotmysql -e "GRANT ALL ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'notglance';"
[openstack@Ubuntu:~]$ mysql -u root -pnotmysql -e "GRANT ALL ON glance.* TO 'glance'@'%' IDENTIFIED BY 'notglance';"
Configure the glance-api service
[openstack@Ubuntu:~]$ sudo sed -i "s|sql_connection = sqlite:////var/lib/glance/glance.sqlite|sql_connection = mysql://glance:notglance@$MY_IP/glance|g" /etc/glance/glance-api.conf
[openstack@Ubuntu:~]$ sudo sed -i "s/auth_host = 127.0.0.1/auth_host = $MY_IP/g" /etc/glance/glance-api.conf
[openstack@Ubuntu:~]$ sudo sed -i 's/%SERVICE_TENANT_NAME%/Services/g' /etc/glance/glance-api.conf
[openstack@Ubuntu:~]$ sudo sed -i 's/%SERVICE_USER%/glance/g' /etc/glance/glance-api.conf
[openstack@Ubuntu:~]$ sudo sed -i 's/%SERVICE_PASSWORD%/notglance/g' /etc/glance/glance-api.conf
[openstack@Ubuntu:~]$ sudo sed -i 's/#flavor=/flavor = keystone/g' /etc/glance/glance-api.conf
Configure the glance-registry service
[openstack@Ubuntu:~]$ sudo sed -i "s|sql_connection = sqlite:////var/lib/glance/glance.sqlite|sql_connection = mysql://glance:notglance@$MY_IP/glance|g" /etc/glance/glance-registry.conf
[openstack@Ubuntu:~]$ sudo sed -i "s/auth_host = 127.0.0.1/auth_host = $MY_IP/g" /etc/glance/glance-registry.conf
[openstack@Ubuntu:~]$ sudo sed -i 's/%SERVICE_TENANT_NAME%/Services/g' /etc/glance/glance-registry.conf
[openstack@Ubuntu:~]$ sudo sed -i 's/%SERVICE_USER%/glance/g' /etc/glance/glance-registry.conf
[openstack@Ubuntu:~]$ sudo sed -i 's/%SERVICE_PASSWORD%/notglance/g' /etc/glance/glance-registry.conf
[openstack@Ubuntu:~]$ sudo sed -i 's/#flavor=/flavor = keystone/g' /etc/glance/glance-registry.conf
Restart glance services
[openstack@Ubuntu:~]$ sudo service glance-registry restart
[openstack@Ubuntu:~]$ sudo service glance-api restart
Verify glance services successfully restarted
[openstack@Ubuntu:~]$ pgrep -l glance
Initialize the database schema. Ignore the deprecation warning.
[openstack@Ubuntu:~]$ sudo -u glance glance-manage db_sync
Download some images
[openstack@Ubuntu:~]$ mkdir ~/images
[openstack@Ubuntu:~]$ wget https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img -O ~/images/cirros-0.3.0-x86_64-disk.img
Register a qcow2 image
[openstack@Ubuntu:~]$ IMAGE_ID_1=`glance image-create --name "cirros-qcow2" --disk-format qcow2 --container-format bare --is-public True --file ~/images/cirros-0.3.0-x86_64-disk.img | awk '/ id / { print $4 }'`
Verify the images exist in glance
[openstack@Ubuntu:~]$ glance image-list
Examine details of images
[openstack@Ubuntu:~]$ glance image-show $IMAGE_ID_1
4. Install the network service - Quantum
Install dependencies
[openstack@Ubuntu:~]$ sudo apt-get install -y openvswitch-switch
Install the network service
[openstack@Ubuntu:~]$ sudo apt-get install -y quantum-server quantum-plugin-openvswitch
Install the network service agents
[openstack@Ubuntu:~]$ sudo apt-get install -y quantum-plugin-openvswitch-agent quantum-dhcp-agent quantum-l3-agent
Create a database for quantum
[openstack@Ubuntu:~]$ mysql -u root -pnotmysql -e "CREATE DATABASE quantum;"
[openstack@Ubuntu:~]$ mysql -u root -pnotmysql -e "GRANT ALL ON quantum.* TO 'quantum'@'localhost' IDENTIFIED BY 'notquantum';"
[openstack@Ubuntu:~]$ mysql -u root -pnotmysql -e "GRANT ALL ON quantum.* TO 'quantum'@'%' IDENTIFIED BY 'notquantum';"
Configure the quantum OVS plugin
[openstack@Ubuntu:~]$ sudo sed -i "s|sql_connection = sqlite:////var/lib/quantum/ovs.sqlite|sql_connection = mysql://quantum:notquantum@$MY_IP/quantum|g" /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
[openstack@Ubuntu:~]$ sudo sed -i 's/# Default: enable_tunneling = False/enable_tunneling = True/g' /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
[openstack@Ubuntu:~]$ sudo sed -i 's/# Example: tenant_network_type = gre/tenant_network_type = gre/g' /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
[openstack@Ubuntu:~]$ sudo sed -i 's/# Example: tunnel_id_ranges = 1:1000/tunnel_id_ranges = 1:1000/g' /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
[openstack@Ubuntu:~]$ sudo sed -i "s/# Default: local_ip = 10.0.0.3/local_ip = $MY_IP/g" /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
Create quantum service user in the services tenant
[openstack@Ubuntu:~]$ QUANTUM_USER_ID=`keystone user-create --tenant-id $SERVICE_TENANT_ID --name quantum --pass notquantum | awk '/ id / { print $4 }'`
Grant admin role to quantum service user
[openstack@Ubuntu:~]$ keystone user-role-add --user-id $QUANTUM_USER_ID --tenant-id $SERVICE_TENANT_ID --role-id $ADMIN_ROLE_ID
List the new user and role assignment
[openstack@Ubuntu:~]$ keystone user-list --tenant-id $SERVICE_TENANT_ID
[openstack@Ubuntu:~]$ keystone user-role-list --tenant-id $SERVICE_TENANT_ID --user-id $QUANTUM_USER_ID
Configure the quantum service to use keystone
[openstack@Ubuntu:~]$ sudo sed -i "s/auth_host = 127.0.0.1/auth_host = $MY_IP/g" /etc/quantum/api-paste.ini
[openstack@Ubuntu:~]$ sudo sed -i 's/%SERVICE_TENANT_NAME%/Services/g' /etc/quantum/api-paste.ini
[openstack@Ubuntu:~]$ sudo sed -i 's/%SERVICE_USER%/quantum/g' /etc/quantum/api-paste.ini
[openstack@Ubuntu:~]$ sudo sed -i 's/%SERVICE_PASSWORD%/notquantum/g' /etc/quantum/api-paste.ini
[openstack@Ubuntu:~]$ sudo sed -i 's/# auth_strategy = keystone/auth_strategy = keystone/g' /etc/quantum/quantum.conf
Configure the L3 agent to use keystone
[openstack@Ubuntu:~]$ sudo sed -i "s|auth_url = http://localhost:35357/v2.0|auth_url = http://$MY_IP:35357/v2.0|g" /etc/quantum/l3_agent.ini
[openstack@Ubuntu:~]$ sudo sed -i 's/%SERVICE_TENANT_NAME%/Services/g' /etc/quantum/l3_agent.ini
[openstack@Ubuntu:~]$ sudo sed -i 's/%SERVICE_USER%/quantum/g' /etc/quantum/l3_agent.ini
[openstack@Ubuntu:~]$ sudo sed -i 's/%SERVICE_PASSWORD%/notquantum/g' /etc/quantum/l3_agent.ini
Start Open vSwitch
[openstack@Ubuntu:~]$ sudo service openvswitch-switch restart
Create the integration and external bridges
[openstack@Ubuntu:~]$ sudo ovs-vsctl add-br br-int
[openstack@Ubuntu:~]$ sudo ovs-vsctl add-br br-ex
Restart the quantum services
[openstack@Ubuntu:~]$ sudo service quantum-server restart
[openstack@Ubuntu:~]$ sudo service quantum-plugin-openvswitch-agent restart
[openstack@Ubuntu:~]$ sudo service quantum-dhcp-agent restart
[openstack@Ubuntu:~]$ sudo service quantum-l3-agent restart
Create a network and subnet
[openstack@Ubuntu:~]$ PRIVATE_NET_ID=`quantum net-create private | awk '/ id / { print $4 }'`
[openstack@Ubuntu:~]$ PRIVATE_SUBNET1_ID=`quantum subnet-create --name private-subnet1 $PRIVATE_NET_ID 10.0.0.0/29 | awk '/ id / { print $4 }'`
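A /29 is deliberately tiny. Of its 8 addresses, 2 go to the network and broadcast addresses, and Quantum takes more for the gateway and DHCP ports, so only a handful remain for VMs. You can check a CIDR's usable-host count before committing to it (python3's ipaddress module is used here purely for illustration; it isn't part of the OpenStack tooling):

```shell
# Hosts in 10.0.0.0/29, excluding the network and broadcast addresses.
python3 -c 'import ipaddress; print(len(list(ipaddress.ip_network("10.0.0.0/29").hosts())))'
```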
List network and subnet
[openstack@Ubuntu:~]$ quantum net-list
[openstack@Ubuntu:~]$ quantum subnet-list
Examine details of network and subnet
[openstack@Ubuntu:~]$ quantum net-show $PRIVATE_NET_ID
[openstack@Ubuntu:~]$ quantum subnet-show $PRIVATE_SUBNET1_ID
To add public connectivity to your VMs, perform the following:
Bring up eth1
[openstack@Ubuntu:~]$ sudo ip link set dev eth1 up
Attach eth1 to br-ex
[openstack@Ubuntu:~]$ sudo ovs-vsctl add-port br-ex eth1
[openstack@Ubuntu:~]$ sudo ovs-vsctl show
As the admin user, create a provider-owned network and subnet, and set MY_PUBLIC_SUBNET_CIDR to your public CIDR
[openstack@Ubuntu:~]$ source ~/credentials/admin
[openstack@Ubuntu:~]$ echo $MY_PUBLIC_SUBNET_CIDR
[openstack@Ubuntu:~]$ PUBLIC_NET_ID=`quantum net-create public --router:external=True | awk '/ id / { print $4 }'`
[openstack@Ubuntu:~]$ PUBLIC_SUBNET_ID=`quantum subnet-create --name public-subnet $PUBLIC_NET_ID $MY_PUBLIC_SUBNET_CIDR -- --enable_dhcp=False | awk '/ id / { print $4 }'`
Switch back to the 'user' credentials
[openstack@Ubuntu:~]$ source ~/credentials/user
Create a router (the name MyRouter is arbitrary), attach it to the private subnet, then connect it to the public network
[openstack@Ubuntu:~]$ ROUTER_ID=`quantum router-create MyRouter | awk '/ id / { print $4 }'`
[openstack@Ubuntu:~]$ quantum router-interface-add $ROUTER_ID $PRIVATE_SUBNET1_ID
[openstack@Ubuntu:~]$ quantum router-gateway-set $ROUTER_ID $PUBLIC_NET_ID
Examine details of router
[openstack@Ubuntu:~]$ quantum router-show $ROUTER_ID
Get the instance ID for your VM (e.g., MyFirstInstance from section 5)
[openstack@Ubuntu:~]$ nova show MyFirstInstance
[openstack@Ubuntu:~]$ INSTANCE_ID=<instance id from the output above>
Find the port id for instance
[openstack@Ubuntu:~]$ INSTANCE_PORT_ID=`quantum port-list -f csv -c id -- --device_id=$INSTANCE_ID | awk 'END{print};{gsub(/[\"\r]/,"")}'`
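That awk expression is dense: `-f csv` output arrives quoted and, depending on the client, with carriage returns, so `gsub(/[\"\r]/,"")` strips both from every line and `END{print}` emits the last line, which is the port id once the header row has scrolled past. A sketch on mock CSV (the UUID is made up):

```shell
# Mock of `quantum port-list -f csv -c id` output: quoted header plus one row, CRLF endings.
MOCK_CSV=$(printf '"id"\r\n"3a7f0b12-9f64-4f6e-bd3c-0e1a2b3c4d5e"\r\n')
PORT_ID=$(echo "$MOCK_CSV" | awk 'END{print};{gsub(/[\"\r]/,"")}')
echo "$PORT_ID"
```

Note that it keeps only the last row, so it assumes the device_id filter matched exactly one port.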
Create a floating IP and attach it to the instance
[openstack@Ubuntu:~]$ quantum floatingip-create --port_id=$INSTANCE_PORT_ID $PUBLIC_NET_ID
5. Install the compute service - Nova
[openstack@Ubuntu:~]$ sudo apt-get install -y nova-api nova-scheduler nova-compute nova-cert nova-consoleauth genisoimage
Create nova service user in the services tenant
[openstack@Ubuntu:~]$ NOVA_USER_ID=`keystone user-create --tenant-id $SERVICE_TENANT_ID --name nova --pass notnova | awk '/ id / { print $4 }'`
Grant admin role to nova service user
[openstack@Ubuntu:~]$ keystone user-role-add --user-id $NOVA_USER_ID --tenant-id $SERVICE_TENANT_ID --role-id $ADMIN_ROLE_ID
List the new user and role assignment
[openstack@Ubuntu:~]$ keystone user-list --tenant-id $SERVICE_TENANT_ID
[openstack@Ubuntu:~]$ keystone user-role-list --tenant-id $SERVICE_TENANT_ID --user-id $NOVA_USER_ID
Create a database for nova
[openstack@Ubuntu:~]$ mysql -u root -pnotmysql -e "CREATE DATABASE nova;"
[openstack@Ubuntu:~]$ mysql -u root -pnotmysql -e "GRANT ALL ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'notnova';"
[openstack@Ubuntu:~]$ mysql -u root -pnotmysql -e "GRANT ALL ON nova.* TO 'nova'@'%' IDENTIFIED BY 'notnova';"
Configure nova
[openstack@Ubuntu:~]$ cat <<EOF | sudo tee -a /etc/nova/nova.conf
network_api_class=nova.network.quantumv2.api.API
quantum_url=http://$MY_IP:9696
quantum_auth_strategy=keystone
quantum_admin_tenant_name=Services
quantum_admin_username=quantum
quantum_admin_password=notquantum
quantum_admin_auth_url=http://$MY_IP:35357/v2.0
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
sql_connection=mysql://nova:notnova@$MY_IP/nova
auth_strategy=keystone
my_ip=$MY_IP
force_config_drive=True
EOF
Disable verbose logging
[openstack@Ubuntu:~]$ sudo sed -i 's/verbose=True/verbose=False/g' /etc/nova/nova.conf
Configure nova to use keystone
[openstack@Ubuntu:~]$ sudo sed -i "s/auth_host = 127.0.0.1/auth_host = $MY_IP/g" /etc/nova/api-paste.ini
[openstack@Ubuntu:~]$ sudo sed -i 's/%SERVICE_TENANT_NAME%/Services/g' /etc/nova/api-paste.ini
[openstack@Ubuntu:~]$ sudo sed -i 's/%SERVICE_USER%/nova/g' /etc/nova/api-paste.ini
[openstack@Ubuntu:~]$ sudo sed -i 's/%SERVICE_PASSWORD%/notnova/g' /etc/nova/api-paste.ini
Initialize the nova database
[openstack@Ubuntu:~]$ sudo -u nova nova-manage db sync
Restart nova services
[openstack@Ubuntu:~]$ sudo service nova-api restart
[openstack@Ubuntu:~]$ sudo service nova-scheduler restart
[openstack@Ubuntu:~]$ sudo service nova-compute restart
[openstack@Ubuntu:~]$ sudo service nova-cert restart
[openstack@Ubuntu:~]$ sudo service nova-consoleauth restart
Verify nova services successfully restarted
[openstack@Ubuntu:~]$ pgrep -l nova
Verify nova services are functioning
[openstack@Ubuntu:~]$ sudo nova-manage service list
List images
[openstack@Ubuntu:~]$ nova image-list
List flavors
[openstack@Ubuntu:~]$ nova flavor-list
Boot an instance using flavor and image names (if names are unique)
[openstack@Ubuntu:~]$ nova boot --image cirros-qcow2 --flavor m1.tiny MyFirstInstance
Boot an instance using flavor and image IDs
[openstack@Ubuntu:~]$ nova boot --image $IMAGE_ID_1 --flavor 1 MySecondInstance
List instances, notice status of instance
[openstack@Ubuntu:~]$ nova list
Show details of instance
[openstack@Ubuntu:~]$ nova show MyFirstInstance
View console log of instance
[openstack@Ubuntu:~]$ nova console-log MyFirstInstance
Get the network namespace (e.g., qdhcp-5ab46e23-118a-4cad-9ca8-51d56a5b6b8c)
[openstack@Ubuntu:~]$ sudo ip netns
[openstack@Ubuntu:~]$ NETNS_ID=qdhcp-$PRIVATE_NET_ID
Ping first instance after status is active
[openstack@Ubuntu:~]$ sudo ip netns exec $NETNS_ID ping -c 3 10.0.0.3
Log into first instance
[openstack@Ubuntu:~]$ sudo ip netns exec $NETNS_ID ssh cirros@10.0.0.3
If you get a 'REMOTE HOST IDENTIFICATION HAS CHANGED' warning from previous command
[openstack@Ubuntu:~]$ sudo ip netns exec $NETNS_ID ssh-keygen -f "/root/.ssh/known_hosts" -R 10.0.0.3
Ping second instance from first instance
[openstack@host1:~]$ ping -c 3 10.0.0.4
Log into second instance from first instance
[openstack@host1:~]$ ssh cirros@10.0.0.4
Log out of second instance
[openstack@host2:~]$ exit
Log out of first instance
[openstack@host1:~]$ exit
Use virsh to talk directly to libvirt
[openstack@Ubuntu:~]$ sudo virsh list --all
Delete instances
[openstack@Ubuntu:~]$ nova delete MyFirstInstance
[openstack@Ubuntu:~]$ nova delete MySecondInstance
List instances, notice status of instance
[openstack@Ubuntu:~]$ nova list
To start an LXC container, do the following (note that `sudo echo ... >> file` fails because the redirect runs as your user; `tee -a` under sudo does what was intended):
[openstack@Ubuntu:~]$ sudo apt-get install nova-compute-lxc lxctl
[openstack@Ubuntu:~]$ echo "compute_driver=libvirt.LibvirtDriver" | sudo tee -a /etc/nova/nova.conf
[openstack@Ubuntu:~]$ echo "libvirt_type=lxc" | sudo tee -a /etc/nova/nova.conf
[openstack@Ubuntu:~]$ sudo cat /etc/nova/nova-compute.conf
[DEFAULT]
libvirt_type=lxc
You need to use a raw image:
[openstack@Ubuntu:~]$ wget http://uec-images.ubuntu.com/releases/precise/release/ubuntu-12.04-server-cloudimg-amd64.tar.gz -O images/ubuntu-12.04-server-cloudimg-amd64.tar.gz
[openstack@Ubuntu:~]$ cd images; tar zxfv ubuntu-12.04-server-cloudimg-amd64.tar.gz; cd ..
[openstack@Ubuntu:~]$ glance image-create --name "UbuntuLXC" --disk-format raw --container-format bare --is-public True --file images/precise-server-cloudimg-amd64.img
[openstack@Ubuntu:~]$ glance image-update UbuntuLXC --property hypervisor_type=lxc
Now you can start the LXC container with nova:
[openstack@Ubuntu:~]$ nova boot --image UbuntuLXC --flavor m1.tiny LXC
The instance files and rootfs will be located in /var/lib/nova/instances.
Logs go to /var/log/nova/nova-compute.log.
VNC does not work with LXC, but the console and SSH do.
6. Install the dashboard - Horizon
[openstack@Ubuntu:~]$ sudo apt-get install -y memcached novnc
[openstack@Ubuntu:~]$ sudo apt-get install -y --no-install-recommends openstack-dashboard nova-novncproxy
Configure nova for VNC
[openstack@Ubuntu:~]$ ( cat | sudo tee -a /etc/nova/nova.conf ) <<EOF
novncproxy_base_url=http://$MY_IP:6080/vnc_auto.html
vncserver_proxyclient_address=$MY_IP
vncserver_listen=0.0.0.0
EOF
Set default role
[openstack@Ubuntu:~]$ sudo sed -i 's/OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member"/OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"/g' /etc/openstack-dashboard/local_settings.py
Restart the nova services
[openstack@Ubuntu:~]$ sudo service nova-api restart
[openstack@Ubuntu:~]$ sudo service nova-scheduler restart
[openstack@Ubuntu:~]$ sudo service nova-compute restart
[openstack@Ubuntu:~]$ sudo service nova-cert restart
[openstack@Ubuntu:~]$ sudo service nova-consoleauth restart
[openstack@Ubuntu:~]$ sudo service nova-novncproxy restart
[openstack@Ubuntu:~]$ sudo service apache2 restart
Point your browser to http://$MY_IP/horizon. The credentials we created earlier are myadmin/mypassword.
7. Install the volume service - Cinder
[openstack@Ubuntu:~]$ sudo apt-get install -y cinder-api cinder-scheduler cinder-volume
Create cinder service user in the services tenant
[openstack@Ubuntu:~]$ CINDER_USER_ID=`keystone user-create --tenant-id $SERVICE_TENANT_ID --name cinder --pass notcinder | awk '/ id / { print $4 }'`
Grant admin role to cinder service user
[openstack@Ubuntu:~]$ keystone user-role-add --user-id $CINDER_USER_ID --tenant-id $SERVICE_TENANT_ID --role-id $ADMIN_ROLE_ID
List the new user and role assignment
[openstack@Ubuntu:~]$ keystone user-list --tenant-id $SERVICE_TENANT_ID
[openstack@Ubuntu:~]$ keystone user-role-list --tenant-id $SERVICE_TENANT_ID --user-id $CINDER_USER_ID
Create a database for cinder
[openstack@Ubuntu:~]$ mysql -u root -pnotmysql -e "CREATE DATABASE cinder;"
[openstack@Ubuntu:~]$ mysql -u root -pnotmysql -e "GRANT ALL ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'notcinder';"
[openstack@Ubuntu:~]$ mysql -u root -pnotmysql -e "GRANT ALL ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'notcinder';"
Configure cinder
[openstack@Ubuntu:~]$ ( cat | sudo tee -a /etc/cinder/cinder.conf ) <<EOF
sql_connection = mysql://cinder:notcinder@$MY_IP/cinder
my_ip = $MY_IP
EOF
Configure cinder-api to use keystone
[openstack@Ubuntu:~]$ sudo sed -i "s/service_host = 127.0.0.1/service_host = $MY_IP/g" /etc/cinder/api-paste.ini
[openstack@Ubuntu:~]$ sudo sed -i "s/auth_host = 127.0.0.1/auth_host = $MY_IP/g" /etc/cinder/api-paste.ini
[openstack@Ubuntu:~]$ sudo sed -i 's/admin_tenant_name = %SERVICE_TENANT_NAME%/admin_tenant_name = Services/g' /etc/cinder/api-paste.ini
[openstack@Ubuntu:~]$ sudo sed -i 's/admin_user = %SERVICE_USER%/admin_user = cinder/g' /etc/cinder/api-paste.ini
[openstack@Ubuntu:~]$ sudo sed -i 's/admin_password = %SERVICE_PASSWORD%/admin_password = notcinder/g' /etc/cinder/api-paste.ini
Initialize the database schema
[openstack@Ubuntu:~]$ sudo -u cinder cinder-manage db sync
Configure nova to use cinder
[openstack@Ubuntu:~]$ ( cat | sudo tee -a /etc/nova/nova.conf ) <<EOF
volume_manager=cinder.volume.manager.VolumeManager
volume_api_class=nova.volume.cinder.API
enabled_apis=osapi_compute,metadata
EOF
Restart nova-api to disable the nova-volume api (osapi_volume)
[openstack@Ubuntu:~]$ sudo service nova-api restart
[openstack@Ubuntu:~]$ sudo service nova-scheduler restart
[openstack@Ubuntu:~]$ sudo service nova-compute restart
[openstack@Ubuntu:~]$ sudo service nova-cert restart
[openstack@Ubuntu:~]$ sudo service nova-consoleauth restart
[openstack@Ubuntu:~]$ sudo service nova-novncproxy restart
Configure tgt
[openstack@Ubuntu:~]$ ( cat | sudo tee -a /etc/tgt/targets.conf ) <<EOF
default-driver iscsi
EOF
Restart tgt and open-iscsi
[openstack@Ubuntu:~]$ sudo service tgt restart
[openstack@Ubuntu:~]$ sudo service open-iscsi restart
Create the volume group (this example uses /dev/sda4; substitute your own spare partition)
[openstack@Ubuntu:~]$ sudo pvcreate /dev/sda4
[openstack@Ubuntu:~]$ sudo vgcreate cinder-volumes /dev/sda4
Verify the volume group
[openstack@Ubuntu:~]$ sudo vgdisplay
Restart the volume services
[openstack@Ubuntu:~]$ sudo service cinder-volume restart
[openstack@Ubuntu:~]$ sudo service cinder-scheduler restart
[openstack@Ubuntu:~]$ sudo service cinder-api restart
Create a new volume
[openstack@Ubuntu:~]$ cinder create 1 --display-name MyFirstVolume
Boot an instance to attach volume to
[openstack@Ubuntu:~]$ nova boot --image cirros-qcow2 --flavor m1.tiny MyVolumeInstance
List instances, notice status of instance
[openstack@Ubuntu:~]$ nova list
List volumes, notice status of volume
[openstack@Ubuntu:~]$ cinder list
Attach volume to instance after instance is active, and volume is available
[openstack@Ubuntu:~]$ nova volume-attach <instance-id> <volume-id> /dev/vdc
Log into the instance
[openstack@Ubuntu:~]$ sudo ip netns exec $NETNS_ID ssh cirros@10.0.0.3
If you get a 'REMOTE HOST IDENTIFICATION HAS CHANGED' warning from previous command
[openstack@Ubuntu:~]$ sudo ip netns exec $NETNS_ID ssh-keygen -f "/root/.ssh/known_hosts" -R 10.0.0.3
Make filesystem on volume
[openstack@Ubuntu:~]$ sudo mkfs.ext3 /dev/vdc
Create a mountpoint
[openstack@Ubuntu:~]$ sudo mkdir /extraspace
Mount the volume at the mountpoint
[openstack@Ubuntu:~]$ sudo mount /dev/vdc /extraspace
Create a file on the volume
[openstack@Ubuntu:~]$ sudo touch /extraspace/helloworld.txt
[openstack@Ubuntu:~]$ sudo ls /extraspace
Unmount the volume
[openstack@Ubuntu:~]$ sudo umount /extraspace
Log out of instance
[openstack@Ubuntu:~]$ exit
Detach volume from instance
[openstack@Ubuntu:~]$ nova volume-detach <instance-id> <volume-id>
List volumes, notice status of volume
[openstack@Ubuntu:~]$ cinder list
Delete instance
[openstack@Ubuntu:~]$ nova delete MyVolumeInstance