Sunday, 23 November 2014

Quick Logins with ssh Client Keys

When you're an admin on more than a few machines, being able to navigate quickly to a shell on any given server is critical. Having to type "ssh my.server.com" (followed by a password) is not only tedious, but it breaks one's concentration. Suddenly having to shift from "where's the problem?" to "getting there" and back to "what's all this, then?" has led more than one admin to premature senility. It promotes the digital equivalent of "why did I come into this room, anyway?" (In addition, the problem is only made worse by /usr/games/fortune!)

At any rate, more effort spent logging into a machine means less effort spent solving problems. Recent versions of ssh offer a secure alternative to endlessly entering a password: public key exchange.

To use public keys with an ssh server, you'll first need to generate a public/private key pair:

[root@host]# ssh-keygen -t rsa 

You can also use -t dsa for DSA keys, or -t rsa1 if you're using Protocol v1. (And shame on you if you are! Upgrade to v2 as soon as you can!)

After you enter the above command, you should see something like this:

Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 

Just hit Enter there. It will then ask you for a passphrase; just hit Enter twice (but read the security note below). Here's what the results should look like:

Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
a6:5c:c3:eb:18:94:0b:06:a1:a6:29:58:fa:80:0a:bc root@host 

This created two files, ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub. To use this keypair on a server, try this:

[root@host]# ssh server "mkdir -p .ssh; chmod 0700 .ssh"
[root@host]# scp .ssh/id_rsa.pub server:.ssh/authorized_keys 

Of course, substitute your server name for server. It should ask for your password both times. Now, simply ssh server and it should log you in automatically, without a password. (Note that the scp above will overwrite any existing authorized_keys on the server; if you already have keys there, append to the file instead.) And yes, it will use your shiny new public key for scp, too.
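
If your OpenSSH installation ships with ssh-copy-id (most do), it handles the directory creation, the append, and the permissions in one step, and is a safe replacement for the two commands above:

[root@host]# ssh-copy-id server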

If that didn't work for you, check the file permissions on both ~/.ssh/* and server:~/.ssh/*. Your private key (id_rsa) should be 0600 (and be present only on your local machine), the ~/.ssh directories themselves should be 0700, and authorized_keys should be 0644 or more restrictive; sshd will silently ignore your key if the file or directory is writable by anyone else.
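
If in doubt, these commands set sane permissions on both ends:

[root@host]# chmod 0700 ~/.ssh; chmod 0600 ~/.ssh/id_rsa
[root@host]# ssh server "chmod 0700 .ssh; chmod 0600 .ssh/authorized_keys"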

Terrific. So you can now ssh server quickly and with a minimum of fuss.

Some consider the use of public keys a potential security risk. After all, one only has to steal a copy of your private key to obtain access to your servers. While this is true, the same is certainly true of passwords.

Ask yourself, how many times a day do you enter a password to gain shell access to a machine (or scp a file)? How frequently is it the same password on many (or all) of those machines? Have you ever used that password in a way that might be questionable (on a web site, on a personal machine that isn't quite up to date, or possibly with an ssh client on a machine that you don't directly control)? If any of these possibilities sound familiar, then consider that an ssh key in the same setting would make it virtually impossible for an attacker to later gain unauthorized access (provided, of course, that you keep your private key safe).
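
A good middle ground is to give the key a passphrase after all, and let ssh-agent cache the decrypted key for your session, so you type the passphrase once instead of a password per login (assuming a standard OpenSSH agent setup):

[root@host]# eval `ssh-agent`
[root@host]# ssh-add ~/.ssh/id_rsa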


Back Up with tar over ssh & rsync over ssh


1)
Shuffling files between servers is simple with scp:

[root@host]# scp some-archive.tgz server:/

Or even copying many files at once:

[root@host]# scp server:/usr/local/etc/* .

2)
But scp isn't designed to traverse subdirectories and preserve ownership and permissions. Fortunately, one of the very early (and IMHO, most brilliant) design decisions in ssh was to make it behave exactly like any other standard Unix command. When it is used to execute commands without an interactive login session, ssh simply accepts data on STDIN and prints the results to STDOUT. Think of any pipeline involving ssh as an easy portal to the machine you're connecting to. For example, suppose you want to back up all of the home directories on one server to an archive on another:

[root@host]# tar zcvf - /home | ssh server "cat > kk-homes.tgz"

Or even write a compressed archive directly to a tape drive on the remote machine:

[root@host]# tar zcvf - /var/named/data | ssh host "cat > /dev/tape"

Suppose you wanted to just make a copy of a directory structure from one machine directly into the filesystem of another. In this example, we have a working Apache on the local machine but a broken copy on the remote side. Let's get the two in sync:

[root@host]# cd /usr/local
[root@host:/usr/local]# tar zcf - apache/ | ssh pacman "cd /usr/local; mv apache apache.bak; tar zpxvf -"

This moves /usr/local/apache/ on pacman to /usr/local/apache.bak/, then creates an exact copy of /usr/local/apache/ from host, preserving permissions and the entire directory structure. You can experiment with using compression on both ends or not (with the z flag to tar), as performance will depend on the processing speed of both machines, the speed (and utilization) of the network, and whether you're already using compression in ssh.
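
Incidentally, tar's -C flag can stand in for the cd on either end; here's a sketch of the same sync without compression, using the hostnames above:

[root@host]# tar cf - -C /usr/local apache | ssh pacman "mv /usr/local/apache /usr/local/apache.bak; tar xpf - -C /usr/local"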

Finally, let's assume that you have a large archive on the local machine and want to restore it to the remote side without having to copy it there first (suppose it's really huge, and you have enough space for the extracted copy, but not enough for a copy of the archive as well):

[root@host]# ssh kkb "cd /usr/local/pacland; tar zpvxf -" < really-big-archive.tgz

Or alternatively, from the other direction (run on kkb, pulling the archive from host):

[root@kkb:/usr/local/pacland]# ssh host "cat really-big-archive.tgz" | tar zpvxf -

If you encounter problems with archives created or extracted on the remote end, check to make sure that nothing is written to the terminal in your ~/.bashrc on the remote machine. If you like to run /usr/games/fortune or some other program that writes to your terminal, it's a better idea to keep it in ~/.bash_profile or ~/.bash_login than in ~/.bashrc, because you're only interested in seeing what fortune has to say when there is an actual human being logging in and definitely not when remote commands are executed as part of a pipeline. You can still set environment variables or run any other command you like in ~/.bashrc, as long as those commands are guaranteed never to print anything to STDOUT or STDERR.
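
One simple approach is to test whether the shell is interactive at the top of ~/.bashrc (a minimal sketch):

# in ~/.bashrc
case $- in
    *i*) /usr/games/fortune ;;   # interactive shell: safe to print
    *)   ;;                      # non-interactive (scp, tar-over-ssh pipelines): stay quiet
esac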

Using ssh keys to eliminate the need for passwords makes slinging around arbitrary chunks of the filesystem even easier (and easily scriptable in cron, if you're so inclined).

3)
While tar over ssh is ideal for making remote copies of parts of a filesystem, rsync is even better suited for keeping the filesystem in sync between two machines. Typically, tar is used for the initial copy, and rsync is used to pick up whatever has changed since the last copy. This is because tar tends to be faster than rsync when none of the destination files exist, but rsync is much faster than tar when there are only a few differences between the two filesystems.
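
In practice the two-step dance looks something like this (hostnames and paths are illustrative):

[root@host]# tar cf - -C /home ftp | ssh server "tar xpf - -C /home"   # initial copy
[root@host]# rsync -ae ssh /home/ftp/ server:/home/ftp/               # subsequent syncs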

To run an rsync over ssh, pass it the -e switch, like this:

[root@host]# rsync -ave ssh kkraj:/home/ftp/pub/ /home/ftp/pub/

Notice the trailing / on the file spec from the source side (on kkraj). On the source specification, a trailing / tells rsync to copy the contents of the directory, but not the directory itself. To include the directory as the top level of whatever is being copied, leave off the /:

[root@host]# rsync -ave ssh bkn:/home/six .

This will keep a local copy of the six/ directory (in root's current directory) in sync with whatever is present on bkn:/home/six/.

By default, rsync will only copy files and directories, but not remove them from the destination copy when they are removed from the source. To keep the copies exact, include the --delete flag:

[root@host]# rsync -ave ssh --delete kkraj:~one/reports .

Now when old reports are removed from ~one/reports/ on kkraj, they're also removed from the local ./reports/ copy, every time this command is run. If you run a command like this in cron, leave off the -v switch. This will keep the output quiet (unless rsync has a problem running, in which case you'll receive an email with the error output).
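
A nightly crontab entry for the same sync might look like this (paths illustrative; note the missing -v):

0 3 * * * rsync -ae ssh --delete kkraj:~one/reports $HOME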

Using ssh as your transport for rsync traffic has the advantage of encrypting the data over the network and also takes advantage of any trust relationships you already have established using ssh client keys. For keeping large, complex directory structures in sync between two machines (especially when there are only a few differences between them), rsync is a very handy (and fast) tool to have at your disposal.


Installing OpenStack Folsom on Ubuntu

I was finally able to successfully deploy and run OpenStack Folsom on a single physical server for testing. It was a somewhat painful process, as so many things can go wrong - there are just so many moving parts. I was lucky enough to get help on OpenStack from my best buddy Prashant, which really helped.

This post will demonstrate how to go about installing and configuring OpenStack on a single node. At the end you'll be able to set up networking and block storage and create VMs.

Just as a brief overview of OpenStack here are all the parts that I've used: 

Object Store (codenamed "Swift") provides object storage. It allows you to store or retrieve files (but not mount directories like a fileserver). I won't be using it in this tutorial, but I'll write a separate post that deals only with Swift, as it's a beast on its own.

Image (codenamed "Glance") provides a catalog and repository for virtual disk images. These disk images are most commonly used in OpenStack Compute. 

Compute (codenamed "Nova") provides virtual servers on demand - KVM, Xen, LXC, etc. 

Identity (codenamed "Keystone") provides authentication and authorization for all the OpenStack services. It also provides a catalog of the services within a particular OpenStack cloud.

Network (codenamed "Quantum") provides "network connectivity as a service" between interface devices managed by other OpenStack services (most likely Nova). The service works by allowing users to create their own networks and then attach interfaces to them. Quantum has a pluggable architecture to support many popular networking vendors and technologies. One example is Open vSwitch, which I'll use in this setup. 

Block Storage (codenamed "Cinder") provides persistent block storage to guest VMs. This project was born from code originally in Nova (the nova-volume service). In the Folsom release, both the nova-volume service and the separate Cinder volume service are available. I'll use iSCSI over LVM to export a block device. 

Dashboard (codenamed "Horizon") provides a modular web-based user interface for all the OpenStack services written in Django. With this web GUI, you can perform some operations on your cloud like launching an instance, assigning IP addresses and setting access controls.

Here's a conceptual diagram for OpenStack Folsom showing how all the pieces fit together, along with the logical architecture (diagrams omitted).

For this example deployment I'll be using a single physical Ubuntu 12.04 server with hardware virtualization (HVM) support enabled in the BIOS.

1. Prerequisites

Make sure you have the correct repository from which to download all OpenStack components:

As root run:
[root@Ubuntu:~]# apt-get install ubuntu-cloud-keyring
[root@Ubuntu:~]# echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/Ubuntu main" >> /etc/apt/sources.list
[root@Ubuntu:~]# apt-get update && apt-get upgrade
[root@Ubuntu:~]# reboot

When the server comes back online, execute the following (replace MY_IP with your IP address): 
[root@Ubuntu:~]# useradd -s /bin/bash -m openstack
[root@Ubuntu:~]# echo "%openstack    ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
[root@Ubuntu:~]# su - openstack
[openstack@Ubuntu:~]$ export MY_IP=10.177.129.121

Preseed the MySQL install 
[openstack@Ubuntu:~]$ cat <<EOF | sudo debconf-set-selections
mysql-server-5.1 mysql-server/root_password password notmysql
mysql-server-5.1 mysql-server/root_password_again password notmysql
mysql-server-5.1 mysql-server/start_on_boot boolean true
EOF

Install packages and dependencies 
[openstack@Ubuntu:~]$ sudo apt-get install -y rabbitmq-server mysql-server python-mysqldb

Configure MySQL to listen on all interfaces 
[openstack@Ubuntu:~]$ sudo sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf
[openstack@Ubuntu:~]$ sudo service mysql restart
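
To confirm MySQL is now listening on all interfaces, a quick optional check:

[openstack@Ubuntu:~]$ sudo netstat -lntp | grep 3306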

Synchronize date
[openstack@Ubuntu:~]$ sudo ntpdate -u ntp.ubuntu.com

2. Install the identity service - Keystone 
[openstack@Ubuntu:~]$ sudo apt-get install -y keystone

Create a database for keystone 
[openstack@Ubuntu:~]$ mysql -u root -pnotmysql -e "CREATE DATABASE keystone;"
[openstack@Ubuntu:~]$ mysql -u root -pnotmysql -e "GRANT ALL ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'notkeystone';"
[openstack@Ubuntu:~]$ mysql -u root -pnotmysql -e "GRANT ALL ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'notkeystone';"

Configure keystone to use MySQL 
[openstack@Ubuntu:~]$ sudo sed -i "s|connection = sqlite:////var/lib/keystone/keystone.db|connection = mysql://keystone:notkeystone@$MY_IP/keystone|g" /etc/keystone/keystone.conf

Restart keystone service 
[openstack@Ubuntu:~]$ sudo service keystone restart

Verify keystone service successfully restarted 
[openstack@Ubuntu:~]$ pgrep -l keystone

Initialize the database schema 
[openstack@Ubuntu:~]$ sudo -u keystone keystone-manage db_sync

Add the 'keystone admin' credentials to .bashrc 
[openstack@Ubuntu:~]$ cat >> ~/.bashrc <<EOF
export SERVICE_TOKEN=ADMIN
export SERVICE_ENDPOINT=http://$MY_IP:35357/v2.0
EOF

Use the 'keystone admin' credentials 
[openstack@Ubuntu:~]$ source ~/.bashrc

Create new tenants (the Services tenant will be used later when configuring services to use keystone) 
[openstack@Ubuntu:~]$ TENANT_ID=`keystone tenant-create --name MyProject | awk '/ id / { print $4 }'`
[openstack@Ubuntu:~]$ SERVICE_TENANT_ID=`keystone tenant-create --name Services | awk '/ id / { print $4 }'`
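
Every later step builds on IDs captured this way - backticks around the command, with awk scraping the id row out of the table output - so it's worth confirming the variables aren't empty before moving on:

[openstack@Ubuntu:~]$ echo "TENANT_ID=$TENANT_ID SERVICE_TENANT_ID=$SERVICE_TENANT_ID"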

Create new roles 
[openstack@Ubuntu:~]$ MEMBER_ROLE_ID=`keystone role-create --name member | awk '/ id / { print $4 }'`
[openstack@Ubuntu:~]$ ADMIN_ROLE_ID=`keystone role-create --name admin | awk '/ id / { print $4 }'`

Create new users 
[openstack@Ubuntu:~]$ MEMBER_USER_ID=`keystone user-create --tenant-id $TENANT_ID --name myuser --pass mypassword | awk '/ id / { print $4 }'`
[openstack@Ubuntu:~]$ ADMIN_USER_ID=`keystone user-create --tenant-id $TENANT_ID --name myadmin --pass mypassword | awk '/ id / { print $4 }'`

Grant roles to users 
[openstack@Ubuntu:~]$ keystone user-role-add --user-id $MEMBER_USER_ID --tenant-id $TENANT_ID --role-id $MEMBER_ROLE_ID
[openstack@Ubuntu:~]$ keystone user-role-add --user-id $ADMIN_USER_ID --tenant-id $TENANT_ID --role-id $ADMIN_ROLE_ID

List the new tenant, users, roles, and role assignments 
[openstack@Ubuntu:~]$ keystone tenant-list
[openstack@Ubuntu:~]$ keystone role-list
[openstack@Ubuntu:~]$ keystone user-list --tenant-id $TENANT_ID
[openstack@Ubuntu:~]$ keystone user-role-list --tenant-id $TENANT_ID --user-id $MEMBER_USER_ID
[openstack@Ubuntu:~]$ keystone user-role-list --tenant-id $TENANT_ID --user-id $ADMIN_USER_ID

Populate the services in the service catalog 
[openstack@Ubuntu:~]$ KEYSTONE_SVC_ID=`keystone service-create --name=keystone --type=identity --description="Keystone Identity Service" | awk '/ id / { print $4 }'`
[openstack@Ubuntu:~]$ GLANCE_SVC_ID=`keystone service-create --name=glance --type=image --description="Glance Image Service" | awk '/ id / { print $4 }'`
[openstack@Ubuntu:~]$ QUANTUM_SVC_ID=`keystone service-create --name=quantum --type=network --description="Quantum Network Service" | awk '/ id / { print $4 }'`
[openstack@Ubuntu:~]$ NOVA_SVC_ID=`keystone service-create --name=nova --type=compute --description="Nova Compute Service" | awk '/ id / { print $4 }'`
[openstack@Ubuntu:~]$ CINDER_SVC_ID=`keystone service-create --name=cinder --type=volume --description="Cinder Volume Service" | awk '/ id / { print $4 }'`

List the new services 
[openstack@Ubuntu:~]$ keystone service-list

Populate the endpoints in the service catalog 
[openstack@Ubuntu:~]$ keystone endpoint-create --region RegionOne --service-id=$KEYSTONE_SVC_ID --publicurl=http://$MY_IP:5000/v2.0 --internalurl=http://$MY_IP:5000/v2.0 --adminurl=http://$MY_IP:35357/v2.0
[openstack@Ubuntu:~]$ keystone endpoint-create --region RegionOne --service-id=$GLANCE_SVC_ID --publicurl=http://$MY_IP:9292/v1 --internalurl=http://$MY_IP:9292/v1 --adminurl=http://$MY_IP:9292/v1
[openstack@Ubuntu:~]$ keystone endpoint-create --region RegionOne --service-id=$QUANTUM_SVC_ID --publicurl=http://$MY_IP:9696/ --internalurl=http://$MY_IP:9696/ --adminurl=http://$MY_IP:9696/
[openstack@Ubuntu:~]$ keystone endpoint-create --region RegionOne --service-id=$NOVA_SVC_ID --publicurl="http://$MY_IP:8774/v2/%(tenant_id)s" --internalurl="http://$MY_IP:8774/v2/%(tenant_id)s" --adminurl="http://$MY_IP:8774/v2/%(tenant_id)s"
[openstack@Ubuntu:~]$ keystone endpoint-create --region RegionOne --service-id=$CINDER_SVC_ID --publicurl="http://$MY_IP:8776/v1/%(tenant_id)s" --internalurl="http://$MY_IP:8776/v1/%(tenant_id)s" --adminurl="http://$MY_IP:8776/v1/%(tenant_id)s"

List the new endpoints 
[openstack@Ubuntu:~]$ keystone endpoint-list

Verify identity service is functioning 
[openstack@Ubuntu:~]$ curl -d '{"auth": {"tenantName": "MyProject", "passwordCredentials": {"username": "myuser", "password": "mypassword"}}}' -H "Content-type: application/json" http://$MY_IP:5000/v2.0/tokens | python -m json.tool
[openstack@Ubuntu:~]$ curl -d '{"auth": {"tenantName": "MyProject", "passwordCredentials": {"username": "myadmin", "password": "mypassword"}}}' -H "Content-type: application/json" http://$MY_IP:5000/v2.0/tokens | python -m json.tool

Create the 'user' and 'admin' credentials 
[openstack@Ubuntu:~]$ mkdir ~/credentials
[openstack@Ubuntu:~]$ cat >> ~/credentials/user <<EOF
export OS_USERNAME=myuser
export OS_PASSWORD=mypassword
export OS_TENANT_NAME=MyProject
export OS_AUTH_URL=http://$MY_IP:5000/v2.0/
export OS_REGION_NAME=RegionOne
export OS_NO_CACHE=1
EOF

[openstack@Ubuntu:~]$ cat >> ~/credentials/admin <<EOF
export OS_USERNAME=myadmin
export OS_PASSWORD=mypassword
export OS_TENANT_NAME=MyProject
export OS_AUTH_URL=http://$MY_IP:5000/v2.0/
export OS_REGION_NAME=RegionOne
export OS_NO_CACHE=1
EOF

Use the 'user' credentials 
[openstack@Ubuntu:~]$ source ~/credentials/user

3. Install the image service - Glance
[openstack@Ubuntu:~]$ sudo apt-get install -y glance

Create glance service user in the services tenant 
[openstack@Ubuntu:~]$ GLANCE_USER_ID=`keystone user-create --tenant-id $SERVICE_TENANT_ID --name glance --pass notglance | awk '/ id / { print $4 }'`

Grant admin role to glance service user 
[openstack@Ubuntu:~]$ keystone user-role-add --user-id $GLANCE_USER_ID --tenant-id $SERVICE_TENANT_ID --role-id $ADMIN_ROLE_ID

List the new user and role assignment 
[openstack@Ubuntu:~]$ keystone user-list --tenant-id $SERVICE_TENANT_ID
[openstack@Ubuntu:~]$ keystone user-role-list --tenant-id $SERVICE_TENANT_ID --user-id $GLANCE_USER_ID

Create a database for glance 
[openstack@Ubuntu:~]$ mysql -u root -pnotmysql -e "CREATE DATABASE glance;"
[openstack@Ubuntu:~]$ mysql -u root -pnotmysql -e "GRANT ALL ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'notglance';"
[openstack@Ubuntu:~]$ mysql -u root -pnotmysql -e "GRANT ALL ON glance.* TO 'glance'@'%' IDENTIFIED BY 'notglance';"

Configure the glance-api service 
[openstack@Ubuntu:~]$ sudo sed -i "s|sql_connection = sqlite:////var/lib/glance/glance.sqlite|sql_connection = mysql://glance:notglance@$MY_IP/glance|g" /etc/glance/glance-api.conf
[openstack@Ubuntu:~]$ sudo sed -i "s/auth_host = 127.0.0.1/auth_host = $MY_IP/g" /etc/glance/glance-api.conf
[openstack@Ubuntu:~]$ sudo sed -i 's/%SERVICE_TENANT_NAME%/Services/g' /etc/glance/glance-api.conf
[openstack@Ubuntu:~]$ sudo sed -i 's/%SERVICE_USER%/glance/g' /etc/glance/glance-api.conf
[openstack@Ubuntu:~]$ sudo sed -i 's/%SERVICE_PASSWORD%/notglance/g' /etc/glance/glance-api.conf
[openstack@Ubuntu:~]$ sudo sed -i 's/#flavor=/flavor = keystone/g' /etc/glance/glance-api.conf

Configure the glance-registry service 
[openstack@Ubuntu:~]$ sudo sed -i "s|sql_connection = sqlite:////var/lib/glance/glance.sqlite|sql_connection = mysql://glance:notglance@$MY_IP/glance|g" /etc/glance/glance-registry.conf
[openstack@Ubuntu:~]$ sudo sed -i "s/auth_host = 127.0.0.1/auth_host = $MY_IP/g" /etc/glance/glance-registry.conf
[openstack@Ubuntu:~]$ sudo sed -i 's/%SERVICE_TENANT_NAME%/Services/g' /etc/glance/glance-registry.conf
[openstack@Ubuntu:~]$ sudo sed -i 's/%SERVICE_USER%/glance/g' /etc/glance/glance-registry.conf
[openstack@Ubuntu:~]$ sudo sed -i 's/%SERVICE_PASSWORD%/notglance/g' /etc/glance/glance-registry.conf
[openstack@Ubuntu:~]$ sudo sed -i 's/#flavor=/flavor = keystone/g' /etc/glance/glance-registry.conf

Restart glance services 
[openstack@Ubuntu:~]$ sudo service glance-registry restart
[openstack@Ubuntu:~]$ sudo service glance-api restart

Verify glance services successfully restarted 
[openstack@Ubuntu:~]$ pgrep -l glance

Initialize the database schema. Ignore the deprecation warning. 
[openstack@Ubuntu:~]$ sudo -u glance glance-manage db_sync

Download some images 
[openstack@Ubuntu:~]$ mkdir ~/images
[openstack@Ubuntu:~]$ wget https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img -O ~/images/cirros-0.3.0-x86_64-disk.img

Register a qcow2 image 
[openstack@Ubuntu:~]$ IMAGE_ID_1=`glance image-create --name "cirros-qcow2" --disk-format qcow2 --container-format bare --is-public True --file ~/images/cirros-0.3.0-x86_64-disk.img | awk '/ id / { print $4 }'`

Verify the images exist in glance 
[openstack@Ubuntu:~]$ glance image-list

Examine details of images 
[openstack@Ubuntu:~]$ glance image-show $IMAGE_ID_1

4. Install the network service - Quantum

Install dependencies 
[openstack@Ubuntu:~]$ sudo apt-get install -y openvswitch-switch

Install the network service
[openstack@Ubuntu:~]$ sudo apt-get install -y quantum-server quantum-plugin-openvswitch

Install the network service agents 
[openstack@Ubuntu:~]$ sudo apt-get install -y quantum-plugin-openvswitch-agent quantum-dhcp-agent quantum-l3-agent

Create a database for quantum 
[openstack@Ubuntu:~]$ mysql -u root -pnotmysql -e "CREATE DATABASE quantum;"
[openstack@Ubuntu:~]$ mysql -u root -pnotmysql -e "GRANT ALL ON quantum.* TO 'quantum'@'localhost' IDENTIFIED BY 'notquantum';"
[openstack@Ubuntu:~]$ mysql -u root -pnotmysql -e "GRANT ALL ON quantum.* TO 'quantum'@'%' IDENTIFIED BY 'notquantum';"

Configure the quantum OVS plugin 
[openstack@Ubuntu:~]$ sudo sed -i "s|sql_connection = sqlite:////var/lib/quantum/ovs.sqlite|sql_connection = mysql://quantum:notquantum@$MY_IP/quantum|g"  /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
[openstack@Ubuntu:~]$ sudo sed -i 's/# Default: enable_tunneling = False/enable_tunneling = True/g' /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
[openstack@Ubuntu:~]$ sudo sed -i 's/# Example: tenant_network_type = gre/tenant_network_type = gre/g' /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
[openstack@Ubuntu:~]$ sudo sed -i 's/# Example: tunnel_id_ranges = 1:1000/tunnel_id_ranges = 1:1000/g' /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
[openstack@Ubuntu:~]$ sudo sed -i "s/# Default: local_ip = 10.0.0.3/local_ip = 192.168.1.$MY_NODE_ID/g" /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini

Create quantum service user in the services tenant 
[openstack@Ubuntu:~]$ QUANTUM_USER_ID=`keystone user-create --tenant-id $SERVICE_TENANT_ID --name quantum --pass notquantum | awk '/ id / { print $4 }'`

Grant admin role to quantum service user 
[openstack@Ubuntu:~]$ keystone user-role-add --user-id $QUANTUM_USER_ID --tenant-id $SERVICE_TENANT_ID --role-id $ADMIN_ROLE_ID

List the new user and role assignment 
[openstack@Ubuntu:~]$ keystone user-list --tenant-id $SERVICE_TENANT_ID
[openstack@Ubuntu:~]$ keystone user-role-list --tenant-id $SERVICE_TENANT_ID --user-id $QUANTUM_USER_ID

Configure the quantum service to use keystone 
[openstack@Ubuntu:~]$ sudo sed -i "s/auth_host = 127.0.0.1/auth_host = $MY_IP/g" /etc/quantum/api-paste.ini
[openstack@Ubuntu:~]$ sudo sed -i 's/%SERVICE_TENANT_NAME%/Services/g' /etc/quantum/api-paste.ini
[openstack@Ubuntu:~]$ sudo sed -i 's/%SERVICE_USER%/quantum/g' /etc/quantum/api-paste.ini
[openstack@Ubuntu:~]$ sudo sed -i 's/%SERVICE_PASSWORD%/notquantum/g' /etc/quantum/api-paste.ini
[openstack@Ubuntu:~]$ sudo sed -i 's/# auth_strategy = keystone/auth_strategy = keystone/g' /etc/quantum/quantum.conf

Configure the L3 agent to use keystone 
[openstack@Ubuntu:~]$ sudo sed -i "s|auth_url = http://localhost:35357/v2.0|auth_url = http://$MY_IP:35357/v2.0|g" /etc/quantum/l3_agent.ini
[openstack@Ubuntu:~]$ sudo sed -i 's/%SERVICE_TENANT_NAME%/Services/g' /etc/quantum/l3_agent.ini
[openstack@Ubuntu:~]$ sudo sed -i 's/%SERVICE_USER%/quantum/g' /etc/quantum/l3_agent.ini
[openstack@Ubuntu:~]$ sudo sed -i 's/%SERVICE_PASSWORD%/notquantum/g' /etc/quantum/l3_agent.ini

Start Open vSwitch 
[openstack@Ubuntu:~]$ sudo service openvswitch-switch restart

Create the integration and external bridges 
[openstack@Ubuntu:~]$ sudo ovs-vsctl add-br br-int
[openstack@Ubuntu:~]$ sudo ovs-vsctl add-br br-ex

Restart the quantum services 
[openstack@Ubuntu:~]$ sudo service quantum-server restart
[openstack@Ubuntu:~]$ sudo service quantum-plugin-openvswitch-agent restart
[openstack@Ubuntu:~]$ sudo service quantum-dhcp-agent restart
[openstack@Ubuntu:~]$ sudo service quantum-l3-agent restart

Create a network and subnet 
[openstack@Ubuntu:~]$ PRIVATE_NET_ID=`quantum net-create private | awk '/ id / { print $4 }'`
[openstack@Ubuntu:~]$ PRIVATE_SUBNET1_ID=`quantum subnet-create --name private-subnet1 $PRIVATE_NET_ID 10.0.0.0/29 | awk '/ id / { print $4 }'`

List network and subnet 
[openstack@Ubuntu:~]$ quantum net-list
[openstack@Ubuntu:~]$ quantum subnet-list

Examine details of network and subnet 
[openstack@Ubuntu:~]$ quantum net-show $PRIVATE_NET_ID
[openstack@Ubuntu:~]$ quantum subnet-show $PRIVATE_SUBNET1_ID

To add public connectivity to your VMs, perform the following (note that the floating IP steps at the end assume a running instance; instances are booted in the Nova section below): 

Bring up eth1 
[openstack@Ubuntu:~]$ sudo ip link set dev eth1 up
Attach eth1 to br-ex 
[openstack@Ubuntu:~]$ sudo ovs-vsctl add-port br-ex eth1
[openstack@Ubuntu:~]$ sudo ovs-vsctl show

As the admin user, create a provider-owned network and subnet. First set MY_PUBLIC_SUBNET_CIDR to your public CIDR and switch to the 'admin' credentials: 
[openstack@Ubuntu:~]$ source ~/credentials/admin
[openstack@Ubuntu:~]$ export MY_PUBLIC_SUBNET_CIDR=<your public CIDR>
[openstack@Ubuntu:~]$ echo $MY_PUBLIC_SUBNET_CIDR
[openstack@Ubuntu:~]$ PUBLIC_NET_ID=`quantum net-create public --router:external=True | awk '/ id / { print $4 }'`
[openstack@Ubuntu:~]$ PUBLIC_SUBNET_ID=`quantum subnet-create --name public-subnet $PUBLIC_NET_ID $MY_PUBLIC_SUBNET_CIDR -- --enable_dhcp=False | awk '/ id / { print $4 }'`

Switch back to the 'user' credentials 
[openstack@Ubuntu:~]$ source ~/credentials/user
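
The next command assumes a tenant router attached to the private subnet already exists; if you haven't created one, something like this (the router name is illustrative) sets $ROUTER_ID: 
[openstack@Ubuntu:~]$ ROUTER_ID=`quantum router-create MyRouter | awk '/ id / { print $4 }'`
[openstack@Ubuntu:~]$ quantum router-interface-add $ROUTER_ID $PRIVATE_SUBNET1_ID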
Connect the router to the public network 
[openstack@Ubuntu:~]$ quantum router-gateway-set $ROUTER_ID $PUBLIC_NET_ID

Examine details of router 
[openstack@Ubuntu:~]$ quantum router-show $ROUTER_ID

Get the instance ID for MyFirstInstance 
[openstack@Ubuntu:~]$ nova show MyFirstInstance
[openstack@Ubuntu:~]$ INSTANCE_ID=<instance id of your vm>
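
Alternatively, capture the ID the same way the other IDs in this guide are captured (same awk pattern, same assumptions):
[openstack@Ubuntu:~]$ INSTANCE_ID=`nova show MyFirstInstance | awk '/ id / { print $4 }'`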

Find the port id for instance 
[openstack@Ubuntu:~]$ INSTANCE_PORT_ID=`quantum port-list -f csv -c id -- --device_id=$INSTANCE_ID | awk 'END{print};{gsub(/[\"\r]/,"")}'`

Create a floating IP and attach it to instance 
[openstack@Ubuntu:~]$ quantum floatingip-create --port_id=$INSTANCE_PORT_ID $PUBLIC_NET_ID

5. Install the compute service - Nova
[openstack@Ubuntu:~]$ sudo apt-get install -y nova-api nova-scheduler nova-compute nova-cert nova-consoleauth genisoimage

Create nova service user in the services tenant 
[openstack@Ubuntu:~]$ NOVA_USER_ID=`keystone user-create --tenant-id $SERVICE_TENANT_ID --name nova --pass notnova | awk '/ id / { print $4 }'`

Grant admin role to nova service user 
[openstack@Ubuntu:~]$ keystone user-role-add --user-id $NOVA_USER_ID --tenant-id $SERVICE_TENANT_ID --role-id $ADMIN_ROLE_ID

List the new user and role assignment 
[openstack@Ubuntu:~]$ keystone user-list --tenant-id $SERVICE_TENANT_ID
[openstack@Ubuntu:~]$ keystone user-role-list --tenant-id $SERVICE_TENANT_ID --user-id $NOVA_USER_ID

Create a database for nova 
[openstack@Ubuntu:~]$ mysql -u root -pnotmysql -e "CREATE DATABASE nova;"
[openstack@Ubuntu:~]$ mysql -u root -pnotmysql -e "GRANT ALL ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'notnova';"
[openstack@Ubuntu:~]$ mysql -u root -pnotmysql -e "GRANT ALL ON nova.* TO 'nova'@'%' IDENTIFIED BY 'notnova';"

Configure nova 
[openstack@Ubuntu:~]$ cat <<EOF | sudo tee -a /etc/nova/nova.conf
network_api_class=nova.network.quantumv2.api.API
quantum_url=http://$MY_IP:9696
quantum_auth_strategy=keystone
quantum_admin_tenant_name=Services
quantum_admin_username=quantum
quantum_admin_password=notquantum
quantum_admin_auth_url=http://$MY_IP:35357/v2.0
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
sql_connection=mysql://nova:notnova@$MY_IP/nova
auth_strategy=keystone
my_ip=$MY_IP
force_config_drive=True
EOF

Disable verbose logging 
[openstack@Ubuntu:~]$ sudo sed -i 's/verbose=True/verbose=False/g' /etc/nova/nova.conf

Configure nova to use keystone 
[openstack@Ubuntu:~]$ sudo sed -i "s/auth_host = 127.0.0.1/auth_host = $MY_IP/g" /etc/nova/api-paste.ini
[openstack@Ubuntu:~]$ sudo sed -i 's/%SERVICE_TENANT_NAME%/Services/g' /etc/nova/api-paste.ini
[openstack@Ubuntu:~]$ sudo sed -i 's/%SERVICE_USER%/nova/g' /etc/nova/api-paste.ini
[openstack@Ubuntu:~]$ sudo sed -i 's/%SERVICE_PASSWORD%/notnova/g' /etc/nova/api-paste.ini

Initialize the nova database 
[openstack@Ubuntu:~]$ sudo -u nova nova-manage db sync

Restart nova services 
[openstack@Ubuntu:~]$ sudo service nova-api restart
[openstack@Ubuntu:~]$ sudo service nova-scheduler restart
[openstack@Ubuntu:~]$ sudo service nova-compute restart
[openstack@Ubuntu:~]$ sudo service nova-cert restart
[openstack@Ubuntu:~]$ sudo service nova-consoleauth restart

Verify nova services successfully restarted 
[openstack@Ubuntu:~]$ pgrep -l nova

Verify nova services are functioning 
[openstack@Ubuntu:~]$ sudo nova-manage service list

List images 
[openstack@Ubuntu:~]$ nova image-list

List flavors 
[openstack@Ubuntu:~]$ nova flavor-list

Boot an instance using flavor and image names (if names are unique) 
[openstack@Ubuntu:~]$ nova boot --image cirros-qcow2 --flavor m1.tiny MyFirstInstance

Boot an instance using flavor and image IDs 
[openstack@Ubuntu:~]$ nova boot --image $IMAGE_ID_1 --flavor 1 MySecondInstance

List instances, notice status of instance 
[openstack@Ubuntu:~]$ nova list

Show details of instance 
[openstack@Ubuntu:~]$ nova show MyFirstInstance

View console log of instance 
[openstack@Ubuntu:~]$ nova console-log MyFirstInstance

Get the network namespace (e.g., qdhcp-5ab46e23-118a-4cad-9ca8-51d56a5b6b8c) 
[openstack@Ubuntu:~]$ sudo ip netns
[openstack@Ubuntu:~]$ NETNS_ID=qdhcp-$PRIVATE_NET_ID

Ping first instance after status is active 
[openstack@Ubuntu:~]$ sudo ip netns exec $NETNS_ID ping -c 3 10.0.0.3

Log into first instance 
[openstack@Ubuntu:~]$ sudo ip netns exec $NETNS_ID ssh cirros@10.0.0.3

If you get a 'REMOTE HOST IDENTIFICATION HAS CHANGED' warning from previous command 
[openstack@Ubuntu:~]$ sudo ip netns exec $NETNS_ID ssh-keygen -f "/root/.ssh/known_hosts" -R 10.0.0.3

Ping second instance from first instance 
[cirros@MyFirstInstance:~]$ ping -c 3 10.0.0.4

Log into second instance from first instance 
[cirros@MyFirstInstance:~]$ ssh cirros@10.0.0.4

Log out of second instance 
[cirros@MySecondInstance:~]$ exit

Log out of first instance 
[cirros@MyFirstInstance:~]$ exit

Use virsh to talk directly to libvirt 
[openstack@Ubuntu:~]$ sudo virsh list --all

Delete instances 
[openstack@Ubuntu:~]$ nova delete MyFirstInstance
[openstack@Ubuntu:~]$ nova delete MySecondInstance

List instances, notice status of instance 
[openstack@Ubuntu:~]$ nova list

To start an LXC container, do the following (note that sudo doesn't apply to a shell redirect, so use tee to append as root): 
[openstack@Ubuntu:~]$ sudo apt-get install nova-compute-lxc lxctl
[openstack@Ubuntu:~]$ echo "compute_driver=libvirt.LibvirtDriver" | sudo tee -a /etc/nova/nova.conf
[openstack@Ubuntu:~]$ echo "libvirt_type=lxc" | sudo tee -a /etc/nova/nova.conf
[openstack@Ubuntu:~]$ sudo cat /etc/nova/nova-compute.conf
[DEFAULT]
libvirt_type=lxc
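
After appending these options, restart the compute service so they take effect: 
[openstack@Ubuntu:~]$ sudo service nova-compute restart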

You need to use a raw image: 
[openstack@Ubuntu:~]$ wget http://uec-images.ubuntu.com/releases/precise/release/ubuntu-12.04-server-cloudimg-amd64.tar.gz -O images/ubuntu-12.04-server-cloudimg-amd64.tar.gz
[openstack@Ubuntu:~]$ cd images; tar zxfv ubuntu-12.04-server-cloudimg-amd64.tar.gz; cd ..
[openstack@Ubuntu:~]$ glance image-create --name "UbuntuLXC" --disk-format raw --container-format bare --is-public True --file images/precise-server-cloudimg-amd64.img
[openstack@Ubuntu:~]$ glance image-update UbuntuLXC --property hypervisor_type=lxc
Now you can start the LXC container with nova: 
[openstack@Ubuntu:~]$ nova boot --image UbuntuLXC --flavor m1.tiny LXC

The instance files and rootfs will be located in /var/lib/nova/instances.
Logs go to /var/log/nova/nova-compute.log.
VNC does not work with LXC, but the console and ssh do. 

6. Install the dashboard - Horizon
[openstack@Ubuntu:~]$ sudo apt-get install -y memcached novnc
[openstack@Ubuntu:~]$ sudo apt-get install -y --no-install-recommends openstack-dashboard nova-novncproxy

Configure nova for VNC 
[openstack@Ubuntu:~]$ ( cat | sudo tee -a /etc/nova/nova.conf ) <<EOF
novncproxy_base_url=http://$MY_IP:6080/vnc_auto.html
vncserver_proxyclient_address=$MY_IP
vncserver_listen=0.0.0.0
EOF

Set default role 
[openstack@Ubuntu:~]$ sudo sed -i 's/OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member"/OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"/g' /etc/openstack-dashboard/local_settings.py

Restart the nova services 
[openstack@Ubuntu:~]$ sudo service nova-api restart
[openstack@Ubuntu:~]$ sudo service nova-scheduler restart
[openstack@Ubuntu:~]$ sudo service nova-compute restart
[openstack@Ubuntu:~]$ sudo service nova-cert restart
[openstack@Ubuntu:~]$ sudo service nova-consoleauth restart
[openstack@Ubuntu:~]$ sudo service nova-novncproxy restart
[openstack@Ubuntu:~]$ sudo service apache2 restart

Point your browser to http://$MY_IP/horizon.
The credentials we created earlier are myadmin / mypassword.

7. Install the volume service - Cinder
[openstack@Ubuntu:~]$ sudo apt-get install -y cinder-api cinder-scheduler cinder-volume

Create cinder service user in the services tenant 
[openstack@Ubuntu:~]$ CINDER_USER_ID=`keystone user-create --tenant-id $SERVICE_TENANT_ID --name cinder --pass notcinder | awk '/ id / { print $4 }'`

Grant admin role to cinder service user 
[openstack@Ubuntu:~]$ keystone user-role-add --user-id $CINDER_USER_ID --tenant-id $SERVICE_TENANT_ID --role-id $ADMIN_ROLE_ID

List the new user and role assignment 
[openstack@Ubuntu:~]$ keystone user-list --tenant-id $SERVICE_TENANT_ID
[openstack@Ubuntu:~]$ keystone user-role-list --tenant-id $SERVICE_TENANT_ID --user-id $CINDER_USER_ID

Create a database for cinder 
[openstack@Ubuntu:~]$ mysql -u root -pnotmysql -e "CREATE DATABASE cinder;"
[openstack@Ubuntu:~]$ mysql -u root -pnotmysql -e "GRANT ALL ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'notcinder';"
[openstack@Ubuntu:~]$ mysql -u root -pnotmysql -e "GRANT ALL ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'notcinder';"

Configure cinder 
[openstack@Ubuntu:~]$ ( cat | sudo tee -a /etc/cinder/cinder.conf ) <<EOF
sql_connection = mysql://cinder:notcinder@$MY_IP/cinder
my_ip = $MY_IP
EOF

Configure cinder-api to use keystone 
[openstack@Ubuntu:~]$ sudo sed -i "s/service_host = 127.0.0.1/service_host = $MY_IP/g" /etc/cinder/api-paste.ini
[openstack@Ubuntu:~]$ sudo sed -i "s/auth_host = 127.0.0.1/auth_host = $MY_IP/g" /etc/cinder/api-paste.ini
[openstack@Ubuntu:~]$ sudo sed -i 's/admin_tenant_name = %SERVICE_TENANT_NAME%/admin_tenant_name = Services/g' /etc/cinder/api-paste.ini
[openstack@Ubuntu:~]$ sudo sed -i 's/admin_user = %SERVICE_USER%/admin_user = cinder/g' /etc/cinder/api-paste.ini
[openstack@Ubuntu:~]$ sudo sed -i 's/admin_password = %SERVICE_PASSWORD%/admin_password = notcinder/g' /etc/cinder/api-paste.ini

Initialize the database schema 
[openstack@Ubuntu:~]$ sudo -u cinder cinder-manage db sync

Configure nova to use cinder 
[openstack@Ubuntu:~]$ ( cat | sudo tee -a /etc/nova/nova.conf ) <<EOF
volume_manager=cinder.volume.manager.VolumeManager
volume_api_class=nova.volume.cinder.API
enabled_apis=osapi_compute,metadata
EOF

Restart nova-api to disable the nova-volume api (osapi_volume) 
[openstack@Ubuntu:~]$ sudo service nova-api restart
[openstack@Ubuntu:~]$ sudo service nova-scheduler restart
[openstack@Ubuntu:~]$ sudo service nova-compute restart
[openstack@Ubuntu:~]$ sudo service nova-cert restart
[openstack@Ubuntu:~]$ sudo service nova-consoleauth restart
[openstack@Ubuntu:~]$ sudo service nova-novncproxy restart

Configure tgt 
[openstack@Ubuntu:~]$ ( cat | sudo tee -a /etc/tgt/targets.conf ) <<EOF
default-driver iscsi
EOF

Restart tgt and open-iscsi 
[openstack@Ubuntu:~]$ sudo service tgt restart
[openstack@Ubuntu:~]$ sudo service open-iscsi restart

Create the volume group (this assumes /dev/sda4 is an unused partition; substitute a spare block device on your system) 
[openstack@Ubuntu:~]$ sudo pvcreate /dev/sda4
[openstack@Ubuntu:~]$ sudo vgcreate cinder-volumes /dev/sda4
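
If you don't have a spare partition, a loopback-backed volume group is fine for testing (the file path and loop device here are illustrative):
[openstack@Ubuntu:~]$ sudo dd if=/dev/zero of=/var/lib/cinder-volumes.img bs=1M count=2048
[openstack@Ubuntu:~]$ sudo losetup /dev/loop0 /var/lib/cinder-volumes.img
[openstack@Ubuntu:~]$ sudo pvcreate /dev/loop0
[openstack@Ubuntu:~]$ sudo vgcreate cinder-volumes /dev/loop0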

Verify the volume group 
[openstack@Ubuntu:~]$ sudo vgdisplay

Restart the volume services 
[openstack@Ubuntu:~]$ sudo service cinder-volume restart
[openstack@Ubuntu:~]$ sudo service cinder-scheduler restart
[openstack@Ubuntu:~]$ sudo service cinder-api restart

Create a new volume 

[openstack@Ubuntu:~]$ cinder create 1 --display-name MyFirstVolume

Boot an instance to attach volume to 
[openstack@Ubuntu:~]$ nova boot --image cirros-qcow2 --flavor m1.tiny MyVolumeInstance

List instances, notice status of instance 
[openstack@Ubuntu:~]$ nova list

List volumes, notice status of volume 
[openstack@Ubuntu:~]$ cinder list

Attach volume to instance after instance is active, and volume is available 
[openstack@Ubuntu:~]$ nova volume-attach <instance-id> <volume-id> /dev/vdc

Log into the instance 
[openstack@Ubuntu:~]$ sudo ip netns exec $NETNS_ID ssh cirros@10.0.0.3

If you get a 'REMOTE HOST IDENTIFICATION HAS CHANGED' warning from previous command 
[openstack@Ubuntu:~]$ sudo ip netns exec $NETNS_ID ssh-keygen -f "/root/.ssh/known_hosts" -R 10.0.0.3

Make a filesystem on the volume (this and the following commands run inside the instance) 

[cirros@MyVolumeInstance:~]$ sudo mkfs.ext3 /dev/vdc

Create a mountpoint 
[cirros@MyVolumeInstance:~]$ sudo mkdir /extraspace

Mount the volume at the mountpoint 
[cirros@MyVolumeInstance:~]$ sudo mount /dev/vdc /extraspace

Create a file on the volume 
[cirros@MyVolumeInstance:~]$ sudo touch /extraspace/helloworld.txt
[cirros@MyVolumeInstance:~]$ sudo ls /extraspace

Unmount the volume 
[cirros@MyVolumeInstance:~]$ sudo umount /extraspace

Log out of instance 
[cirros@MyVolumeInstance:~]$ exit

Detach volume from instance 
[openstack@Ubuntu:~]$ nova volume-detach <instance-id> <volume-id>

List volumes, notice status of volume 
[openstack@Ubuntu:~]$ cinder list

Delete instance 
[openstack@Ubuntu:~]$ nova delete MyVolumeInstance