Service Orchestration in Cisco devices using OpenStack
Internship Report by Sriram Ganesan

Table of Contents

Service Orchestration in Cisco devices using OpenStack
INTRODUCTION
1.0 Setting up base OpenStack Environment
    1.1 Setting up OpenStack Server Node
        1.1.1 Installing the Linux OS
        1.1.2 Configuring the Network
        1.1.3 Setting up RabbitMQ
        1.1.4 MySQL
        1.1.5 Keystone
        1.1.6 Glance
        1.1.7 Nova
    1.2 Setting up the Nova-Compute node
        1.2.1 Configuring the Network
        1.2.2 Installing the Nova Components
    1.3 Booting VM using OpenStack
    1.4 Booting LXC using OpenStack
2.0 Troubleshooting OpenStack
3.0 Linux Container orchestration in SPAG router
    3.1 Router as “Nova” resource: Default options
        3.1.1 Nova Libvirt “default LXC” layer in Router
        3.1.2 Nova Libvirt “custom (VMAN)” LXC layer in Router
        3.1.3 Router as “nova” resource: Thin client option
        3.1.4 “Nova compute proxy” for Router
4.0 References and Acknowledgements
    4.1 Acknowledgement
    4.2 References

Service Orchestration in Cisco devices using OpenStack

INTRODUCTION

This technical document explains the steps to be followed for:

1. Setting up a private cloud using OpenStack top-of-trunk sources to orchestrate VMs and Linux containers (extensible to Docker)
2. An approach to debugging OpenStack end to end for troubleshooting
3. Approaches to customizing OpenStack to manage Linux containers on thin compute nodes (Cisco devices) with minimal resource requirements
4. Details on the most useful reference material available on the Web for customizing an OpenStack environment

The summary of this project can be seen in the embedded slides: “Service Orchestration using OpenStack LXC - Internship Report Sriram Ganesan v1.pptx”.

The following sections explain the steps to get the OpenStack deployment functional end to end. The theoretical details of OpenStack and related components are out of scope for this document; there is ample reference material available on the Internet (enumerated in the last section) that explains the technology in detail. For the purposes of this document, we will use the following topology to demonstrate the end-to-end functionality of OpenStack. This setup will then be customized to provide OpenStack orchestration for Cisco devices. For the basic setup, we use three (virtual or physical) machines to create a minimal OpenStack cloud environment.

In this project, we focus mainly on the Nova and Glance layers of OpenStack to manage compute resources in the nodes. The node “server1” will be configured as the OpenStack server node, “server2” as the compute node, and the “Horizon” OpenStack client will be installed in a separate node to provide a GUI front end for managing the OpenStack cloud. Please note that we use two separate networks (eth0 and eth1) in our environment: the “eth0” network for administration purposes (accessible from the external network) and “eth1” for the internal (private) OpenStack network.

1.0 Setting up base OpenStack Environment

This section describes the installations and configuration-file customizations needed to set up an OpenStack server and one or more compute nodes. The setup can be used to orchestrate VMs or Linux containers (LXC) using OpenStack shell commands or the “Horizon” dashboard. Setting up the OpenStack environment on a pristine Linux OS (Ubuntu 14.04 is taken as the reference) with top-of-trunk production OpenStack sources is preferred over the prepackaged “Devstack”. This enables us to make changes to the OpenStack sources and customize them to orchestrate Linux containers (and Docker as a follow-up) on Cisco devices (taking the SPAG router as an example) using thin client interfaces that minimize memory and file-system requirements. At present, OpenStack top-of-trunk sources are not officially qualified for LXC, but they were found to be stable enough in testing. This section details the steps to follow to get OpenStack functional end to end, with the associated configuration settings.

1.1 Setting up OpenStack Server Node

This section explains the server-side installation and configuration needed to make the OpenStack server functional.

1.1.1 Installing the Linux OS

For this installation we will use the Ubuntu 14.04 operating system (the ISO file is available for free download from the Ubuntu website). If you are on Windows or Mac OS, you can bring it up as a virtual machine (I used VMware Player, which is a free download). The steps to get the base OS functional are simple and well documented on the Ubuntu and many other websites. As part of the ISO installation, you may create an administrative user with a name of your choice (I used “localadmin” in my setup). Once the basic OS installation is complete, update the system with the following commands:

sudo apt-get update
sudo apt-get upgrade

Next we need to install vlan and bridge-utils:

sudo apt-get install vlan bridge-utils

Now edit /etc/sysctl.conf, uncommenting and changing the following lines as needed:

net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
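If you want these kernel settings to take effect immediately, without waiting for the reboot below, they can also be loaded in place:

sudo sysctl -p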

Now reboot the server with the command:

sudo reboot

1.1.2 Configuring the Network

Edit the /etc/network/interfaces file as follows:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

auto eth1
iface eth1 inet dhcp

Now restart the network using the command:

sudo /etc/init.d/networking restart

Note down the eth0 and eth1 IP addresses with “ifconfig eth0” and “ifconfig eth1”. For the purposes of this documentation, we will assume the eth0 IP address is “192.168.242.132” and the eth1 IP address is “192.168.154.128”.

1.1.3 Setting up RabbitMQ

Install the RabbitMQ server, as it will be required for communicating with the nova-compute node (the router):

sudo apt-get install rabbitmq-server
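Before moving on, it is worth confirming the broker is actually running; nova will use RabbitMQ’s default guest account on port 5672 unless configured otherwise:

sudo rabbitmqctl status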

1.1.4 MySQL

1.1.4.1 Setting up MySQL

We need to install the MySQL server and the Python bindings:

sudo apt-get install mysql-server python-mysqldb

During installation, you will be prompted to enter a root password for MySQL. In this guide, we assume the password to be “mygreatsecret”. Now edit the file /etc/mysql/my.cnf, changing bind-address from 127.0.0.1 to 0.0.0.0 so that MySQL listens on all interfaces:

bind-address = 0.0.0.0

Now add/edit the following lines in the same file under the “[mysqld]” header:

collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8

Finally, restart the MySQL server:

sudo restart mysql
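As a quick sanity check that MySQL is now listening on all interfaces (and not just 127.0.0.1), verify that port 3306 is bound to 0.0.0.0:

netstat -ltn | grep 3306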

1.1.4.2 Creating MySQL databases

We need to create databases for the nova, glance and keystone services of OpenStack.

1.1.4.2.1 Nova database

First create the nova database:

mysql -uroot -pmygreatsecret -e 'CREATE DATABASE nova;'

Now create the user “novadbadmin”, who will be the admin of the nova database with the password “novasecret”:

mysql -uroot -pmygreatsecret -e 'CREATE USER novadbadmin;'

Grant all privileges on the “nova” database to “novadbadmin”:

mysql -uroot -pmygreatsecret -e "GRANT ALL PRIVILEGES ON nova.* TO 'novadbadmin'@'%';"

Now set the password:

mysql -uroot -pmygreatsecret -e "SET PASSWORD FOR 'novadbadmin'@'%' = PASSWORD('novasecret');"

1.1.4.2.2 Glance database

Create the “glance” database, then create the user “glancedbadmin” as its admin with the password “glancesecret” and grant this user all privileges on the database:

mysql -uroot -pmygreatsecret -e 'CREATE DATABASE glance;'
mysql -uroot -pmygreatsecret -e 'CREATE USER glancedbadmin;'
mysql -uroot -pmygreatsecret -e "GRANT ALL PRIVILEGES ON glance.* TO 'glancedbadmin'@'%';"
mysql -uroot -pmygreatsecret -e "SET PASSWORD FOR 'glancedbadmin'@'%' = PASSWORD('glancesecret');"

1.1.4.2.3 Keystone database

Create the “keystone” database, then create the user “keystonedbadmin” as its admin with the password “keystonesecret” and grant this user all privileges on the database:

mysql -uroot -pmygreatsecret -e 'CREATE DATABASE keystone;'
mysql -uroot -pmygreatsecret -e 'CREATE USER keystonedbadmin;'
mysql -uroot -pmygreatsecret -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystonedbadmin'@'%';"
mysql -uroot -pmygreatsecret -e "SET PASSWORD FOR 'keystonedbadmin'@'%' = PASSWORD('keystonesecret');"
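At this point you can sanity-check the database setup by listing the databases and confirming that one of the new accounts can log in over the network (connecting to the eth0 address exercises the '%' host grant the same way the compute node will):

mysql -uroot -pmygreatsecret -e 'SHOW DATABASES;'
mysql -h 192.168.242.132 -unovadbadmin -pnovasecret nova -e 'SELECT 1;'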

1.1.5 Keystone

1.1.5.1 Setting up Keystone

First we need to install the Keystone packages; Keystone is the identity service of OpenStack:

sudo apt-get install keystone python-keystone python-keystoneclient

Now we need to make the following changes to /etc/keystone/keystone.conf. Uncomment and change the line:

#admin_token = ADMIN

to:

admin_token = admin

We will be using MySQL for all the OpenStack services, so we need to change the line:

connection = sqlite:////var/lib/keystone/keystone.db

to:

connection = mysql://keystonedbadmin:keystonesecret@192.168.242.132/keystone

Note that you may need to edit the above line based on your keystone database name, keystone user and password, and your eth0 IP address. After making all of the above changes, commit them by executing the commands:

sudo service keystone restart
keystone-manage db_sync
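The keystone CLI commands in the following sections need credentials before any users exist, so the simplest option at this stage is to export the service token and admin endpoint in the shell (the values below match the admin_token and eth0 address used in this guide):

export OS_SERVICE_TOKEN=admin
export OS_SERVICE_ENDPOINT=http://192.168.242.132:35357/v2.0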

1.1.5.2 Creating Keystone tenants, users and roles

We have to create two tenants (admin and service) with the commands below:

keystone tenant-create --name admin
keystone tenant-create --name service

We will create three users: admin, nova and glance. For the purposes of this documentation, each password is set to the same value as the username:

keystone user-create --name admin --pass admin
keystone user-create --name nova --pass nova
keystone user-create --name glance --pass glance

We will create two roles, admin and Member, as shown below:

keystone role-create --name admin
keystone role-create --name Member

1.1.5.3 Assigning users to their roles and tenants

Now we add roles to the users that have been created. A role is assigned to a specific user in a specific tenant; here we add the “admin” role to the user “admin” of the tenant “admin”:

keystone user-role-add --user admin --tenant admin --role admin

We do the same for the users “nova” and “glance”:

keystone user-role-add --user nova --tenant service --role admin
keystone user-role-add --user glance --tenant service --role admin
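You can verify what has been created so far with the keystone list commands, for example:

keystone tenant-list
keystone user-list
keystone user-role-list --user nova --tenant service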

1.1.5.4 Creating the Services and Endpoints

Now we create the various services: nova, glance and keystone.

keystone service-create --name nova --type compute --description 'OpenStack Compute Service'
keystone service-create --name glance --type image --description 'OpenStack Image Service'
keystone service-create --name keystone --type identity --description 'OpenStack Identity Service'

Then we set the endpoints for the nova, glance and keystone services:

keystone endpoint-create --service nova --publicurl http://192.168.242.132:8774/v2/%\(tenant_id\)s --internalurl http://192.168.242.132:8774/v2/%\(tenant_id\)s --adminurl http://192.168.242.132:8774/v2/%\(tenant_id\)s
keystone endpoint-create --service glance --publicurl http://192.168.242.132:9292 --internalurl http://192.168.242.132:9292 --adminurl http://192.168.242.132:9292
keystone endpoint-create --service keystone --publicurl http://192.168.242.132:5000/v2.0 --internalurl http://192.168.242.132:5000/v2.0 --adminurl http://192.168.242.132:5000/v2.0
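From here on, the glance and nova clients authenticate as a regular Keystone user rather than with the service token, so export the admin credentials created earlier (unsetting the service-token variables first, since the client prefers them when both are set):

unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://192.168.242.132:5000/v2.0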

1.1.6 Glance

Glance uses SQLite by default; we will configure it to use MySQL instead. First we install glance:

sudo apt-get install glance

Then we edit both /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf. First change the line containing “sqlite_db” to:

connection = mysql://glancedbadmin:glancesecret@192.168.242.132/glance

Edit the following lines under the “[keystone_authtoken]” section:

admin_tenant_name = service
admin_user = glance
admin_password = glance

Under the section with the heading “[paste_deploy]”, add the following line:

flavor = keystone

Then we sync the database and restart the services:

glance-manage db_sync
sudo restart glance-api
sudo restart glance-registry

To test whether glance works properly, check that the following command succeeds. It downloads an image from the web and uploads it to glance; the second command then lists the registered images:

glance image-create --name Cirros --is-public true --container-format bare --disk-format qcow2 --location https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
glance image-list

1.1.7 Nova

Install the nova packages with the following command:

sudo apt-get install -y nova-api nova-cert nova-conductor nova-consoleauth nova-novncproxy nova-scheduler python-novaclient nova-compute nova-console

1.1.7.1 Editing Nova configuration files

First we need to edit the /etc/nova/nova.conf file as below:

[DEFAULT]
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
iscsi_helper=tgtadm
libvirt_use_virtio_for_bridges=True
connection_type=libvirt
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
verbose=True
libvirt_type=lxc
virt_type=lxc
rpc_backend = nova.rpc.impl_kombu
rabbit_host = 192.168.242.132
my_ip = 192.168.242.132
vncserver_listen = 192.168.242.132
vncserver_proxyclient_address = 192.168.242.132
novncproxy_base_url=http://192.168.242.132:6080/vnc_auto.html
glance_host = 192.168.242.132
auth_strategy=keystone
vif_plugging_is_fatal: false
vif_plugging_timeout: 0

[database]
connection = mysql://novadbadmin:novasecret@192.168.242.132/nova

[keystone_authtoken]
auth_uri = http://192.168.242.132:5000
auth_host = 192.168.242.132
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = nova

[libvirt]
virt_type = lxc

And we edit the file /etc/nova/nova-compute.conf to be as below:

[DEFAULT]
compute_driver=libvirt.LibvirtDriver

[libvirt]
libvirt_type=lxc
virt_type=lxc

We sync the database and restart the nova services with the following commands:

nova-manage db sync
service nova-api restart; service nova-cert restart; service nova-consoleauth restart; service nova-scheduler restart; service nova-conductor restart; service nova-novncproxy restart; service nova-compute restart; service nova-console restart

To test whether nova was set up correctly, run the following command:

sudo nova-manage service list

The following output is expected (or something similar, with smiley faces for all the states):

Binary            Host     Zone      Status   State  Updated_At
nova-consoleauth  server1  internal  enabled  :-)    2015-06-19 08:55:13
nova-conductor    server1  internal  enabled  :-)    2015-06-19 08:55:14
nova-cert         server1  internal  enabled  :-)    2015-06-19 08:55:13
nova-scheduler    server1  internal  enabled  :-)    2015-06-19 08:55:13
nova-compute      server1  nova      enabled  :-)    2015-06-19 08:55:14
nova-console      server1  internal  enabled  :-)    2015-06-19 08:55:14

1.2 Setting up the Nova-Compute node

This section describes the minimal software installation and configuration needed to set up an OpenStack compute node. Once the node is set up, it will peer as a compute resource with the OpenStack server node. With this, we can boot a Linux container (LXC) on this node from the OpenStack server head end, using nova commands.

1.2.1 Configuring the Network

Edit the /etc/network/interfaces file as follows:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

auto eth1
iface eth1 inet dhcp

Now restart the network using the command:

sudo /etc/init.d/networking restart

Note down the eth0 and eth1 IP addresses with “ifconfig eth0” and “ifconfig eth1”. For the purposes of this documentation, we will assume the eth0 IP address is “192.168.242.129” and the eth1 IP address is “192.168.154.129”.

1.2.2 Installing the Nova Components

We will install the nova packages required for the compute node with the following command:

sudo apt-get install -y nova-common python-nova nova-compute

Then we edit the file /etc/nova/nova.conf as follows. Note that my_ip and the vncserver addresses refer to the compute node's own eth0 address (192.168.242.129 in our setup), while the rabbit, glance, database and keystone entries point at the server node:

[DEFAULT]
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
iscsi_helper=tgtadm
libvirt_use_virtio_for_bridges=True
connection_type=libvirt
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
verbose=True
libvirt_type=lxc
virt_type=lxc
rpc_backend=rabbit
rabbit_host = 192.168.242.132
rabbit_port=5672
rabbit_hosts = 192.168.242.132:5672
rabbit_userid=guest
rabbit_password=guest
my_ip = 192.168.242.129
vncserver_listen = 192.168.242.129
vncserver_proxyclient_address = 192.168.242.129
novncproxy_base_url=http://192.168.242.132:6080/vnc_auto.html
glance_host = 192.168.242.132
auth_strategy=keystone
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver = nova.virt.lxc.LXCDriver
vif_plugging_is_fatal: false
vif_plugging_timeout: 0

[database]
connection = mysql://novadbadmin:novasecret@192.168.242.132/nova

[keystone_authtoken]
auth_uri = http://192.168.242.132:5000
auth_host = 192.168.242.132
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = nova

[libvirt]
virt_type = lxc
vif_driver = nova.virt.libvirt.vif.LibvirtGenericVIFDriver

Now restart the nova-compute service on the compute node, then test whether it is working with the following commands:

sudo service nova-compute restart
nova-manage service list

You should expect the following output from the latter command:

Binary            Host     Zone      Status   State  Updated_At
nova-consoleauth  server1  internal  enabled  :-)    2015-06-19 08:55:13
nova-conductor    server1  internal  enabled  :-)    2015-06-19 08:55:14
nova-cert         server1  internal  enabled  :-)    2015-06-19 08:55:13
nova-scheduler    server1  internal  enabled  :-)    2015-06-19 08:55:13
nova-compute      server1  nova      enabled  :-)    2015-06-19 08:55:14
nova-console      server1  internal  enabled  :-)    2015-06-19 08:55:14
nova-compute      server2  nova      enabled  :-)    2015-06-19 10:45:19

1.3 Booting VM using OpenStack

First we create a disk image file with a maximum size and install an OS into it. In this case, create an image called “ubuntu.img” of size 10 GB, onto which we install 64-bit Ubuntu Trusty 14.04.2 LTS. We do this with the commands:

qemu-img create ubuntu.img 10G
wget http://releases.ubuntu.com/14.04/ubuntu-14.04.2-desktop-amd64.iso
qemu-system-x86_64 -boot d -cdrom ubuntu-14.04.2-desktop-amd64.iso -m 512 -hda ubuntu.img

Nova can inject an SSH keypair into instances at boot so that we can SSH into them without a password. Generate a keypair and register it with nova (in this case we call the keypair “mykey”):

ssh-keygen
cd .ssh
nova keypair-add --pub_key id_rsa.pub mykey

The next step is to add the image into Glance before booting it with the nova-compute service. We name the image “ubuntu”:

glance image-create --name ubuntu --is-public True --disk-format raw --container-format bare --progress --file ubuntu.img

Before booting, we need to know the image ID, which you can view by running “glance image-list”, and the host name of the compute node, obtained by running “hostname” on the compute node. For this to work, the host name of the OpenStack server must be different from that of the compute node. Finally, we can boot the instance via nova onto the compute node. Here I assume the glance image ID is “2318bbab-d10f-46bc-b4ca-d4a0fb43f6b4”, the host name of the compute node is “localadmin-VirtualBox”, and “cn-01” is the name of the instance in the nova DB:

nova boot --flavor 2 --image 2318bbab-d10f-46bc-b4ca-d4a0fb43f6b4 cn-01 --key_name mykey --hint virt_type=lxc --availability_zone nova:localadmin-VirtualBox
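“--flavor 2” refers to the m1.small flavor in a default installation; if your deployment’s flavor IDs differ, list them first and pick an appropriate one:

nova flavor-list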

1.4 Booting LXC using OpenStack

As explained in the previous section, first we need to create the keypair. Once that is done, we create a new LXC container named “testing”:

lxc-create -t download -n testing

The download template will show you a list of distributions, versions and architectures to choose from. For this project, we choose “ubuntu” for distribution, “trusty” (14.04 LTS) for version and “i386” for architecture. This downloads the distro into the “testing” container. Then go to the rootfs folder of the container (from the home directory, this is /home/localadmin/.local/share/lxc/testing/rootfs) and tar up the rootfs:

cd /home/localadmin/.local/share/lxc/testing/rootfs
tar cvzf /home/localadmin/testing-container.tar.gz .

Then go back to the home directory, /home/localadmin (or the equivalent, usually /home/USER-NAME), and run the following commands to pack the rootfs into a raw image:

truncate --size 2GB testing.img
sudo losetup -f testing.img
sudo losetup -a
sudo mkfs /dev/loop0    # or whatever /dev/loopX was reported by the previous command
mkdir mnt
sudo mount /dev/loop0 mnt
cd mnt
sudo tar xvzf ../testing-container.tar.gz .
cd ..
sudo umount /dev/loop0
sudo losetup -d /dev/loop0

Now upload the image testing.img to the glance repo:

glance image-create --name testing --is-public True --disk-format raw --container-format bare --progress --file testing.img

As with booting VMs in the previous section, we need to know the glance image ID and the host name of the compute node, obtained with the following commands:

glance image-list
hostname    # on the compute node

Once you have obtained both of the above, you can boot the instance through nova onto the compute node. Here I assume the glance image ID is “2318bbab-d10f-46bc-b4ca-d4a0fb43f6b4” and the host name of the compute node is “localadmin-VirtualBox”:

nova boot --flavor 2 --image 2318bbab-d10f-46bc-b4ca-d4a0fb43f6b4 cn-01 --key_name mykey --hint virt_type=lxc --availability_zone nova:localadmin-VirtualBox

You can use the command “nova list” on the OpenStack server to see whether the instance has booted. To confirm it booted on the intended compute node, run:

nova list --host localadmin-VirtualBox

To verify on the compute node itself, run:

virsh -c lxc:/// list

You should see an LXC instance running on the compute node.
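Assuming the image contains a user that accepts the injected key (key injection depends on the guest image; “ubuntu” is the usual account in Ubuntu cloud images), you can pick up the instance IP from “nova list” and log in with the private key generated earlier:

nova list
ssh -i ~/.ssh/id_rsa ubuntu@<instance-ip>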

2.0 Troubleshooting OpenStack

To troubleshoot OpenStack, you may add the line “debug=True” to the .conf files discussed above. This enables debug log messages in the log files on the OpenStack server or compute node, depending on where you turned it on. To view them, go to the /var/log/upstart or /var/log/nova folders. Running “ls -lart” shows which files have been updated most recently, which helps when you want the error messages produced by a specific step (e.g., adding an image into Glance). Usually the most interesting file in each of these directories is “nova-compute.log”. The following commands are helpful for troubleshooting and for making sure the services work:

nova-manage service list    # lists the currently running nova services
glance image-list           # lists the images currently added to the Glance DB
nova list                   # lists instances currently booted through Nova and their status
nova image-list             # lists the images currently added to the Glance DB, as seen by nova
keystone tenant-list        # lists the tenants in Keystone
keystone user-list          # lists the Keystone users
keystone role-list          # lists the roles available in Keystone
keystone service-list       # lists the services provisioned in Keystone
keystone endpoint-list      # lists the endpoints registered in Keystone; most helpful for Keystone problems
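When chasing a specific failure, it is usually faster to filter the most recent log for errors and tracebacks than to read it end to end, for example:

cd /var/log/nova
ls -lart
grep -iE 'error|trace' nova-compute.log | tail -20
tail -f nova-compute.log    # watch live while re-running the failing step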

3.0 Linux Container orchestration in SPAG router

The goal here is to use OpenStack to boot LXCs onto Cisco SPAG routers. These containers would in turn host specific services that become part of the data or control plane of the router, virtually. There are four possible approaches, most of which involve changing the OpenStack code in some manner, as explained below. The first approach is to use end-to-end OpenStack as it exists today to communicate between the OpenStack server and the SPAG router (compute node). The second approach is to build a plugin under libvirt in the compute node for the VMAN API, similar to the existing KVM or QEMU support. The third approach is to add a thin client to the OpenStack server, similar to the existing VMware API, XenAPI, etc., that talks to the router using our custom VMAN API. The final approach, which we are implementing, is to have a proxy nova-compute node that uses the VMAN API to talk to the router and boot LXCs on it.

3.1 Router as “Nova” resource: Default options

3.1.1 Nova Libvirt “default LXC” layer in Router

This is the first and simplest approach. It involves only two components: the OpenStack server and a complete nova agent in the SPAG router. In this case, the SPAG router acts as the nova-compute node. For the router to act as a nova-compute node, a whole list of RPMs must be installed, as in the document below.

List of RPMs to support Nova-compute in SPAG router.docx

Once the RPMs are all integrated into the router image, the nova-agent service should come up and be functional. The above list is representative and may need some tweaking, as the RPMs were installed using manual dependency resolution (rpm2cpio and cpio to extract and put all RPMs into a combined tar file). The ideal way would be to enable RPM support in the image and do automatic installation using “yum” commands. One advantage of this approach is that it requires no coding and would take very little time to get OpenStack working in this manner. However, the large disk space requirement (hundreds of MBs) and large memory requirement (upwards of 3 GB when fully functional) are major disadvantages. It may also pose runtime issues on low-end systems (PPC-based devices) due to limited hardware support for virtualization, etc. Hence this approach is not recommended.

3.1.2 Nova Libvirt “custom (VMAN)” LXC layer in Router

This approach is a variation of the above, where we write a plugin layer under “libvirt” (analogous to KVM, QEMU, etc.) that uses our internal VMAN APIs to orchestrate the container. This scheme essentially removes the concerns about CPU virtualization support and gives us the ability to customize our orchestration needs (under the libvirt interface). The approach was explored using the nova libvirt driver sources (available at https://github.com/OpenStack/nova/blob/master/nova/virt/libvirt/driver.py) with code modifications to support a custom VMAN layer for SPAG routers (similar to the current support for KVM and QEMU). This approach could work in theory, with the advantage that most of the nova interface code is already present, which limits the amount of new code required. However, it faces the same limitation of requiring many RPMs in the router to support the nova-compute agent. The RPMs which would be required for this are shown in the document below:

List of RPMs estimated to support Nova-compute in SPAG router.docx

Once again, this suffers from large disk and memory requirements and hence is not being pursued.

3.1.3 Router as “nova” resource: Thin client option

This is the third approach, and it requires creating our own plugin for OpenStack to support LXCs. Currently OpenStack supports libvirt, the VMware API, XenAPI, and so on; we could build our own code to support the VMAN API and integrate it as part of OpenStack.

The advantages of this approach are that the router would not have to be integrated with libvirt API support and could be customized completely with a VMAN subset interface. The libvirt “driver.py” sources were analyzed to collect all the APIs invoked during nova-compute node bootup, during LXC create/up/down events, and any other direct APIs called from higher nova layers. This scheme, if it worked, would be the most ideal, since we could customize the OpenStack server end to use our VMAN interface, which would work with a thin-client VMAN agent in the router. The disadvantage is the amount of time and code it would take to create a fully functional OpenStack (server) plugin exposing full “nova” server API support over our internal (VMAN) APIs. This model is not being pursued at this time due to the more intrusive OpenStack changes required and the ongoing maintenance needs.

3.1.4 “Nova compute proxy” for Router

This is the final approach, and the one we are following for now. In this approach, the OpenStack server interfaces with a proxy node (a VM) that shows up as a compute node. This node connects to the SPAG router over a custom, simple client/server RPC model; the VM acts as the “nova compute proxy”. The nova-compute libvirt layer is altered in a manner similar to that discussed in 3.1.2: the driver.py file is changed with a thin plugin so that it interfaces with the router (as a client/server model) and boots an LXC onto the SPAG router via the VMAN API. The advantage of this approach is that no component of the router has to change, other than adding a thin agent that interfaces with the “nova compute proxy”. This agent in turn invokes VMAN APIs to orchestrate the service LXC locally inside the router. The only requirement is that, for each router, we need a proxy nova-compute-agent VM that interfaces with the OpenStack server. Due to the relatively easier effort and minimal changes to the OpenStack and router code, this approach is being pursued at this time. In the future we may enhance this to the fully functional agent model described in 3.1.3 to avoid the proxy agent requirement.

4.0 References and Acknowledgements

4.1 Acknowledgement

During this internship I learned a lot in new areas of technology, including the Linux platform, the broad area of OpenStack technologies, distributed systems, virtualization technologies, a base understanding of a service router, client/server architecture, and many related areas. For a student like me this has been a wonderful opportunity, and it would not have been possible without the support, help and mentoring of the entire team that I worked with. I would like to convey my most sincere gratitude to: Krishna Sundaresan, for giving me this internship opportunity; Ramesh Veerapaneni, for all the help and support during this project; Milan Ramachandran, for his active mentoring, time, patience, guidance, debugging help and many insightful ideas; Gyan Ranjan, for active mentoring, debugging help and support throughout my project; and Akshay Shetty, for all the help.

4.2 References

1) OpenStack
2) http://docs.openstack.org/developer/nova/devref/fakes.html#the-nova-tests-api-openstack-fakes-module
3) http://docs.openstack.org/developer/nova/devref/services.html
4) http://www.oracle.com/technetwork/server-storage/vm/ovm-linux-openstack-2202503.pdf
5) http://kaivanov.blogspot.in/2013/02/installing-openstack-folsom-on-ubuntu.html
6) http://kaivanov.blogspot.in/2013/01/configuring-lxc-using-libvirt.html
7) http://kaivanov.blogspot.in/2012/07/configuring-lxc-linux-containers.html
8) http://kaivanov.blogspot.in/2013/07/creating-secure-lxc-containers-with.html
9) http://libvirt.org/drvlxc.html
10) https://help.ubuntu.com/lts/serverguide/lxc.html
11) http://blog.scottlowe.org/2013/11/27/linux-containers-via-lxc-and-libvirt/
12) http://xmodulo.com/lxc-containers-ubuntu.html
13) https://sreeninet.wordpress.com/2015/02/21/openstack-juno-install-using-devstack/
14) http://blog.docker.com/2013/06/openstack-docker-manage-linux-containers-with-nova/
15) http://docs.openstack.org/admin-guide-cloud/admin-guide-cloud.pdf
16) http://www.ubuntu.com/cloud/tools/lxd
17) https://wiki.ubuntu.com/OpenStack/LXC
18) http://blog.tutum.co/2014/12/09/hands-on-with-lxd/
19) http://translate.google.co.in/translate?hl=en&sl=ja&tl=en&u=http%3A%2F%2Fwww.okinawaopenlabs.org%2Fwp%2Fwp-content%2Fuploads%2F20150207_%EF%BC%92.pdf
20) https://zulcss.wordpress.com/2014/11/14/nova-compute-flex-introduction-and-getting-started/
21) http://bodenr.blogspot.in/2014/05/kvm-and-docker-lxc-benchmarking-with.html#more
22) http://www.symantec.com/connect/blogs/lxc-and-docker-containers-nova-openstack
23) https://zulcss.wordpress.com/2014/11/
24) https://damithakumarage.wordpress.com/tag/openstack/
25) http://product-dist.wso2.com/downloads/stratos/2.0.0/Open-stack-Installation.pdf
26) https://ask.openstack.org/en/question/14401/ubuntu-image-for-libvirt_typelxc/
27) https://wiki.openstack.org/wiki/CaaS_demo
28) https://ask.openstack.org/en/questions/scope:all/sort:activity-desc/page:1/query:nova-manage%20service%20list%20compute%20entry%20missing/
29) https://www.flockport.com/lxc-vs-docker/
30) https://www.flockport.com/author/tobby/
31) https://clearlinux.org/documentation/installing-openstack-mvp-bundles
32) https://clearlinux.org/features
33) http://www.ibm.com/developerworks/cloud/library/cl-openstack-nova-glance/
34) https://ask.openstack.org/en/question/54533/juno-nova-instance-stuck-in-build-scheduling/
35) https://ask.openstack.org/en/question/51800/endpoint-does-not-support-rpc-version-333/
36) http://blogs.cisco.com/datacenter/application-enablement-and-innovation-leveraging-linux-containers
37) http://blogs.cisco.com/cloud/open-framework
38) http://www.cisco.com/c/dam/en/us/solutions/collateral/data-center-virtualization/openstack-at-cisco/linux-containers-white-paper-cisco-red-hat.pdf
39) http://www.linuxjournal.com/content/docker-lightweight-linux-containers-consistent-development-and-deployment
40) https://ask.openstack.org/en/question/61163/glance-configuration-identity-credentials/
41) http://serverfault.com/questions/590550/open-stack-attempting-to-launch-instance-libvirterror-unsupported-configurati
42) https://www.linux-tips.org/article/4/booting-from-an-iso-image-using-qemu
43) https://cloud-images.ubuntu.com/releases/14.04.1/release/
44) http://askubuntu.com/questions/189466/what-is-the-default-password-for-ubuntu-12-04
45) http://www.virtuallyghetto.com/2014/09/how-to-run-qemu-kvm-on-esxi.html
46) http://www.sebastien-han.fr/blog/2012/12/20/where-does-my-instance-run/
47) https://wiki.openstack.org/wiki/Ceilometer
48) http://developer.openstack.org/
49) vpn
50) https://review.openstack.org/#/c/177740
51) https://review.openstack.org/#/c/152377/
52) https://review.openstack.org/#/c/136929
53) neutron
54) https://blogs.oracle.com/ronen/entry/diving_into_openstack_network_architecture3
55) https://blogs.oracle.com/ronen/entry/diving_into_openstack_network_architecture
56) lxc
57) http://containerops.org/2013/11/19/lxc-networking/
58) https://help.ubuntu.com/lts/serverguide/lxc.html#lxc-nesting
59) https://libvirt.org/drvlxc.html
60) https://www.stgraber.org/2015/04/21/lxd-getting-started/
61) https://zulcss.wordpress.com/2015/05/01/introduction-to-nova-compute-lxd/
62) http://voices.canonical.com/user/138/
63) http://sirupsen.com/production-docker/
64) https://ask.openstack.org/en/question/7991/weird-problem-that-nova-computes-state-cannot-be-updated/
65) salt
66) https://docs.saltstack.com/en/pdf/Salt-2015.5.2.pdf
67) rpm
68) http://www.unix.com/unix-and-linux-applications/111461-cpio-problem-copy-root-dir-instead-current-dir.html
69) http://www.cyberciti.biz/howto/question/linux/linux-rpm-cheat-sheet.php
70) https://www.youtube.com/watch?v=hWWSaBOMTNo&feature=player_embedded
