Red Hat Enterprise Linux OpenStack Platform 7 VMware Integration Guide


Integrate VMware vSphere with Red Hat Enterprise Linux OpenStack Platform

OpenStack Team


Legal Notice

Copyright © 2016 Red Hat, Inc.

The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version. Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.

Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, MetaMatrix, Fedora, the Infinity Logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries. Linux® is the registered trademark of Linus Torvalds in the United States and other countries. Java® is a registered trademark of Oracle and/or its affiliates. XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries. MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries. Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.

The OpenStack® Word Mark and OpenStack Logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries, and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.

All other trademarks are the property of their respective owners.

Abstract

A guide for integrating VMware vSphere and VMware NSX with Red Hat Enterprise Linux OpenStack Platform.

Table of Contents

PREFACE

CHAPTER 1. INTEGRATE WITH VMWARE VCENTER
    1.1. ARCHITECTURE OVERVIEW
    1.2. SYSTEM REQUIREMENTS
    1.3. POST-DEPLOYMENT TASKS
    1.4. USE HOST AGGREGATES
    1.5. KNOWN ISSUES

CHAPTER 2. IMPORT VMWARE GUESTS
    2.1. CONFIGURE THE V2V HOST

CHAPTER 3. INTEGRATE VMWARE NSX WITH OPENSTACK NETWORKING
    3.1. SERVICE NODE CONCEPTS
    3.2. DISABLE NEUTRON-OPENVSWITCH-AGENT
    3.3. INSTALL AND ENABLE THE VMWARE NSX PLUGIN
    3.4. CONFIGURE THE VMWARE NSX PLUGIN
    3.5. CONFIGURE THE DHCP AGENT
    3.6. CONFIGURE THE NETWORK NODE


PREFACE

This guide describes how to integrate RHEL OpenStack Platform with your existing VMware vSphere environment, and includes coverage of the following topics:

- Integration with the VMware vCenter server.
- Importing VMware guests into Red Hat Enterprise Linux OpenStack Platform.
- Integrating OpenStack Networking (neutron) with VMware NSX.


CHAPTER 1. INTEGRATE WITH VMWARE VCENTER

This chapter describes the steps required to add a VMware hypervisor to your RHEL OpenStack Platform environment. Portions adapted from the OpenStack Configuration Reference. See Configuration Reference in Red Hat OpenStack Platform Licenses for Documentation.

1.1. ARCHITECTURE OVERVIEW

RHEL OpenStack Platform uses the Compute (nova) API to integrate with ESXi hypervisors. The VMware vCenter server is central to this integration: the vSphere driver in your OpenStack environment grants openstack-nova-compute access to resources hosted in vSphere, including hypervisors and datastores. You can then use the OpenStack Dashboard to schedule instances to ESXi hypervisors while still retaining vSphere features such as HA and DRS.

Note: Refer to Section 1.5, Known Issues, for details on current integration issues.

The diagram below illustrates how the VMware vCenter driver allows integration between the RHEL OpenStack Platform and vSphere environments.


1.2. SYSTEM REQUIREMENTS

- RHEL OpenStack Platform 4 or higher.
- VMware vSphere 5.1 or higher. For DRS, only fully-automated placement is supported.

OpenStack integration requires a vCenter service account with the following minimum permissions. Apply the permissions to the Datacenter root object, and select the Propagate to Child Objects option.


The currently supported configuration is RHEL OpenStack Platform with vCenter, with networking provided by either Neutron/NSX or Neutron/Nuage. For more information on Neutron/Nuage, see https://access.redhat.com/articles/2172831.

Root location      Subtree value           Subtree value
-----------------  ----------------------  --------------------------------
Datastore          Allocate space
                   Browse datastore
                   Low level file operation
                   Remove file
Extension          Register extension
Folder             Create folder
Host               Configuration           Maintenance
                                           Network configuration
                                           Storage partition configuration
Network            Assign network
Resource           Assign virtual machine to resource pool
                   Migrate powered off virtual machine
                   Migrate powered on virtual machine
Virtual Machine    Configuration           Add existing disk
                                           Add new disk
                                           Add or remove device
                                           Advanced
                                           Change CPU count
                                           Disk change tracking
                                           Host USB device
                                           Memory
                                           Raw device
                                           Remove disk
                                           Rename
                                           Swapfile placement
                   Interaction             Configure CD media
                                           Power Off
                                           Power On
                                           Reset
                                           Suspend
                   Inventory               Create from existing
                                           Create new
                                           Move
                                           Remove
                                           Unregister
                   Provisioning            Clone virtual machine
                                           Customize
Sessions           Validate session
                   View and stop sessions
Virtual Machine    Snapshot management     Create snapshot
                                           Remove snapshot
vApp               Export
                   Import

1.2.1. Packstack Deployment

Packstack supports integration with an existing vSphere deployment. Specify the options below to configure the vCenter driver:

Setting                       Explanation
----------------------------  ------------------------------------------------------------
CONFIG_VMWARE_BACKEND=y       Set to y to use VMware vCenter as the hypervisor and storage backend.
CONFIG_VCENTER_HOST           IP address or hostname of the VMware vCenter server.
CONFIG_VCENTER_USER           VMware vCenter service account to use.
CONFIG_VCENTER_PASSWORD       Password for the VMware vCenter service account specified above.
CONFIG_VCENTER_CLUSTER_NAME   Name of the vSphere cluster that will host the VMs created by Compute.
CONFIG_CINDER_BACKEND=vmdk    Configure Block Storage (cinder) to use VMDK as the storage backend.

For more information on Packstack configuration, refer to Deploying OpenStack: Proof-of-Concept Environments.
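As an illustrative sketch, the settings above might appear in a Packstack answer file as follows; the host name, account name, cluster name, and password shown are placeholders for your environment:

CONFIG_VMWARE_BACKEND=y
CONFIG_CINDER_BACKEND=vmdk
CONFIG_VCENTER_HOST=vcenter.example.com
CONFIG_VCENTER_USER=svc-openstack
CONFIG_VCENTER_PASSWORD=[service account password]
CONFIG_VCENTER_CLUSTER_NAME=Cluster1

You can generate a complete answer file with packstack --gen-answer-file=[file], edit these values, and then deploy with packstack --answer-file=[file].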

1.2.2. Foreman Deployment

The use of Foreman to deploy OpenStack Compute and connect it to a vSphere installation using the vCenter driver is not supported.

1.2.3. Manual Deployment

1. Enable the VMware vCenter driver in nova.conf:

[DEFAULT]
compute_driver=vmwareapi.VMwareVCDriver

2. Enter the details of your vSphere environment in the [vmware] section of nova.conf:


[vmware]
host_ip - Hostname or IP address of the vCenter server.
host_port - Port to use for connecting to the vCenter server. Default is 443.
host_username - Username of the service account to use for logging in to the vCenter server.
host_password - Password of the service account to use for logging in to the vCenter server.
cluster_name - Name of the vSphere cluster to use. This is the cluster containing the ESXi hypervisors to integrate with Compute.

3. Review the management options in the [vmware] section of nova.conf:

Setting             Explanation
------------------  ----------------------------------------------------------
api_retry_count     Number of times to retry an action after a failure occurs. Default is 10. This value should never be changed unless advised by support.
datastore_regex     Regular expression to match the datastore name.
maximum_objects     Maximum number of objects to return in a single result. Default is 100. This value should never be changed unless advised by support.
task_poll_interval  The interval, in seconds, used for polling remote tasks. Default is 0.5. This value should never be changed unless advised by support.
use_linked_clone    Whether a linked clone should be used. Enter true or false (boolean).
vnc_port            The starting port to use for VNC connections. Default is 5900. Refer to the VMware documentation for procedures allowing VNC access through to guests.
wsdl_location       URL of the optional VIM service WSDL location. This value should never be changed unless advised by support.
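Bringing the settings above together, a minimal sketch of the relevant nova.conf sections; all values shown are placeholders for your environment:

[DEFAULT]
compute_driver=vmwareapi.VMwareVCDriver

[vmware]
host_ip=vcenter.example.com
host_port=443
host_username=svc-openstack
host_password=[service account password]
cluster_name=Cluster1

Restart the openstack-nova-compute service after editing nova.conf so the changes take effect.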

1.3. POST-DEPLOYMENT TASKS


1.3.1. Create networks using nova-network VlanManager

If you are using nova-networking, this step creates a network that allocates DHCP server addresses:

# nova network-create novanetwork --fixed-range-v4 192.168.36.0/22 --bridge-interface trunk0 --vlan 103

This example network uses VLAN ID 103 on the trunk0 interface. nova-network creates the corresponding vlan103@trunk0 interface and attaches it to the br103 bridge, which is then used for connectivity to the VM vifs. In addition, the vCenter driver searches for the br103 port group in vCenter; if it is not present, the driver creates a standard switch port group. You can pre-create the Distributed Switch (dvSwitch) port group, however ensure it is configured to tag the packets with the correct VLAN tag.

Note: You cannot change the name of the bridge with the --bridge parameter.
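To confirm that the network was created, you can list the networks known to Compute; this assumes the admin credentials are loaded in your shell:

# nova network-list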

1.3.2. Configure VNC access

To enable VNC console access to a VMware guest running on an ESXi host, edit the .vmx file of the powered-off guest:

RemoteDisplay.vnc.enabled = true
RemoteDisplay.vnc.port = 5900
RemoteDisplay.vnc.password = [Enter a complex password here]

Each guest requires a unique VNC port number. You may need to review your ESXi host's firewall configuration for the incoming connections.

1.3.3. Image uploads

Images can be uploaded using the glance image-create command:

Setting                        Explanation
-----------------------------  --------------------------------------------------------
--name                         A descriptive name for the image. For easier file management, it can be the same name as the .vmdk file.
--file                         The path to the VMDK file.
--property vmware_adaptertype  Specifies the storage adapter type for the VM. Examples include scsi and lsilogic.
--property vmware_disktype     Specifies the disk provisioning for the VM. Examples include sparse, preallocated, and streamOptimized.
--property vmware_ostype       The VMware GuestID of the image. Examples include rhel7_64Guest and windows8Server64Guest.
--disk-format                  The disk format of the VMware image. Likely to be vmdk.
--container-format             Defines the container format for the Image Service (glance). Examples include bare and ovf.
--property hypervisor_type     Tags the image to a particular hypervisor type. Useful for multi-hypervisor environments.

For example:

glance image-create --name rhel-guest-image-7.1-x86_64.vmdk \
  --file rhel-guest-image-7.1.x86_64.vmdk \
  --property vmware_adaptertype=scsi \
  --property vmware_disktype=sparse \
  --property vmware_ostype=rhel7_64Guest \
  --disk-format vmdk \
  --container-format bare \
  --property hypervisor_type=vmware

1.4. USE HOST AGGREGATES

If running a heterogeneous environment, it is recommended you aggregate the hosts by hypervisor. This allows greater control over where an instance of a particular type (flavor) gets scheduled to.

1. Create two host aggregates, one for kvm and one for vcenter:

# nova aggregate-create kvm-hosts
# nova aggregate-set-metadata kvm-hosts hypervisor=kvm
# nova aggregate-create vcenter-hosts
# nova aggregate-set-metadata vcenter-hosts hypervisor=vcenter

2. Add the Compute nodes to the kvm aggregate, and the Controller to the vcenter aggregate. Replace the values in [square brackets] with those suiting your deployment:

# nova aggregate-add-host kvm-hosts [compute0 name]
# nova aggregate-add-host kvm-hosts [compute1 name]
# nova aggregate-add-host kvm-hosts [compute2 name]
# nova aggregate-add-host kvm-hosts [compute3 name]
# nova aggregate-add-host vcenter-hosts [controller name]

3. Set properties on the flavors to control which aggregate each flavor gets scheduled to. Replace the values in [square brackets] with those suiting your deployment:

# nova flavor-key m1.medium set hypervisor=kvm
# nova flavor-key [vcenter flavor] set hypervisor=vcenter

1.5. KNOWN ISSUES

Important - BZ#1190706 - When more than one Compute agent is configured for the same ESXi cluster (not a recommended configuration), rebooting one of the Compute agents deletes all running instances. A new configuration key named destroy_after_evacuate has been added to the workarounds configuration group. It is recommended that users of the vCenter driver set this configuration key to False, preventing automatic deletion of instances when Compute encounters what it assumes is an evacuation event. This prevents data loss and, in the event that an evacuation was truly intended, allows for manual cleanup of the instance(s).

BZ#1133566 - Uploads to the Image Service can fail after about 45 minutes of running due to a session expiration bug.
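A minimal sketch of applying the BZ#1190706 workaround with crudini, assuming Compute reads its configuration from the default /etc/nova/nova.conf path:

# crudini --set /etc/nova/nova.conf workarounds destroy_after_evacuate False
# systemctl restart openstack-nova-compute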


CHAPTER 2. IMPORT VMWARE GUESTS

This chapter demonstrates how to use the virt-v2v command-line tool to retrieve an existing VMware (vCenter or ESX/ESXi) guest, and then convert and upload it directly to the Image Service (glance) in RHEL OpenStack Platform. While this chapter demonstrates the procedure in detail, you must first ensure you have (or can access) the following prerequisites before continuing:

- A Red Hat Enterprise Linux 7.1 host (virtual or physical), with:
  - Access to the RHEL OpenStack Platform package repository.
  - Access to the Supplementary RPMs packages repository.
  - Sufficient Red Hat subscriptions to allow installation of the virt-v2v packages and their dependencies.
  - Sufficient disk space and network access to retrieve and temporarily store the VMware guest(s).
- Administrative credentials and network access to a RHEL OpenStack Platform (5 or higher) installation.
- Administrative credentials and network access to a VMware vCenter + ESX(i) installation.
- The name of the VMware datacenter and host.
- The name of the guest(s) to be migrated to the Image Service.
- Reliable and verified backups.

System and network recommendations for this procedure are as follows:

- Reliable, low-latency network bandwidth of Gigabit speeds or better.
- Higher amounts of memory on the VMware and Red Hat Enterprise Linux hosts will alleviate slow copying issues.
- For multiple parallel conversions, consider allocating one CPU core and at least 1 GB of memory for each running conversion instance.

Note: Your system names and IP addresses will differ from those shown in this chapter, and you must modify the example commands to match those in your environment. Example systems in this chapter are named and addressed as follows:

- Red Hat Enterprise Linux 7.1 system - v2vsystem1.example.com
- VMware vCenter host - vcenterhost1.example.com
- VMware ESX(i) host - esxihost1.example.com
- Guest virtual machine targeted for conversion and migration - guestvm1
- RHEL OpenStack Platform cloud - 192.0.2.1


Note: Although the virt-v2v tool reads the VMware guest, it does not make changes to the guest itself. The guest should be shut down (offline) before beginning this process.

2.1. CONFIGURE THE V2V HOST

1. Log in to the host system v2vsystem1.example.com as the root user.

2. Register the system to Red Hat Subscription Management:

# subscription-manager register

Enter your Red Hat Subscription Management credentials to continue.

3. Retrieve and store a list of your available pools:

# subscription-manager list --available > /tmp/pools_available

Open the /tmp/pools_available file and take note of the Pool ID for the subscription that provides access to the following package groups:

- Red Hat OpenStack RPMs
- Red Hat Enterprise Linux Server RPMs
- Red Hat Enterprise Linux Supplementary RPMs

4. Take that Pool ID value and run a command similar to the following. For example:

# subscription-manager subscribe --pool=8a85f4923cf494ef013ddc6045a541c8

5. Now that your system is subscribed, disable all repositories, and re-enable only those required for this procedure:

# subscription-manager repos --disable=*
# subscription-manager repos --enable=rhel-7-server-rpms
# subscription-manager repos --enable=rhel-7-server-supplementary-rpms
# subscription-manager repos --enable=rhel-7-server-v2vwin-1-rpms

6. Enable the RHEL OpenStack Platform repositories.

For RHEL OpenStack Platform 5, use:

# subscription-manager repos --enable=rhel-7-server-openstack-5.0-rpms

For RHEL OpenStack Platform 6, use:

# subscription-manager repos --enable=rhel-7-server-rh-common-rpms
# subscription-manager repos --enable=rhel-7-server-openstack-6.0-rpms

For RHEL OpenStack Platform 7, use:


# subscription-manager repos --enable=rhel-7-server-rh-common-rpms
# subscription-manager repos --enable=rhel-7-server-openstack-7.0-rpms

7. Run a system update:

# yum update

8. If the update provided a new kernel version, reboot the system into it and log in again as the root user. Install the virt-v2v package and its dependencies:

# yum install virt-v2v

9. Install the Image Service client binary and tools:

# yum install python-glanceclient

10. Log in to your RHEL OpenStack Platform machine (or your OpenStack Dashboard host) as the root user. Ensure TCP port 5000 is open:

# firewall-cmd --permanent --add-port=5000/tcp
# firewall-cmd --reload

11. Copy the following variables and their values from the /root/keystonerc_admin file:

OS_USERNAME
OS_TENANT_NAME
OS_PASSWORD
OS_AUTH_URL

Log out of the machine.

12. Log in to the v2vsystem1.example.com machine, and add those variables and their values to the /root/.bashrc file, substituting your settings. For example:

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=97ea7049b1e94a65
export OS_AUTH_URL=http://192.0.2.1:5000/v2.0/

13. Also add the following to /root/.bashrc:

export LIBGUESTFS_BACKEND=direct

Log out of and back in to the machine. You are now ready to run the virt-v2v command to retrieve, convert, and upload the guest to the Image Service.
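As a quick sanity check that the exported credentials work, you can list the images known to the Image Service; an empty table (or your existing images) indicates authentication succeeded:

# glance image-list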

2.1.1. Connect to VMware vCenter and convert the target guest

1. Log in to your Red Hat Enterprise Linux 7.1 system as the root user.


Optionally, connect to your VMware vCenter server and list the available guests with the following command, substituting your VMware host names:

# virsh -c 'vpx://vcenter1.example.com/datacentername/esxihost1.example.com?no_verify=1' list --all

Enter your vCenter administrator password when prompted.

Note: The no_verify=1 string is only required if you have an invalid or self-signed SSL certificate configured on your VMware host(s).

2. Once you have chosen the guest name to be used, retrieve, convert, and upload it with the following command, substituting your VMware host names and guest name:

# virt-v2v -ic 'vpx://vcenter1.example.com/datacentername/esxihost1.example.com?no_verify=1' "guestvm1" -o glance

Enter your vCenter administrator password when prompted.

Note: You can specify a different guest name to be used after conversion with the -on different_name option.

This process can take a long time. See the following for an example of the output shown:

[  14.0] Creating an overlay to protect the source from being modified
[  16.0] Opening the overlay
[  77.0] Initializing the target -o glance
[  77.0] Inspecting the overlay
[ 677.0] Checking for sufficient free disk space in the guest
[ 677.0] Estimating space required on target for each disk
[ 677.0] Converting Red Hat Enterprise Linux release 6 (Update 6) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[1139.0] Mapping filesystem data to avoid copying unused and blank areas
[1158.0] Closing the overlay
[1158.0] Copying disk 1/1 to /var/tmp/glance.9jLwLB/sda (raw)
    (0.00/100%)
    (1.03/100%)
    (2.05/100%)
    (3.08/100%)
[...output truncated...]
    (100.00/100%)
[49643.0] Creating output metadata
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 675bab2875fc027f33b2f881afa86272     |
| container_format | bare                                 |
| created_at       | 2014-11-08T13:29:24                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | raw                                  |
| id               | 98b0f3c4-86a0-4daa-baef-878cb7610cb4 |
| is_public        | False                                |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | guestvm1                             |
| owner            | b33d23bd5ab94a1fa35522e0c0677381     |
| protected        | False                                |
| size             | 8589934592                           |
| status           | active                               |
| updated_at       | 2014-11-08T13:33:24                  |
| virtual_size     | None                                 |
+------------------+--------------------------------------+
+----------------------------+--------------------------------------+
| Property                   | Value                                |
+----------------------------+--------------------------------------+
| Property 'architecture'    | x86_64                               |
| Property 'hw_disk_bus'     | virtio                               |
| Property 'hw_vif_model'    | virtio                               |
| Property 'hypervisor_type' | kvm                                  |
| Property 'os_distro'       | rhel                                 |
| Property 'os_type'         | linux                                |
| Property 'os_version'      | 6.6                                  |
| Property 'vm_mode'         | hvm                                  |
| checksum                   | 675bab2875fc027f33b2f881afa86272     |
| container_format           | bare                                 |
| created_at                 | 2014-11-08T13:29:24                  |
| deleted                    | False                                |
| deleted_at                 | None                                 |
| disk_format                | raw                                  |
| id                         | 98b0f3c4-86a0-4daa-baef-878cb7610cb4 |
| is_public                  | False                                |
| min_disk                   | 0                                    |
| min_ram                    | 512                                  |
| name                       | guestvm1                             |
| owner                      | b33d23bd5ab94a1fa35522e0c0677381     |
| protected                  | False                                |
| size                       | 8589934592                           |
| status                     | active                               |
| updated_at                 | 2014-11-08T13:33:26                  |
| virtual_size               | None                                 |
+----------------------------+--------------------------------------+
[9886.0] Finishing off

3. Log in to your OpenStack Dashboard and confirm that the image has been uploaded in Project > Compute > Images. The guest has been successfully converted, and the image can now be used to launch instances. See Manage Images in the Instances and Images Guide for further details on managing images in OpenStack.
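As a hypothetical next step, you could launch an instance from the converted image; the flavor name and network ID shown here are placeholders for your environment:

# nova boot --image guestvm1 --flavor m1.medium --nic net-id=[network ID] guestvm1-instance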


CHAPTER 3. INTEGRATE VMWARE NSX WITH OPENSTACK NETWORKING

This chapter describes the steps required to integrate OpenStack Networking with VMware NSX using the NSX plugin for Neutron. OpenStack Networking uses the NSX plugin for Neutron to integrate with an existing VMware vCenter deployment. When installed on the network nodes, the NSX plugin enables an NSX controller to centrally manage configuration settings and push them to managed network nodes. Network nodes are considered managed when they are added as hypervisors to the NSX controller. The diagram below depicts an example NSX deployment and illustrates the route East-West traffic takes between subnets on separate Compute nodes.

3.1. SERVICE NODE CONCEPTS

This integration work conceptually introduces a number of new node types, although these nodes would host services that are likely already running in your environment:

Controller Node
- Runs neutron-server
- Runs the VMware NSX plugin for Neutron

Network Node
- Runs dhcp-agent
- Runs metadata-agent
- Runs Open vSwitch (OVS)

3.2. DISABLE NEUTRON-OPENVSWITCH-AGENT

The neutron-openvswitch-agent must be disabled for environments integrating with VMware NSX. The required steps differ for HA and non-HA environments.

3.2.1. Disable neutron-openvswitch-agent for HA environments

1. Add the new constraints first, to be as non-disruptive as possible:


# pcs constraint order neutron-netns-cleanup-clone then neutron-dhcp-agent-clone
# pcs constraint colocation add neutron-dhcp-agent-clone with neutron-netns-cleanup-clone

2. Remove the older ordering constraints:

# pcs constraint remove order-neutron-openvswitch-agent-clone-neutron-dhcp-agent-clone-mandatory order-neutron-netns-cleanup-clone-neutron-openvswitch-agent-clone-mandatory

3. Remove the older colocation constraints:

# pcs constraint remove colocation-neutron-dhcp-agent-clone-neutron-openvswitch-agent-clone-INFINITY colocation-neutron-openvswitch-agent-clone-neutron-netns-cleanup-clone-INFINITY

4. Disable the resource:

# pcs resource disable neutron-openvswitch-agent

5. Remove the resource completely:

# pcs resource delete neutron-openvswitch-agent
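To review the resulting constraints and overall cluster state, you can use the standard pcs query commands; the output will vary with your deployment:

# pcs constraint
# pcs status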

3.2.2. Disable neutron-openvswitch-agent for non-HA environments

1. Disable the neutron-openvswitch-agent:

# systemctl disable neutron-openvswitch-agent

2. Stop the neutron-openvswitch-agent:

# systemctl stop neutron-openvswitch-agent

3. Mask the neutron-openvswitch-agent:

# systemctl mask neutron-openvswitch-agent

3.3. INSTALL AND ENABLE THE VMWARE NSX PLUGIN

1. On the Controller node, install the VMware NSX plugin for Neutron:

# yum install openstack-neutron-vmware.noarch

2. Restart the openvswitch service:

# systemctl restart openvswitch

3. Symlink /etc/neutron/plugin.ini to plugins/vmware/nsx.ini, removing the existing file first:


# rm -f /etc/neutron/plugin.ini
# ln -s plugins/vmware/nsx.ini /etc/neutron/plugin.ini

4. Configure these settings in the plugin.ini and neutron.conf files:

Note: Replace controllerList with a comma-separated string that lists the applicable NSX controllers. Replace tzuuid with the UUID of the default NSX Transport zone.

# crudini --set /etc/neutron/plugin.ini DEFAULT nsx_controllers controllerList
# crudini --set /etc/neutron/plugin.ini DEFAULT default_tz_uuid tzuuid

5. Set the core_plugin in neutron.conf to vmware:

# crudini --set /etc/neutron/neutron.conf DEFAULT core_plugin vmware

6. Delete and recreate the ovs_neutron database:

# mysql
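To confirm the settings from the previous steps were written as expected, a simple read-back sketch using the same crudini tool:

# crudini --get /etc/neutron/plugin.ini DEFAULT nsx_controllers
# crudini --get /etc/neutron/plugin.ini DEFAULT default_tz_uuid
# crudini --get /etc/neutron/neutron.conf DEFAULT core_plugin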
