# Load Balancers

The following guides only cover the basic mandatory settings. If you would like to read more about some advanced load balancer setups and settings, please refer to our [Advanced Topics](https://docs.nws.netways.de/books/cloud-services/chapter/advanced-topics "Advanced Topics") section or the official [Basic Load Balancing Cookbook of OpenStack](https://docs.openstack.org/octavia/latest/user/guides/basic-cookbook.html "Basic Load Balancing Cookbook of OpenStack").

#### Configuration via the OpenStack Dashboard

"Load Balancers" can be found as a subsection of the submenu "Network" in the menu on the left.

![Bildschirmfoto 2022-12-20 um 11.15.58.png](https://docs.nws.netways.de/uploads/images/gallery/2022-12/scaled-1680-/wMgbC2gQk2vGL7EU-bildschirmfoto-2022-12-20-um-11-15-58.png)

To create a load balancer, just click on the button "+ Create Load Balancer". A window will open which will walk you through the initial configuration. On the first page you can fill in some basic details like name and description. Setting a name is not necessary but recommended. However, it is mandatory to choose a subnet. On a plain new OpenStack project you will find your main subnet by looking for a subnet which has the same name as your OpenStack project but with a "-subnet" extension. To read more about the meaning of the advanced fields, you can click on the question mark on the right side of the window.

![Bildschirmfoto 2022-12-20 um 11.22.09.png](https://docs.nws.netways.de/uploads/images/gallery/2022-12/scaled-1680-/tu4nHTYSmWkoiriu-bildschirmfoto-2022-12-20-um-11-22-09.png)

Click on the "Next" button to continue. On the following page you will be asked to fill in the listener details. You don't have to set a name, but make sure to choose a protocol and to set a port number. The listener defines the frontend listening port of the load balancer. Let's say you would like to deploy a load balancer for some web servers: then you would choose HTTP or TCP as protocol and set the port to 80 or 443.

![Bildschirmfoto 2022-12-20 um 11.35.39.png](https://docs.nws.netways.de/uploads/images/gallery/2022-12/scaled-1680-/Q0H6KGT4nrBQct8V-bildschirmfoto-2022-12-20-um-11-35-39.png)

By default the "Connection Limit" is set to "-1", which means that no limit is enforced. However, we discovered that setting no limit here can [cause problems](https://nws.netways.de/tutorials/custom-connnection-limit-for-load-balancers/#:~:text=Problems%20with%20High%20Connection%20Limits). After clicking on "Next", you will be able to choose a load balancing algorithm - for default setups we recommend setting it to "ROUND\_ROBIN".

![Bildschirmfoto 2022-12-20 um 11.53.32.png](https://docs.nws.netways.de/uploads/images/gallery/2022-12/scaled-1680-/EV4PgcJJsFdDoPGs-bildschirmfoto-2022-12-20-um-11-53-32.png)

In the following section "Pool Members" you will be asked to assign members to the pool. The members represent the VMs you would like to forward incoming requests to. Additionally, you will have to set the port number the application or web server is listening on inside the corresponding VM. You can also set a weight - members with a higher weight will receive more requests than members with a lower weight.

![Bildschirmfoto 2022-12-20 um 11.59.11.png](https://docs.nws.netways.de/uploads/images/gallery/2022-12/scaled-1680-/r1XC35PUyTqgjrfD-bildschirmfoto-2022-12-20-um-11-59-11.png)

The last section will let you define a health monitor for the pool and its members.
Depending on your application or web server, make sure to choose an appropriate protocol type. Additionally, you can set timeout and delay parameters that determine how the health monitor executes the health checks against the pool members. Reminder: you can click on the question mark on the right side of the window to read more about these parameters.

![Bildschirmfoto 2022-12-20 um 12.10.47.png](https://docs.nws.netways.de/uploads/images/gallery/2022-12/scaled-1680-/o8JirH4PeTWNifQc-bildschirmfoto-2022-12-20-um-12-10-47.png)

Now that you have filled in all the mandatory fields, you can click on the orange button "Create Load Balancer". It will take a while until the load balancer and all its dependent resources have been deployed.

As a final, optional action you can associate a floating IP with your load balancer. Doing so will expose your load balancer to the public internet. Be aware that everyone on the internet will be able to access the application or web site running on your VMs if you assign a floating IP. To associate a floating IP, click on the dropdown menu of your load balancer and choose "Associate Floating IP".

![Bildschirmfoto 2022-12-20 um 12.22.01.png](https://docs.nws.netways.de/uploads/images/gallery/2022-12/scaled-1680-/6SFoAO05CgEZzWzl-bildschirmfoto-2022-12-20-um-12-22-01.png)

Depending on whether you already have floating IPs allocated to your OpenStack project, you can either choose an existing floating IP or allocate a new one from the window that opens up next. If you choose to allocate a new floating IP from a pool, make sure to select the public-network.

[![Bildschirmfoto 2022-12-20 um 12.56.12.png](https://docs.nws.netways.de/uploads/images/gallery/2022-12/scaled-1680-/xgmdHOftKgo38Bq5-bildschirmfoto-2022-12-20-um-12-56-12.png)](https://docs.nws.netways.de/uploads/images/gallery/2022-12/xgmdHOftKgo38Bq5-bildschirmfoto-2022-12-20-um-12-56-12.png)

Finally, click on "Associate". That's it - you have finished the basic configuration of your load balancer.

#### Configuration via the OpenStack-API using the CLI-Client

To get started with the OpenStack CLI-Client, please refer to [our article over here](https://docs.nws.netways.de/books/cloud-services/page/access-the-openstack-api-with-the-cli-client). The first step after the installation is to get a list of the available subnets in your OpenStack project. You will need a subnet to connect your Load Balancer to. Usually this will be the same subnet your VMs are connected to, as the Load Balancer will forward the incoming requests from the internet to your VMs.
The output of the subnet list could look similar to the following:

```bash
~ openstack subnet list
+--------------------------------------+-----------------------------+--------------------------------------+--------------------+
| ID                                   | Name                        | Network                              | Subnet             |
+--------------------------------------+-----------------------------+--------------------------------------+--------------------+
| 220b5e2f-e086-4c00-8ddb-fbbd539da3ce | public-subnet-4-1           | 210e26d7-942f-4e17-89b2-571eee87d7e4 | 185.11.255.0/24    |
| cd9abbed-fe5b-40a4-b53c-1d42d2732594 | public-subnet-4-2           | 210e26d7-942f-4e17-89b2-571eee87d7e4 | 91.198.2.0/24      |
| ec7b7b20-0bcb-4dc9-87cf-11249689bfd5 | public-subnet-6             | 210e26d7-942f-4e17-89b2-571eee87d7e4 | 2a02:ed80:0:3::/64 |
| f12a110e-b906-4433-85ae-209f5d283bd7 | 1826-openstack-7f8d2-subnet | 3f3089ec-9f19-4082-892a-3fbeaf6a92de | 172.16.0.0/24      |
+--------------------------------------+-----------------------------+--------------------------------------+--------------------+
```

As you can see, you should have access to multiple public subnets as well as one project-specific subnet if you are starting from a new project. The ID of the project-specific subnet is what is relevant for the next step. You can identify your project-specific subnet by looking at its name: the default project-specific subnet starts with your project name (here "1826-openstack-7f8d2") and has a "-subnet" extension at the end. Now copy the ID of the subnet to your clipboard.

Next you can initiate the creation of your Load Balancer. To do so, just execute the following command, but replace the argument of "--vip-subnet-id" with your own subnet ID:

```bash
~ openstack loadbalancer create --name my-lb --vip-subnet-id f12a110e-b906-4433-85ae-209f5d283bd7
```

Before you continue with the configuration of your Load Balancer, you will have to wait until it is up and running. You can check the status of your Load Balancer to see if it is already operational by executing the following command:

```bash
~ openstack loadbalancer status show my-lb
```

In the output you should look for "provisioning\_status": "ACTIVE" and "operating\_status": "ONLINE". As soon as you see those values, you are good to continue with the next steps.
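If you prefer not to re-run that command by hand, you can also wait in a small shell loop until the load balancer becomes active. The following is only a minimal sketch; it uses the generic "-c"/"-f value" output options of the OpenStack client to read a single field:

```bash
# Poll the provisioning status of "my-lb" every 5 seconds until it is ACTIVE
while [ "$(openstack loadbalancer show my-lb -c provisioning_status -f value)" != "ACTIVE" ]; do
    echo "Load balancer is still being provisioned ..."
    sleep 5
done
```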
Now that your Load Balancer is up and running, you will need to configure a frontend listening port, the so-called "listener". You can choose from a variety of protocol types for the listener (TCP, HTTP, HTTPS, TERMINATED\_HTTPS, UDP) and you will also have to specify a port number. The following example shows how you would create a listener which listens for incoming HTTP requests on port 80:

```bash
~ openstack loadbalancer listener create --name listener1 --protocol HTTP --protocol-port 80 my-lb
```

The last parameter needs to be the name of your Load Balancer (here "my-lb").

Continuing with the setup of your Load Balancer, you will have to create a so-called "pool". A pool usually holds multiple "members", which in turn consist of the IPs and ports of your VMs. In terms of HAProxy configuration, the pool is the equivalent of a backend; the "members" of the pool are therefore the same as the backend servers. In order to configure a pool you will also have to set a load balancing algorithm, which is the strategy according to which incoming requests are forwarded to the members. A commonly used algorithm is Round Robin. You can choose from the following algorithms: SOURCE\_IP, ROUND\_ROBIN, LEAST\_CONNECTIONS, SOURCE\_IP\_PORT.

The command to create the pool using ROUND\_ROBIN would look like this:

```bash
~ openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP
```

A load balancer pool should also be configured with a health monitor. The health monitor regularly checks the reachability of the pool members. If one of the members becomes unreachable, requests will only be forwarded to the members that are still reachable. You will have to specify a few parameters to define how the health monitor should act in such circumstances. The meaning of the parameters is described below the command.

```
~ openstack loadbalancer healthmonitor create --delay 5 --max-retries 4 --timeout 10 --type HTTP --url-path / pool1
```

The first parameter "--delay" is the number of seconds to wait between health checks. The parameter "--max-retries" is the maximum number of failed health check attempts before the member is labeled as unreachable. The "--timeout" parameter defines the number of seconds to wait for a single health check to complete. With "--type" you can specify which protocol should be used for the health checks; you can choose from PING, HTTP, TCP, HTTPS, TLS-HELLO, UDP-CONNECT and SCTP. Finally, when choosing "HTTP" or "HTTPS" as type, you can specify a "--url-path" which, as the name indicates, defines the HTTP path the health check will try to access.

To add members to the pool, you will need to supply the subnet ID from the beginning once again in the next command. Since a member represents a VM, you will also have to look up the IP of the VM you would like to forward the incoming requests to. Lastly, you will need to supply the protocol port number - for a web server this would usually be port 80 or port 443. Here is an example of what the command could look like:

```
~ openstack loadbalancer member create --subnet-id f12a110e-b906-4433-85ae-209f5d283bd7 --address 172.16.0.11 --protocol-port 80 pool1
```

Normally you would have at least two members in a pool. To add more members, just replace the IP address with that of the other VM and execute the command again.
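If you have several backend VMs, you can also add all of them in one small shell loop. This is just a sketch - the second address "172.16.0.12" is a hypothetical placeholder, so replace the list with the real IPs of your VMs:

```bash
# Add several members to pool1 in one go
# (172.16.0.12 is a hypothetical second VM - use the real IPs of your members)
for ip in 172.16.0.11 172.16.0.12; do
    openstack loadbalancer member create \
        --subnet-id f12a110e-b906-4433-85ae-209f5d283bd7 \
        --address "$ip" \
        --protocol-port 80 \
        pool1
done
```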
At last you can ensure that your load balancer is publicly reachable by attaching a floating IP to its virtual port. This is, however, an optional step which you can skip if you intend to use the load balancer only for internal connections. If you have allocated floating IPs before, you can check whether your project already contains unbound floating IPs and reuse one of them. To get a list of all floating IPs allocated in your project, execute the following command:

```
~ openstack floating ip list
+--------------------------------------+---------------------+------------------+--------------------------------------+--------------------------------------+----------------------------------+
| ID                                   | Floating IP Address | Fixed IP Address | Port                                 | Floating Network                     | Project                          |
+--------------------------------------+---------------------+------------------+--------------------------------------+--------------------------------------+----------------------------------+
| 9fdba43e-5978-4d13-b9b0-6022140bd437 | 91.198.2.199        | 172.16.0.35      | 8ef689d1-a89d-42b2-8af4-5a700484ad58 | 210e26d7-942f-4e17-89b2-571eee87d7e4 | f8683d921d4b499eade654f5546c2875 |
+--------------------------------------+---------------------+------------------+--------------------------------------+--------------------------------------+----------------------------------+
```

Unbound floating IPs can be identified by looking at the columns "Fixed IP Address" and "Port": those fields will be blank if the IP is not bound to a port. If you don't have any floating IPs available, you can just allocate a new one by issuing:

```
~ openstack floating ip create public-network
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| created_at          | 2022-12-20T09:34:15Z                 |
| description         |                                      |
| dns_domain          | None                                 |
| dns_name            | None                                 |
| fixed_ip_address    | None                                 |
| floating_ip_address | 91.198.2.117                         |
| floating_network_id | 210e26d7-942f-4e17-89b2-571eee87d7e4 |
| id                  | ba086040-8c9e-4590-a3ca-1bfac9102aac |
| name                | 91.198.2.117                         |
| port_details        | None                                 |
| port_id             | None                                 |
| project_id          | f8683d921d4b499eade654f5546c2875     |
| qos_policy_id       | None                                 |
| revision_number     | 0                                    |
| router_id           | None                                 |
| status              | ACTIVE                               |
| subnet_id           | None                                 |
| tags                | []                                   |
| updated_at          | 2022-12-20T09:34:15Z                 |
+---------------------+--------------------------------------+
```

If you want to attach the floating IP to your load balancer, you will first have to obtain the ID of the virtual port of the load balancer. Issue the following command to do so:

```
~ openstack loadbalancer show my-lb -c vip_port_id
+-------------+--------------------------------------+
| Field       | Value                                |
+-------------+--------------------------------------+
| vip_port_id | 27ebe90b-f0a7-486f-907e-9c579c452ecf |
+-------------+--------------------------------------+
```

Copy the ID from the output. You can now bind the floating IP to your load balancer port with this command:

```
~ openstack floating ip set --port 27ebe90b-f0a7-486f-907e-9c579c452ecf ba086040-8c9e-4590-a3ca-1bfac9102aac
```

The ID given to the "--port" parameter is the one of the VIP port. The last ID is the ID of the floating IP you would like to use; in this case it was taken from the output of the "openstack floating ip create public-network" command.

This is already everything you need to do for the basic setup of a load balancer.
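Optionally, the last two steps can also be combined into a single command by letting the client look up the VIP port ID for you, and the result can be verified with a quick HTTP request against the floating IP. This is only a sketch - it assumes the HTTP listener on port 80 from above and a web server that is already answering on the pool members:

```bash
# Look up the VIP port ID and bind the floating IP in one step
~ openstack floating ip set --port "$(openstack loadbalancer show my-lb -c vip_port_id -f value)" 91.198.2.117

# Quick end-to-end check (assumes the HTTP listener on port 80 and running web servers on the members)
~ curl -I http://91.198.2.117/
```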