Kubernetes in-tree cloud providers

This page describes additional guidelines for setting up some of the in-tree cloud providers that Kubernetes supports. For providers not listed on this page, check out the official Kubernetes documentation at https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/

AWS Cloud Provider

cloud:
  provider: aws

All nodes added to the cluster must be able to communicate with the EC2 API so that they can create and remove resources. You can enable this interaction by attaching an IAM role to the EC2 instances.

Example IAM policy for that role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ec2:*"],
      "Resource": ["*"]
    },
    {
      "Effect": "Allow",
      "Action": ["elasticloadbalancing:*"],
      "Resource": ["*"]
    }
  ]
}
Configuring ClusterID

The AWS cloud provider requires a ClusterID tag on the following resources in a cluster:

  • EC2 instances - all EC2 instances that belong to the cluster
  • security groups - the security group used for the cluster

Tag syntax:

  • key = kubernetes.io/cluster/<CLUSTER_ID>
  • value = shared

Note: EC2 instances created by an autoscaling launch configuration should instead be tagged with the value owned.
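
The tag can be applied from the command line with the AWS CLI. This is a sketch only: the instance ID, security group ID, and cluster name below are placeholders, and the command assumes credentials with permission to call ec2:CreateTags.

```shell
# Apply the ClusterID tag to the cluster's EC2 instances and security group.
# i-0123456789abcdef0, sg-0123456789abcdef0, and my-cluster are placeholders.
aws ec2 create-tags \
  --resources i-0123456789abcdef0 sg-0123456789abcdef0 \
  --tags Key=kubernetes.io/cluster/my-cluster,Value=shared
```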

Azure Cloud Provider

cloud:
  provider: azure
  config: ./azure-cloud-config.json

Kubernetes interacts with Azure through a cloud configuration file. You can create the Azure cloud configuration file by specifying the following details in it:

{
    "tenantId": "0000000-0000-0000-0000-000000000000",
    "aadClientId": "0000000-0000-0000-0000-000000000000",
    "aadClientSecret": "0000000-0000-0000-0000-000000000000",
    "subscriptionId": "0000000-0000-0000-0000-000000000000",
    "resourceGroup": "<name>",
    "location": "eastus",
    "subnetName": "<name>",
    "securityGroupName": "<name>",
    "vnetName": "<name>",
    "vnetResourceGroup": "",
    "primaryAvailabilitySetName": "<name>",
    "useInstanceMetadata": true
}
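
A malformed configuration file is a common source of cloud-provider startup failures, so it is worth verifying that the file parses as JSON before bringing the cluster up. A minimal sketch, assuming python3 is available and using a shortened set of placeholder values:

```shell
# Write a minimal cloud config (placeholder values only) and check that
# it is well-formed JSON before handing it to Kubernetes.
cat > azure-cloud-config.json <<'EOF'
{
    "tenantId": "00000000-0000-0000-0000-000000000000",
    "aadClientId": "00000000-0000-0000-0000-000000000000",
    "aadClientSecret": "00000000-0000-0000-0000-000000000000",
    "subscriptionId": "00000000-0000-0000-0000-000000000000",
    "resourceGroup": "my-resource-group",
    "location": "eastus",
    "useInstanceMetadata": true
}
EOF
# json.tool exits non-zero on invalid JSON, so the message only prints on success.
python3 -m json.tool azure-cloud-config.json > /dev/null \
  && echo "azure-cloud-config.json: valid JSON"
```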

For more details see the Azure cloud provider documentation.

OpenStack Cloud Provider

cloud:
  provider: openstack
  config: ./openstack-cloud-config.ini

Kubernetes interacts with OpenStack through a cloud configuration file, which provides Kubernetes with credentials and the location of the OpenStack auth endpoint. You can create a cloud.conf file by specifying the following details in it.

Example OpenStack Cloud Configuration

This is an example of a typical configuration covering the values that most often need to be set. It points the provider at the OpenStack cloud's Keystone endpoint and provides the details needed to authenticate with it:

[Global]
username=user
password=pass
auth-url=https://<keystone_ip>/identity/v3
tenant-id=c869168a828847f39f7f06edd7305637
domain-id=2a73b8f597c04551a0fdc8e95544be8a
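
Since the provider reads cloud.conf at startup, a quick parse check catches formatting mistakes early. A sketch, assuming python3 is available; the endpoint IP 192.0.2.10 stands in for the real Keystone address:

```shell
# Write the example cloud.conf and verify it parses as an INI file
# with the expected keys in the [Global] section.
cat > cloud.conf <<'EOF'
[Global]
username=user
password=pass
auth-url=https://192.0.2.10/identity/v3
tenant-id=c869168a828847f39f7f06edd7305637
domain-id=2a73b8f597c04551a0fdc8e95544be8a
EOF
python3 - <<'EOF'
import configparser

cfg = configparser.ConfigParser()
cfg.read("cloud.conf")
assert cfg.has_section("Global")
for key in ("username", "password", "auth-url", "tenant-id", "domain-id"):
    assert cfg.has_option("Global", key), f"missing {key}"
print("cloud.conf: OK")
EOF
```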

For more details see the Kubernetes cloud.conf documentation.

vSphere Cloud Provider

cloud:
  provider: vsphere
  config: ./vsphere.conf

Prerequisites

  • All node VMs must be placed in a vSphere VM folder. Create a VM folder and move the Kubernetes node VMs into it.
  • The disk UUID on the node VMs must be enabled: the disk.EnableUUID value must be set to True. This step is necessary so that the VMDK always presents a consistent UUID to the VM, allowing the disk to be mounted properly. For each of the virtual machine nodes that will participate in the cluster, follow the steps below using govc:

    • Find the node VM paths: govc ls /datacenter/vm/<vm-folder-name>
    • Set disk.EnableUUID to true for all VMs: govc vm.change -e="disk.enableUUID=1" -vm='VM Path'

      Note: If the Kubernetes node VMs are created from a template VM, then disk.EnableUUID=1 can be set on the template VM. VMs cloned from this template will automatically inherit this property.
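
The two govc steps can be combined into a single loop. A sketch under assumed values: the datacenter and folder names are placeholders, and govc is expected to read the vCenter endpoint and credentials from the GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD environment variables.

```shell
# Enable disk.EnableUUID on every VM under the cluster's VM folder.
# /datacenter/vm/kubernetes is a placeholder path; adjust to your inventory.
for vm in $(govc ls /datacenter/vm/kubernetes); do
  govc vm.change -e="disk.enableUUID=1" -vm="$vm"
done
```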

Example vSphere Cloud Configuration

[Global]
user = "Administrator1@vsphere.local"
password = "password"
port = "443"
insecure-flag = "1"
datacenters = "us-east"

[VirtualCenter "1.1.1.1"]

[Workspace]
server = "1.1.1.1"
datacenter = "us-east"
default-datastore="sharedVmfs-0"
folder = "kubernetes"

[Disk]
scsicontrollertype = pvscsi

[Network]
public-network = "VM Network"

For more details see the vSphere cloud.conf documentation.
