A Kubernetes controller that automatically provisions and configures HAProxy-based load balancers in Triton Data Center when Kubernetes Services of type LoadBalancer are created.
The Triton LoadBalancer Controller watches for Kubernetes Service resources of type LoadBalancer and creates corresponding load balancer instances in Triton Data Center using the CloudAPI. The controller manages the full lifecycle of these load balancers, including creation, updates, and deletion.
When a Service of type LoadBalancer is created or updated, the controller:
- Automatically provisions a load balancer instance in Triton with the appropriate configuration
- Sets up the necessary port mappings based on the Service ports
- Configures certificates for HTTPS if specified
- Sets up metrics access control if configured
- Updates the Service status with the load balancer's IP address
- Automatic Load Balancer Provisioning: Creates HAProxy-based load balancers in Triton when a LoadBalancer type Service is created in Kubernetes
- Dynamic Port Mapping: Maps Service ports to the load balancer configuration
- HTTPS Support: Integration with triton-dehydrated for certificate generation
- Metrics Endpoint: Optional metrics endpoint with IP-based access control
- Full Lifecycle Management: Handles creation, updates, and deletion of load balancers
- Kubernetes cluster v1.19+
- Access to Triton Data Center with valid credentials
- Proper RBAC permissions to watch and modify Services in the cluster
1. Clone this repository:

   ```sh
   git clone https://github.com/triton/loadbalancer-controller.git
   cd loadbalancer-controller
   ```

2. Edit the credentials in `config/controller.yaml` to include your Triton account details:

   ```yaml
   stringData:
     triton-url: "https://us-east-1.api.joyent.com" # Replace with your Triton CloudAPI endpoint
     triton-account: "" # Replace with your Triton account ID
     triton-key-id: "" # Replace with your Triton key ID (fingerprint)
     triton-key: | # MUST BE PEM FORMAT
       -----BEGIN RSA PRIVATE KEY-----
       ...
       -----END RSA PRIVATE KEY-----
   ```

   Note: If your SSH key is not in PEM format, convert it using:

   ```sh
   ssh-keygen -p -m PEM -f your_private_key_file
   ```

3. Apply the controller configuration:

   ```sh
   kubectl apply -f config/controller.yaml
   ```
To create a load balancer, simply create a Kubernetes Service of type LoadBalancer:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    cloud.tritoncompute/max_rs: "64"                              # Optional: Set maximum number of backends
    cloud.tritoncompute/certificate_name: "example.com"           # Optional: Certificate subject
    cloud.tritoncompute/metrics_acl: "10.0.0.0/8 192.168.0.0/16"  # Optional: Metrics access control
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 8080
    - name: https
      port: 443
      targetPort: 8443
  selector:
    app: my-app
```
The controller recognizes several annotations that can be used to configure the load balancer:
- `cloud.tritoncompute/max_rs`: Optional; maximum number of backends (default: 32)
- `cloud.tritoncompute/certificate_name`: Optional; comma-separated list of certificate subjects
- `cloud.tritoncompute/metrics_acl`: Optional; an IP prefix, or a comma- or space-separated list of prefixes, allowed to access the metrics endpoint
The controller automatically maps the Service ports to the load balancer configuration:
- Ports with name "http" or port 80 are configured as HTTP
- Ports with name "https" or port 443 are configured as HTTPS
- All other ports are configured as TCP
1. Build the controller binary:

   ```sh
   go build -o manager cmd/manager/main.go
   ```

2. Build the Docker image:

   ```sh
   docker build -t triton/loadbalancer-controller:latest .
   ```

3. Push the image to a registry:

   ```sh
   docker push triton/loadbalancer-controller:latest
   ```
- `/cmd/manager`: Main entry point for the controller
- `/pkg/controller`: Controller logic for reconciling Services
- `/pkg/triton`: Triton CloudAPI client implementation
- `/config`: Kubernetes manifests for deploying the controller
- `/bin`: Test and utility scripts
Run the unit tests with:
```sh
go test ./...
```
The controller supports real integration tests against a Triton cloud environment. To run these tests:
1. Set up your Triton credentials:

   ```sh
   export TRITON_TEST_INTEGRATION=true
   export TRITON_ACCOUNT=<your-account-name>
   export TRITON_KEY_ID=<your-key-id>
   export TRITON_KEY_PATH=<path-to-your-private-key>
   export TRITON_URL=<triton-api-url>
   ```

2. Run the tests with the `-tags=integration` flag:

   ```sh
   go test -tags=integration ./...
   ```
These tests create actual load balancers in your Triton environment and clean them up when the tests complete.
- Clone the repository
- Create a new branch for your feature
- Implement the feature or fix
- Add tests for your changes
- Submit a pull request
- Load balancer not being created: Verify that the Triton credentials are correct and that the controller has the necessary RBAC permissions
- Load balancer status not being updated: Check the controller logs for any errors communicating with the Triton API
- HTTPS not working: Ensure that the certificate name is correctly specified and that the triton-dehydrated service is running properly
Check the controller logs with:

```sh
kubectl logs -n triton-system -l app=triton-loadbalancer-controller
```
Before integrating with Kubernetes, you can test the Triton load balancer implementation using the provided test script:
```sh
go run bin/test-loadbalancer.go \
  --key-path=/path/to/your/private/key \
  --key-id=<your-key-id> \
  --account=<your-account-name> \
  --url=<triton-api-url> \
  --name=test-lb \
  --action=create \
  --target-port=8080
```
Available actions:
- `create` - Create a new load balancer
- `get` - Get information about an existing load balancer
- `update` - Update an existing load balancer
- `delete` - Delete a load balancer
The controller and test script support the following environment variables for configuration:
| Environment Variable | Description | Default |
|---|---|---|
| `TRITON_LB_PACKAGE` | Triton package to use for load balancer instances | `g4-highcpu-1G` |
| `TRITON_LB_IMAGE` | Triton image ID to use for load balancer instances | HAProxy image ID |
| `TRITON_PROVISION_TIMEOUT` | Timeout (in seconds) for load balancer provisioning | `300` |
| `TRITON_DELETE_TIMEOUT` | Timeout (in seconds) for load balancer deletion | `300` |
MIT License