I was able to successfully start a Keycloak server on an AWS K3s Kubernetes cluster behind an Istio Gateway and an AWS HTTPS Application Load Balancer.
I can see the Keycloak home page: https://keycloak.skycomposer.net/auth/
But when I click the Admin Console link, a blank page is shown: https://keycloak.skycomposer.net/auth/admin/master/console/
The browser's inspect tool shows that the request to http://keycloak.skycomposer.net/auth/js/keycloak.js?version=rk826 fails with the status:
(blocked:mixed-content)
From my research, the cause seems to be a redirect from HTTPS to HTTP that is not handled correctly by the Istio gateway and the AWS load balancer. Unfortunately, I couldn't find a solution for my particular environment. (One untested idea, sketched after my VirtualService below, is to force the X-Forwarded-Proto header inside the mesh.)
Here are my configuration files:
keycloak-config.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: keycloak
data:
  KEYCLOAK_USER: admin@keycloak
  KEYCLOAK_MGMT_USER: mgmt@keycloak
  JAVA_OPTS_APPEND: '-Djboss.http.port=8080'
  PROXY_ADDRESS_FORWARDING: 'true'
  KEYCLOAK_HOSTNAME: 'keycloak.skycomposer.net'
  KEYCLOAK_FRONTEND_URL: 'https://keycloak.skycomposer.net/auth'
  KEYCLOAK_LOGLEVEL: INFO
  ROOT_LOGLEVEL: INFO
  DB_VENDOR: H2
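Since PROXY_ADDRESS_FORWARDING is 'true', Keycloak should derive the scheme of the URLs it generates from the X-Forwarded-Proto header. As a sanity check (this is only a debugging sketch; the pod name and curl image are my own choices), a throwaway pod can fetch Keycloak's OpenID discovery document with the header set, to see whether the returned endpoints come back as https:

apiVersion: v1
kind: Pod
metadata:
  name: keycloak-proto-check   # hypothetical debugging pod
spec:
  restartPolicy: Never
  containers:
    - name: curl
      image: curlimages/curl:8.5.0
      args:
        - "-s"
        - "-H"
        - "X-Forwarded-Proto: https"
        - "http://keycloak.default.svc.cluster.local/auth/realms/master/.well-known/openid-configuration"

If the endpoints in the response still start with http://, the problem is on the Keycloak side; if they are https://, the header is being lost somewhere between the ALB and the pod.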
keycloak-deployment.yaml:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: keycloak
  labels:
    app: keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
      annotations:
        sidecar.istio.io/rewriteAppHTTPProbers: "true"
    spec:
      containers:
        - name: keycloak
          image: jboss/keycloak:13.0.1
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
              hostPort: 8080
          volumeMounts:
            - name: keycloak-data
              mountPath: /opt/jboss/keycloak/standalone/data
          env:
            - name: KEYCLOAK_USER
              valueFrom:
                configMapKeyRef:
                  name: keycloak
                  key: KEYCLOAK_USER
            - name: KEYCLOAK_MGMT_USER
              valueFrom:
                configMapKeyRef:
                  name: keycloak
                  key: KEYCLOAK_MGMT_USER
            - name: JAVA_OPTS_APPEND
              valueFrom:
                configMapKeyRef:
                  name: keycloak
                  key: JAVA_OPTS_APPEND
            - name: DB_VENDOR
              valueFrom:
                configMapKeyRef:
                  name: keycloak
                  key: DB_VENDOR
            - name: PROXY_ADDRESS_FORWARDING
              valueFrom:
                configMapKeyRef:
                  name: keycloak
                  key: PROXY_ADDRESS_FORWARDING
            - name: KEYCLOAK_HOSTNAME
              valueFrom:
                configMapKeyRef:
                  name: keycloak
                  key: KEYCLOAK_HOSTNAME
            - name: KEYCLOAK_FRONTEND_URL
              valueFrom:
                configMapKeyRef:
                  name: keycloak
                  key: KEYCLOAK_FRONTEND_URL
            - name: KEYCLOAK_LOGLEVEL
              valueFrom:
                configMapKeyRef:
                  name: keycloak
                  key: KEYCLOAK_LOGLEVEL
            - name: ROOT_LOGLEVEL
              valueFrom:
                configMapKeyRef:
                  name: keycloak
                  key: ROOT_LOGLEVEL
            - name: KEYCLOAK_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: keycloak
                  key: KEYCLOAK_PASSWORD
            - name: KEYCLOAK_MGMT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: keycloak
                  key: KEYCLOAK_MGMT_PASSWORD
      volumes:
        - name: keycloak-data
          persistentVolumeClaim:
            claimName: keycloak-pvc
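For completeness, the Deployment above references a Secret named keycloak and a PersistentVolumeClaim named keycloak-pvc. They look roughly like this (the password values are redacted and the storage size is an assumption):

apiVersion: v1
kind: Secret
metadata:
  name: keycloak
type: Opaque
stringData:
  KEYCLOAK_PASSWORD: <redacted>
  KEYCLOAK_MGMT_PASSWORD: <redacted>
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: keycloak-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi   # assumption: actual size may differ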
keycloak-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: keycloak
spec:
  ports:
    - protocol: TCP
      name: http
      port: 80
      targetPort: 8080
  selector:
    app: keycloak
istio-gateway.yaml:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: istio-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "keycloak.skycomposer.net"
istio-virtualservice.yaml:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: keycloak
spec:
  hosts:
    - keycloak.skycomposer.net
  gateways:
    - istio-gateway
  http:
    - match:
        - uri:
            prefix: /
      route:
        - destination:
            host: keycloak.default.svc.cluster.local
            port:
              number: 80
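This is the untested variant mentioned above: since Keycloak runs with PROXY_ADDRESS_FORWARDING=true, forcing X-Forwarded-Proto to https on the route should make it generate https:// URLs even if the ALB's original header is lost on the way. The headers stanza is the only change from my VirtualService:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: keycloak
spec:
  hosts:
    - keycloak.skycomposer.net
  gateways:
    - istio-gateway
  http:
    - match:
        - uri:
            prefix: /
      headers:
        request:
          set:
            X-Forwarded-Proto: https   # assumption: the ALB's header does not survive to the pod
      route:
        - destination:
            host: keycloak.default.svc.cluster.local
            port:
              number: 80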
I installed Istio 1.9.1 with istioctl:
istioctl install \
  --set meshConfig.accessLogFile=/dev/stdout \
  --skip-confirmation
I also labelled the default namespace for Istio injection, so all pods in the default namespace run an Istio sidecar container:
kubectl label namespace default istio-injection=enabled
NAME                                         READY   STATUS    RESTARTS   AGE
whoami-6c4757bbb5-9zkbl                      2/2     Running   0          13m
notification-microservice-5dfcf96b95-ll8lm   2/2     Running   0          13m
customermgmt-6b48586868-ddlnw                2/2     Running   0          13m
usermgmt-c5b65964-df2vc                      2/2     Running   0          13m
keycloak-d48f9bbbf-tsm5h                     2/2     Running   0          13m
Here is also the Terraform configuration of the AWS load balancer:
resource "aws_lb" "mtc_lb" {
name = "mtc-loadbalancer"
subnets = var.public_subnets
security_groups = [var.public_sg]
idle_timeout = 400
}
resource "aws_lb_target_group" "mtc_tg" {
name = "mtc-lb-tg-${substr(uuid(), 0, 3)}"
port = var.tg_port
protocol = var.tg_protocol
vpc_id = var.vpc_id
lifecycle {
create_before_destroy = true
ignore_changes = [name]
}
health_check {
healthy_threshold = var.elb_healthy_threshold
unhealthy_threshold = var.elb_unhealthy_threshold
timeout = var.elb_timeout
interval = var.elb_interval
}
}
resource "aws_lb_listener" "mtc_lb_listener_http" {
load_balancer_arn = aws_lb.mtc_lb.arn
port = 80
protocol = "HTTP"
default_action {
type = "redirect"
redirect {
port = "443"
protocol = "HTTPS"
status_code = "HTTP_301"
}
}
}
resource "aws_lb_listener" "mtc_lb_listener" {
load_balancer_arn = aws_lb.mtc_lb.arn
port = 443
protocol = "HTTPS"
depends_on = [aws_lb_target_group.mtc_tg]
certificate_arn = var.certificate_arn
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.mtc_tg.arn
}
}
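For context on how traffic reaches the cluster: the HTTPS listener forwards decrypted traffic as plain HTTP to the target group, which points at the K3s node on the Istio ingressgateway's HTTP NodePort. The attachment looks roughly like this (the instance reference and port 30080 are assumptions about my setup):

resource "aws_lb_target_group_attachment" "mtc_tg_attach" {
  target_group_arn = aws_lb_target_group.mtc_tg.arn
  target_id        = aws_instance.k3s_node.id # hypothetical instance resource
  port             = 30080                    # assumed ingressgateway HTTP NodePort
}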