I have a minikube cluster running on the Docker driver with Harbor deployed via Helm. It is a local setup; I used openssl to generate a certificate and key so that Harbor can be accessed over TLS. The problem I'm encountering is that I can't access Harbor at all.
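For context, the certificate, key, and the registry-tls secret were created roughly like this (a sketch of the approach; my exact openssl options may have differed, and -addext requires OpenSSL 1.1.1+):

# Self-signed cert/key for k8s.local (SAN and validity are illustrative)
openssl req -x509 -nodes -newkey rsa:4096 \
  -keyout registry.k8s.local.key \
  -out registry.k8s.local.crt \
  -days 365 \
  -subj "/CN=k8s.local" \
  -addext "subjectAltName=DNS:k8s.local"

# Load the pair into the namespace Harbor runs in as a TLS secret
kubectl create secret tls registry-tls \
  --cert=registry.k8s.local.crt \
  --key=registry.k8s.local.key \
  -n harbor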
Here is my values.yaml file:
# Global settings
global:
  persistence:
    enabled: true
harborAdminPassword: Harbor12345
hostname: k8s.local
http:
  port: 85
https:
  port: 443

# Ingress settings
ingress:
  enabled: true
  annotations: {}
  hosts:
    - host: k8s.local
      paths: ["/"]
  tls:
    - secretName: registry-tls
      hosts:
        - k8s.local

# TLS settings
tls:
  secretName: registry-tls
  existingSecret: true

# Certificate and private key settings
certificates:
  secretName: registry-tls
  existingSecret: true
  secretKey: registry.k8s.local.key
  secretCrt: registry.k8s.local.crt

# External URL for accessing Harbor
externalURL: https://k8s.local

# Harbor authentication mode
authMode: db_auth

# Harbor database settings
database:
  type: internal
  password: Harbor12345

# Harbor storage settings
storage:
  type: filesystem
  filesystem:
    rootdirectory: /data

# Harbor registry settings
registry:
  secretName: registry-secret

# Harbor Redis settings
redis:
  enabled: true
  persistence:
    enabled: true

# Harbor Trivy settings
trivy:
  enabled: true

# Harbor portal settings
portal:
  secretName: portal-secret

# Harbor jobservice settings
jobservice:
  secretName: jobservice-secret

exporter:
  secretName: exporter-secret
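For completeness, the chart was installed with something like this (a sketch; it assumes the official goharbor/harbor chart and the harbor namespace used in the commands further down):

helm repo add harbor https://helm.goharbor.io
helm repo update
helm upgrade --install harbor harbor/harbor \
  -n harbor --create-namespace \
  -f values.yaml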
curl https://k8s.local
curl: (7) Failed to connect to k8s.local port 443 after 1 ms: Connection refused
Output of kubectl get po -n kube-system:

NAME                               READY   STATUS    RESTARTS       AGE
coredns-5dd5756b68-xg76g           1/1     Running   9 (6d4h ago)   10d
etcd-minikube                      1/1     Running   9 (6d4h ago)   10d
kube-apiserver-minikube            1/1     Running   9 (6d4h ago)   10d
kube-controller-manager-minikube   1/1     Running   9 (6d4h ago)   10d
kube-proxy-qtcc6                   1/1     Running   9 (6d4h ago)   10d
kube-scheduler-minikube            1/1     Running   9 (6d4h ago)   10d
registry-c2fjr                     1/1     Running   7 (6d4h ago)   8d
registry-proxy-bdv59               1/1     Running   7 (6d4h ago)   8d
storage-provisioner                1/1     Running   18 (14h ago)   10d
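Since curl gets "connection refused" on 443, one thing that stands out is that the pod list above contains no ingress controller. On recent minikube versions the NGINX ingress controller runs in the ingress-nginx namespace and has to be enabled as an addon, so something like this should confirm whether it is present (standard minikube/kubectl commands, shown as a sketch):

minikube addons list | grep ingress
minikube addons enable ingress          # only if it is not enabled yet
kubectl get pods -n ingress-nginx
# With the Docker driver (especially on macOS/Windows), the ingress ports
# may also need to be exposed to the host:
minikube tunnel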
I checked DNS resolution and the firewall configuration as well; no problem there. I am also looking for a way to check the logs of the proxy.
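These are the log commands I know of for the proxies involved (a sketch; the label selectors assume the standard minikube ingress addon and the default kube-proxy DaemonSet):

# NGINX ingress controller logs (the proxy that terminates TLS for the Ingress)
kubectl logs -n ingress-nginx -l app.kubernetes.io/component=controller --tail=100

# kube-proxy logs (handles Service virtual IPs, not Ingress routing)
kubectl logs -n kube-system -l k8s-app=kube-proxy --tail=100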
kubectl get services -n harbor

NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
harbor-core         ClusterIP   10.102.243.216   <none>        80/TCP              24h
harbor-database     ClusterIP   10.97.101.141    <none>        5432/TCP            24h
harbor-jobservice   ClusterIP   10.100.245.195   <none>        80/TCP              24h
harbor-portal       ClusterIP   10.97.185.234    <none>        80/TCP              24h
harbor-redis        ClusterIP   10.103.50.52     <none>        6379/TCP            24h
harbor-registry     ClusterIP   10.98.42.132     <none>        5000/TCP,8080/TCP   24h
harbor-trivy        ClusterIP   10.106.210.64    <none>        8080/TCP            24h
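To rule the ingress layer in or out, the ClusterIP services above can be tested directly with a port-forward, bypassing the Ingress entirely (a sketch; ports taken from the service list):

# Forward the portal and core services to localhost and hit them without the ingress
kubectl port-forward -n harbor svc/harbor-portal 8081:80 &
kubectl port-forward -n harbor svc/harbor-core 8082:80 &
curl -i http://localhost:8081/    # portal UI
curl -i http://localhost:8082/    # core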
I wonder if the problem is with kube-proxy: do I need to add a rule?
kubectl get configmap kube-proxy -n kube-system -o yaml
apiVersion: v1
data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    bindAddressHardFail: false
    clientConnection:
      acceptContentTypes: ""
      burst: 0
      contentType: ""
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 0
    clusterCIDR: 10.244.0.0/16
    configSyncPeriod: 0s
    conntrack:
      maxPerCore: 0
      min: null
      tcpCloseWaitTimeout: 0s
      tcpEstablishedTimeout: 0s
    detectLocal:
      bridgeInterface: ""
      interfaceNamePrefix: ""
    detectLocalMode: ""
    enableProfiling: false
    healthzBindAddress: ""
    hostnameOverride: ""
    iptables:
      localhostNodePorts: null
      masqueradeAll: false
      masqueradeBit: null
      minSyncPeriod: 0s
      syncPeriod: 0s
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      strictARP: false
      syncPeriod: 0s
      tcpFinTimeout: 0s
      tcpTimeout: 0s
      udpTimeout: 0s
    kind: KubeProxyConfiguration
    logging:
      flushFrequency: 0
      options:
        json:
          infoBufferSize: "0"
      verbosity: 0
    metricsBindAddress: 0.0.0.0:10249
    mode: ""
    nodePortAddresses: null
    oomScoreAdj: null
    portRange: ""
    showHiddenMetricsForVersion: ""
    winkernel:
      enableDSR: false
      forwardHealthCheckVip: false
      networkName: ""
      rootHnsEndpointName: ""
      sourceVip: ""
  kubeconfig.conf: |-
    apiVersion: v1
    kind: Config
    clusters:
    - cluster:
        certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        server: https://control-plane.minikube.internal:8443
      name: default
    contexts:
    - context:
        cluster: default
        namespace: default
        user: default
      name: default
    current-context: default
    users:
    - name: default
      user:
        tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
kind: ConfigMap
metadata:
  creationTimestamp: "2024-04-19T22:05:52Z"
  labels:
    app: kube-proxy
  name: kube-proxy
  namespace: kube-system
  resourceVersion: "272"
  uid: a44631f7-8dfb-47bf-bf00-45f11098ff53
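Regarding the kube-proxy question above: as I understand it, kube-proxy only programs the Service virtual IPs, so rather than adding rules to it, Service reachability can be checked from inside the cluster with a throwaway pod (a sketch; the DNS name comes from the service list above):

kubectl run curl-test --rm -it --image=curlimages/curl --restart=Never -- \
  sh -c 'curl -s -o /dev/null -w "%{http_code}\n" http://harbor-core.harbor.svc.cluster.local/'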
I think the problem is in the ingress rule.
There isn't a specific rule defined for accessing the Harbor registry directly. Should I add a rule to access the registry directly?
kubectl get ingress harbor-ingress -n harbor -o yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/proxy-body-size: "0"
    ingress.kubernetes.io/ssl-redirect: "true"
    meta.helm.sh/release-name: harbor
    meta.helm.sh/release-namespace: harbor
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
  creationTimestamp: "2024-04-29T20:38:14Z"
  generation: 1
  labels:
    app: harbor
    app.kubernetes.io/managed-by: Helm
    chart: harbor
    heritage: Helm
    release: harbor
  name: harbor-ingress
  namespace: harbor
  resourceVersion: "57041"
  uid: 92a37acc-9138-415f-9c40-6d6bfb1d366d
spec:
  rules:
  - host: core.harbor.domain
    http:
      paths:
      - backend:
          service:
            name: harbor-core
            port:
              number: 80
        path: /api/
        pathType: Prefix
      - backend:
          service:
            name: harbor-core
            port:
              number: 80
        path: /service/
        pathType: Prefix
      - backend:
          service:
            name: harbor-core
            port:
              number: 80
        path: /v2/
        pathType: Prefix
      - backend:
          service:
            name: harbor-core
            port:
              number: 80
        path: /chartrepo/
        pathType: Prefix
      - backend:
          service:
            name: harbor-core
            port:
              number: 80
        path: /c/
        pathType: Prefix
      - backend:
          service:
            name: harbor-portal
            port:
              number: 80
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - core.harbor.domain
    secretName: harbor-ingress
status:
  loadBalancer:
    ingress:
    - ip: 192.168.49.2
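One thing I notice in the output above: the rules are for core.harbor.domain (the chart's default hostname) and the TLS secret is harbor-ingress, not k8s.local / registry-tls, so the hostname and tls keys in my values file apparently were not picked up. If I read the official goharbor/harbor chart correctly, the equivalent settings live under an expose block, roughly like this (a sketch only; field names may differ between chart versions):

expose:
  type: ingress
  tls:
    enabled: true
    certSource: secret
    secret:
      secretName: registry-tls
  ingress:
    className: nginx
    hosts:
      core: k8s.local

externalURL: https://k8s.local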