This repository has been archived by the owner on May 16, 2023. It is now read-only.

Elasticsearch 5.6.16 & Kubernetes 1.13.6 #162

Closed
zifeo opened this issue Jun 15, 2019 · 10 comments

Comments

@zifeo

zifeo commented Jun 15, 2019

Chart version: 7.1.1 (but image 5.6.16, major version 5)

Kubernetes version: 1.13.6

Kubernetes provider: GKE (Google Kubernetes Engine)

Helm Version: 2.14.1

helm get release output -

Describe the bug:

Upgrading from 1.12.x to 1.13.6 triggered an error caused by "can not run elasticsearch as root".

Steps to reproduce:

  1. Run the chart with the correct versions

Expected behavior:

Stable and usable cluster.

Provide logs and/or server output (if relevant): -

Any additional context:

The issue can be patched by adding runAsUser: 1000 to the pod securityContext here (see the sketch below).
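A minimal values override along these lines should apply the workaround, assuming the chart exposes a podSecurityContext block (the exact key name may differ between chart versions):

podSecurityContext:
  fsGroup: 1000
  runAsUser: 1000

With that override in place, the Elasticsearch container starts as UID 1000 instead of root, which avoids the "can not run elasticsearch as root" bootstrap failure.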

@Crazybus
Contributor

Thanks for opening this. 1.13 just came out on GKE, so I'm going to add it to the automated testing suite to see if there is anything else that breaks.

@Crazybus
Contributor

Could you give me more information about your GKE setup? I wasn't able to reproduce this on a 1.13 cluster.

I'm looking for the output of:

gcloud container clusters describe helm-113-26fe7570fb --zone=us-central1-a

Here is the output from the cluster that I tested with:

addonsConfig:
  kubernetesDashboard:
    disabled: true
  networkPolicyConfig:
    disabled: true
clusterIpv4Cidr: 10.52.0.0/14
createTime: '2019-06-18T17:28:48+00:00'
currentMasterVersion: 1.13.6-gke.6
currentNodeCount: 4
currentNodeVersion: 1.13.6-gke.6
defaultMaxPodsConstraint:
  maxPodsPerNode: '110'
description: Helm testing kubernetes cluster
endpoint: removed
initialClusterVersion: 1.13.6-gke.6
initialNodeCount: 1
instanceGroupUrls:
- https://www.googleapis.com/compute/v1/projects/elastic-ci-prod/zones/us-central1-b/instanceGroupManagers/gke-helm-113-26fe7570fb-default-pool-f30f02a9-grp
- https://www.googleapis.com/compute/v1/projects/elastic-ci-prod/zones/us-central1-f/instanceGroupManagers/gke-helm-113-26fe7570fb-default-pool-c64f999e-grp
- https://www.googleapis.com/compute/v1/projects/elastic-ci-prod/zones/us-central1-c/instanceGroupManagers/gke-helm-113-26fe7570fb-default-pool-12415962-grp
- https://www.googleapis.com/compute/v1/projects/elastic-ci-prod/zones/us-central1-a/instanceGroupManagers/gke-helm-113-26fe7570fb-default-pool-42079064-grp
labelFingerprint: a9dc16a7
legacyAbac: {}
location: us-central1-a
locations:
- us-central1-b
- us-central1-f
- us-central1-c
- us-central1-a
loggingService: logging.googleapis.com
masterAuth:
  clusterCaCertificate: removed
monitoringService: monitoring.googleapis.com
name: helm-113-26fe7570fb
network: helm-charts-k8s
networkConfig:
  network: projects/elastic-ci-prod/global/networks/helm-charts-k8s
  subnetwork: projects/elastic-ci-prod/regions/us-central1/subnetworks/helm-charts-k8s
nodeConfig:
  diskSizeGb: 100
  diskType: pd-standard
  imageType: COS
  machineType: n1-standard-8
  metadata:
    disable-legacy-endpoints: 'true'
  oauthScopes:
  - https://www.googleapis.com/auth/logging.write
  - https://www.googleapis.com/auth/monitoring
  serviceAccount: default
nodeIpv4CidrSize: 24
nodePools:
- config:
    diskSizeGb: 100
    diskType: pd-standard
    imageType: COS
    machineType: n1-standard-8
    metadata:
      disable-legacy-endpoints: 'true'
    oauthScopes:
    - https://www.googleapis.com/auth/logging.write
    - https://www.googleapis.com/auth/monitoring
    serviceAccount: default
  initialNodeCount: 1
  instanceGroupUrls:
  - https://www.googleapis.com/compute/v1/projects/elastic-ci-prod/zones/us-central1-b/instanceGroupManagers/gke-helm-113-26fe7570fb-default-pool-f30f02a9-grp
  - https://www.googleapis.com/compute/v1/projects/elastic-ci-prod/zones/us-central1-f/instanceGroupManagers/gke-helm-113-26fe7570fb-default-pool-c64f999e-grp
  - https://www.googleapis.com/compute/v1/projects/elastic-ci-prod/zones/us-central1-c/instanceGroupManagers/gke-helm-113-26fe7570fb-default-pool-12415962-grp
  - https://www.googleapis.com/compute/v1/projects/elastic-ci-prod/zones/us-central1-a/instanceGroupManagers/gke-helm-113-26fe7570fb-default-pool-42079064-grp
  management: {}
  name: default-pool
  podIpv4CidrSize: 24
  selfLink: https://container.googleapis.com/v1/projects/elastic-ci-prod/zones/us-central1-a/clusters/helm-113-26fe7570fb/nodePools/default-pool
  status: RUNNING
  version: 1.13.6-gke.6
selfLink: https://container.googleapis.com/v1/projects/elastic-ci-prod/zones/us-central1-a/clusters/helm-113-26fe7570fb
servicesIpv4Cidr: 10.55.240.0/20
status: RUNNING
subnetwork: helm-charts-k8s
zone: us-central1-a

@zifeo
Author

zifeo commented Jun 18, 2019

@Crazybus Here is the one dev config where I was able to reproduce the issue.

addonsConfig:
  httpLoadBalancing:
    disabled: true
  kubernetesDashboard:
    disabled: true
  networkPolicyConfig: {}
clusterIpv4Cidr: 10.20.0.0/14
createTime: [redacted]
currentMasterVersion: 1.13.6-gke.6
currentNodeCount: 5
currentNodeVersion: 1.13.6-gke.6
defaultMaxPodsConstraint:
  maxPodsPerNode: '110'
endpoint: [redacted]
initialClusterVersion: 1.11.6-gke.2
instanceGroupUrls: [redacted]
ipAllocationPolicy:
  clusterIpv4Cidr: 10.20.0.0/14
  clusterIpv4CidrBlock: 10.20.0.0/14
  clusterSecondaryRangeName: [redacted]
  servicesIpv4Cidr: 10.24.0.0/20
  servicesIpv4CidrBlock: 10.24.0.0/20
  servicesSecondaryRangeName: [redacted]
  useIpAliases: true
labelFingerprint: [redacted]
legacyAbac: {}
location: [redacted]
locations: [redacted]
loggingService: none
maintenancePolicy:
  window:
    dailyMaintenanceWindow:
      duration: PT4H0M0S
      startTime: 04:00
masterAuth:
  clusterCaCertificate: [redacted]
masterAuthorizedNetworksConfig: {}
monitoringService: none
name: [redacted]
network: default
networkConfig:
  network: [redacted]
  subnetwork: [redacted]
networkPolicy:
  enabled: true
  provider: CALICO
nodeConfig:
  diskSizeGb: [redacted]
  diskType: [redacted]
  imageType: COS
  machineType: [redacted]
  metadata: [redacted]
  oauthScopes: [redacted]
  serviceAccount: [redacted]
nodeIpv4CidrSize: 24
nodePools:
- autoscaling: {}
  config:
    diskSizeGb: [redacted]
    diskType: [redacted]
    imageType: COS
    machineType: [redacted]
    metadata: [redacted]
    oauthScopes: [redacted]
    serviceAccount: [redacted]
  initialNodeCount: 3
  instanceGroupUrls: [redacted]
  management:
    autoRepair: true
  maxPodsConstraint:
    maxPodsPerNode: '110'
  name: [redacted]
  podIpv4CidrSize: 24
  selfLink: [redacted]
  status: RUNNING
  version: 1.13.6-gke.6
selfLink: [redacted]
servicesIpv4Cidr: 10.24.0.0/20
status: RUNNING
subnetwork: default
zone: [redacted]

@Crazybus
Contributor

I still can't reproduce it; however, the fix that you said worked is already being worked on in #171.

@Crazybus
Contributor

Crazybus commented Jul 5, 2019

#171 has been merged. Can you try with the latest master to see if the issue is fixed for you?
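For anyone wanting to verify the fix before a new chart release is cut, one rough way is to install the chart from a local checkout of this repository (the release name, values, and the elasticsearch/ chart directory path below are assumptions; adjust to your setup):

git clone <this repository> helm-charts
helm upgrade --install elasticsearch ./helm-charts/elasticsearch --set imageTag=5.6.16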

@zifeo
Author

zifeo commented Jul 7, 2019

Seems good, thanks!

@zifeo zifeo closed this as completed Jul 7, 2019
@dcvtruong

dcvtruong commented Sep 14, 2020

@zifeo Where did you get the elasticsearch 5.6.16 helm chart? I couldn't find the 5.6.x chart from elastic.

@Crazybus Could someone point me to the elasticsearch 5.6.x helm chart?

@zifeo
Author

zifeo commented Sep 15, 2020

@dcvtruong This is not a 5.6.16 Helm chart; 5.6.16 is the version of the ES Docker image.
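In other words, the setup reported above uses the 7.1.1 chart while pinning an older Docker image. As a rough values sketch (key names as I understand this chart's values.yaml; double-check against the chart version you use):

imageTag: 5.6.16
esMajorVersion: 5

The chart version follows the chart's own release cycle, while imageTag selects which Elasticsearch image actually gets deployed.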

@jacquesvdm7

Still fails for me:

  • helm upgrade --install logstash-container .sub/ourporject/logstash-container/charts --atomic --kubeconfig=/kubeconfigs/ourcluster-kubeconfig/kubeconfig.yaml --namespace=ournamespace --set image.repository=harbor.repo/ourporject/ourporject_logstash-container --set image.tag=dev-90-4fda5398752 --values .sub/ourporject/logstash-container/charts/values.yaml --values .sub/ourporject/logstash-container/charts/values-dev.yaml --set ingress.fqdn=logstash-container.ourporject-ourcluster --set buildID=dev-90-4fda5398752
  • sub_build_tools kube watch-deployment --namespace=ourproject-ourporject-dev --kube_config_file=/kubeconfigs/ourcluster-kubeconfig/kubeconfig.yaml --deployment_name=logstash-container-docker

panic: interface conversion: interface {} is string, not map[string]interface {}
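That panic is raised by Helm itself rather than by the chart; it often indicates that the same values key is a map in one source and a plain string in another. A hypothetical illustration of such a mismatch (keys invented for the example):

# values.yaml declares image as a scalar string
image: harbor.repo/ourproject/logstash:latest

# while the install command addresses it as a map, e.g.
#   --set image.repository=... --set image.tag=...
# which Helm may fail to merge with
# "interface conversion: interface {} is string, not map[string]interface {}"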

@dcvtruong

dcvtruong commented Sep 15, 2020

@zifeo Looks like you were able to install the ES 5.6.16 image (docker.elastic.co) via the Helm chart on K8s 1.13.6.

Chart version: 7.1.1 (but image 5.6.16, major version 5)

Kubernetes version: 1.13.6

We have a couple of ES 5.4.x clusters running in VMs that need to be containerized and deployed to a K8s cluster.
