Helm > Error: trying to send message larger than max #oom #lfn #helm #kubernetes #casablanca

David Perez Caparros


I am starting to get errors like the one below when deploying/undeploying with Helm (Casablanca branch), on a cluster that has been running for ~70 days.

# helm undeploy dev-sniro-emulator --purge
Error: trying to send message larger than max (23353031 vs. 20971520)

# kubectl get configmap | wc -l

# kubectl --namespace=kube-system get cm | wc -l
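The configmap counts above are relevant because the 20971520 in the error is Tiller's gRPC max message size (20 MiB). In Helm v2.9.x, Tiller stores the full revision history of every release as ConfigMaps in kube-system, and with no `--history-max` set the history grows without bound until listing releases exceeds that limit. A possible cleanup, assuming the standard `OWNER=TILLER` / `STATUS` labels that Tiller v2 puts on its release ConfigMaps (verify on your cluster first), would be to prune superseded revisions:

```shell
# Inspect Tiller's release ConfigMaps, oldest first,
# to see which releases have accumulated the most revisions.
kubectl --namespace=kube-system get configmap -l OWNER=TILLER \
  --sort-by=.metadata.creationTimestamp

# Delete only the SUPERSEDED revisions of one release
# (the example release name is taken from the command above;
# the DEPLOYED revision is kept).
kubectl --namespace=kube-system delete configmap \
  -l OWNER=TILLER,NAME=dev-sniro-emulator,STATUS=SUPERSEDED
```

To cap future growth, Tiller can be redeployed with a bounded history, e.g. `helm init --upgrade --history-max 200`.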

# helm version
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}

# kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.5", GitCommit:"753b2dbc622f5cc417845f0ff8a77f539a4213ea", GitTreeState:"clean", BuildDate:"2018-11-26T14:41:50Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11+", GitVersion:"v1.11.5-rancher1", GitCommit:"44636ddf318af0483af806e255d0be4bb6a2e3d4", GitTreeState:"clean", BuildDate:"2018-12-04T04:28:34Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

# docker -v
Docker version 17.03.2-ce, build f5ec1e2

# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            3.9G     0  3.9G   0% /dev
tmpfs           799M   79M  720M  10% /run
/dev/vda1        97G   21G   77G  22% /
tmpfs           3.9G  392K  3.9G   1% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
none             97G   21G   77G  22% /var/lib/docker/aufs/mnt/168af35e85e3e4df3bf740d4e9540d7f38b3fd4d7d89ed18d5dce6c3a1ac555a
shm              64M     0   64M   0% /var/lib/docker/containers/4a98d330c0febe51e82a18cf9ff4076519e62a7817a8b36b4acd4b495791de78/shm
tmpfs           799M     0  799M   0% /run/user/1000

Did anyone encounter similar issues? Any suggestions?


David Pérez Caparrós
Senior Innovation Engineer
Swisscom (Switzerland)
