Sideloaded image fails to unpack in a deployment or a run #4029

@halfer


I may have exhausted my options asking for help on the web, so I am converting my issue to a bug report. Interestingly, the core error message hardly appears in web searches, which seems odd to me, since I think I am doing something very trivial.

I am new to Kubernetes and MicroK8s. I selected MicroK8s as I believed it would simplify some basics (and, in the case of the Kubernetes dashboard, I think it did). I have a cluster of three 16 GB Tiny PCs on a LAN: all are running Ubuntu Server + MicroK8s, everything looks healthy, and HA mode is automatically on.

Here is my MicroK8s version: v1.26.4, revision 5219.

I created a tarball of a local Docker image, thus:

docker save k8s-workload > k8s-workload.docker.tar
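
To sanity-check the tarball before importing it, I could list its contents; as far as I know, a docker save archive should contain a manifest.json, a repositories file, and the layer blobs (a quick check, assuming GNU tar):

# Peek inside the archive; expect manifest.json, repositories, and layer blobs
tar -tf k8s-workload.docker.tar | head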

I then sideloaded it onto all three nodes (the import completed on every node without error):

microk8s images import < k8s-workload.docker.tar
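
To confirm the import actually registered with containerd on each node, I believe I could list the images containerd knows about, using the ctr bundled with MicroK8s (a sketch; the grep pattern is just my image name):

# List containerd's image store on this node and filter for my image
microk8s ctr images ls | grep k8s-workload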

This image works in Docker on the leader node, so it seems to be valid (whether it is an OCI image suitable for MicroK8s I cannot say, but if it had a problem, I would expect images import to have reported it to me).

After much digging, I get errors of this type (with a deployment):

Failed to pull image "k8s-workload": rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/k8s-workload:latest": failed to unpack image on snapshotter overlayfs: unexpected media type text/html for sha256:e823...45c8: not found

Or if I try a run:

Error: failed to create containerd container: error unpacking image: unexpected media type text/html for sha256:1f2c...753e1: not found
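
If it helps diagnosis, my understanding is that the digest from the error can be fed back to containerd to see what the stored blob actually contains; if it prints HTML, something stored an error page instead of a layer (a sketch; the full digest comes from the error message above, which I have elided here):

# Dump the stored blob for the digest quoted in the error
microk8s ctr content get sha256:<full-digest-from-error> | head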

Here is my status:

root@arran:/home/myuser# microk8s status
microk8s is running
high-availability: yes
  datastore master nodes: 192.168.50.251:19001 192.168.50.74:19001 192.168.50.135:19001
  datastore standby nodes: none
addons:
  enabled:
    dashboard            # (core) The Kubernetes dashboard
    dns                  # (core) CoreDNS
    ha-cluster           # (core) Configure high availability on the current node
    helm                 # (core) Helm - the package manager for Kubernetes
    helm3                # (core) Helm 3 - the package manager for Kubernetes
    ingress              # (core) Ingress controller for external access
    metrics-server       # (core) K8s Metrics Server for API access to service metrics
  disabled:
    cert-manager         # (core) Cloud native certificate management
    community            # (core) The community addons repository
    gpu                  # (core) Automatic enablement of Nvidia CUDA
    host-access          # (core) Allow Pods connecting to Host services smoothly
    hostpath-storage     # (core) Storage class; allocates storage from host directory
    kube-ovn             # (core) An advanced network fabric for Kubernetes
    mayastor             # (core) OpenEBS MayaStor
    metallb              # (core) Loadbalancer for your Kubernetes cluster
    minio                # (core) MinIO object storage
    observability        # (core) A lightweight observability stack for logs, traces and metrics
    prometheus           # (core) Prometheus operator for monitoring and logging
    rbac                 # (core) Role-Based Access Control for authorisation
    registry             # (core) Private image registry exposed on localhost:32000
    storage              # (core) Alias to hostpath-storage add-on, deprecated

A couple of thoughts to stimulate fix suggestions. I have not tried these, as I am not fond of shooting in the dark: the more random things I try, the more cluster clean-up I will have to do.

  • The registry is disabled by default, and I have not enabled it. I am strongly of the view that, with the images having been sideloaded, they do not need to be pulled from anywhere; they are already in place (see the pull-policy sketch after this list).
  • Images were converted to a tarball using docker save. I could use docker export to export a container instead, though I'd rather not do that - I want a clean image, not the runtime cruft accumulated by a locally run container on my dev machine.
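
On the first point, my understanding is that kubelet defaults the pull policy to Always for :latest tags, so even a sideloaded image can trigger a registry pull unless I say otherwise. If that is right, something like this should force the local copy to be used (a sketch; k8s-workload:latest is my image):

# Force kubelet to use the locally imported image rather than pulling
microk8s kubectl run k8s-workload --image=k8s-workload:latest --image-pull-policy=Never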

Corrections to any of my assumptions are welcome.
