
podman machine fails when running inside a container #25950


Closed
m2Giles opened this issue Apr 22, 2025 · 9 comments · Fixed by #26026
Assignees
Labels
kind/bug Categorizes issue or PR as related to a bug. machine triaged Issue has been triaged

Comments


m2Giles commented Apr 22, 2025

Issue Description

I am attempting to run podman machine init inside a rootful/rootless podman container. I've attempted the following configurations:

rootless container w/ systemd
rootless container w/o systemd
rootful container w/o systemd

Steps to reproduce the issue

  1. podman run -it --rm --privileged --security-opt label=disable quay.io/fedora/fedora:latest bash
  2. dnf5 install podman-machine ssh-keygen
  3. mkdir -p /run/user/1000 && chown 1000:1000 /run/user/1000
  4. useradd -m core && su -l core
  5. export XDG_RUNTIME_DIR=/run/user/1000
  6. podman --log-level=trace machine init

You can also do this in a toolbox/distrobox, with the same result.

Describe the results you received

I end up with the following each time:

[core@d89fb2fc8a77 ~]$ podman --log-level=trace machine init
INFO[0000] podman filtering at log level trace          
DEBU[0000] Using Podman machine with `qemu` virtualization provider 
DEBU[0000] socket length for /home/core/.config/containers/podman/machine/qemu is 49 
DEBU[0000] socket length for /home/core/.local/share/containers/podman/machine/qemu is 54 
DEBU[0000] socket length for /home/core/.local/share/containers/podman/machine/qemu/cache is 60 
DEBU[0000] socket length for /run/user/1000/podman is 21 
DEBU[0000] socket length for /home/core/.config/containers/podman/machine/qemu is 49 
DEBU[0000] socket length for /home/core/.local/share/containers/podman/machine/qemu is 54 
DEBU[0000] socket length for /home/core/.local/share/containers/podman/machine/qemu/cache is 60 
DEBU[0000] socket length for /run/user/1000/podman is 21 
DEBU[0000] socket length for /home/core/.config/containers/podman/machine/qemu/podman-machine-default.json is 77 
DEBU[0000] socket length for /home/core/.local/share/containers/podman/machine/qemu/podman-machine-default-amd64.qcow2 is 89 
Looking up Podman Machine image at quay.io/podman/machine-os:5.4 to create VM
DEBU[0000] Using registries.d directory /etc/containers/registries.d 
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf" 
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf.d/000-shortnames.conf" 
DEBU[0000] Trying to access "quay.io/podman/machine-os:5.4" 
DEBU[0000] No credentials matching quay.io/podman/machine-os found in /run/user/1000/containers/auth.json 
DEBU[0000] No credentials matching quay.io/podman/machine-os found in /home/core/.config/containers/auth.json 
DEBU[0000] No credentials matching quay.io/podman/machine-os found in /home/core/.docker/config.json 
DEBU[0000] No credentials matching quay.io/podman/machine-os found in /home/core/.dockercfg 
DEBU[0000] No credentials for quay.io/podman/machine-os found 
DEBU[0000]  No signature storage configuration found for quay.io/podman/machine-os:5.4, using built-in default file:///home/core/.local/share/containers/sigstore 
DEBU[0000] Looking for TLS certificates and private keys in /etc/docker/certs.d/quay.io 
DEBU[0000] GET https://quay.io/v2/                      
DEBU[0000] Ping https://quay.io/v2/ status 401          
DEBU[0000] GET https://quay.io/v2/auth?scope=repository%3Apodman%2Fmachine-os%3Apull&service=quay.io 
DEBU[0000] Increasing token expiration to: 60 seconds   
DEBU[0000] GET https://quay.io/v2/podman/machine-os/manifests/5.4 
DEBU[0000] Content-Type from manifest GET is "application/vnd.oci.image.index.v1+json" 
DEBU[0000] found image in digest: "sha256:240859e1e722e5d0c95d2744fe671f5aa3660809e928885cf7e513264225bcf8" 
DEBU[0000] GET https://quay.io/v2/podman/machine-os/manifests/sha256:240859e1e722e5d0c95d2744fe671f5aa3660809e928885cf7e513264225bcf8 
DEBU[0002] Content-Type from manifest GET is "application/vnd.oci.image.manifest.v1+json" 
DEBU[0002] original artifact file name: podman-machine.x86_64.qemu.qcow2.zst 
DEBU[0002] GET https://quay.io/v2/podman/machine-os/manifests/sha256:240859e1e722e5d0c95d2744fe671f5aa3660809e928885cf7e513264225bcf8 
DEBU[0007] Content-Type from manifest GET is "application/vnd.oci.image.manifest.v1+json" 
DEBU[0007] original artifact file name: podman-machine.x86_64.qemu.qcow2.zst 
DEBU[0007] socket length for /home/core/.local/share/containers/podman/machine/qemu/cache/240859e1e722e5d0c95d2744fe671f5aa3660809e928885cf7e513264225bcf8.qcow2.zst is 135 
DEBU[0007] socket length for /home/core/.local/share/containers/podman/machine/qemu/cache/240859e1e722e5d0c95d2744fe671f5aa3660809e928885cf7e513264225bcf8 is 125 
DEBU[0007] Using registries.d directory /etc/containers/registries.d 
DEBU[0007] Trying to access "quay.io/podman/machine-os@sha256:240859e1e722e5d0c95d2744fe671f5aa3660809e928885cf7e513264225bcf8" 
DEBU[0007] No credentials matching quay.io/podman/machine-os found in /run/user/1000/containers/auth.json 
DEBU[0007] No credentials matching quay.io/podman/machine-os found in /home/core/.config/containers/auth.json 
DEBU[0007] No credentials matching quay.io/podman/machine-os found in /home/core/.docker/config.json 
DEBU[0007] No credentials matching quay.io/podman/machine-os found in /home/core/.dockercfg 
DEBU[0007] No credentials for quay.io/podman/machine-os found 
DEBU[0007]  No signature storage configuration found for quay.io/podman/machine-os@sha256:240859e1e722e5d0c95d2744fe671f5aa3660809e928885cf7e513264225bcf8, using built-in default file:///home/core/.local/share/containers/sigstore 
DEBU[0007] Looking for TLS certificates and private keys in /etc/docker/certs.d/quay.io 
DEBU[0007] GET https://quay.io/v2/                      
DEBU[0007] Ping https://quay.io/v2/ status 401          
DEBU[0007] GET https://quay.io/v2/auth?scope=repository%3Apodman%2Fmachine-os%3Apull&service=quay.io 
DEBU[0007] Increasing token expiration to: 60 seconds   
DEBU[0007] GET https://quay.io/v2/podman/machine-os/manifests/sha256:240859e1e722e5d0c95d2744fe671f5aa3660809e928885cf7e513264225bcf8 
DEBU[0008] Content-Type from manifest GET is "application/vnd.oci.image.manifest.v1+json" 
DEBU[0008] Using SQLite blob info cache at /home/core/.local/share/containers/cache/blob-info-cache-v1.sqlite 
DEBU[0008] IsRunningImageAllowed for image docker:quay.io/podman/machine-os@sha256:240859e1e722e5d0c95d2744fe671f5aa3660809e928885cf7e513264225bcf8 
DEBU[0008]  Using default policy section                
DEBU[0008]  Requirement 0: allowed                      
DEBU[0008] Overall: allowed                             
Getting image source signatures
DEBU[0008] Reading /home/core/.local/share/containers/sigstore/podman/machine-os@sha256=240859e1e722e5d0c95d2744fe671f5aa3660809e928885cf7e513264225bcf8/signature-1 
DEBU[0008] Not looking for sigstore attachments: disabled by configuration 
DEBU[0008] Manifest has MIME type application/vnd.oci.image.manifest.v1+json, ordered candidate list [application/vnd.oci.image.manifest.v1+json, application/vnd.oci.image.index.v1+json] 
DEBU[0008] ... will first try using the original manifest unmodified 
DEBU[0008] Checking if we can reuse blob sha256:d982f2a01613fbd566d81266a619f7bad958268def3a3f924a8e209f48578d75: general substitution = true, compression for MIME type "application/zstd" = false 
DEBU[0008] Downloading /v2/podman/machine-os/blobs/sha256:d982f2a01613fbd566d81266a619f7bad958268def3a3f924a8e209f48578d75 
DEBU[0008] GET https://quay.io/v2/podman/machine-os/blobs/sha256:d982f2a01613fbd566d81266a619f7bad958268def3a3f924a8e209f48578d75 
Copying blob d982f2a01613 [--------------------------------------] 0.0b / 940.5MiB | 0.0 b/s
DEBU[0008] Detected compression format zstd             
DEBU[0008] Compression change for blob sha256:d982f2a01613fbd566d81266a619f7bad958268def3a3f924a8e209f48578d75 ("application/zstd") not supported 
Copying blob d982f2a01613 done   | 
DEBU[0112] Downloading /v2/podman/machine-os/blobs/sha256:44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a 
DEBU[0112] GET https://quay.io/v2/auth?scope=repository%3Apodman%2Fmachine-os%3Apull&service=quay.io 
Copying config 44136fa355 [--------------------------------------] 0.0b / 2.0b | 0.0 b/s
DEBU[0113] Increasing token expiration to: 60 seconds   
DEBU[0113] GET https://quay.io/v2/podman/machine-os/blobs/sha256:44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a
Copying config 44136fa355 [--------------------------------------] 0.0b / 2.0b | 0.0 b/s
DEBU[0114] No compression detected                      
DEBU[0114] Compression change for blob sha256:44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a ("application/vnd.oci.empty.v1+json") not supported 
Copying config 44136fa355 done   | 
Writing manifest to image destination
DEBU[0114] socket length for /home/core/.local/share/containers/podman/machine/qemu/cache/240859e1e722e5d0c95d2744fe671f5aa3660809e928885cf7e513264225bcf8 is 125 
DEBU[0114] socket length for /home/core/.local/share/containers/podman/machine/qemu/cache/240859e1e722e5d0c95d2744fe671f5aa3660809e928885cf7e513264225bcf8.qcow2.zst is 135 
d982f2a01613fbd566d81266a619f7bad958268def3a3f924a8e209f48578d75
DEBU[0114] Detected compression format zstd             
Extracting compressed file: podman-machine-default-amd64.qcow2: done  
DEBU[0118] cleaning cached file: /home/core/.local/share/containers/podman/machine/qemu/cache/240859e1e722e5d0c95d2744fe671f5aa3660809e928885cf7e513264225bcf8 
DEBU[0118] --> imagePath is "/home/core/.local/share/containers/podman/machine/qemu/podman-machine-default-amd64.qcow2" 
DEBU[0118] socket length for /home/core/.config/containers/podman/machine/qemu/podman-machine-default.ign is 76 
Error: exit status 1
DEBU[0119] Shutting down engines 

Describe the results you expected

The machine to init/start and be usable with podman --remote.

podman info output

host:
  arch: amd64
  buildahVersion: 1.39.4
  cgroupControllers:
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.13-1.fc41.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.13, commit: '
  cpuUtilization:
    idlePercent: 96.63
    systemPercent: 1.24
    userPercent: 2.12
  cpus: 22
  databaseBackend: sqlite
  distribution:
    codename: Archaeopteryx
    distribution: bluefin
    variant: bluefin
    version: "41"
  eventLogger: journald
  freeLocks: 2035
  hostname: bluefin
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 524288
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 524288
      size: 65536
  kernel: 6.13.6-200.fc41.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 13052370944
  memTotal: 66841563136
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.14.0-1.fc41.x86_64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.14.0
    package: netavark-1.14.1-1.fc41.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.14.1
  ociRuntime:
    name: crun
    package: crun-1.21-1.fc41.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.21
      commit: 10269840aa07fb7e6b7e1acff6198692d8ff5c88
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-0^20250320.g32f6212-2.fc41.x86_64
    version: ""
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  rootlessNetworkCmd: pasta
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.3.1-1.fc41.x86_64
    version: |-
      slirp4netns version 1.3.1
      commit: e5e368c4f5db6ae75c2fce786e31eef9da6bf236
      libslirp: 4.8.0
      SLIRP_CONFIG_VERSION_MAX: 5
      libseccomp: 2.5.5
  swapFree: 8589930496
  swapTotal: 8589930496
  uptime: 6h 0m 24.00s (Approximately 0.25 days)
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
store:
  configFile: /var/home/m2/.config/containers/storage.conf
  containerStore:
    number: 3
    paused: 0
    running: 3
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /var/home/m2/.local/share/containers/storage
  graphRootAllocated: 1998678130688
  graphRootUsed: 296319758336
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 28
  runRoot: /run/user/1000/containers
  transientStore: false
  volumePath: /var/home/m2/.local/share/containers/storage/volumes
version:
  APIVersion: 5.4.2
  BuildOrigin: Fedora Project
  Built: 1743552000
  BuiltTime: Tue Apr  1 20:00:00 2025
  GitCommit: be85287fcf4590961614ee37be65eeb315e5d9ff
  GoVersion: go1.23.7
  Os: linux
  OsArch: linux/amd64
  Version: 5.4.2

Podman in a container

Yes

Privileged Or Rootless

Rootless

Upstream Latest Release

Yes

Additional environment details

Running inside podman, rootless/rootful, with and without systemd as init, results in failures.
Running inside docker without systemd also results in a failure.

Additional information

n/a

@m2Giles m2Giles added the kind/bug Categorizes issue or PR as related to a bug. label Apr 22, 2025
Member

baude commented Apr 23, 2025

This is not a use case that we have considered or support. You want to run a container that runs podman machine so you can run a container?

Member

baude commented Apr 23, 2025

Your reproducer does not seem to work for me. I had to s/security/security-opt and dnf install ssh-keygen to trigger what I assume is the same error.

@baude baude added triaged Issue has been triaged machine labels Apr 23, 2025
Author

m2Giles commented Apr 23, 2025

I will fix the reproducer.

Member

baude commented Apr 23, 2025

why is this use case important ?

Author

m2Giles commented Apr 23, 2025

I have fixed the reproducer.

The use case:
Not all users of podman have access to rootful podman. At work I am limited to rootless podman; however, I am able to run VMs inside of rootless podman to then get access to a rootful instance. Supporting podman machine would make access to a rootful instance seamless.

Additionally, for toolbx/distrobox users, it means that instead of executing podman commands on the host, you could use nested instances.

Member

mheon commented Apr 23, 2025

This feels like it ought to work; VMs in containers are an established use case through KubeVirt. But I admit that it's unusual to run a VM for running Podman containers inside a Podman container... Our typical intended podman machine use case is Windows/Mac, where we don't have the ability to run native containers at all.

Member

Luap99 commented Apr 24, 2025

No matter what, exit status 1 is just a bad error; it seems we return the child command's error without any wrapping, so we don't even know which command failed. That must be fixed regardless, IMO.

Then, once we know what the actual problem is, we can see whether this is something we can make work easily or not. In any case it wouldn't be a priority for me, due to the unusual setup.
Running with strace -f might be useful to quickly see which command is failing, and also why.

@baude baude self-assigned this Apr 28, 2025
Member

baude commented Apr 29, 2025

@m2Giles good news ... I was able to identify and overcome the error you reported. The init will complete now, and the start also works. But there does seem to be another problem in connecting with it. I'll at least fix this problem, and if I don't figure out the next problem, I'll write another issue.

baude added a commit to baude/podman that referenced this issue Apr 30, 2025
In cases where systemd was not available, podman machine was erroring
out using timedatectl (it requires systemd). On other providers like
Windows, we don't do any timezone detection, so it seems valid to return
"" for the timezone. This fixes the first problem described in containers#25950.

Fixes: containers#25950

Signed-off-by: Brent Baude <bbaude@redhat.com>
Member

baude commented Apr 30, 2025

#26026 fixes the first problem
