Forums

Resolved
0 votes
With the upcoming ClearOS 8 going to support Podman, the Docker replacement from RHEL, I thought it would be a good idea to try Podman on ClearOS 7.6.

You can find the package in "clearos-centos-extras"

Command to install Podman:


yum --enablerepo=clearos-centos-extras install podman



Failed to set locale, defaulting to C
Loaded plugins: clearcenter-marketplace, fastestmirror
ClearCenter Marketplace: fetching repositories...
Loading mirror speeds from cached hostfile
* clearos: mirror1-amsterdam.clearos.com
* clearos-centos-extras: download1.clearsdn.com
* clearos-centos-sclo-rh: download1.clearsdn.com
* clearos-centos-verified: mirror1-amsterdam.clearos.com
* clearos-contribs: mirror1-amsterdam.clearos.com
* clearos-epel-verified: mirror1-amsterdam.clearos.com
* clearos-fast-updates: download1.clearsdn.com
* clearos-infra: mirror1-amsterdam.clearos.com
* clearos-verified: mirror1-amsterdam.clearos.com
* private-clearcenter-ad: download3.clearsdn.com:80
* private-clearcenter-antimalware: download3.clearsdn.com:80
* private-clearcenter-antispam: download3.clearsdn.com:80
* private-clearcenter-business: download3.clearsdn.com:80
* private-clearcenter-content-filter: download4.clearsdn.com:80
* private-clearcenter-dyndns: download2.clearsdn.com:80
* private-clearcenter-ids: download2.clearsdn.com:80
* private-clearcenter-master-slave: download3.clearsdn.com:80
* private-clearcenter-plex: download3.clearsdn.com:80
* private-clearcenter-proxypass: download1.clearsdn.com:80
* private-clearcenter-rbs: download4.clearsdn.com:80
* private-clearcenter-verified-updates: download4.clearsdn.com:80
Resolving Dependencies
--> Running transaction check
---> Package podman.x86_64 0:0.12.1.2-2.git9551f6b.el7.centos will be installed
--> Processing Dependency: skopeo-containers >= 0.1.29-3 for package: podman-0.12.1.2-2.git9551f6b.el7.centos.x86_64
--> Processing Dependency: containernetworking-plugins >= 0.7.0-101 for package: podman-0.12.1.2-2.git9551f6b.el7.centos.x86_64
--> Processing Dependency: runc for package: podman-0.12.1.2-2.git9551f6b.el7.centos.x86_64
--> Running transaction check
---> Package containernetworking-plugins.x86_64 0:0.7.1-1.el7 will be installed
---> Package containers-common.x86_64 1:0.1.31-8.gitb0b750d.el7.centos will be installed
---> Package runc.x86_64 0:1.0.0-59.dev.git2abd837.el7.centos will be installed
--> Processing Dependency: criu for package: runc-1.0.0-59.dev.git2abd837.el7.centos.x86_64
--> Running transaction check
---> Package criu.x86_64 0:3.9-5.el7 will be installed
--> Processing Dependency: libprotobuf-c.so.1(LIBPROTOBUF_C_1.0.0)(64bit) for package: criu-3.9-5.el7.x86_64
--> Processing Dependency: libnet.so.1()(64bit) for package: criu-3.9-5.el7.x86_64
--> Processing Dependency: libprotobuf-c.so.1()(64bit) for package: criu-3.9-5.el7.x86_64
--> Running transaction check
---> Package libnet.x86_64 0:1.1.6-7.el7 will be installed
---> Package protobuf-c.x86_64 0:1.0.2-3.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=================================================================================================================================================================================
Package Arch Version Repository Size
=================================================================================================================================================================================
Installing:
podman x86_64 0.12.1.2-2.git9551f6b.el7.centos clearos-centos-extras 7.6 M
Installing for dependencies:
containernetworking-plugins x86_64 0.7.1-1.el7 clearos-centos-extras 10 M
containers-common x86_64 1:0.1.31-8.gitb0b750d.el7.centos clearos-centos-extras 21 k
criu x86_64 3.9-5.el7 clearos-centos-verified 432 k
libnet x86_64 1.1.6-7.el7 clearos-centos-verified 59 k
protobuf-c x86_64 1.0.2-3.el7 clearos-centos-verified 28 k
runc x86_64 1.0.0-59.dev.git2abd837.el7.centos clearos-centos-extras 2.9 M

Transaction Summary
=================================================================================================================================================================================
Install 1 Package (+6 Dependent packages)

Total download size: 21 M
Installed size: 80 M



Check the version installed:


podman version



Version: 0.12.1.2
Go Version: go1.10.3
OS/Arch: linux/amd64
[root@voyager ~]#



podman info



host:
  BuildahVersion: 1.6-dev
  Conmon:
    package: podman-0.12.1.2-2.git9551f6b.el7.centos.x86_64
    path: /usr/libexec/podman/conmon
    version: 'conmon version 1.12.0-dev, commit: b909c9e1a3e8f14d5694a118fb9c0c0325a31d4f-dirty'
  Distribution:
    distribution: '"clearos"'
    version: "7"
  MemFree: 26136059904
  MemTotal: 33671147520
  OCIRuntime:
    package: runc-1.0.0-59.dev.git2abd837.el7.centos.x86_64
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.0'
  SwapFree: 16844320768
  SwapTotal: 16844320768
  arch: amd64
  cpus: 8
  hostname: voyager.lionux.lan
  kernel: 3.10.0-862.11.6.v7.x86_64
  os: linux
  rootless: false
  uptime: 267h 30m 23.85s (Approximately 11.12 days)
insecure registries:
  registries: []
registries:
  registries:
  - registry.access.redhat.com
  - docker.io
  - registry.fedoraproject.org
  - quay.io
  - registry.centos.org
store:
  ContainerStore:
    number: 0
  GraphDriverName: overlay
  GraphOptions: null
  GraphRoot: /var/lib/containers/storage
  GraphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
  ImageStore:
    number: 0
  RunRoot: /var/run/containers/storage
Saturday, May 04 2019, 02:52 PM
Responses (25)
  • Accepted Answer

    Saturday, August 17 2019, 06:16 AM - #Permalink
    We are now on version:

    [root@discovery podman]# podman version
    Version: 1.4.4
    RemoteAPI Version: 1
    Go Version: go1.10.3
    OS/Arch: linux/amd64


    Podman creates a separate network:

    [root@discovery podman]# ifconfig
    cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
    inet 10.88.0.1 netmask 255.255.0.0 broadcast 10.88.255.255
    inet6 fe80::d04a:1eff:fea1:d333 prefixlen 64 scopeid 0x20<link>
    ether d2:4a:1e:a1:d3:33 txqueuelen 1000 (Ethernet)
    RX packets 16979 bytes 13909739 (13.2 MiB)
    RX errors 0 dropped 0 overruns 0 frame 0
    TX packets 21446 bytes 21599631 (20.5 MiB)
    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0


    So the containers are on the network behind "10.88.0.1":

    [root@discovery podman]# podman inspect sonarr | grep -i ipaddr
    "SecondaryIPAddresses": null,
    "IPAddress": "10.88.0.12",


    [root@discovery podman]# podman inspect nzbget | grep -i ipaddr
    "SecondaryIPAddresses": null,
    "IPAddress": "10.88.0.13",


    With this information I can point the Sonarr container at the Nzbget container, using the local IP address (10.88.0.13) to connect.

    This is on a server in standalone mode (no firewall), but I think communication between containers also works on a system in gateway mode.
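    Instead of grepping the whole inspect output, podman's Go-template --format flag can print just the address. A small sketch (the "nzbget" name comes from the example above; adjust to your own container):

```shell
#!/bin/bash
# Print a container's IP address on the CNI network via a Go template,
# avoiding "podman inspect <name> | grep -i ipaddr".
container_ip() {
    podman inspect --format '{{.NetworkSettings.IPAddress}}' "$1"
}
# usage: container_ip nzbget
```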
  • Accepted Answer

    Tuesday, May 07 2019, 04:53 PM - #Permalink
    Marcel van Leeuwen wrote:

    Nick Howitt wrote:

    For the firewall, what I've been doing in ClearOS/docker is creating firewall scripts for the docker app and for the containers, so that if docker is running, its firewall rules get reinstated. If a container is running, its firewall rules also get reinstated, but after the docker rules, as you need the docker chains in place first for the container rules. It is not an elegant solution, and you have to study the firewall rules to work out which package is creating which rules.

    It is made more complicated as the container id changes each time it starts, and this changes the interface name, so I had to solve that programmatically as well.


    That is indeed a big challenge to solve programmatically. Not sure if you know, but you can also start a container by using its name instead of its ID? The name does not change!


    4c703f1c877e docker.io/binhex/arch-sonarr:latest /usr/bin/tini -- ... 8 hours ago Up 5 seconds ago 0.0.0.0:8989->8989/tcp, 0.0.0.0:9897->9897/tcp sonarr


    So you can start the above container also with "podman start sonarr".
    That is more or less what I do, but I then needed to query "/usr/bin/docker inspect ..." to pick up the UID for the interface name and also the network IP/subnet. Have a look at the Windows Networking thread or install ClearGLASS.
  • Accepted Answer

    Tuesday, May 07 2019, 03:07 PM - #Permalink
    Nick Howitt wrote:

    I guess there is not a "start all" option because they expect people to have containers they may not want to start, but there is a good reason for a "stop all", so you can, for example, do a controlled shutdown of your system.


    Yes, that is also what I thought, but then why bother keeping those containers?

    For the firewall, what I've been doing in ClearOS/docker is creating firewall scripts for the docker app and for the containers, so that if docker is running, its firewall rules get reinstated. If a container is running, its firewall rules also get reinstated, but after the docker rules, as you need the docker chains in place first for the container rules. It is not an elegant solution, and you have to study the firewall rules to work out which package is creating which rules.

    It is made more complicated as the container id changes each time it starts, and this changes the interface name, so I had to solve that programmatically as well.


    That is indeed a big challenge to solve programmatically. Not sure if you know, but you can also start a container by using its name instead of its ID? The name does not change!


    4c703f1c877e docker.io/binhex/arch-sonarr:latest /usr/bin/tini -- ... 8 hours ago Up 5 seconds ago 0.0.0.0:8989->8989/tcp, 0.0.0.0:9897->9897/tcp sonarr


    So you can start the above container also with "podman start sonarr".
  • Accepted Answer

    Tuesday, May 07 2019, 02:08 PM - #Permalink
    I guess there is not a "start all" option because they expect people to have containers they may not want to start, but there is a good reason for a "stop all", so you can, for example, do a controlled shutdown of your system.

    For the firewall, what I've been doing in ClearOS/docker is creating firewall scripts for the docker app and for the containers, so that if docker is running, its firewall rules get reinstated. If a container is running, its firewall rules also get reinstated, but after the docker rules, as you need the docker chains in place first for the container rules. It is not an elegant solution, and you have to study the firewall rules to work out which package is creating which rules.

    It is made more complicated as the container id changes each time it starts, and this changes the interface name, so I had to solve that programmatically as well.
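    The scripts themselves aren't posted in the thread. One possible way to map a running container to its changing host-side veth name (a different route than parsing "docker inspect") is the kernel's ifindex/iflink pairing: the container's eth0 reports the index of its host-side peer, which can be matched against the host's interface list. A hedged sketch, assuming the Docker CLI:

```shell
#!/bin/bash
# Sketch: find the host-side veth of a container by matching the peer
# ifindex ("iflink") that the container's eth0 reports.
host_veth() {
    local idx
    idx=$(docker exec "$1" cat /sys/class/net/eth0/iflink) || return 1
    # "ip -o link" prints one "N: name@peer: ..." line per interface.
    ip -o link | awk -v idx="$idx" -F': ' '$1 == idx { print $2 }' | cut -d@ -f1
}
# usage: host_veth sonarr
```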
  • Accepted Answer

    Tuesday, May 07 2019, 01:42 PM - #Permalink
    I created a bash script to start all containers at once.


    #!/bin/bash

    # Script to start all containers at once

    my_array=($(podman ps --all --quiet))

    echo "${my_array[@]}"

    for i in "${my_array[@]}"
    do
        podman start "$i"
    done
  • Accepted Answer

    Tuesday, May 07 2019, 07:44 AM - #Permalink
    It's not completely similar, because containers can communicate with each other. I don't have to add rules.

    I can confirm that after a "service firewall restart" the rules are gone. You have to restart the containers. It's a pity there is no "podman start --all" command. You have to start the containers one by one. A bit weird, because there is a "podman stop --all" command.
  • Accepted Answer

    Tuesday, May 07 2019, 07:19 AM - #Permalink
    hmm, yes good point..
  • Accepted Answer

    Tuesday, May 07 2019, 07:08 AM - #Permalink
    So far, although the firewall rules look a little different, the operation you describe is similar to docker. What I am concerned about is that when you have your containers running, if you then restart the firewall with a "service firewall restart", or add or remove any rule, all your Podman rules get wiped. You don't really want to be in the situation that you have to restart all your containers each time there is a firewall restart.
  • Accepted Answer

    Tuesday, May 07 2019, 05:18 AM - #Permalink
    This is iptables -nvL -t nat after I stopped the containers:


    [root@voyager ~]# iptables -nvL -t nat
    Chain PREROUTING (policy ACCEPT 4526 packets, 380K bytes)
    pkts bytes target prot opt in out source destination
    219 13140 DNAT tcp -- * * 0.0.0.0/0 62.xxx.xxx.xxx tcp dpt:xxxxxxxx to:192.168.xxxx.x:xxxxxx

    Chain INPUT (policy ACCEPT 1614 packets, 119K bytes)
    pkts bytes target prot opt in out source destination

    Chain OUTPUT (policy ACCEPT 1579 packets, 110K bytes)
    pkts bytes target prot opt in out source destination

    Chain POSTROUTING (policy ACCEPT 243 packets, 14351 bytes)
    pkts bytes target prot opt in out source destination
    0 0 ACCEPT all -- * tun+ 0.0.0.0/0 0.0.0.0/0
    0 0 SNAT tcp -- * * 192.168.100.0/24 192.xxx.xxx.xxx tcp dpt:xxxxxx to:192.168.xxx.x
    50169 2850K MASQUERADE all -- * eno1 0.0.0.0/0 0.0.0.0/0



    This is iptables -nvL -t nat with started containers:


    [root@voyager ~]# iptables -nvL -t nat
    Chain PREROUTING (policy ACCEPT 75 packets, 5500 bytes)
    pkts bytes target prot opt in out source destination
    39 2617 CNI-HOSTPORT-DNAT all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
    225 13500 DNAT tcp -- * * 0.0.0.0/0 62.xxx.xxx.xxx tcp dpt:32400 to:192.168.xxx.xx:xxxxxxx

    Chain INPUT (policy ACCEPT 38 packets, 2582 bytes)
    pkts bytes target prot opt in out source destination

    Chain OUTPUT (policy ACCEPT 38 packets, 2618 bytes)
    pkts bytes target prot opt in out source destination
    1 60 CNI-HOSTPORT-DNAT all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL

    Chain POSTROUTING (policy ACCEPT 2 packets, 180 bytes)
    pkts bytes target prot opt in out source destination
    64 4074 CNI-HOSTPORT-MASQ all -- * * 0.0.0.0/0 0.0.0.0/0 /* CNI portfwd requiring masquerade */
    0 0 ACCEPT all -- * tun+ 0.0.0.0/0 0.0.0.0/0
    0 0 SNAT tcp -- * * 192.168.100.0/24 192.xxxx.xxx.xx tcp dpt:xxxxxxx to:192.168.100.1
    53055 3027K MASQUERADE all -- * eno1 0.0.0.0/0 0.0.0.0/0
    0 0 CNI-549c7a5d2ec7b2bbeae19666 all -- * * 10.88.0.0/16 0.0.0.0/0 /* name: "podman" id: "16ca53c3e704a0f5b5164ac4b2261a30fa0f47d8c3e40073b49e077432c04738" */

    Chain CNI-549c7a5d2ec7b2bbeae19666 (1 references)
    pkts bytes target prot opt in out source destination
    0 0 ACCEPT all -- * * 0.0.0.0/0 10.88.0.0/16 /* name: "podman" id: "16ca53c3e704a0f5b5164ac4b2261a30fa0f47d8c3e40073b49e077432c04738" */
    0 0 MASQUERADE all -- * * 0.0.0.0/0 !224.0.0.0/4 /* name: "podman" id: "16ca53c3e704a0f5b5164ac4b2261a30fa0f47d8c3e40073b49e077432c04738" */

    Chain CNI-DN-549c7a5d2ec7b2bbeae19 (1 references)
    pkts bytes target prot opt in out source destination
    0 0 CNI-HOSTPORT-SETMARK tcp -- * * 10.88.0.35 0.0.0.0/0 tcp dpt:8080
    0 0 CNI-HOSTPORT-SETMARK tcp -- * * 127.0.0.1 0.0.0.0/0 tcp dpt:8080
    1 60 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:8080 to:10.88.0.35:8080
    0 0 CNI-HOSTPORT-SETMARK tcp -- * * 10.88.0.35 0.0.0.0/0 tcp dpt:8090
    0 0 CNI-HOSTPORT-SETMARK tcp -- * * 127.0.0.1 0.0.0.0/0 tcp dpt:8090
    0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:8090 to:10.88.0.35:8090
    0 0 CNI-HOSTPORT-SETMARK tcp -- * * 10.88.0.35 0.0.0.0/0 tcp dpt:8118
    0 0 CNI-HOSTPORT-SETMARK tcp -- * * 127.0.0.1 0.0.0.0/0 tcp dpt:8118
    0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:8118 to:10.88.0.35:8118

    Chain CNI-HOSTPORT-DNAT (2 references)
    pkts bytes target prot opt in out source destination
    1 60 CNI-DN-549c7a5d2ec7b2bbeae19 tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* dnat name: "podman" id: "16ca53c3e704a0f5b5164ac4b2261a30fa0f47d8c3e40073b49e077432c04738" */ multiport dports 8080,8090,8118

    Chain CNI-HOSTPORT-MASQ (1 references)
    pkts bytes target prot opt in out source destination
    0 0 MASQUERADE all -- * * 0.0.0.0/0 0.0.0.0/0 mark match 0x2000/0x2000

    Chain CNI-HOSTPORT-SETMARK (6 references)
    pkts bytes target prot opt in out source destination
    0 0 MARK all -- * * 0.0.0.0/0 0.0.0.0/0 /* CNI portfwd masquerade mark */ MARK or 0x2000



    So the rules are created when you start the container, not when you create the container. Also, the rules are deleted when you stop the container, not when you remove the container. I think this is really great news!
  • Accepted Answer

    Monday, May 06 2019, 08:03 PM - #Permalink
    I think the rules are recreated if you start the container after a reboot.
  • Accepted Answer

    Monday, May 06 2019, 07:57 PM - #Permalink
    That is good and bad. My expectation is that ClearOS will wipe all the Podman rules if it restarts the firewall, so you end up in the docker type of scenario with the firewall, unless Podman has some magic to reinstall rules if ClearOS wipes them. I wonder if Podman has a nice API you can call to recreate the rules. Also, I thought chains had a maximum character limit in their names and this breaks it! Perhaps it is no longer a limit.
  • Accepted Answer

    Monday, May 06 2019, 07:42 PM - #Permalink
    Checked ifconfig:


    eno1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
    inet 62.xxx.xxx.xxx netmask 255.255.255.0 broadcast 255.255.255.255
    inet6 fe80::21e:67ff:fe9f:fe54 prefixlen 64 scopeid 0x20<link>
    ether 00:1e:67:9f:fe:54 txqueuelen 1000 (Ethernet)
    RX packets 3528 bytes 611291 (596.9 KiB)
    RX errors 0 dropped 0 overruns 0 frame 0
    TX packets 3452 bytes 387532 (378.4 KiB)
    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
    device memory 0xc1200000-c127ffff

    eno2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
    inet 192.168.100.1 netmask 255.255.255.0 broadcast 192.168.100.255
    inet6 fe80::21e:67ff:fe9f:fe55 prefixlen 64 scopeid 0x20<link>
    ether 00:1e:67:9f:fe:55 txqueuelen 1000 (Ethernet)
    RX packets 4961 bytes 623900 (609.2 KiB)
    RX errors 0 dropped 0 overruns 0 frame 0
    TX packets 4058 bytes 743743 (726.3 KiB)
    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
    device memory 0xc1100000-c117ffff

    lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
    inet 127.0.0.1 netmask 255.0.0.0
    inet6 ::1 prefixlen 128 scopeid 0x10<host>
    loop txqueuelen 1000 (Local Loopback)
    RX packets 777 bytes 71831 (70.1 KiB)
    RX errors 0 dropped 0 overruns 0 frame 0
    TX packets 777 bytes 71831 (70.1 KiB)
    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0


    As soon as I start a container, the following interfaces pop up:


    cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
    inet 10.88.0.1 netmask 255.255.0.0 broadcast 0.0.0.0
    inet6 fe80::dcdc:2dff:fe00:8894 prefixlen 64 scopeid 0x20<link>
    ether de:dc:2d:00:88:94 txqueuelen 1000 (Ethernet)
    RX packets 12 bytes 712 (712.0 B)
    RX errors 0 dropped 0 overruns 0 frame 0
    TX packets 4 bytes 308 (308.0 B)
    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

    vethcf68f7bd: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
    inet6 fe80::e000:ffff:fe73:65ad prefixlen 64 scopeid 0x20<link>
    ether e2:00:ff:73:65:ad txqueuelen 0 (Ethernet)
    RX packets 10 bytes 748 (748.0 B)
    RX errors 0 dropped 0 overruns 0 frame 0
    TX packets 8 bytes 628 (628.0 B)
    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0



    Hmm, I think this is positive news. I see rules from Podman. I must warn you that you also see rules from one container which runs a VPN connection to the outside world. That is the container whose id starts with 16ca......
  • Accepted Answer

    Monday, May 06 2019, 08:15 AM - #Permalink
    Does the firewall look default, or are there references to Podman interfaces? Run:
    ifconfig
    iptables -nvL
    iptables -nvL -t nat
  • Accepted Answer

    Monday, May 06 2019, 04:21 AM - #Permalink
    I'm not sure what to look for. Any suggestions?
  • Accepted Answer

    Sunday, May 05 2019, 09:32 PM - #Permalink
    I've no idea if that is good or bad! It is good that it works, but is there any firewalling at all, and if there is not, does it matter? I have a suspicion that, if it is possible, you are running Podman without any firewall rules. Can you have a look?
  • Accepted Answer

    Sunday, May 05 2019, 06:41 PM - #Permalink
    Tested this on a live machine in gateway mode and no issues. Containers can communicate with each other!
  • Accepted Answer

    Sunday, May 05 2019, 04:41 PM - #Permalink
    @Nick, with Podman I have no firewall issues communicating between containers! :o
  • Accepted Answer

    Sunday, May 05 2019, 04:32 PM - #Permalink
    Thank you for your explanation of how ClearOS handles this, and for your ideas. @Anyone more opinions?
  • Accepted Answer

    Sunday, May 05 2019, 03:47 PM - #Permalink
    ClearOS typically has apps under /usr/clearos/ (for webconfig stuff), settings under /var/clearos and /etc/clearos, and data under /var/clearos. Docker containers go under /var/lib/docker. My docker/samba implementation stores its externally accessible configs under /var/clearos/samba. I have no idea what is the best place. I suspect /var/podman/.... might be a good place, but I am not the oracle on this.
  • Accepted Answer

    Sunday, May 05 2019, 02:53 PM - #Permalink
    Okay, so you know you can save your config files outside a container? In the case of Sabnzbd it maps "/usr/podman/sabnzbd:/config". I have chosen the location "/usr/podman/sabnzbd". It seemed logical to me to do this in "/usr". Is this the right assumption?
  • Accepted Answer

    Sunday, May 05 2019, 01:29 PM - #Permalink
    .... I don't use Podman, just Google. I have used Docker for only one app.
  • Accepted Answer

    Sunday, May 05 2019, 08:39 AM - #Permalink
    @Nick, where do you save the config files of your containers? I made a "podman" directory in "/usr" to store the config files of the containers. So "/usr/podman/sabnzbd/<config_files>".
  • Accepted Answer

    Sunday, May 05 2019, 08:31 AM - #Permalink
    Good catch Nick!

    The solution to this is:

    Create a directory in /run:


    mkdir /run/sabnzbd


    Give it the correct permissions (matching the PUID and PGID set during creation of the container!):


    chown user:group /run/sabnzbd


    To run the container, be sure to use the volume bind below:


    -v /run/sabnzbd:/run



    Edit: It seems that this is related to the image the container uses. I don't have this issue with a different container.
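    The steps above boil down to one small helper (a sketch; "/run/sabnzbd" and "user:group" are this example's own values, and the owner must match the container's PUID/PGID):

```shell
#!/bin/bash
# Create the host-side source of the /run bind mount and hand it to
# the uid:gid the container runs as.
prepare_run_dir() {
    local dir="$1" owner="$2"
    mkdir -p "$dir" && chown "$owner" "$dir"
}
# usage: prepare_run_dir /run/sabnzbd user:group
#        then run the container with: -v /run/sabnzbd:/run
```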
  • Accepted Answer

    Sunday, May 05 2019, 07:45 AM - #Permalink
    Try this. May be related to this but it is 2 years old.
  • Accepted Answer

    Sunday, May 05 2019, 06:51 AM - #Permalink
    I'm trying to start a Docker container (Docker images are supported by Podman) and I get an error:


    s6-supervise (child): fatal: unable to exec run: Permission denied
    s6-supervise sabnzbd: warning: unable to spawn ./run - waiting 10 seconds
    [services.d] done.


    There is a permission problem, but I'm not sure what causes it. I think Podman is not allowed to run the container.