RFC: local server setup #1032

Open · wants to merge 9 commits into base: main
4 changes: 2 additions & 2 deletions .github/workflows/test.yml
@@ -53,8 +53,8 @@ jobs:

- name: Test with pytest
run: |
podman system service --time=0 unix:///tmp/podman.sock &
export CONTAINER_HOST="unix:///tmp/podman.sock"
export CONTAINER_SOCK="/tmp/podman.sock"
podman system service --time=0 "unix://$CONTAINER_SOCK" &
efahl marked this conversation as resolved.
poetry run coverage run -m pytest -vv --runslow
poetry run coverage xml

2 changes: 2 additions & 0 deletions Containerfile
@@ -10,5 +10,7 @@
RUN poetry config virtualenvs.create false \
&& poetry install --only main --no-interaction --no-ansi

COPY ./asu/ ./asu/
RUN --mount=type=bind,source=./.env,target=/tmp/.env \
Member commented:

I'm semi excited to "hardcode" the envs into the created container.

Contributor Author replied:

I don't understand, are you currently changing them after the containers are started or something like that?

I've been setting defaults in config.py, then overriding them in .env (like ALLOW_DEFAULTS=True), then that RUN (really a COPY) sets the container up as I expect.

grep -vE 'REDIS_URL|PUBLIC_PATH' /tmp/.env > ./.env

CMD uvicorn --host 0.0.0.0 'asu.main:app'
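The `RUN --mount` step above filters the host `.env` before baking it into the image: container-only values (`REDIS_URL`, `PUBLIC_PATH`) are stripped so the container falls back to its built-in defaults from `asu/config.py`. The Containerfile does this with `grep -vE`; a sketch of the same filter in Python (the helper name is hypothetical, not part of the project):

```python
import re

# Keys that must NOT be baked into the container image; the container's
# built-in defaults (see asu/config.py) are used for these instead.
CONTAINER_ONLY = re.compile(r"REDIS_URL|PUBLIC_PATH")

def filter_env(text: str) -> str:
    """Mimic `grep -vE 'REDIS_URL|PUBLIC_PATH'` on the .env contents."""
    return "\n".join(
        line for line in text.splitlines() if not CONTAINER_ONLY.search(line)
    )

host_env = """PUBLIC_PATH=/home/user/asu/public
HOST_PATH=/home/user/asu/public
REDIS_URL=redis://localhost:6379
ALLOW_DEFAULTS=True"""

print(filter_env(host_env))
```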
2 changes: 1 addition & 1 deletion README.md
@@ -113,7 +113,7 @@ the dependencies:
#### Running a worker

# podman unix socket (not path), no need to mount anything
export CONTAINER_HOST=unix:///run/user/1001/podman/podman.sock
export CONTAINER_SOCK=/run/user/$(id -u)/podman/podman.sock
poetry run rq worker

#### Update targets
36 changes: 20 additions & 16 deletions asu/build.py
@@ -41,7 +41,10 @@
request_hash = get_request_hash(build_request)
bin_dir: Path = settings.public_path / "store" / request_hash
bin_dir.mkdir(parents=True, exist_ok=True)
log.debug(f"Bin dir: {bin_dir}")
host_dir: Path = settings.host_path / "store" / request_hash
builder_dir: Path = settings.builder_path

log.debug(f"Build dirs:\n {bin_dir = }\n {host_dir = }\n {builder_dir = }")

job = job or get_current_job()
job.meta["detail"] = "init"
@@ -98,8 +101,8 @@
mounts.append(
{
"type": "bind",
"source": str(bin_dir / "keys" / fingerprint),
"target": "/builder/keys/" + fingerprint,
"source": str(host_dir / "keys" / fingerprint),
"target": str(builder_dir / "keys" / fingerprint),

[Codecov warning: added line asu/build.py#L105 was not covered by tests]
"read_only": True,
},
)
@@ -120,8 +123,8 @@
mounts.append(
{
"type": "bind",
"source": str(bin_dir / "repositories.conf"),
"target": "/builder/repositories.conf",
"source": str(host_dir / "repositories.conf"),
"target": str(builder_dir / "repositories.conf"),

[Codecov warning: added line asu/build.py#L127 was not covered by tests]
"read_only": True,
},
)
@@ -130,13 +133,14 @@
log.debug("Found defaults")

defaults_file = bin_dir / "files/etc/uci-defaults/99-asu-defaults"
defaults_file.parent.mkdir(parents=True)
log.info(f"Found defaults, storing at {defaults_file = }")
defaults_file.parent.mkdir(parents=True, exist_ok=True)
defaults_file.write_text(build_request.defaults)
mounts.append(
{
"type": "bind",
"source": str(bin_dir / "files"),
"target": str(bin_dir / "files"),
"source": str(host_dir / "files"),
"target": str(builder_dir / "files"),
"read_only": True,
},
)
@@ -237,11 +241,11 @@
f"PROFILE={build_request.profile}",
f"PACKAGES={' '.join(build_cmd_packages)}",
f"EXTRA_IMAGE_NAME={packages_hash}",
f"BIN_DIR=/builder/{request_hash}",
f"BIN_DIR={builder_dir}/{request_hash}",
]

if build_request.defaults:
job.meta["build_cmd"].append(f"FILES={bin_dir}/files")
job.meta["build_cmd"].append(f"FILES={builder_dir}/files")

# Check if custom rootfs size is requested
if build_request.rootfs_size_mb:
@@ -256,7 +260,7 @@
returncode, job.meta["stdout"], job.meta["stderr"] = run_cmd(
container,
job.meta["build_cmd"],
copy=["/builder/" + request_hash, bin_dir.parent],
copy=[str(builder_dir / request_hash), str(bin_dir.parent)],
)

container.kill()
@@ -297,7 +301,7 @@
# job.meta["imagebuilder_status"] = "signing_images"
job.save_meta()

build_key = getenv("BUILD_KEY") or str(Path.cwd() / "key-build")
build_key = getenv("BUILD_KEY") or str(host_dir / "key-build")

if Path(build_key).is_file():
log.info(f"Signing images with key {build_key}")
@@ -307,18 +311,18 @@
{
"type": "bind",
"source": build_key,
"target": "/builder/key-build",
"target": str(builder_dir / "key-build"),
"read_only": True,
},
{
"type": "bind",
"source": build_key + ".ucert",
"target": "/builder/key-build.ucert",
"target": str(builder_dir / "key-build.ucert"),
"read_only": True,
},
{
"type": "bind",
"source": str(bin_dir),
"source": str(host_dir),
"target": request_hash,
"read_only": False,
},
@@ -327,7 +331,7 @@
working_dir=request_hash,
environment={
"IMAGES_TO_SIGN": " ".join(images),
"PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/builder/staging_dir/host/bin",
"PATH": f"/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:{builder_dir}/staging_dir/host/bin",
},
auto_remove=True,
)
13 changes: 10 additions & 3 deletions asu/config.py
@@ -7,9 +7,16 @@
class Settings(BaseSettings):
model_config = SettingsConfigDict(env_file=".env", env_file_encoding="utf-8")

# The following two vary between host and container. Default values
# are for the container, and should not be overridden in copied .env, see
# Containerfile for where we remove them.
redis_url: str = "redis://redis/" # host value = "redis://localhost:6379"
public_path: Path = Path.cwd() / "public"
json_path: Path = public_path / "json" / "v1"
redis_url: str = "redis://localhost:6379"

host_path: Path = "" # The fixed host "public" path, must be in .env.
builder_path: Path = Path("/builder") # Path to working directory on builder.
json_path: Path = Path(public_path) / "json" / "v1"

upstream_url: str = "https://downloads.openwrt.org"
allow_defaults: bool = False
async_queue: bool = True
@@ -19,7 +26,7 @@ class Settings(BaseSettings):
repository_allow_list: list = []
base_container: str = "ghcr.io/openwrt/imagebuilder"
update_token: Union[str, None] = "foobar"
container_host: str = "localhost"
container_sock: str = ""
container_identity: str = ""
branches: dict = {
"SNAPSHOT": {
2 changes: 1 addition & 1 deletion asu/util.py
@@ -221,7 +221,7 @@ def get_container_version_tag(input_version: str) -> str:

def get_podman():
return PodmanClient(
base_url=settings.container_host,
base_url=f"unix://{settings.container_sock}",
identity=settings.container_identity,
)
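The effect of the `get_podman()` change above: `CONTAINER_SOCK` now holds a bare socket path, and the `unix://` scheme is added when the `PodmanClient` base URL is built, matching the CI workflow where `podman system service` is started on `unix://$CONTAINER_SOCK`. A sketch of just the URL composition (a plain function for illustration, not the project's actual settings object):

```python
def podman_base_url(container_sock: str) -> str:
    """Build the PodmanClient base_url from a bare unix socket path."""
    return f"unix://{container_sock}"

# The path from .env gains the scheme exactly once, here.
print(podman_base_url("/run/user/1000/podman/podman.sock"))
# → unix:///run/user/1000/podman/podman.sock
```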

190 changes: 190 additions & 0 deletions local-server.md
@@ -0,0 +1,190 @@
# Setting up a local server

Assumptions:
- You're using a recent Ubuntu; the examples below were developed on a qemu VM running 24.04.
- Package examples below use `apt`.
- `git` is already installed.
- Python 3.12 is available (3.11 is also fine).
- You are going to use the server on your LAN for local installs, and not expose it to the internet, hence no discussion of proxies and the like.

## First check IPv6 connectivity from your VM

Run `curl` against an external server, forcing IPv6. If this works, skip forward.

```bash
curl -6 https://sysupgrade.openwrt.org/json/v1/overview.json
```

If that fails to connect, then you will have all sorts of issues unless you resolve them. The easiest thing to do is just disable IPv6 on your VM:
```bash
sudo vi /etc/sysctl.d/10-ipv6-privacy.conf
```
Add one line:
```
net.ipv6.conf.all.disable_ipv6 = 1
```
and reload:
```bash
sudo sysctl -f /etc/sysctl.d/10-ipv6-privacy.conf
```

If you can figure out how to get qemu to punch through the IPv6 blocking, @efahl would really (really) like to know.

## Podman installation

Make sure you have `podman` (Ubuntu 24.04 did not ship with it):

```bash
cd ~
sudo apt -y install podman
systemctl --user enable podman.socket
systemctl --user start podman.socket
systemctl --user status podman.socket
```

## Python configuration

Create a new Python virtual environment using `venv`:

```bash
sudo apt -y install python3-venv
python3 -m venv asu-venv
. asu-venv/bin/activate
```

Test your new virtual environment. Verify that the executables are in your venv, and that the Python version is 3.11 or newer.

```bash
$ which python
/home/efahlgren/asu-venv/bin/python
$ which pip
/home/efahlgren/asu-venv/bin/pip
$ python --version
Python 3.12.3
```

Install the basic tools (`poetry` will be used to easily install all the rest of the requirements):

```bash
pip install poetry podman-compose
```

## Attended Sysupgrade installation and configuration

Get ASU and install all of its requirements:

```bash
git clone https://github.com/openwrt/asu.git
cd asu/
poetry install
```

Edit `podman-compose.yml` and make the server listen on the VM's WAN port at `0.0.0.0`:
```yaml
server:
...
ports:
- "0.0.0.0:8000:8000"
```

Set up your initial podman environment:

```bash
echo "# where to store images and json files
PUBLIC_PATH=$(pwd)/public
HOST_PATH=$(pwd)/public
# absolute path to podman socket mounted into worker containers
CONTAINER_SOCK=/run/user/$(id -u)/podman/podman.sock
# allow host cli tools access to redis database
REDIS_URL=redis://localhost:6379
# turn on the 'defaults' option on the server
ALLOW_DEFAULTS=True
" > .env
```
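ASU reads this file via pydantic-settings (`env_file=".env"` in `asu/config.py`). For illustration only, a stdlib sketch of how simple `KEY=VALUE` lines map to settings, skipping comments (an approximation; the real parsing is done by `pydantic_settings`):

```python
def parse_env(text: str) -> dict[str, str]:
    """Parse simple KEY=VALUE lines, skipping blanks and '#' comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

sample = """# where to store images and json files
PUBLIC_PATH=/home/user/asu/public
CONTAINER_SOCK=/run/user/1000/podman/podman.sock
ALLOW_DEFAULTS=True"""

settings = parse_env(sample)
print(settings["CONTAINER_SOCK"])
```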

## Running the server

Start up the server:
```bash
$ podman-compose up -d
...

$ podman logs asu_server_1
INFO: Started server process [2]
INFO: Waiting for application startup.
INFO:root:ASU server starting up
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
```

Check that the server is accessible: `ssh` into your router and fetch the front page; this should spew a pile of HTML:
```bash
asu_server=<your server's IPv4, or its name if you have it in DNS>
uclient-fetch -O - "http://$asu_server:8000/"
```

On a host with "real" curl (we need `--header`), pick a version, target, and subtarget, and compose an update query as follows. This is the mechanism by which your ASU server learns about new releases, so you need to run a similar query for each version/target/subtarget combination. (To update almost everything, you can run `python misc/update_all_targets.py`, but that's fairly wasteful of time and bandwidth.)

```bash
curl -v --header "x-update-token: foobar" "http://$asu_server:8000/api/v1/update/SNAPSHOT/x86/64"
```
Note that the value of `x-update-token` is `foobar` by default, but it can be changed in `asu/config.py` or by adding `UPDATE_TOKEN=whatever` to the `.env` file.
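The query above is just a URL plus one header; a sketch of how the pieces fit together (hypothetical helper and server address, with `foobar` being only the default `UPDATE_TOKEN`):

```python
def update_request(server: str, version: str, target: str,
                   token: str = "foobar") -> tuple[str, dict[str, str]]:
    """Compose the /api/v1/update endpoint URL and its auth header."""
    url = f"http://{server}:8000/api/v1/update/{version}/{target}"
    return url, {"x-update-token": token}

url, headers = update_request("192.168.1.10", "SNAPSHOT", "x86/64")
print(url)
# → http://192.168.1.10:8000/api/v1/update/SNAPSHOT/x86/64
```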

Selectively add more versions to the server from your router (if you have curl installed), or from your workstation using the data from the router. Here's how you'd go about it on the router:

```bash
$ eval $(ubus call system board | jsonfilter -e 'version=$.release.version' -e 'target=$.release.target')
$ echo "$version $target"
23.05.5 mediatek/mt7622
$ curl -v --header "x-update-token: foobar" "http://$asu_server:8000/api/v1/update/$version/$target"
```
(Note that you can run these `curl` queries on the ASU server itself; it has `curl`, and you just use `localhost` as the value for `$asu_server`.)
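The `ubus`/`jsonfilter` pipeline above extracts `release.version` and `release.target` from the board JSON. If you're doing this from a workstation instead of the router, the same extraction in Python looks like this (assuming the usual shape of `ubus call system board` output):

```python
import json

# Abbreviated example of `ubus call system board` output.
board_json = """{
  "release": {
    "version": "23.05.5",
    "target": "mediatek/mt7622"
  }
}"""

board = json.loads(board_json)
version = board["release"]["version"]
target = board["release"]["target"]
print(version, target)
# → 23.05.5 mediatek/mt7622
```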

Back on your ASU server, look at the worker log and see what happened:

```bash
$ podman logs asu_worker_1
...
01:18:20 default: asu.update.update(target_subtarget='x86/64', version='SNAPSHOT') (2376baed-c4bf-4d37-ba9c-4021feec54b6)
01:18:20 SNAPSHOT: Found 86 targets
01:18:20 SNAPSHOT/x86/64: Found 1 profiles
01:18:20 SNAPSHOT/x86/64: Found revision r27707-084665698b
01:18:20 default: Job OK (2376baed-c4bf-4d37-ba9c-4021feec54b6)
01:18:20 Result is kept for 500 seconds
```

You can now try a download using LuCI ASU, `auc`, or `owut`. First, point the `attendedsysupgrade` config at your server:

```bash
uci set attendedsysupgrade.server.url="http://$asu_server:8000"
uci commit
```
(To revert, simply substitute `https://sysupgrade.openwrt.org` as the `url`.)

On a snapshot installation, run an `owut` check with `--verbose` to see where it's getting data:
```bash
$ owut check -v
owut - OpenWrt Upgrade Tool
Downloaded http://$asu_server:8000/json/v1/overview.json to /tmp/owut-overview.json (16073B at 0.245 Mbps)
...
```

Or, for 23.05 and earlier, use `auc`:
```bash
$ auc -c
auc/0.3.2-1
Server: https://10.1.1.207:8000
Running: 23.05.5 r24106-10cc5fcd00 on mediatek/mt7622 (linksys,e8450-ubi)
Available: 23.05.5 r24106-10cc5fcd00
Requesting package lists...
luci-app-adblock: git-24.224.28330-dc8b3a6 -> git-24.284.61672-4b84d8e
adblock: 4.2.2-5 -> 4.2.2-6
luci-mod-network: git-24.264.56960-63ba3cb -> git-24.281.58052-a6c2279
```

## Deployment notes

If you want your server to remain active after you log out, you must enable "linger" via `loginctl`:
```bash
loginctl enable-linger
```