Compare commits


1 Commit

Author SHA1 Message Date
Jonathan Dahan d107193958 Try using the proxy, failure logging into webdav
2 years ago

.gitignore vendored

@ -1,3 +1,4 @@
/secrets/
/data/
env.production
secrets/
data/
.redo
*.tmp

@ -0,0 +1 @@
DOMAIN=localhost

@ -1,15 +0,0 @@
# copy this to .env and it will be sourced by the appropriate services
# domain your services will be running on
DOMAIN=localhost
# admin user for auth
ADMIN_USER=
ADMIN_PASS=
# used for sending notifications and reset passwords
# only supports smtp+starttls
SMTP_ADDR=
SMTP_PORT=587
SMTP_USER=
SMTP_PASS=

@ -2,151 +2,30 @@
Experiment in digital autonomy
Latest code is hosted on https://git.woodbine.nyc/micro/woodbine.nyc
Hosted on https://git.woodbine.nyc/micro/woodbine.nyc
If you are new to running your own websites, welcome!
## running
Note that a "service" is a fuzzy name for software that is expected to be always running.
docker-compose --env-file env.production \
--file services/caddy.yaml \
--file services/zitadel.yaml \
up --build
A simple web server (`python3 -m http.server`) could be a service, as could something like Gmail.
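The throwaway server mentioned above really is enough to count as a service; a self-contained sketch (assumes `python3` and `curl` on the host, and that port 8000 is free):

```shell
# A minimal "service": serve a directory over HTTP until stopped.
mkdir -p /tmp/demo-site
echo 'hello from a tiny service' > /tmp/demo-site/index.html
python3 -m http.server 8000 --directory /tmp/demo-site &
SERVER_PID=$!
sleep 1
# Fetch the page, then stop the "service".
RESPONSE=$(curl -s http://localhost:8000/index.html)
echo "$RESPONSE"
kill $SERVER_PID
```

The services in this repo are the same idea, just wrapped in compose files so they restart themselves and find each other by name.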
## port forwarding
## Goals
echo 'net.ipv4.ip_unprivileged_port_start=80' | sudo tee -a /etc/sysctl.conf
sudo sysctl -w net.ipv4.ip_unprivileged_port_start=80
Understandable
## beta release
- a person should be able to adapt this to their community while learning as few new concepts and technologies as possible
- the person who set it up should not be required to maintain the services
Resilient
- services should work even when other parts of the web are not accessible
Lean
- we prefer lightweight software, which usually requires less long-term maintenance
## Decisions
There are many other kinds of digital autonomy, but most people are used to the web.
We hope to share our decision making here, so you can follow our thought process.
### Decisions made for you
These are required for anyone who wants to deploy **web-based** services.
#### Auth
We need a way for people to either register an account or sign in with an external account to use the services.
After trying authelia, zitadel, authentik, and keycloak, we got the furthest with zitadel.
#### Web
To host a webpage, you need some software that listens for http requests. We chose Caddy.
If you would like to edit the webpage, either change the files in `./data/web/site/` directly, or you can connect via WebDAV and edit the file remotely via https://web.localhost.
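A remote edit over WebDAV is a PUT against the `web.` subdomain; a sketch, where the hostname comes from `DOMAIN` and the credentials are whatever auth you have put in front of WebDAV (single sign-on for it is still on the roadmap, so `WEBDAV_USER`/`WEBDAV_PASS` here are hypothetical):

```shell
# Where a WebDAV edit of the homepage would be sent (DOMAIN=localhost here).
DOMAIN=localhost
url="https://web.${DOMAIN}/index.html"
echo "$url"
# The actual upload is a WebDAV PUT, e.g. (hypothetical credentials,
# --insecure because the local cert is caddy's internal CA):
#   curl --insecure --user "$WEBDAV_USER:$WEBDAV_PASS" -T index.html "$url"
```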
#### Backup
If you will be helping a community, it's important to have backup and restore. We have two helper services, `backup-files` and `backup-database`.
These use duplicity to back up to a backblaze instance, so you will need to set that up beforehand.
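The duplicity target URL is assembled from those secret files the same way the `backup-files` script does it; a sketch using dummy values in a scratch directory (real runs read from `./secrets/backup/`):

```shell
# Build the b2:// destination from secret files (dummy values here).
secrets=/tmp/backup-demo
mkdir -p "$secrets"
echo 'key-id-123' > "$secrets/application-key-id"
echo 'key-abc'    > "$secrets/application-key"
read B2_APPLICATION_KEY_ID < "$secrets/application-key-id"
read B2_APPLICATION_KEY    < "$secrets/application-key"
DOMAIN=localhost
BUCKET_NAME="${DOMAIN}-backup"                 # bucket is named after the domain
DESTINATION="b2://${B2_APPLICATION_KEY_ID}:${B2_APPLICATION_KEY}@${BUCKET_NAME}"
echo "$DESTINATION"
```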
#### Secrets
We have two helper services for making sure secrets exist (`check-secrets`), or generating unique secrets for other services that need them (`generate-secrets`).
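What `generate-secrets` does can be sketched against a scratch directory (the real helper walks `/secrets/*/*/*` inside its container; assumes `openssl` on the host):

```shell
# Fill every empty secret file under the tree with random hex,
# leaving secrets that already have a value untouched.
mkdir -p /tmp/secrets-demo/auth/zitadel
: > /tmp/secrets-demo/auth/zitadel/MASTER_KEY   # empty placeholder file
for secret in /tmp/secrets-demo/*/*/*; do
  test -s "$secret" && continue                 # non-empty: keep as-is
  openssl rand -hex 16 > "$secret"              # empty: generate a value
done
```

`check-secrets` is the read-only counterpart: it only fails if any mounted secret is still empty.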
---
## getting started
### setup
Make a backblaze B2 account for backups. Add the secrets to ./secrets/backup/.
Copy `env.template`, fill it out, and make sure to pass it in the next command
### running
Helper scripts can be found in [the scripts directory](./scripts)
To start
./scripts/up
To stop, you can press ctrl+c, or in another terminal run
./scripts/down
To generate secrets for all services ahead-of-time
./scripts/generate-secrets
### port forwarding
The caddy service expects to be able to bind to ports 80 and 443
One simple way is to allow unprivileged users access to these low ports
If you are on linux, you can run
$ sudo sysctl -w net.ipv4.ip_unprivileged_port_start=80
$ echo 'net.ipv4.ip_unprivileged_port_start=80' | sudo tee -a /etc/sysctl.conf
The first command will set privileges until reboot. The second will make those privileges permanent.
If you are on macOS using podman, you will want to run those commands inside the linux virtual machine
$ podman machine ssh
core@localhost:~$ echo 'net.ipv4.ip_unprivileged_port_start=80' | sudo tee -a /etc/sysctl.conf
core@localhost:~$ sudo sysctl -w net.ipv4.ip_unprivileged_port_start=80
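You can read the current value back from `/proc` to confirm the setting took (Linux only; the default floor is 1024, and the sysctl above lowers it to 80):

```shell
# Read the floor for unprivileged port binding and report whether
# caddy could bind :80 without extra privileges.
floor=$(cat /proc/sys/net/ipv4/ip_unprivileged_port_start)
if [ "$floor" -le 80 ]; then
  echo "unprivileged bind to :80 allowed (floor is $floor)"
else
  echo "unprivileged bind to :80 blocked (floor is $floor)"
fi
```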
---
## design
All the services are defined by docker compose files.
We provide `backup-files`, `backup-database`, `check-secrets`, and `generate-secrets` helper services.
We have configured Caddy to import all files found in /etc/caddy.d/, so to add a new service you will need to make a small `Proxyfile` telling caddy which subdomain forwards to which port.
See [the services readme](./services/readme.md) for a guide on adding a new service.
---
## roadmap
### alpha
- [ ] decide on single postgres instance or multiple
- [ ] postgres backup (duplicity)
- [ ] single sign-on for webdav (one user per folder)
- [ ] single sign-on for one more service
- [x] identity provider (zitadel)
- [x] file backup (duplicity)
- [x] reverse proxy (caddy)
- [x] personal home pages (caddy-webdav)
- [x] setup notifications via smtp
### beta
- [ ] file restore
- [ ] postgres restore
- [x] caddy for homepage
- [x] webdav for personal home pages
- [ ] backup using duplicity uploaded to backblaze b2
- [ ] restore using duplicity downloaded from backblaze b2
- [ ] zitadel sso
- [ ] wiki
- [ ] matrix server (dendrite)
- [ ] mail server (stalwart or maddy)
- [ ] mailing list (listmonk)
- [ ] code forge (gitea or forgejo)
### 0.1
- [ ] only expose 443, 587, 993
- [ ] running on beta.woodbine.nyc
- [ ] audit on secrets management
- [ ] audit on mail server
- [ ] audit on general architecture
- [ ] dendrite matrix server
- [ ] gitea
## credits

@ -1,8 +0,0 @@
podman compose --env-file ${ENV_FILE:-.env} \
--file services/secrets.yaml \
--file services/backup.yaml \
--file services/proxy.yaml \
--file services/auth.yaml \
--file services/web.yaml \
--file services/git.yaml \
down --volumes

@ -1,8 +0,0 @@
podman compose --env-file ${ENV_FILE:-.env} \
--file services/secrets.yaml \
--file services/backup.yaml \
--file services/proxy.yaml \
--file services/auth.yaml \
--file services/web.yaml \
--file services/git.yaml \
exec "$@"

@ -1,4 +0,0 @@
echo generating zitadel secrets; {
openssl rand -hex 16 | tr -d '\n' >! secrets/auth/zitadel/MASTER_KEY
openssl rand -hex 32 | tr -d '\n' >! secrets/auth/zitadel/STORAGE_PASSWORD
}

@ -1,8 +0,0 @@
podman compose --env-file ${ENV_FILE:-.env} \
--file services/secrets.yaml \
--file services/backup.yaml \
--file services/proxy.yaml \
--file services/auth.yaml \
--file services/web.yaml \
--file services/git.yaml \
ps

@ -1,8 +0,0 @@
podman compose --env-file ${ENV_FILE:-.env} \
--file services/secrets.yaml \
--file services/backup.yaml \
--file services/proxy.yaml \
--file services/auth.yaml \
--file services/web.yaml \
--file services/git.yaml \
pull

@ -1,8 +0,0 @@
podman compose --env-file ${ENV_FILE:-.env} \
--file services/secrets.yaml \
--file services/backup.yaml \
--file services/proxy.yaml \
--file services/auth.yaml \
--file services/web.yaml \
--file services/git.yaml \
run "$@"

@ -1,8 +0,0 @@
podman compose --env-file ${ENV_FILE:-.env} \
--file services/secrets.yaml \
--file services/backup.yaml \
--file services/proxy.yaml \
--file services/auth.yaml \
--file services/web.yaml \
--file services/git.yaml \
up --build

@ -1,3 +1 @@
Do not check in anything in this directory.
See ../services/secrets.yaml for how to check that secrets are defined, or to generate secrets on start.

@ -1,69 +0,0 @@
secrets:
MASTER_KEY:
file: ../secrets/auth/zitadel/MASTER_KEY
services:
backup:
volumes:
- ../data/auth:/mnt/backup/src/auth:ro
generate-secrets:
volumes:
- ../secrets/auth/zitadel/MASTER_KEY:/secrets/auth/zitadel/MASTER_KEY
zitadel:
restart: 'unless-stopped'
image: 'ghcr.io/zitadel/zitadel:v2.48.3'
environment:
ZITADEL_DATABASE_COCKROACH_HOST: crdb
ZITADEL_EXTERNALSECURE: true
ZITADEL_EXTERNALDOMAIN: auth.${DOMAIN}
ZITADEL_EXTERNALPORT: 443
ZITADEL_WEBAUTHN_NAME: ${DOMAIN}
ZITADEL_FIRSTINSTANCE_ORG_NAME: basement
ZITADEL_FIRSTINSTANCE_ORG_HUMAN_USERNAME: ${ADMIN_USER}
ZITADEL_FIRSTINSTANCE_ORG_HUMAN_PASSWORD: ${ADMIN_PASS}
ZITADEL_DEFAULTINSTANCE_SMTPCONFIGURATION_SMTP_HOST: "${SMTP_ADDR}:${SMTP_PORT}"
ZITADEL_DEFAULTINSTANCE_SMTPCONFIGURATION_SMTP_USER: ${SMTP_USER}
ZITADEL_DEFAULTINSTANCE_SMTPCONFIGURATION_SMTP_PASSWORD: ${SMTP_PASS}
ZITADEL_DEFAULTINSTANCE_SMTPCONFIGURATION_SMTP_SSL: true
ZITADEL_DEFAULTINSTANCE_SMTPCONFIGURATION_FROM: basement@mail.${DOMAIN}
ZITADEL_DEFAULTINSTANCE_SMTPCONFIGURATION_FROMNAME: basement
ZITADEL_DEFAULTINSTANCE_SMTPCONFIGURATION_SMTP_REPLYTOADDRESS: basement@mail.${DOMAIN}
secrets:
- MASTER_KEY
command: "start-from-init --masterkeyFile /run/secrets/MASTER_KEY --tlsMode external"
depends_on:
generate-secrets:
condition: 'service_completed_successfully'
caddy:
condition: 'service_healthy'
crdb:
condition: 'service_healthy'
ports:
- '8080:8080'
crdb:
restart: unless-stopped
image: 'cockroachdb/cockroach:latest-v23.1'
depends_on:
generate-secrets:
condition: 'service_completed_successfully'
command: "start-single-node --insecure --store=path=/cockroach/cockroach-data,size=20%"
healthcheck:
test: ["CMD", "curl", "--fail", "http://localhost:8080/health?ready=1"]
interval: '10s'
timeout: '30s'
retries: 5
start_period: '20s'
ports:
- '9090:8080'
- '26257:26257'
volumes:
- ../data/auth/crdb/data:/cockroach/cockroach-data:rw
caddy:
volumes:
- ./auth/Proxyfile:/etc/caddy.d/zitadel:ro

@ -1,4 +0,0 @@
auth.{$DOMAIN}:443 {
reverse_proxy zitadel:8080
tls internal
}

@ -1,42 +0,0 @@
secrets:
B2_APPLICATION_KEY:
file: ../secrets/backup/duplicity/B2_APPLICATION_KEY
B2_APPLICATION_KEY_ID:
file: ../secrets/backup/duplicity/B2_APPLICATION_KEY_ID
BUCKET_NAME:
file: ../secrets/backup/duplicity/BUCKET_NAME
PASSPHRASE:
file: ../secrets/backup/duplicity/PASSPHRASE
services:
backup:
image: ghcr.io/tecnativa/docker-duplicity:3.3.1
restart: unless-stopped
depends_on:
generate-secrets:
condition: 'service_completed_successfully'
secrets: [B2_APPLICATION_KEY, B2_APPLICATION_KEY_ID, BUCKET_NAME, PASSPHRASE]
environment:
HOSTNAME: ${DOMAIN}
TZ: America/New_York
volumes:
- ./backup/backup-files:/backup-files:ro
entrypoint: ["/bin/sh", "/backup-files"]
generate-secrets:
volumes:
- ../secrets/backup/duplicity/BUCKET_NAME:/secrets/backup/duplicity/BUCKET_NAME
- ../secrets/backup/duplicity/PASSPHRASE:/secrets/backup/duplicity/PASSPHRASE
# duplicity-postgres:
# image: tecnativa/docker-duplicity-postgres:latest
# restart: unless-stopped
# depends_on: [secrets]
# secrets: [B2_APPLICATION_KEY, B2_APPLICATION_KEY_ID, BUCKET_NAME, PASSPHRASE]
# environment:
# HOSTNAME: ${DOMAIN}
# TZ: America/New_York
# volumes:
# - ./backup/backup-databases:/backup-databases:ro
# entrypoint: ["/bin/sh", "/backup-databases"]

@ -1,14 +0,0 @@
read B2_APPLICATION_KEY_ID < /run/secrets/B2_APPLICATION_KEY_ID
read B2_APPLICATION_KEY < /run/secrets/B2_APPLICATION_KEY
read BUCKET_NAME < /run/secrets/BUCKET_NAME
export DST=b2://${B2_APPLICATION_KEY_ID}:${B2_APPLICATION_KEY}@${BUCKET_NAME}
read PASSPHRASE < /run/secrets/PASSPHRASE
export PASSPHRASE
for environment in /backup/*; do
. $environment
export PGHOST PGPASSWORD PGUSER DBS_TO_INCLUDE DBS_TO_EXCLUDE
/usr/local/bin/entrypoint
unset PGHOST PGPASSWORD PGUSER DBS_TO_INCLUDE DBS_TO_EXCLUDE
done

@ -1,9 +0,0 @@
read B2_APPLICATION_KEY_ID < /run/secrets/B2_APPLICATION_KEY_ID
read B2_APPLICATION_KEY < /run/secrets/B2_APPLICATION_KEY
read BUCKET_NAME < /run/secrets/BUCKET_NAME
export DST=b2://${B2_APPLICATION_KEY_ID}:${B2_APPLICATION_KEY}@${BUCKET_NAME}
read PASSPHRASE < /run/secrets/PASSPHRASE
export PASSPHRASE
/usr/local/bin/entrypoint

@ -0,0 +1,14 @@
. ../../env.production
service=$(basename $PWD)
secrets="../../secrets/$service"
read B2_APPLICATION_KEY_ID < $secrets/application-key-id
read B2_APPLICATION_KEY < $secrets/application-key
export BUCKET_NAME=${DOMAIN}-backup
export DESTINATION=b2://${B2_APPLICATION_KEY_ID}:${B2_APPLICATION_KEY}@${BUCKET_NAME}
read PASSPHRASE < $secrets/passphrase
env PASSPHRASE=$PASSPHRASE duplicity backup ../../data $DESTINATION >&2
env PASSPHRASE=$PASSPHRASE duplicity remove-older-than 28D $DESTINATION >&2

@ -0,0 +1,32 @@
version: "3.7"
services:
caddy:
image: lucaslorentz/caddy-docker-proxy:ci-alpine
restart: unless-stopped
ports:
- "80:80"
- "443:443"
- "443:443/udp"
privileged: true
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- ./caddy/Caddyfile:/etc/caddy/Caddyfile
- ../data/caddy/site:/site
- ../data/caddy/data:/data
- caddy_config:/config
environment:
- DOMAIN
- CADDY_INGRESS_NETWORKS=caddy
labels:
caddy: ${DOMAIN}
caddy.file_server.root: /site
networks:
- caddy
networks:
caddy:
external: true
volumes:
caddy_config:

@ -1,48 +0,0 @@
secrets:
DB_PASSWD:
file: ../secrets/git/gitea/DB_PASSWD
services:
caddy:
volumes:
- ./git/Proxyfile:/etc/caddy.d/git
backup:
volumes:
- ../data/git:/mnt/backup/src/git
gitea:
image: gitea/gitea:1.21.3-rootless
secrets: [ DB_PASSWD ]
environment:
GITEA__database__DB_TYPE: postgres
GITEA__database__HOST: "db:5432"
GITEA__database__NAME: gitea
GITEA__database__USER: gitea
GITEA__database__PASSWD__FILE: /run/secrets/DB_PASSWD
GITEA__mailer__ENABLED: true
GITEA__mailer__FROM: gitea@mail.${DOMAIN}
GITEA__mailer__PROTOCOL: smtp+starttls
GITEA__mailer__SMTP_ADDR: ${SMTP_ADDR}
GITEA__mailer__SMTP_PORT: ${SMTP_PORT}
GITEA__mailer__USER: ${SMTP_USER}
GITEA__mailer__PASSWD: ${SMTP_PASS}
restart: unless-stopped
volumes:
- ../data/git/gitea/data:/data
ports:
- 3000:3000
db:
image: postgres:16.1-alpine
secrets: [ DB_PASSWD ]
environment:
POSTGRES_USER: gitea
POSTGRES_PASSWORD_FILE: /run/secrets/DB_PASSWD
POSTGRES_DB: gitea
restart: unless-stopped
volumes:
- db_data:/var/lib/postgresql/data
expose:
- 5432
volumes:
db_data:

@ -1,3 +0,0 @@
git.{$DOMAIN} {
reverse_proxy gitea:3000
}

@ -1,54 +0,0 @@
secrets:
SMTP_PASSWORD:
file: ../secrets/mail/SMTP_PASSWORD
services:
generate-secrets:
volumes:
- ../secrets/mail/maddy/SMTP_PASSWORD:/secrets/mail/maddy/SMTP_PASSWORD
backup:
volumes:
- ../data/mail:/mnt/backup/src/mail:ro
caddy:
volumes:
- ./mail/Proxyfile:/etc/caddy.d/mail:ro
maddy:
image: foxcpp/maddy:0.7
secrets: [SMTP_PASSWORD]
restart: unless-stopped
depends_on:
generate-secrets:
condition: 'service_completed_successfully'
environment:
- MADDY_HOSTNAME=mx.mail.${DOMAIN}
- MADDY_DOMAIN=mail.${DOMAIN}
volumes:
- ../data/mail/maddy:/data
# TODO: get from caddy?
#- ../secrets/tls/fullchain.pem:/data/tls/fullchain.pem:ro
#- ../secrets/tls/privkey.pem:/data/tls/privkey.pem:ro
ports:
- 25:25
- 143:143
- 587:587
- 993:993
roundcube:
image: roundcube/roundcubemail:1.6.5-fpm-alpine
environment:
ROUNDCUBEMAIL_DEFAULT_HOST: ssl://mx.mail.${DOMAIN}
ROUNDCUBEMAIL_DEFAULT_PORT: 993
ROUNDCUBEMAIL_SMTP_SERVER: tls://mx.mail.${DOMAIN}
ROUNDCUBEMAIL_SMTP_PORT: 587
ROUNDCUBEMAIL_DB_TYPE: sqlite
volumes:
- ../data/mail/roundcube/db:/var/roundcube/db
ports:
- 9002:80
check-secrets:
secrets:
- SMTP_PASSWORD

@ -1,4 +0,0 @@
mail.{$DOMAIN} {
reverse_proxy roundcube:9002
}

@ -1,25 +0,0 @@
services:
caddy:
image: caddy
restart: unless-stopped
ports:
- "80:80"
- "443:443"
- "443:443/udp"
volumes:
- ./proxy/Caddyfile:/etc/caddy/Caddyfile
- ../data/proxy/caddy/site:/site
- ../data/proxy/caddy/data:/data
- ../data/proxy/caddy/config:/config
environment:
- DOMAIN
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost"]
interval: '10s'
timeout: '30s'
retries: 5
start_period: '20s'
backup:
volumes:
- ../data/proxy:/mnt/backup/src/proxy:ro

@ -16,7 +16,7 @@ we have a backup script that uses duplicity; this should be moved into a container
caddy is the web server, and handles https certificates, and proxying to all the services.
#### [Zitadel](ihttps://zitadel.com/docs/self-hosting/deploy/overview)
#### [Zitadel](https://zitadel.com/docs) **WIP**
zitadel lets you have a single username and password to sign on to all your services.
@ -31,48 +31,14 @@ without having to sync anything.
There are four things to think about when adding a service:
1. How to enable sign-on?
1. How to enable sign-in with zitadel?
Look at https://www.authelia.com/integration/openid-connect/introduction/ for integration guides.
Generally, zitadel has some cli commands that we have put in scripts in the zitadel folder.
2. How to expose as a subdomain?
2. How to expose as a subdomain in caddy?
Add a volume mount of your reverse proxy config to your compose file.
# in the services: part of your compose file
caddy:
volumes:
- ./some-service/Proxyfile:/etc/caddy.d/some-service
# Proxyfile looks something like
someservice.{$DOMAIN} {
reverse_proxy someservice:4321
}
You will want to make a Caddyfile, which will get mounted by the Caddy compose file.
3. How will this be backed up and restored?
For plain files, add the appropriate volume mount like so:
# in the services: part of your compose file
backup:
volumes:
- ../data/some-service:/mnt/backup/src/some-service:ro
This will be backed up according to the plan in [the backup service](./backup.yaml)
For postgres databases, we are figuring out the best way
4. How do we manage secrets?
If your service requires secrets, you can use docker secrets, and have them generated on startup as follows:
# in the services: part of your compose file
some-service:
depends_on:
- secrets
secrets:
volumes:
- ../secrets/some-service/SECRET_TO_INITIALIZE_IF_EMPTY:/secrets/some-service/SECRET_TO_INITIALIZE_IF_EMPTY
We back up all files in the data/ directory, but if your service interacts with a database like postgres, it will need additional work.

@ -1,14 +0,0 @@
services:
generate-secrets:
image: alpine/openssl
restart: no
volumes:
- ./secrets/generate-secrets:/generate-secrets:ro
entrypoint: ["/generate-secrets"]
check-secrets:
image: alpine
restart: no
volumes:
- ./secrets/check-secrets:/check-secrets:ro
entrypoint: ["/check-secrets"]

@ -1,14 +0,0 @@
#!/usr/bin/env sh
# this throws an error if any secrets are empty
set -o errexit
set -o nounset
set -o pipefail
for secret in /run/secrets/* ; do
if [ ! -s "$secret" ]; then
>&2 echo "ERROR: empty secret: $(basename $secret)"
exit 1
fi
done

@ -1,13 +0,0 @@
#!/usr/bin/env sh
# this generates a random hex string (64 random bytes, i.e. 128 hex chars, by default) for all empty secret files in /secrets/*/*/*
set -o errexit
set -o nounset
set -o pipefail
for secret in /secrets/*/*/* ; do
test -d "$secret" && rmdir "$secret"
test -s "$secret" && continue
openssl rand -hex ${2:-64} > $secret
done

@ -1,3 +0,0 @@
auth.{$DOMAIN} {
reverse_proxy authelia:9091
}

@ -1 +0,0 @@
authelia is our single sign-on

@ -1,89 +0,0 @@
version: "3.8"
services:
postgresql:
image: docker.io/library/postgres:12-alpine
restart: unless-stopped
healthcheck:
test: ["CMD-SHELL", "pg_isready -d $${POSTGRES_DB} -U $${POSTGRES_USER}"]
start_period: 20s
interval: 30s
retries: 5
timeout: 5s
volumes:
- database:/var/lib/postgresql/data
environment:
POSTGRES_PASSWORD: ${PG_PASS:?database password required}
POSTGRES_USER: ${PG_USER:-authentik}
POSTGRES_DB: ${PG_DB:-authentik}
redis:
image: docker.io/library/redis:alpine
command: --save 60 1 --loglevel warning
restart: unless-stopped
healthcheck:
test: ["CMD-SHELL", "redis-cli ping | grep PONG"]
start_period: 20s
interval: 30s
retries: 5
timeout: 3s
volumes:
- redis:/data
authentik:
image: ${AUTHENTIK_IMAGE:-ghcr.io/goauthentik/server}:${AUTHENTIK_TAG:-2023.10.2}
restart: unless-stopped
command: server
environment:
AUTHENTIK_REDIS__HOST: redis
AUTHENTIK_POSTGRESQL__HOST: postgresql
AUTHENTIK_POSTGRESQL__USER: ${PG_USER:-authentik}
AUTHENTIK_POSTGRESQL__NAME: ${PG_DB:-authentik}
AUTHENTIK_POSTGRESQL__PASSWORD: ${PG_PASS}
volumes:
- ../data/authentik/media:/media
- ../data/authentik/custom-templates:/templates
ports:
- "${COMPOSE_PORT_HTTP:-9000}:9000"
- "${COMPOSE_PORT_HTTPS:-9443}:9443"
depends_on:
- postgresql
- redis
worker:
image: ${AUTHENTIK_IMAGE:-ghcr.io/goauthentik/server}:${AUTHENTIK_TAG:-2023.10.2}
restart: unless-stopped
command: worker
environment:
AUTHENTIK_REDIS__HOST: redis
AUTHENTIK_POSTGRESQL__HOST: postgresql
AUTHENTIK_POSTGRESQL__USER: ${PG_USER:-authentik}
AUTHENTIK_POSTGRESQL__NAME: ${PG_DB:-authentik}
AUTHENTIK_POSTGRESQL__PASSWORD: ${PG_PASS}
# `user: root` and the docker socket volume are optional.
# See more for the docker socket integration here:
# https://goauthentik.io/docs/outposts/integrations/docker
# Removing `user: root` also prevents the worker from fixing the permissions
# on the mounted folders, so when removing this make sure the folders have the correct UID/GID
# (1000:1000 by default)
user: root
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ../data/authentik/media:/media
- ../data/authentik/custom-templates:/templates
- ../secrets/authentik/certs:/certs
depends_on:
- postgresql
- redis
# setup a reverse proxy for caddy
caddy:
volumes:
- ./authentik/Proxyfile:/etc/caddy.d/authentik:ro
# backup the zitadel folder
backup:
volumes:
- ../data/authentik:/mnt/backup/src/authentik:ro
volumes:
database:
driver: local
redis:
driver: local

@ -1,3 +0,0 @@
auth.{$DOMAIN} {
reverse_proxy authentik:9000
}

@ -1,25 +1,38 @@
version: "3.7"
services:
web:
depends_on:
- caddy
build:
context: ./web
dockerfile: Containerfile
restart: unless-stopped
depends_on:
- caddy
privileged: true
ports:
- "8081:80"
- "4431:443"
- "4431:443/udp"
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- ./web/Caddyfile:/etc/caddy/Caddyfile
- ../data/web/site:/site
- ../data/web/data:/data
- ../data/web/config:/config
- caddy_config:/config
environment:
- DOMAIN
networks:
- caddy
labels:
caddy: web.${DOMAIN}
# caddy.reverse_proxy: "{{upstreams 4431}}"
caddy.reverse_proxy: services-web-1:4431
#security_opt:
# - label=disable
networks:
caddy:
volumes:
- ./web/Proxyfile:/etc/caddy.d/web:ro
external: true
backup:
volumes:
- ../data/web:/mnt/backup/src/web:ro
volumes:
caddy_config:

@ -5,7 +5,7 @@ root * /site
@notget not method GET
route @notget {
webdav
webdav
}
file_server browse

@ -1,8 +1,8 @@
FROM caddy:builder-alpine AS builder
FROM caddy:2.7.5-builder-alpine AS builder
RUN xcaddy build \
--with github.com/mholt/caddy-webdav
FROM caddy:alpine
FROM caddy:latest
COPY --from=builder /usr/bin/caddy /usr/bin/caddy

@ -1,9 +0,0 @@
web.{$DOMAIN} {
# forward_auth authelia:9091 {
# uri /api/verify?rd=https://auth.{$DOMAIN}/
# copy_headers Remote-User Remote-Groups Remote-Name Remote-Email
# }
reverse_proxy web:4431
}

@ -0,0 +1,44 @@
version: '3.8'
services:
zitadel:
restart: 'always'
networks:
- zitadel
- caddy
image: 'ghcr.io/zitadel/zitadel:latest'
command: 'start-from-init --masterkey "6cd52ccbc4da912319f0fdc016d68575dd391bd932ebdc045c89b2dce9e90315" --tlsMode disabled'
environment:
- 'ZITADEL_DATABASE_COCKROACH_HOST=crdb'
- 'ZITADEL_EXTERNALSECURE=false'
depends_on:
crdb:
condition: 'service_healthy'
ports:
- '8123:8080'
labels:
- caddy: login.${DOMAIN}
- caddy.reverse_proxy: "{{upstreams}}"
crdb:
restart: 'always'
networks:
- zitadel
- caddy
image: 'cockroachdb/cockroach:v22.2.2'
command: 'start-single-node --insecure'
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8080/health?ready=1"]
interval: '10s'
timeout: '30s'
retries: 5
start_period: '20s'
ports:
- '9090:8080'
- '26257:26257'
networks:
caddy:
external: true
zitadel: