Compare commits

1 Commits

Author SHA1 Message Date
southerntofu 064f36f368 début de support de prosody (serveur jabber) 2020-04-25 18:54:24 +02:00
40 changed files with 750 additions and 234 deletions

3
.gitmodules vendored
View File

@ -1,3 +0,0 @@
[submodule "roles"]
path = roles
url = https://codeberg.org/southerntofu/ansible-selfhosted

103
README.md
View File

@ -1,5 +1,3 @@
**WARNING:** Everything below is no longer up to date. The documentation for our recipes is available [here](https://codeberg.org/southerntofu/ansible-selfhosted).
Welcome to the repository holding the system configuration of ~fr!
We write [Ansible](https://fr.wikipedia.org/wiki/Ansible_(logiciel)) recipes describing the steps needed to configure our server, based on Debian 10 Buster (stable). For now, we manage:
@ -9,104 +7,3 @@ On écrit des recettes [Ansible](https://fr.wikipedia.org/wiki/Ansible_(logiciel
- [.onion](https://fr.wikipedia.org/wiki/.onion) addresses for the personal pages
We provide a [user guide](docs/utilisateurice.md) and an [admin guide](docs/administrateurice.md). We try to keep our system understandable and reproducible. If something is unclear, or does not work for you, that is a bug, so please report it.
Our recipes aim to configure an entire shared-hosting system from a single central configuration file: [config.yml](https://tildegit.org/tilde-fr/infra/src/branch/master/config.yml). By following [declarative](https://fr.wikipedia.org/wiki/Programmation_d%C3%A9clarative) principles, we hope to end up with robust, interchangeable recipes.
**WARNING:** Everything below is no longer up to date. The documentation for our recipes is available [here](https://codeberg.org/southerntofu/ansible-selfhosted).
# Declarative configuration
Setting up a declarative configuration system forces us to focus on the features and services we want to provide (which are common to many hosting providers), without worrying about exactly how to implement them.
More precisely, this approach gives us room to experiment with several ways of implementing a feature. For example, our users' personal pages can be served by different web servers, so why should we keep rewriting recipes tailored specifically to our system and our favourite web server?
We take the opposite approach: we describe the features we want in the configuration, and leave different roles free to implement the corresponding configuration interface and put it into practice. This would, for example, let us swap one web server for another in a way that is entirely transparent to the sysadmins.
Moreover, this approach makes new implementations easier. Every piece of software has its own quirks, and writing generic recipes for a service can be complex. Once the requirements have been studied and scoped, standardising a configuration interface gives future implementations a clear overview of the concepts and features involved.
In short, our system is currently based on Ansible recipes, but the goal is to design a system that lets other implementations stay compatible with our configurations, whether they are written in bash, Nix, Guix... The Yunohost project is currently considering a gradual migration [towards a declarative model](https://github.com/YunoHost/issues/issues/1614).
# Services
The architecture of our recipes relies on the distinction between **services** and **roles**. A service is a reference that defines a configuration interface. A role is a specific implementation of a service.
For example, `webserver` is a service, implemented by an apache, nginx, or lighttpd role. To switch from apache to nginx, you can simply run:
```
$ rm roles/webserver
$ ln -s nginx roles/webserver
```
## Package managers
Package managers are roles in charge of installing additional packages on the system. Among them are [npm](https://www.npmjs.com/), [cargo](https://doc.rust-lang.org/cargo/) and [apt](https://en.wikipedia.org/wiki/APT_(Debian)).
Interface:
```
packages:
x: [ packageA, packageB ]
y: [ packageC, packageD ]
```
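For instance, the `config.yml` further down in this changeset declares the following concrete block (Debian list abridged):
```
packages:
  debian: [ subversion, mercurial, htop, tmux, vim ]
  rust: [ lsd ]
  custom: [ zola, ttbp ]
```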
Implemented package managers:
- [x] apt (called `debian`)
- [x] cargo (called `rust`)
- [ ] npm
In addition, a `custom` package manager lets you define bespoke recipes for software that is not packaged any other way (a sketch follows below).
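As a rough sketch (the recipe path and the `example` package are assumptions for illustration, based on the `roles/.custom` tasks added in this changeset), each custom package gets its own `main.yml` whose job is to leave a binary of the same name in `/usr/local/bin`, since that is what `.custom` checks before including the recipe:
```
# hypothetical roles/.custom/tasks/example/main.yml
- name: custom-example-setup
  copy:
    src: ../../files/bin/example   # prebuilt binary shipped with the role
    dest: /usr/local/bin/example   # must match the name declared under packages.custom
    mode: 0755
```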
Implemented features:
- [x] package lists
- [ ] third-party sources
- [ ] build parameters
Currently, every package manager in roles/ has a name starting with `.`. In the future, this prefix will probably be renamed to `pkg-`, so that a package manager `x` lives in `roles/pkg-x`.
## Webserver
The web server is a service that serves web pages and applications.
Interface:
```
webserver:
vhosts:
- hostname: example.org
aliases: [ www.example.org ]
root: /var/www/example.org
- hostname: thepiratebay.example.org
proxy: [ https://thepiratebay.org ]
```
On top of this configuration interface, the webserver service can be called by other services with additional parameters. For example, [well-known](https://en.wikipedia.org/wiki/Well-known_URIs) pages are configured directly by other services, like this:
```
webserver:
vhosts:
- hostname: example.org
well-known:
regex: ^/~(.+?)(/.*)?$
alias: /home/$1/public/html/tilde/$2
autoindex: on|off
```
This is how personal pages are enabled on the server's main domain. (TODO: this is not the case yet!)
# Pooling recipes
The goal of this project is to pool system administration recipes in order to encourage good practices and make self-hosted services easier to set up. In that respect, our project is similar to [Yunohost](https://yunohost.org/), [AlternC](https://alternc.com/), [ISPConfig](https://www.ispconfig.org/) and [Freedombone](https://freedombone.net/).
However, we make radically different technical choices in order to explore what declarative configuration could bring to self-hosting solutions.
# Writing recipes
For now, all recipes live in this repository. It is nevertheless possible to use third-party recipes through git submodules. This raises security questions, which have been studied thoroughly by the [Guix](https://guix.gnu.org/blog/2020/securing-updates/) project.
Note: for the moment, these recipes are not structured so that individual parts can easily be swapped out. That will take some extra work, but it is moving forward.
# Security
**WARNING:** the recipes presented here are developed by amateurs and come with no guarantee of reliability or security. Do not trust us; come join the project and help improve it instead!

View File

@ -1,102 +1,15 @@
hostname: "fr.tild3.org"
contact: "root@fr.tild3.org"
services:
- ".common"
- "webserver"
- "peering"
- "unix_users"
- "chatbridge"
#- "mucbridge"
- "simpleweb_peertube"
webserver:
vhosts:
- host: "fr.tild3.org"
template: "zola"
git: "https://tildegit.org/tilde-fr/site"
hostname: fr.tild3.org
roles: [ webserver ]
irc_announce:
chan: "#fr"
# TODO: reimplement peering with new recipes
simpleweb_peertube:
vhosts:
- host: "tube.fr.tild3.org"
accounts: [ "submedia@kolektiva.media", "contrainfo@kolektiva.media", "enough14@kolektiva.media" ]
channels: [ "mooc.chatons.1@framatube.org", "mobilizon@framatube.org", "bf54d359-cfad-4935-9d45-9d6be93f63e8@framatube.org" ]
- host: "tubetest.fr.tild3.org"
mucbridge:
chans:
# List of JIDs to bridge together, carefully following cheogram-muc-bridge settings
- [ { jid: "#joinjabber-fr%irc.tilde.chat@irc.localhost", tag: "~", nickChars: "Some \"a-zA-Z0-9`|^_{}[]\\\\-\"", nickLength: "Some 32" }, { jid: "fr@joinjabber.org", tag: "jabberFR", nickChars: "None Text", nickLength: "None Natural" } ]
#- [ { jid: "#whereiseveryone%irc.libera.chat@irc.localhost", tag: "libera", nickChars: "Some \"a-zA-Z0-9`|^_{}[]\\\\-\"", nickLength: "Some 32" }, { jid: "guix@rooms.dismail.de", tag: "dismail", nickChars: "None Text", nickLength: "None Natural" } ]
# Mappings of server to channel name
#- { tildechat: "#foo", jj: "foo" }
# mappings defined in settings.profiles.jjworkinggroups
#disroot: [ "anarchism", "federation" ]
#jfr: [ "anarchisme", "fédération" ]
jfr: [ "fédération" ]
disroot: [ "federation" ]
jjworkinggroups: [ "privacy", "bridging", "abuse", "translations", "sysadmin", "website" ]
settings:
host: "bridge.fr.tild3.org"
nick: "fr-bridge2"
gateway_irc: "irc.localhost" # Optional (defaults to irc.localhost + biboumi setup)
accounts:
#tildechat: { host: "irc.tilde.chat", type: "irc" } # Soon we can remove tag on XMPP side, but not yet
tildechat: { host: "irc.tilde.chat", type: "irc", tag: "~chat" }
jj: { host: "joinjabber.org", tag: "jj" }
disroot: { host: "chat.disroot.org", tag: "disroot" }
jfr: { host: "chat.jabberfr.org", tag: "jfr" }
profiles:
# Profiles are mapping of profile name to a mapping of server name to chatroom name format
jjworkinggroups: { tildechat: "#joinjabber-$room", jj: "$room" }
jfr: { tildechat: "#$room", jfr: "$room" }
disroot: { tildechat: "#$room", disroot: "$room" }
chatbridge:
tilde:
- "fr"
jabberfr:
- anarchisme
#- tilde-fr
#- fédération
# disroot: #
# - anarchism # #anarchism is now relayed by matterbridge operated by anelki on muse.edist.ro
# #- federation
settings:
accounts:
- name: "jabberfr"
type: "xmpp"
login: "matrixbot@jabber.fr"
server: "chat.jabberfr.org"
- name: "disroot"
type: "xmpp"
login: "matrixbot@jabber.fr"
server: "chat.disroot.org"
- name: "joinjabber"
type: "xmpp"
login: "matrixbot@jabber.fr"
server: "joinjabber.org"
- name: "tildechat"
type: "irc"
login: "bridge"
server: "irc.tilde.chat"
profiles:
jabberfr:
tildechat: "#$room"
jabberfr: "$room"
joinjabber: "test-$room"
tilde:
tildechat: "#$room"
jabberfr: "tilde-$room"
# disroot:
# tildechat: "#$room"
# disroot: "$room"
# joinjabber: "test-$room"
peers:
- name: tilde.netlib.re
client_key: "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEHsVZvvVX3VPj2sWxrb8LJrn3650aoLAZgbY7+CB+NU"
server_key: "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUAIuwEhFXTDfOEG+hQ2d/xeUwsgPJQF7oeNYr1ZXnG"
- name: tilde.netlib.re
client_key: "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEHsVZvvVX3VPj2sWxrb8LJrn3650aoLAZgbY7+CB+NU"
server_key: "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUAIuwEhFXTDfOEG+hQ2d/xeUwsgPJQF7oeNYr1ZXnG"
packages:
debian: [ subversion, mercurial, htop, tmux, vim, emacs, mutt, weechat, elinks, rsync, dnsutils, make, g++, libssl-dev, mosh, gopher, sl, jq ]
debian: [ subversion, mercurial, htop, tmux, vim, emacs, mutt, weechat, elinks, rsync, dnsutils, make, g++, libssl-dev, mosh, gopher, sl ]
rust: [ lsd ]
custom: [ zola, ttbp ]
users:
- name: tofu
sudo: true
@ -104,27 +17,8 @@ users:
- name: kumquat
key: "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFZ5FBnDlBIGlJ4TI0babTTmS5ECPM3yuDP1AhnNQUDZ"
- name: mspe
sudo: true
key: "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICZAs76kQJ0/Et2NGzhxurK2wE0VhYsG9wl85iCmR9xH"
- name: merry
key: |
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFxPbGSCW0KOOreD0FZGu3PFKMNIEi5VrJHzub8+poVs
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMGEuG1axFQBefQ+nCCj3VihcJWR+izHVnhYM+gXNxAf
key: "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFxPbGSCW0KOOreD0FZGu3PFKMNIEi5VrJHzub8+poVs"
- name: von
key: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDBl15MKr9LqpJrohoQ/JBg95o2dFTOx6zEmdVO3peAt4FyrH8IfGj8H8DfOu4FQ9cay17X+/tV6znnW0D3XJro8fEavXWfNmpJ20EYwS2FeJ67lgL/7t4vUA4QGo/QR2vzxrLzP41bE5Bb3a4Rh1wJwvB06e0QCETyZNurxlLJ7E/IS0Axjcc+GTmFuR+I0Tj1dzRBn/DS8EVgRxCpx5eYUrh+uzLIyhKJEq6tGaQzIbSgNYFSorXcdf1IAkWRsV29VCTH8narzFfc8fuwetWhBc60WyRAyzGm4K7m+YNBj1JmWXvCeYGsRdaQQrOqY44fxo0WdVWbjLbQKTZMgPB3Ag34egI/RM4hQqQeOSPjzZM9t2yfSp1Uv5W5gBI6hg71CpVicz0ZkXaBNONxX6RRyav4Dmk62I87R2RrY4YkgwWyX3KrzdOuCV7IZZWWUIolaXpQxXhIA6svQ8/Gs+wwq+o1gPvBife9j4V19UnUxl4gtUwVcnSNfzNJgGYDyvOszz+hjxqVj1jD30UCzjnxe+vuxxY+mgr8RNSyjDvcruBAOGk0VXU292zdtYM9Wi0va5sxnhItgsabxlmjmUr2y1S3mDxnf8HC7yZQ7xMueksqDEavqxpWA1e11BPedWL9W38ZlnDpunx9lg1SClqLGCmYBVPOSMzl2WOxYGjivw== von"
- name: h30x
key: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDVwgt0+qgrRVVT6Je9JHD2IBIt7pbfeRovGecOtP8H8q5pOOxTmvWwTkLVeOVVWJsWrCso2IQo95z4pUg1yUuXlvassHVCU5oWXlkE2Ax19M/ZjcLZ4eoGwtEJRlcibZwJK/8dD4SffNzd5cmFk+Bs6+hbCvFRG5/tAxdBZnAazB/mK8hUlmQbB0KCcaGXUEB3RYw15Kij7X9+vMAwhyYmYRyQeZmy/mMr0lc49WPQSGWl0oh9B6+rUsV1o8v7OEu3yincnEzUm86RAdgPbFVKMtYX9CEoram6M15Ca5/0fpgIST5ZB3lKc03dSk/cq+yMzLtzmD1XW0u8WHhTnlf2vNMgxOlDH7K1zsZSQkCeJtkOSHff6JFuPqH2zmfMxKWnvnRgIf3J4yZSCImI/Cv4DzRQUE3QS7XNlBKWuIfhaJ67bpwYjyWuFip9BGvBMdYv+htEgSpiSaeIozI55HDVU+zLp0ZpnAKt49/dXdX1OPW9w9GH1/XVuPLYsifrDhs="
- name: vaurora
key: "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM0mpVI7iWm1pQ9Kl7Bjn9ItgVlBn+EX1yv8MCyxwyau"
- name: omz
key: "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIElh8AfX6EQsMjrZPcD5hwAvWlP1Bo6S1CWWUVVASrdM"
- name: bogusoverflow
key: "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAu2hTNKdHe/UjY479VvL5/HdcjYlnA2bOXA0wN6yjNU"
- name: val
key: "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIjJoY4XBTTNsxLVF/sUKBI4WGR2AIiR9qfMdspnsRfJ "
- name: dsx
key: "ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAm12PbmQ64+BZx2VUUrcVnHUh21zOcp3mC7ZXHMz5u6rQmBwM7c4UJJ39dJBcN1mFKrz4EL7n0AMf/po+fpz+gUj1sB6LKVugN1AOaB75gY2+wSbipu+zFOOVS68lv/VvRppdPNDptOLj+60QshK+Z5QtbDoWBwTvIrDVhdscAmNMUlRpo6syIdS1LiPOHBTVmfXRWrJHSq+0nYalJ219l9vvn/yn9O5r0/u6gHSDVO+++KO9jrGGRjxeNIeO2lZop/qS0DJDir2s/9aWb946+/cSJYeLS2QfziYsYJ5tymab+nacN2iBVamH3vBOsGIenhGvH5y9yQqp3lSDmXMnRQ=="
- name: sebbu
key: "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILPIE7uFg+Kua9B2EgxD2N0F3Ang31iSDK0KH6UUVRe5"
- name: kaliko
key: "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF9GKMSkGcE4n5pOuO8DwEbV18H47L6hvRwGEuJolrni"

View File

@ -1,6 +1,6 @@
#!/bin/bash
CMD="ANSIBLE_RETRY_FILES_ENABLED=0 ansible-playbook -e @config.yml roles/main.yml"
CMD="ANSIBLE_RETRY_FILES_ENABLED=0 ansible-playbook -e @config.yml roles/recette.yml"
DEPS=("ansible-playbook" "grep")
REMOTE=false

View File

@ -57,7 +57,6 @@ webserver-personal-pages: Setup personal pages
webserver-bucket-size: Configure webserver for long domain names (onions)
# roles/webserver/tasks/onions_perso.yml
webserver-onion-hostname: Read personal onion
webserver-onion-giveuser: Tell user about their onion address
webserver-onion-config: Configure personal onion page
webserver-onion-symlink: Enable personal onion page config
# roles/webserver/tasks/packages.yml
@ -67,11 +66,6 @@ webserver-perso-config: Configure personal pages for webserver
webserver-perso-symlink: Enable personal pages config
webserver-perso-publichtml: Create public_html folder in skel
webserver-perso-onions: Prepare personal pages on onions
webserver-perso-multisite: Enable multisite support by linking to ~/public_html
# roles/webserver/tasks/multisite.yml
webserver-multisite-check: Verify that ~/public/html exists
webserver-multisite-folder: Create ~/public/html
webserver-multisite-symlink: Create symlinks to ~/public_html
#### .debian
# roles/.debian/tasks/main.yml
debian-pkg: Setup Debian packages defined in config

View File

@ -56,7 +56,6 @@ webserver-personal-pages: Mettre en place les pages perso
webserver-bucket-size: Configurer le serveur web pour les longs domaines (.onion)
# roles/webserver/tasks/onions_perso.yml
webserver-onion-hostname: Récupérer l'onion perso
webserver-onion-giveuser: Indiquer à l'utilisateurice l'adresse de son onion
webserver-onion-config: Configurer les pages perso en onion
webserver-onion-symlink: Activer la configuration des pages perso en onion
# roles/webserver/tasks/packages.yml
@ -65,12 +64,7 @@ webserver-pkg: Installer les paquets pour le serveur web
webserver-perso-config: Configurer les pages perso
webserver-perso-symlink: Activer la configuration des pages perso
webserver-perso-publichtml: Créer le dossier public_html dans /etc/skel
webserver-perso-multisite: Activer le multi-site en pointant vers ~/public_html
webserver-perso-onions: Préparer les pages perso en onion
# roles/webserver/tasks/multisite.yml
webserver-multisite-check: Vérifier si ~/public/html existe
webserver-multisite-folder: Créer ~/public/html/
webserver-multisite-symlink: Créer les liens symboliques vers ~/public_html
#### .debian
# roles/.debian/tasks/main.yml
debian-pkg: Installer les paquets Debian définis dans la config

1
roles

@ -1 +0,0 @@
Subproject commit 42c4547358504d25275eede195b25e9e50095ffa

View File

@ -0,0 +1,5 @@
# Because we are using logrotate for greater flexibility, disable the
# internal certbot logrotation.
max-log-backups = 0
rsa-key-size = 4096
email = southerntofu@thunix.net

View File

@ -0,0 +1,3 @@
HiddenServiceDir /var/lib/tor/{{ item.name }}
HiddenServiceVersion 3
HiddenServicePort 80 127.0.0.1:80

View File

@ -0,0 +1,4 @@
Host *
HostKeyAlgorithms ssh-ed25519
PubkeyAcceptedKeyTypes ssh-ed25519
PasswordAuthentication no

View File

@ -0,0 +1,4 @@
- name: reload tor
service:
name: tor
state: restarted

View File

@ -0,0 +1,40 @@
- name: common-backports
lineinfile:
path: /etc/apt/sources.list.d/backports.list
line: deb http://ftp.debian.org/debian buster-backports main contrib
create: yes
state: present
- name: common-base-pkg
apt:
state: present
name: [ certbot, tor, sudo ]
update_cache: yes
# TODO: configurable contact email from config.yml
- name: common-certbot-setup
copy:
src: ../files/letsencrypt_cli.ini
dest: /etc/letsencrypt/cli.ini
- include: tor.yml
- name: common-users-gen
include_tasks: users/main.yml
when: users is defined
- name: common-peering
include: peering/main.yml
when: peers is defined
- name: common-additional-packages
include_tasks: packages.yml
when: packages is defined
- name: common-roles
include_role:
name: "{{ current_role }}"
loop: "{{ roles }}"
loop_control:
loop_var: current_role
when: roles is defined

View File

@ -0,0 +1,10 @@
# When packages is empty, we never get here
# Package managers are roles whose names start with .
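# Example (illustration only, not part of the recipe): with
#   packages: { debian: [ htop ], rust: [ lsd ] }
# dict2items yields [ { key: debian, value: [htop] }, { key: rust, value: [lsd] } ],
# so the roles .debian and .rust are included in turn.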
- name: common-package-managers
include_role:
# Each package manager may assume that its own list is not empty
name: ".{{ current_role.key }}"
loop: "{{ packages | dict2items }}"
loop_control:
loop_var: current_role

View File

@ -0,0 +1,15 @@
- name: common-peering-home
file:
path: "/home/peers"
state: directory
- stat:
path: "/home/peers/self"
register: local_peer
- include: setup_local.yml
when: not local_peer.stat.exists
- name: common-peering-remote
include: setup_peer.yml
loop: "{{ peers }}"

View File

@ -0,0 +1,34 @@
- name: common-peering-local-account
user:
name: "peer"
state: present
skeleton: /etc/skel
shell: /bin/bash
system: no
createhome: yes
home: "/home/peers/self"
- name: common-peering-local-ln
file:
src: /home/peers/self
dest: "/home/peers/{{ hostname }}"
state: link
- file:
path: /home/peers/self/.ssh
owner: peer
group: peer
state: directory
- name: common-peering-local-genkey
become: yes
become_user: peer
command:
creates: /home/peers/self/.ssh/id_ed25519.pub
cmd: ssh-keygen -t ed25519 -f /home/peers/self/.ssh/id_ed25519 -N ""
- name: common-peering-local-confkey
copy:
src: ../files/ssh_config
dest: /home/peers/self/.ssh/config

View File

@ -0,0 +1,24 @@
- name: common-peering-remote-account
user:
name: "{{ item.name }}"
state: present
skeleton: /etc/skel
shell: /bin/bash
system: no
createhome: yes
home: "/home/peers/{{ item.name }}"
- name: common-peering-remote-key
lineinfile:
path: "/home/peers/{{ item.name }}/.ssh/authorized_keys"
line: "{{ item.client_key }}"
create: yes
# TODO: in authorized_keys, restrict the account to SCP only
# no-port-forwarding,no-pty,command="scp source target" ssh-dss ...
# TODO: chroot
- name: common-peering-remote-known
lineinfile:
path: /home/peers/self/.ssh/known_hosts
create: yes
line: "{{ item.name }} {{ item.server_key }}"

View File

@ -0,0 +1,14 @@
- name: common-tor-create
file:
path: /etc/tor/onions
state: directory
owner: debian-tor
group: debian-tor
mode: '0740'
- name: common-tor-config
lineinfile:
path: /etc/tor/torrc
line: "%include /etc/tor/onions"
state: present
notify: reload tor

View File

@ -0,0 +1,20 @@
- include_tasks: setup_user.yml
loop: "{{ users }}"
- stat:
path: "/var/lib/tor/{{ item.name }}/hostname"
loop: "{{ users }}"
register: onion_exists
changed_when: not onion_exists.stat.exists
- name: common-users-tor-reload
service:
name: tor
state: restarted
when: onion_exists.changed
- name: common-users-tor-wait
wait_for:
path: "/var/lib/tor/{{ item.name }}/hostname"
loop: "{{ users }}"
when: onion_exists.changed

View File

@ -0,0 +1,39 @@
- name: common-users-setup-account
user:
name: "{{ item.name }}"
state: present
skeleton: /etc/skel
shell: /bin/bash
system: no
createhome: yes
home: "/home/{{ item.name }}"
register: new_user
- name: common-users-setup-sudo
user:
name: "{{ item.name }}"
groups: sudo
append: yes
when: item.sudo|default(false) == true
- name: common-users-setup-key
authorized_key:
user: "{{ item.name }}"
state: present
key: "{{ item.key }}"
- name: common-users-setup-onion
template:
src: ../../files/onion.conf.j2
dest: "/etc/tor/onions/{{ item.name }}.conf"
- name: common-users-setup-irc
irc:
msg: "{{ irc_announce.msg | default('Bienvenue à ' ~ item.name ~ sur le serveur \\o/') }}"
server: "{{ irc_announce.server | default('irc.tilde.chat') }}"
port: "{{ irc_announce.port | default(6697) }}"
channel: "{{ irc_announce.chan }}"
nick: "{{ irc_announce.nick | default('ansibot') }}"
nick_to: "{{ irc_announce.query | default([]) }}"
use_ssl: "{{ irc_announce.tls | default(true) }}"
timeout: "{{ irc_announce.timeout | default(10) }}"
when: new_user.changed and irc_announce is defined

BIN
roles/.custom/files/zola/zola Executable file

Binary file not shown.

View File

@ -0,0 +1,14 @@
# For now, a package cannot have a name different from its binary's,
# because we check whether that binary is installed
# Eventually, each package will be responsible for checking by itself whether it is installed
# Check which custom packages are already installed
- stat:
path: "/usr/local/bin/{{ item }}"
loop: "{{ packages.custom }}"
register: custom_exists
- name: "Installer les paquets custom activés dans la config"
include: "{{ item.item }}/main.yml"
loop: "{{ custom_exists.results | default([]) }}"
when: not item.stat.exists

View File

@ -0,0 +1,26 @@
- stat:
path: /usr/local/bin/ttbp
register: ttbp
- name: custom-ttbp-source
git:
repo: https://tildegit.org/envs/ttbp.git
dest: /tmp/ttbp
when: not ttbp.stat.exists
- name: custom-ttbp-pkg
apt:
name: "python-setuptools"
state: present
- name: custom-ttbp-setup
command:
cmd: "python /tmp/ttbp/setup.py install"
chdir: /tmp/ttbp
when: not ttbp.stat.exists
- name: custom-ttbp-tmp
file:
path: /tmp/ttbp
state: absent
when: not ttbp.stat.exists

View File

@ -0,0 +1,7 @@
# Unfortunately zola does not build on Debian Buster (rustc v1.34 vs. 1.36 required)
# So we copy a binary that I compiled with love
- name: custom-zola-setup
copy:
src: ../../files/bin/zola
dest: /usr/local/bin/zola
mode: 0755

View File

@ -0,0 +1,4 @@
- name: debian-pkg
apt:
state: present
name: "{{ packages.debian }}"

View File

@ -0,0 +1,50 @@
- name: rust-setup
apt:
state: present
name:
- rustc
- cargo
- cargo-doc
update_cache: yes
- name: rust-user
user:
name: "rust"
state: present
skeleton: /etc/skel
shell: /bin/bash
system: no
createhome: yes
home: "/home/rust"
- name: rust-cargo-folder
file:
path: /home/rust/.cargo
state: directory
owner: rust
group: rust
- name: rust-bin-ownership
file:
path: /usr/local/bin
state: directory
owner: rust
group: rust
mode: 0755
recurse: yes
- name: rust-bin-symlink
file:
dest: /home/rust/.cargo/bin
src: /usr/local/bin
force: yes
follow: no
state: link
- name: rust-pkg
become: yes
become_user: rust
command:
cmd: "cargo install {{ item }}"
creates: "/usr/local/bin/{{ item }}"
loop: "{{ packages.rust }}"

1
roles/README.md Normal file
View File

@ -0,0 +1 @@
Roles whose names start with . (.debian, .rust, .custom) are package managers. Adding a key under packages in the server configuration is enough to create a new package manager, which will be invoked from here.
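A sketch of the mechanism (the `pip` key and the `.pip` role are hypothetical, purely for illustration):
```
# in config.yml
packages:
  pip: [ some-package ]
# .common/tasks/packages.yml loops over `packages | dict2items` and calls
# include_role with name ".{{ current_role.key }}", so a roles/.pip role
# must exist to install whatever is listed under its key.
```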

View File

@ -0,0 +1,231 @@
daemonize = true
pidfile = "/run/prosody/prosody.pid"
-- TODO server name
name = "JabberFR"
min_seconds_between_registrations = 86400
welcome_message = "Bienvenue $username sur le chat $host! Pour toutes vos questions sur Jabber, nous vous recommandons https://{{ hostname }}/\nMerci de NE PAS repondre a ce message automatique."
limits = {
c2s = {
rate = "3kb/s";
burst = "2s";
};
s2sin = {
rate = "10kb/s";
burst = "5s";
};
}
-- For mod_http_list_domains
main_domains = {
"{{ hostname }}";
}
-- For mod_block_registrations
block_registrations_users = {
"admin", "owner", "operator", "webmaster", "postmaster"
}
-- For mod_s2s_blacklist
s2s_blacklist = {
-- From https://github.com/JabberSPAM/blacklist/blob/master/blacklist.txt
"bashtel.ru",
"darkengine.biz",
"hiddenlizard.org",
"jabber.cd",
"jabber.ipredator.se",
"jabber.npw.net",
"jabber.sampo.ru",
"otr.chat",
"paranoid.scarab.name",
"rassnet.org",
"safetyjabber.com",
"sj.ms",
"xmpp.bytesund.biz",
}
-- Prevents clients from hogging all of the fds with unauthed c2s.
c2s_timeout = 120
-- For MAM.
storage = {
archive = "xmlarchive";
muc_log = "xmlarchive";
}
-- For ChatSecure to actually receive push notifications.
-- TODO: translate new message
push_notification_important_body = "Nouveau message."
-- So that every domain get our services.
disco_items = {
{ "chat.{{ hostname }}", "Salons de discussion" };
--{ "irc.{{ hostname }}", "Passerelle IRC" };
{ "proxy.{{ hostname }}", "Partager plus facilement des fichiers" };
{ "upload.{{ hostname }}", "Héberger de petits fichiers" };
}
-- TODO: default MUC for support/feedback
contact_info = {
abuse = { "mailto:root@{{ hostname }}", "xmpp:root@{{ hostname }}" },
admin = { "mailto:root@{{ hostname }}", "xmpp:root@{{ hostname }}" },
--feedback = { "xmpp:jabberfr@chat.{{ hostname }}?join" },
security = { "mailto:root@{{ hostname }}", "xmpp:root@{{ hostname }}" },
--support = { "xmpp:jabberfr@chat.{{ hostname }}?join" },
}
-- Needed for bosh to work at all on the web.
cross_domain_bosh = true
cross_domain_websocket = true
consider_bosh_secure = true
consider_websocket_secure = true
http_interfaces = { "::1" }
https_interfaces = {}
-- Which clients dont need TLS to connect.
secure_interfaces = { "::1", "127.0.0.1" }
-- Ugh, spam…
--firewall_scripts = { "/etc/prosody/spammer.pfw" }
-- TODO: admins
admins = { "root@{{ hostname }}" }
-- For more information see: https://prosody.im/doc/libevent
--use_libevent = true
network_backend = "epoll"
-- TODO: maybe we have to change this?
plugin_paths = { "/usr/lib/prosody/prosody-modules-private"; "/usr/lib/prosody/prosody-modules" }
modules_enabled = {
-- Generally required
"roster"; -- Allow users to have a roster. Recommended ;)
"saslauth"; -- Authentication for clients and servers. Recommended if you want to log in.
"tls"; -- Add support for secure TLS on c2s/s2s connections
"dialback"; -- s2s dialback support
"disco"; -- Service discovery
-- Not essential, but recommended
"carbons"; -- Keep multiple clients in sync
"pep"; -- Enables users to publish their avatar, mood, activity, playing music and more
"private"; -- Private XML storage (for room bookmarks, etc.)
"blocklist"; -- Allow users to block communications with other users
"vcard4"; -- User profiles (stored in PEP)
"vcard_legacy"; -- Conversion between legacy vCard and PEP Avatar, vcard
-- Nice to have
"version"; -- Replies to server version requests
"uptime"; -- Report how long server has been running
"time"; -- Let others know the time here on this server
"ping"; -- Replies to XMPP pings with pongs
"mam"; -- Store messages in an archive and allow users to access it
"csi_simple"; -- Simple Mobile optimizations
-- Admin interfaces
"admin_adhoc"; -- Allows administration via an XMPP client that supports ad-hoc commands
-- HTTP modules
"bosh"; -- Enable BOSH clients, aka "Jabber over HTTP"
"websocket"; -- XMPP over WebSockets
-- Other specific functionality
"limits"; -- Enable bandwidth limiting for XMPP connections
"server_contact_info"; -- Publish contact information for this service
"welcome"; -- Welcome users who register accounts
"watchregistrations"; -- Alert admins of registrations
-- prosody-modules
"lastlog"; -- Allows to specify traffic bandwidth limits.
"smacks"; -- Prevents an unreliable connection from eating the battery.
"smacks_offline"; -- Because.
"cloud_notify"; -- For iOS, Android 6+ and WP clients to work properly.
"csi"; -- Optimisations for mobile.
"throttle_unsolicited"; -- Damn spammers!
--"firewall"; -- Ugh, spammers…
"s2s_blacklist"; -- Thanks, spammers.
"secure_interfaces"; -- Insecure local registration.
"auto_answer_disco_info"; -- Answers disco#info on the behalf of the local user.
"inject_ecaps2"; -- Add support for XEP-0390 for all local users.
"ipcheck"; -- Like STUN but over XMPP.
"s2s_bidi"; -- To reduce the amount of s2s.
"bookmarks2"; -- To synchronise bookmarks between XEP-0402 and Private XML.
--"nodeinfo2"; -- For https://the-federation.info
}
certificate = "/etc/prosody/certs/{{ hostname }}.crt"
c2s_require_encryption = true
s2s_require_encryption = true
s2s_secure_auth = true
authentication = "internal_hashed"
archive_expires_after = "1w" -- Remove archived messages after 1 week
archive_cleanup_interval = 15
log = {
--debug = "/var/log/prosody/prosody.debug";
info = "/var/log/prosody/prosody.log";
error = "/var/log/prosody/prosody.err";
}
certificates = "certs"
https_certificate = "/etc/prosody/certs/{{ hostname }}.crt"
VirtualHost "jabber.fr"
http_external_url = "https://jabber.fr/"
VirtualHost "anon.{{ hostname }}"
authentication = "anonymous"
allow_anonymous_s2s = false
modules_enabled = {
"muc_ban_ip";
}
modules_disabled = {
"mam";
}
-- TODO: subdomain
Component "chat.{{ hostname }}" "muc"
modules_enabled = {
"s2s_bidi"; -- To reduce the amount of s2s.
"muc_mam";
"muc_badge";
"http_muc_log";
"http_muc_list";
"http_avatar";
"vcard_muc";
"muc_webchat_url";
}
admins = { "tofu@{{ hostname }}" }
muc_room_cache_size = 1024
Component "proxy.{{ hostname }}" "proxy65"
modules_disabled = {
"s2s";
"tls";
}
Component "upload.{{ hostname }}" "http_upload"
modules_enabled = {
"file_management";
}
modules_disabled = {
"s2s";
"tls";
}
http_external_url = "https://upload.{{ hostname }}/"
http_paths = {
upload = "/";
}
http_upload_path = "/srv/http/upload.{{ hostname }}/"
http_upload_file_size_limit = 10 * 1024 * 1024
-- TODO: IRC Gateway
-- Component "irc.{{ hostname }}"
-- component_secret = ":p"

View File

@ -0,0 +1,9 @@
- name: jabber-setup-prosody
apt:
name: prosody
state: present
- name: jabber-config
template:
src: ../files/prosody.cfg.lua.j2
dest: /etc/prosody/prosody.cfg.lua

7
roles/recette.yml Normal file
View File

@ -0,0 +1,7 @@
# Roles whose names are prefixed with . are not meant to be enabled in the config
- name: Install the server
hosts: all
roles:
- .common

View File

@ -0,0 +1,37 @@
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
root /var/www/html;
location /.well-known/acme-challenge {
try_files $uri $uri/ =404;
}
location / {
return 302 https://$host$request_uri;
}
}
server {
listen 443 ssl default_server;
listen [::]:443 ssl default_server;
ssl_certificate /etc/letsencrypt/live/{{ hostname }}/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/{{ hostname }}/privkey.pem;
server_name _;
root /var/www/html;
index index.html;
location ~ ^/~(.+?)(/.*)?$ {
alias /home/$1/public_html/$2;
autoindex on;
#try_files $2 $2/ = 404;
}
location / {
try_files $uri $uri/ =404;
}
}

View File

@ -0,0 +1,12 @@
server {
listen 80;
listen [::]:80;
server_name {{ web_onion.stdout }};
root /home/{{ item.name }}/public_html;
index index.html;
location / {
try_files $uri $uri/ =404;
}
}

View File

@ -0,0 +1,16 @@
# Taken from https://raw.githubusercontent.com/certbot/certbot/master/certbot-nginx/certbot_nginx/_internal/tls_configs/options-ssl-nginx.conf
# This file contains important security parameters. If you modify this file
# manually, Certbot will be unable to automatically provide future security
# updates. Instead, Certbot will print and log an error message with a path to
# the up-to-date file that you will need to refer to when manually updating
# this file.
ssl_session_cache shared:le_nginx_SSL:10m;
ssl_session_timeout 1440m;
ssl_session_tickets off;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers off;
ssl_ciphers "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384";

View File

@ -0,0 +1,16 @@
{% for user in users %}
server {
listen 443 ssl;
listen [::]:443 ssl;
ssl_certificate /etc/letsencrypt/live/{{ user.name }}.{{ hostname }}/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/{{ user.name }}.{{ hostname }}/privkey.pem;
server_name {{ user.name }}.{{ hostname }};
root /home/{{ user.name }}/public_html;
index index.html;
location / {
try_files $uri $uri/ =404;
}
}
{% endfor %}

View File

@ -0,0 +1,2 @@
- name: webserver-reload-nginx
service: name=nginx state=restarted

View File

@ -0,0 +1,10 @@
- name: webserver-certbot-main
command:
creates: /etc/letsencrypt/live/{{ hostname }}/fullchain.pem
cmd: certbot certonly --non-interactive --agree-tos --webroot -w /var/www/html -d {{ hostname }} -d www.{{ hostname }}
- name: webserver-certbot-users
command:
creates: "/etc/letsencrypt/live/{{ item.name }}.{{ hostname }}/fullchain.pem"
cmd: "certbot certonly --non-interactive --agree-tos --webroot -w /var/www/html -d {{ item.name }}.{{ hostname }}"
loop: "{{ users }}"

View File

@ -0,0 +1,7 @@
---
# This playbook contains all of the www config
- include: packages.yml
# TODO: Some certbot is needed before we can load the whole nginx config so we need some intermediary step (bootstrapping process)
- include: nginx.yml
- include: certbot.yml

View File

@ -0,0 +1,27 @@
- name: webserver-default-config
template:
src: ../files/default-site.conf.j2
dest: /etc/nginx/sites-available/default-site.conf
notify: webserver-reload-nginx
- name: webserver-default-symlink
file:
src: /etc/nginx/sites-available/default-site.conf
dest: /etc/nginx/sites-enabled/default-site.conf
state: link
- name: webserver-tls-config
copy:
src: ../files/ssl.conf
dest: /etc/nginx/conf.d/ssl.conf
notify: webserver-reload-nginx
- name: webserver-personal-pages
include: pages_perso.yml
- name: webserver-bucket-size
lineinfile:
path: /etc/nginx/nginx.conf
line: "server_names_hash_bucket_size 128;"
insertafter: "^http {"
notify: webserver-reload-nginx

View File

@ -0,0 +1,22 @@
- stat:
path: "/etc/nginx/sites-available/{{ item.name }}.onion.conf"
register: conf_exists
- name: webserver-onion-hostname
command: "cat /var/lib/tor/{{ item.name }}/hostname"
register: web_onion
when: not conf_exists.stat.exists
- name: webserver-onion-config
template:
src: ../files/onion.conf.j2
dest: "/etc/nginx/sites-available/{{ item.name }}.onion.conf"
notify: webserver-reload-nginx
when: not conf_exists.stat.exists
- name: webserver-onion-symlink
file:
src: "/etc/nginx/sites-available/{{ item.name }}.onion.conf"
dest: "/etc/nginx/sites-enabled/{{ item.name }}.onion.conf"
state: link
when: not conf_exists.stat.exists

View File

@ -0,0 +1,12 @@
- name: webserver-pkg
apt:
name:
- nginx
- php-fpm
- php-curl
- php-gd
- php-intl
- php-sqlite3
- php-mbstring
state: present
update_cache: yes

View File

@ -0,0 +1,19 @@
- name: webserver-perso-config
template:
src: ../files/users.conf.j2
dest: /etc/nginx/sites-available/users-site.conf
- name: webserver-perso-symlink
file:
src: /etc/nginx/sites-available/users-site.conf
dest: /etc/nginx/sites-enabled/users-site.conf
state: link
- name: webserver-perso-publichtml
file:
path: /etc/skel/public_html
state: directory
- name: webserver-perso-onions
include: onions_perso.yml
loop: "{{ users }}"