This is a simple web scraper in Python using [scrapy](https://docs.scrapy.org/) that writes all the markdown from https://basement.woodbine.nyc/ to disk. Appending `/download` to the end of any HedgeDoc page URL returns a text file containing the page's markdown. The scraper starts at the markdown version of the homepage and follows `[text](hyperlink)` style markdown links. Wiki pages that are not linked from any other page will not be found.

Run it like this:

```
$ python -m venv .venv
$ source .venv/bin/activate
$ pip install -r requirements.txt
$ scrapy crawl pages
```

The markdown output will appear in the `None/basement.woodbine.nyc` directory.
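The link-following step described above can be sketched as a small helper: extract every `[text](hyperlink)` markdown link from a page, keep only links on the wiki's host, and rewrite each one to its `/download` form. This is a minimal illustration, not the spider's actual code; the function name, regex, and `host` parameter are assumptions.

```python
import re

# A minimal pattern for `[text](hyperlink)` style markdown links.
# Assumed for illustration; the real spider may parse links differently.
MD_LINK = re.compile(r"\[([^\]]*)\]\(([^)\s]+)\)")

def extract_wiki_links(markdown: str, host: str = "basement.woodbine.nyc") -> list[str]:
    """Return /download URLs for every same-host markdown link found."""
    urls = []
    for _text, url in MD_LINK.findall(markdown):
        # Skip off-site links and links already pointing at /download.
        if host in url and not url.endswith("/download"):
            urls.append(url.rstrip("/") + "/download")
    return urls

sample = "See [the wiki](https://basement.woodbine.nyc/s/abc123) and [ext](https://example.com/x)."
print(extract_wiki_links(sample))
# → ['https://basement.woodbine.nyc/s/abc123/download']
```

In a scrapy spider, each extracted URL would be turned into a new request whose response body is saved to disk as markdown.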