Steps to get you up and running on your own VM. Owlkins is not the easiest nor the hardest open source self-hosted service you could run, but there are a lot of steps. We hope we have everything covered here.
Last updated 2024-06-04 (added section 13 on convert_videos)
The preferred way to run Owlkins is in an Ubuntu virtual machine that has at least 2 cores and 4GB of RAM. You won't utilize these resources all the time, so they can be shared. Running the web server and framework will take anywhere from 350MB to 600MB depending on how many photos you have.
When a user uploads a video, Owlkins converts it via an ffmpeg command that utilizes max(N-1, 1) of the available processors. During video conversion, RAM usage will depend on the properties of the video, such as its length. If your system has a low amount of RAM, make sure you have enough swap space to facilitate converting videos, and have a bit of patience with the website during times of heavy swapping!
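As an illustration of that processor math (a sketch only; Owlkins' actual implementation may differ), the thread count handed to ffmpeg can be computed like this:

```python
import os

def ffmpeg_thread_count() -> int:
    """Leave one core free for the web server: max(N-1, 1)."""
    n = os.cpu_count() or 1  # cpu_count() can return None
    return max(n - 1, 1)

# e.g. passed along as: ffmpeg -threads <count> ...
print(ffmpeg_thread_count())
```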
Download, install and configure git on your machine.
Navigate to the location on your machine where you want a new folder created with the contents of the repo. We recommend creating a new user whose sole purpose is to run Owlkins; then, for example, you can install in that user's home directory.
Once you are ready, you can clone the repo:
git clone https://gitlab.com/owlkins/owlkins
You will now have a directory owlkins on your local machine which you can change directory (cd) into.
Even if the only thing you will run on this machine is Owlkins (for example if you rented a VM in the cloud or installed a new one on your machine), you'll want to make sure Owlkins runs in a virtual environment. A Python virtual environment allows specific package versions to be installed without risking conflicts with the Python that comes with your operating system. First, install the system dependencies:
sudo apt-get install nginx postgresql postgresql-contrib libpq-dev libjpeg-dev libffi-dev ffmpeg zlib1g-dev libmemcached-dev libimage-exiftool-perl gcc python3-dev build-essential memcached libmemcached-tools python3-venv
Next, create the virtual environment
python3 -m venv .venv
Now you can enter the virtual environment
source .venv/bin/activate
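If you want to double-check that activation worked beyond looking at your prompt, Python itself can tell you (a quick sanity check, nothing Owlkins-specific):

```python
import sys

# Inside a virtual environment, sys.prefix points at the venv
# while sys.base_prefix still points at the system Python.
in_venv = sys.prefix != sys.base_prefix
print(sys.prefix)
print("inside a virtualenv:", in_venv)
```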
Owlkins recommends using PostgreSQL, though Django supports other databases, which I imagine would work "ok" if you are willing to put up with a less-tested (or untested) configuration.
Enter the PostgreSQL console by switching to the postgres user and executing the command psql:
sudo -u postgres psql
Now you can create the database and the Django user. Come up with a user and password, substitute them in the steps below (without the curly braces), and execute all of these lines in the psql console. You will have to save the user and password in the secrets.py file noted in step 4 below.
CREATE DATABASE {user};
CREATE USER {user} WITH PASSWORD '{password}';
ALTER ROLE {user} SET client_encoding TO 'utf8';
ALTER ROLE {user} SET default_transaction_isolation TO 'read committed';
ALTER ROLE {user} SET timezone TO 'UTC';
GRANT ALL PRIVILEGES ON DATABASE {user} TO {user};
To exit from the PostgreSQL prompt and return to your terminal, type \q and press enter.
You should now be in the virtual environment named ".venv". Check this by observing that your current terminal line starts with (.venv) on the far left, perhaps looking like:
(.venv) you@your_machine:~/owlkins
Now you can use the python package installer "pip" to install the current version of all requirements as identified in the requirements.txt file you pulled from the git repo.
pip install -r requirements.txt
There are two Owlkins text files you will have to edit to complete setup. To create the default version of the files, copy config_defaults.py and secrets_defaults.py to config.py and secrets.py, respectively:
cp owlkins/config_defaults.py owlkins/config.py
cp owlkins/secrets_defaults.py owlkins/secrets.py
This will create two files: owlkins/secrets.py and owlkins/config.py. Next we will fill in a few variables in the secrets.py file.
Generate a new Django secret key with a command like the one below and store the result in the secrets.py file:
python -c 'from django.core.management.utils import get_random_secret_key; print(get_random_secret_key())'
Now, we have some things which we can add to our secrets.py file:
- Store the generated key in the SECRET_KEY variable in your secrets.py file.
- Store the database user and password in the DB_USER and DB_PASSWORD variables in the secrets.py file.
- Store the database name in the DB_NAME variable in secrets.py.
- Set the DB_SERVER variable in secrets.py to point to your PostgreSQL server. If you are running PostgreSQL on the same machine, you can leave this setting blank.

Now you are ready to make the initial database migrations via the following command:
python manage.py migrate
If you see any issues or errors, check that the secrets.py file contains the correct values.
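For reference, a filled-in secrets.py might look something like the following; every value here is illustrative, and yours will differ:

```python
# Illustrative secrets.py values only; use your own generated key,
# database credentials, and server address.
SECRET_KEY = "paste-the-generated-django-secret-key-here"
DB_USER = "owlkins"
DB_PASSWORD = "a-strong-password"
DB_NAME = "owlkins"   # same name used in CREATE DATABASE above
DB_SERVER = ""        # blank when PostgreSQL runs on this machine
```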
You have now created all the tables in the database that allow Owlkins to store relational data: information about photos and links to their URLs, user information, and everything other than the actual photo and video files themselves.
Your user table at this point exists but doesn't have anyone in it. To log into Owlkins and be able to access the prompt to create other users, you'll have to create a user that has administrative privileges. You can do this via another Django management command:
python manage.py createsuperuser
You should set your email as both the "username" and the Email Address field, since the Owlkins frontend login flow is email and password based, and subsequent normal users won't be able to make a "username" that differs from their email address. The workflow should look something like the following:
(.venv) user@your-machine:~/owlkins$ python manage.py createsuperuser
Username (leave blank to use 'user'): YOUR_EMAIL
Email address: YOUR_EMAIL
Password:
Password (again):
Superuser created successfully.
This is the email and password you will then use to log in to your Owlkins installation once up and running. If you forget your password, you can change it with another management command:
python manage.py changepassword YOUR_EMAIL
The preferred way to reset your own password and that of any other user, however, is the Forgot Password? workflow on the website itself, though this will require email to be set up first (see step 8 below).
config.py file changes
You're required to set the BASE_URL variable to get the server working. If you have your domain now you can add it; otherwise this will be discussed in section 6.
Owlkins recommends using Memcached for the caching layer, but there are other Django caching solutions if you so choose. If you followed the instructions thus far, you will have called sudo apt-get install with libmemcached-dev memcached libmemcached-tools, meaning you now have Memcached installed and configured on your system. You should configure it further so that memcached listens on a unix socket instead of via TCP on a local port. First, open your memcached config file:
sudo nano /etc/memcached.conf
Comment out these lines:
#-p 11211
#-l 127.0.0.1
and add these lines:
# Enabling SASL
# -S
# Set unix socket which we put in the folder /var/run/memcached and made memcache user the owner
-s /var/run/memcached/memcached.sock
# set permissions for the memcached socket so memcache user and www-data group can execute
-a 0666
# The default maximum object size is 1MB. In memcached 1.4.2 and later, you can change the maximum size:
-I 10m
To allow these settings to take effect you can restart memcached
sudo systemctl restart memcached
Now your Django installation can use memcached.
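For reference, the Django side of a unix-socket memcached setup looks like the CACHES setting below (from Django's cache documentation; Owlkins ships its own configuration, so you should not need to change anything — this is just to show what is going on under the hood):

```python
# With the PyLibMCCache backend, Django accepts a bare socket path
# as LOCATION instead of a host:port pair.
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.memcached.PyLibMCCache",
        "LOCATION": "/var/run/memcached/memcached.sock",
    }
}
```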
Owlkins is meant to be installed on a machine you own or have paid to operate, which hosts a website that users can visit. Part of this then is the domain name that you can share with your users which they can type into their web browsers or get links to from your email digest.
If you are not familiar with buying a domain, there are several different providers such as GoDaddy, Namecheap, or others. Personally, as someone with a Gmail account, I've always found Google Domains the easiest and most straightforward to use.
If you choose Google Domains, the process looks like this:
- Search for a domain you like, such as your-baby-name.com; there are endless combinations and other Top Level Domains (TLDs), so you don't have to use a .com address. Just note that some TLDs cost more per year (like .io) than others.
- Create a DNS record pointing at your server: type A for an IPv4 address (likely) or AAAA if your server is at an IPv6 address (less likely).
- Store the resulting domain in the BASE_URL variable in your config.py file.

If you are behind a residential connection, your IP address may change often enough that it will be annoying to come back to this form and manually update it after you notice your site has gone "offline". Nowadays it may not seem like your IP address changes that much, but the "right" answer in this scenario is to set up DDNS instead. One reason I like to use Google Domains is that the popular DDNS client ddclient supports it.
When you add the record, the Type will default to A for you, and the Data field will initially be essentially blank, with a value such as 0.0.0.0.
Now that we have Owlkins set up and a domain, we can set up the web server.
Set up the gunicorn service file:
sudo nano /etc/systemd/system/gunicorn.service
... with the following contents
[Unit]
Description=gunicorn daemon
Requires=gunicorn.socket
After=network.target
[Service]
User={YOUR_UBUNTU_USER}
Group=www-data
WorkingDirectory=/home/{YOUR_UBUNTU_USER}/owlkins
ExecStart=/home/{YOUR_UBUNTU_USER}/owlkins/.venv/bin/gunicorn \
--access-logfile - \
--workers 3 \
--bind unix:/run/gunicorn.sock \
--timeout 240 \
owlkins.wsgi:application
[Install]
WantedBy=multi-user.target
Now create the socket file
sudo nano /etc/systemd/system/gunicorn.socket
...with the following content
[Unit]
Description=gunicorn socket
[Socket]
ListenStream=/run/gunicorn.sock
[Install]
WantedBy=sockets.target
Now, you can enable Gunicorn
sudo systemctl start gunicorn.service
sudo systemctl enable gunicorn.service
sudo systemctl status gunicorn.service
You should see that gunicorn is active (running). Press q to exit out of the status view.
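The service file above uses --workers 3. Gunicorn's own documentation suggests roughly (2 × cores) + 1 workers as a starting point, which you could compute like this (a sketch; tune for your machine's RAM and workload):

```python
import os

def suggested_gunicorn_workers() -> int:
    """Gunicorn docs rule of thumb: (2 x num_cores) + 1."""
    return 2 * (os.cpu_count() or 1) + 1

print(suggested_gunicorn_workers())
```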
Next we can set up Nginx. First, create a site "owlkins"
sudo nano /etc/nginx/sites-available/owlkins
... and populate it with information for the domain that you want to host the site at. If you don't already have experience with Nginx or a working setup for your environment, you could check out the NGINXConfig tool hosted by DigitalOcean. In addition, or instead, you can base your installation on the example below (replace YOUR_DOMAIN_HERE with your fully qualified domain name):
server {
listen [::]:443 ssl http2;
listen 443 ssl http2;
server_name YOUR_DOMAIN_HERE;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
add_header X-Content-Type-Options nosniff;
add_header X-Frame-Options "SAMEORIGIN";
add_header X-XSS-Protection "1; mode=block";
add_header X-Robots-Tag none;
add_header X-Download-Options noopen;
add_header X-Permitted-Cross-Domain-Policies none;
# Detect bad actors
if ($http_host != $server_name){
return 444;
}
if ($host != $server_name){
return 444;
}
fastcgi_buffers 8 4K;
fastcgi_ignore_headers X-Accel-Buffering;
# For certbot acquisition of LetsEncrypt HTTPS certificate
location ~ /.well-known {
allow all;
}
location = /favicon.ico {
return 301 https://static.owlkins.com/favicon.ico;
}
# IF YOU HAVE TO SERVE STATIC CONTENT FROM THIS MACHINE:
# Remove the below "#static#" comments (effectively uncommenting the line) and
# then you will be able to serve static media (images / videos) from this machine.
# Note that:
# - The preferred way to serve media (images / videos) is S3 -> Cloudfront on AWS
# - You should not adjust Django's settings to force Django to serve static content
# See the Architecture page for more info.
#media# location /media/ {
#media# alias /YOUR/OWLKINS_INSTALLATION/PATH/media/;
#media#}
location / {
# NOTES: On NOTE 1 and NOTE 2 below, see the architecture section for more info
# NOTE 1: The below two lines should be used if you are running a public internet facing machine
# that will receive HTTPS requests and pass it to Gunicorn. If this is you, uncomment the two lines
# below (delete "#" and "1" and "#", all three)
#
# Example would be, renting a VM from AWS or DigitalOcean. If you are running on your home residential
# connection behind your router, you will have to go to "NOTE 2" below and comment out these two lines.
#1# include proxy_params;
#1# proxy_pass http://unix:/run/gunicorn.sock;
# NOTE 2: If you need to set up a reverse proxy to handle all incoming traffic,
# say, on a residential connection all HTTP and HTTPS traffic that hits
# your router sent to a central server running Nginx that then passes the request
# on to another server that actually is running your Owlkins software,
# comment the above two lines and uncomment the following "#2#" lines (delete "#" and "2" and "#", all three)
# REPLACE -> "XXX" with your subnet and "XX" with the IP of the machine that
# will run Nginx/Gunicorn/Django
#2# proxy_pass_header Authorization;
#2# proxy_pass http://192.168.XXX.XX$request_uri;
#2# proxy_set_header Connection "";
#2# proxy_buffering off;
#2# proxy_read_timeout 36000s;
}
# NOTE 3: AFTER YOU RUN `certbot` THERE SHOULD BE LINES FOR THESE THREE PARAMETERS HERE
# You don't have to edit these comments, as certbot will modify this file itself.
# ssl_dhparam
# ssl_certificate
# ssl_certificate_key
#
# The two "snakeoil" below will allow you to allow nginx to start listening for your domain without fully
# setting up HTTPS, which is important for you to run `certbot`
ssl_certificate /etc/ssl/certs/ssl-cert-snakeoil.pem;
ssl_certificate_key /etc/ssl/private/ssl-cert-snakeoil.key;
}
server {
listen 80;
listen [::]:80;
server_name YOUR_DOMAIN_HERE;
root /var/www/html;
index index.nginx-debian.html;
location ~ /.well-known {
allow all;
}
if ($host = $server_name) {
return 301 https://$host$request_uri;
}
}
Next you can enable the site by linking the file you just created to the "sites-enabled" directory. Note that you need to do the absolute path here, don't do a relative path based on your current location.
sudo ln -s /etc/nginx/sites-available/owlkins /etc/nginx/sites-enabled/
Before moving forward, it is worthwhile to check your system wide nginx.conf
file to potentially
change the max upload size. This should be one of the first areas to look if a large file fails to upload.
Invoke the following command to enter the nginx.conf
file:
sudo nano /etc/nginx/nginx.conf
Find the client_max_body_size setting (or, if not present, add it anywhere under Basic Settings) and change it to something larger:
client_max_body_size 5000M;
Next you can test your configuration to be sure that all is OK. If not, look over the last couple steps and see what needs to be fixed.
sudo nginx -t
Now you are ready to restart nginx before applying for an SSL certificate:
sudo systemctl restart nginx
Now apply for an SSL certificate from Let's Encrypt!
sudo certbot --nginx -d {YOUR_DOMAIN}
Follow the directions to setup the certificate, and optionally allow redirection from HTTP to HTTPS. If you use your default setup then you'll already have redirection. Now, test again and restart nginx and you'll get your first glimpse of Owlkins running on your domain!
sudo nginx -t
sudo systemctl restart nginx
Email is a lot more complex than people think, and can be quite a rabbit hole to go down. Very likely you need an SMTP relay if you want the emails you send to have a higher chance of getting past users' spam filters. You absolutely need an SMTP relay if you decide to self-host on a machine at your house behind a residential IP address, as any modern spam list should (and will) include a blanket block on email originating from residential IP addresses, not necessarily even a spam label. As for inbound mail, most of the major providers like XFINITY will block all traffic on the ports that mail servers send and receive mail on, so if you were to go so far as setting up a mail server "at home" for inbound mail on your domain, you'd have to set up an iptables MASQUERADE rule on some external server before sending to your IP address on some non-standard port. Fortunately, for the scope of Owlkins, inbound mail is not needed at all, so you can focus solely on sending high-quality mail.
You can use any SMTP relay such as Mailgun, Sendgrid, or Mailjet, but we will discuss AWS Simple Email Service below as I find it nice that you can consolidate both Email and distribution of photos and videos using AWS's CloudFront Content Distribution Network (CDN) into one bill. The downside of AWS SES is they may deny your use case if you are not already approved to send email on your account, especially if you just made an account (see more below on Step 18).
The goal of any SMTP relay setup is to populate the following values:
- In config.py, populate EMAIL_HOST with the SMTP relay. For the example below with us-east-2, the value would be email-smtp.us-east-2.amazonaws.com
- Populate EMAIL_DOMAIN with the domain that you add to SES as a "Verified Identity". You have to prove that you have the right to send mail from the domain by adding DNS records that SES will generate for you.

To get started with SES, sign into your AWS account and navigate to the dashboard for the region you wish to send email from. For example, here is the dashboard for the us-east-2 Verified Identities page. Here you set up the domain:
- Choose a custom MAIL FROM subdomain, such as mail.
- Be careful when adding the DKIM records: SES shows full hostnames like 1234._domainkey.yourdomain.com, which when pasted directly into Google's DNS UI will result in a setting like 1234._domainkey.yourdomain.com.yourdomain.com, which is not correct. Enter only the part before your domain.
- For the MAIL FROM subdomain mail in us-east-2, add an MX record for subdomain mail with value 10 feedback-smtp.us-east-2.amazonses.com
- Add a TXT record for that subdomain with value "v=spf1 include:amazonses.com ~all" (double quotes included).

Then create SMTP credentials and save them in the secrets.py file:
- the EMAIL_HOST_USER variable
- the EMAIL_HOST_PASSWORD variable

When you apply for production access (to leave the SES sandbox), Amazon will ask that you "tell us how often you send email, how you maintain your recipient lists, and how you manage bounces, complaints, and unsubscribe requests. It is also helpful to provide examples of the email you plan to send so we can ensure that you are sending high-quality content."
Now you are all finished with setting up email! If you find yourself repeatedly rejected by Amazon, try to use one of the other providers, where the steps to authorize a domain via DNS and getting credentials will be pretty similar.
As discussed in the Architecture page, one of the biggest impacts you can have on your users' experience is to serve your photos and videos from a Content Distribution Network (CDN). While absolutely necessary if hosting Owlkins behind a residential connection, it is still important even if you purchase the biggest, fastest VM with the most I/O that you can find from some provider. Why? You just can't beat a CDN at its own game: getting your content to your users as fast and reliably as possible.
We will also discuss Amazon S3 here, which as you may have guessed is the main driver of the "S3-compatible" software ecosystem that you can now find at many competing vendors. Some S3-compatible solutions also combine the "S3" and "CDN" aspects into one product, like DigitalOcean Spaces; however, I think the AWS S3 + AWS CloudFront combination is especially important for Owlkins users who want to maintain privacy while also reaping the benefits of a CDN. With S3, you can lock down all content such that only your CloudFront distribution can access it, and then on CloudFront you can set up trusted signers to enforce access to your photos and videos. This feature does take some extra time to set up, but we will go into the steps below.
First we must create the S3 bucket that you will store your photos and videos in. From your AWS console, create the bucket and store its name in the AWS_STORAGE_BUCKET_NAME variable in your config.py file.

Next we must create the CloudFront distribution that will transmit the photos and videos from S3. From your AWS console:
You will need to upload a public key which you will use later. To take the steps below without extra friction, let's make the key first. On a machine with openssl installed, navigate to a folder that you want to store your keys in; don't store your keys in a public or easily accessed location. Then generate the keypair:
openssl genrsa -out private_key.pem 2048
openssl rsa -pubout -in private_key.pem -out public_key.pem
Store the contents of private_key.pem in the CLOUDFRONT_PRIVATE_KEY variable in your secrets.py file. Note that you will have to finagle the content a bit if you "paste" it in, as newline characters will have to be replaced by \n throughout the file. Alternatively, if you want to read in the contents from your private_key.pem file, you could do something like this: 'CLOUDFRONT_PRIVATE_KEY': open('/absolute/path/to/your/private_key.pem').read()
After uploading public_key.pem in the CloudFront console, store the resulting key ID in the CLOUDFRONT_KEYPAIR_ID field in your secrets.py file.
When you make your distribution, you will reach a point where you can set up an Alternative domain name (CNAME),
which you will have to do in order to use the keypair ID you just created above. One prerequisite to this step is to request a certificate for this domain. The domain name you choose should be a subdomain of your main domain. For example, if you purchased example.com above and plan to host Owlkins at that domain, you could now choose cdn.example.com as the subdomain to host your CloudFront distribution at. For this step, keep the DNS page for your domain open so that you can quickly copy some CNAME values into your DNS records while requesting the certificate for cdn.example.com.
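As noted above, a pasted private key needs its real newlines replaced by literal \n characters before it will fit on one line in secrets.py. A tiny helper like the following (hypothetical, not part of Owlkins) produces a safe string:

```python
# Hypothetical helper: turn a multi-line PEM into a one-line string
# with literal "\n" escapes for pasting into secrets.py.
def escape_pem(pem_text: str) -> str:
    return pem_text.replace("\n", "\\n")

sample = "-----BEGIN RSA PRIVATE KEY-----\nMIIE...\n-----END RSA PRIVATE KEY-----"
print(escape_pem(sample))
```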
Now that you have a keypair ID to sign content and a certificate for your CDN subdomain, we can move on to creating the distribution.
For example, if you purchased example.com above and plan to host Owlkins at that domain, and you followed the steps above for requesting a certificate, your value here would be cdn.example.com. Store this value in the AWS_CLOUDFRONT_DOMAIN variable in your config.py file. This is not in your secrets file because it is a very public value.
From the CloudFront dashboard, find your "Domain Name" (which will look like d1XXXXXXX.cloudfront.net). Go to your DNS provider and add a new CNAME record that maps the domain you chose above (cdn.example.com in our example) to this CloudFront Domain Name.
Note that most DNS UIs will have a form that automatically appends .example.com, so you should fill in only the subdomain (cdn in our example). If you were to put cdn.example.com in the form, you would actually create a record for cdn.example.com.example.com. This mistake would be easy enough to fix, but watch out for it!
Now that you have a working S3 + CloudFront configuration, we need to add a "user" that will allow your Owlkins installation to write photos and videos to the bucket.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObjectAcl",
                "s3:GetObject",
                "s3:ListBucket",
                "s3:DeleteObject",
                "s3:PutObjectAcl"
            ],
            "Resource": [
                "arn:aws:s3:::YOUR_BUCKET_HERE",
                "arn:aws:s3:::YOUR_BUCKET_HERE/*"
            ]
        }
    ]
}
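If you'd like to sanity-check the policy document before pasting it into the console (purely optional), it must at least parse as JSON; a quick check with Python's standard library:

```python
import json

# The same policy as above, held as a string so json.loads can vet it.
policy = """
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:GetObjectAcl", "s3:GetObject",
                       "s3:ListBucket", "s3:DeleteObject", "s3:PutObjectAcl"],
            "Resource": ["arn:aws:s3:::YOUR_BUCKET_HERE",
                         "arn:aws:s3:::YOUR_BUCKET_HERE/*"]
        }
    ]
}
"""
doc = json.loads(policy)  # raises ValueError on malformed JSON
print("policy parses OK")
```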
Now that you have a policy created, you can create the user:
- Store the access key ID in the AWS_ACCESS_KEY_ID variable in your secrets.py file.
- Store the secret access key in the AWS_SECRET_ACCESS_KEY variable in your secrets.py file.
- Store the user's ARN in the ARN variable in config.py.
That was quite a few steps, but hopefully you've made it through without any issues. Soon we'll find out when we go to test your Owlkins installation!
If you are already using Vault by HashiCorp, you can configure Owlkins to initialize all secret keys from it instead of using the secrets.py file. In config.py, set VAULT_SERVER to point to your API endpoint and then configure the VAULT_KV_STORE and VAULT_ENVS variables. For more information, see the comments in config_defaults.py.
Anytime there is a change to a file, you will need to restart the gunicorn service to pick up the change. For example, if you change a parameter in config.py or secrets.py, or download an update from the git repo, you will need to invoke the following command:
sudo systemctl restart gunicorn
If you want to test your config.py and secrets.py before moving to the whole stack, you can execute the following from the root of your Owlkins installation (this does not have to be within an activated virtual environment):
python3 test_config.py
your_user@your_server:~/owlkins python3 test_config.py
CRITICAL:test_config: config.py passes checks
CRITICAL:test_config: secrets store passes checks
If there are any errors or warnings, address those before proceeding.
One helpful way to check if everything is running fine is to go into the Django shell. For this, you will need to be in the virtual environment. From your Owlkins root, if you created the virtual environment with the name we suggested above (.venv), you can enter it like so:
source .venv/bin/activate
(.venv) your_user@your_server:~/owlkins
At this point you can enter the Django management console with:
python manage.py shell
(.venv) your_user@your_server:~/owlkins python manage.py shell
Python 3.8.10 (default, Nov 26 2021, 20:14:08)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>>
Once you are successfully in the console, you can exit by typing the following (note that exit is a function in the Python console, so you will need the parentheses):
exit()
python manage.py migrate
The migrate command will only perform actions on the database if there are any changes to be made, such as after a recent update.
Therefore given potential database changes combined with python code changes, anytime an update is downloaded it would be safe to execute the following two commands to restart the server and apply any database migrations:
python manage.py migrate
sudo systemctl restart gunicorn
This step could be considered optional if you don't want an email digest sent out to your users, but the truth is, a lot of your users will really appreciate the daily digest. I know some users that exclusively interact with the site through the daily emails and may not know how to get to the site otherwise!
If you are following the recommendation to run Ubuntu, and assuming you're on a modern version, then you're running systemd, and we will set up a systemd timer that will periodically send emails (we recommend once a day).
First we will make the slice file which will determine how much of your resources are available to the process.
sudo nano /etc/systemd/system/daily_email_digest.slice
# /etc/systemd/system/daily_email_digest.slice
[Unit]
Description=Limited resources Slice
DefaultDependencies=no
Before=slices.target
[Slice]
CPUQuota=50%
MemoryLimit=2G
Next we can create the service file which will contain instructions for actually running the email:
sudo nano /etc/systemd/system/daily_email_digest.service
# /etc/systemd/system/daily_email_digest.service
[Unit]
Description=Run Daily Email Digest job for new photos
Wants=daily_email_digest.timer
[Service]
Type=oneshot
User=OWLKINS_INSTALL_USER
ExecStart=/home/OWLKINS_INSTALL_USER/owlkins/.venv/bin/python3 /home/OWLKINS_INSTALL_USER/owlkins/manage.py daily_email_digest
WorkingDirectory=/home/OWLKINS_INSTALL_USER/owlkins
Slice=daily_email_digest.slice
If you are using Vault, the service will also need the VAULT_USER and VAULT_PASSWORD parameters; point the service at an environment file containing them:
EnvironmentFile=/home/OWLKINS_INSTALL_USER/vault_envs
Finally we will create the timer file which will instruct how often and when the service file runs. Create the timer like this:
sudo nano /etc/systemd/system/daily_email_digest.timer
# /etc/systemd/system/daily_email_digest.timer
[Unit]
Description=Run Daily Email Digest for photos
#Requires=daily_email_digest.service
[Timer]
Unit=daily_email_digest.service
OnCalendar=*-*-* 13:00:00
[Install]
WantedBy=timers.target
If you want the digest sent at a particular local time, you can include a timezone in the OnCalendar value, for example:
OnCalendar=*-*-* 08:00:00 America/Chicago
You can list valid timezone names with:
timedatectl list-timezones
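OnCalendar=*-*-* 13:00:00 means "every day at 13:00" in the given (or system) timezone. If you want to reason about when the next run will fire, the logic is simply "today at that time, or tomorrow if that has already passed", sketched below (an illustration, not how systemd actually computes it):

```python
from datetime import datetime, time, timedelta, timezone

def next_daily_run(now: datetime, at: time) -> datetime:
    """Next occurrence of a daily OnCalendar-style wall-clock time."""
    candidate = datetime.combine(now.date(), at, tzinfo=now.tzinfo)
    return candidate if candidate > now else candidate + timedelta(days=1)

now = datetime(2024, 6, 4, 14, 30, tzinfo=timezone.utc)
print(next_daily_run(now, time(13, 0)))  # 2024-06-05 13:00:00+00:00
```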
Now that your timer is set up you can enable it such that your system will always run the timer even after reboot with:
sudo systemctl enable daily_email_digest.timer
You can confirm the timer is scheduled with:
sudo systemctl list-timers
To trigger a run immediately and check its result:
sudo systemctl start daily_email_digest
sudo systemctl status daily_email_digest
While Owlkins can successfully run the ffmpeg video conversion and upload logic within the web server process itself for a couple of videos, it will start to "drop" videos if many are uploaded at once (such as 8 or more). If you plan to upload videos regularly, I recommend you set up a daemon process to handle all video conversion tasks. If you are following the recommendation to run Ubuntu, and assuming you're on a modern version, then you're running systemd, and we will set up a systemd timer that will periodically launch the Owlkins Django management command convert_videos.
First we can create the service file which will contain instructions for running the command:
sudo nano /etc/systemd/system/convert_videos.service
# /etc/systemd/system/convert_videos.service
[Unit]
Description=Run frequently to convert any pending video uploads from the native format to a more universal MP4 format
Wants=convert_videos.timer
[Service]
Type=oneshot
User=OWLKINS_INSTALL_USER
#EnvironmentFile=/home/OWLKINS_INSTALL_USER/vault_envs
ExecStart=/home/OWLKINS_INSTALL_USER/owlkins/.venv/bin/python3 /home/OWLKINS_INSTALL_USER/owlkins/manage.py convert_all_pending_videos
WorkingDirectory=/home/OWLKINS_INSTALL_USER/owlkins
[Install]
WantedBy=multi-user.target
If you are using Vault, the service will also need the VAULT_USER and VAULT_PASSWORD parameters; uncomment the EnvironmentFile line in the service file above:
EnvironmentFile=/home/OWLKINS_INSTALL_USER/vault_envs
Finally we will create the timer file which will instruct how often and when the service file runs. Create the timer like this:
sudo nano /etc/systemd/system/convert_videos.timer
# /etc/systemd/system/convert_videos.timer
[Unit]
Description=Run frequently to convert any pending video uploads from the native format to a more universal MP4 format
[Timer]
Unit=convert_videos.service
OnCalendar=*:0/5
[Install]
WantedBy=timers.target
Now that your timer is set up, you can enable it such that your system will always run the timer even after reboot:
sudo systemctl enable convert_videos.timer
You can confirm the timer is scheduled with:
sudo systemctl list-timers --all
To trigger a run immediately and check its result:
sudo systemctl start convert_videos
sudo systemctl status convert_videos
Once you have verified that the daemon is properly running, change the line in your config.py to tell the web worker processes that this service is now available:
'VIDEO_DAEMON_SET_UP': True,
If you've made it this far, then that's just great. Thanks for being a part of the Owlkins community!