Chatter Back End
Cover Page
DUE Wed, 09/18, 2 pm
At the end of the lab, you’ll be prompted to keep a clean copy of your working solution and it will form the starting point for all subsequent labs in the course.
Server hosting
You need an Internet-accessible server running Ubuntu 20.04 or later. You can use a real physical host or a virtual machine on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, AlibabaCloud, etc. as long as you can ssh to an Ubuntu shell with root privileges and, for grading purposes, it has a public IP and is online at all times (i.e., not your laptop). The setup presented here has been verified to work on Ubuntu 20.04 and 22.04 hosted on AWS, GCP, and a local Linux KVM and physical host. We provide instructions on how to set up the labs’ back-end server on both AWS and GCP. Please click the one you want to use to reveal the instructions.
Which hosting service to use?
If you have used AWS in a different course and want to avoid going beyond your free tier, you can use GCP in this course. Similarly, if you plan to host your team’s project on your AWS free tier, you can set up your labs’ back end on GCP. If you plan to build your back-end server in Go or Rust, we recommend using GCP instead of AWS. We have observed intermittent service interruptions with Go on AWS, though we haven’t isolated the cause, whether it’s due to Go, atreugo, or AWS itself. GCP also gives you 2GB more disk space than AWS, which Rust needs to build its dependencies.
AWS
The instructions here for setting up an AWS instance are adapted from EECS 485’s tutorial.
Create an account
Create an AWS account at the AWS Registration. You should be eligible for their free tier, which means that you will be able to run an instance for free for the duration of the course.
Despite that, you will need to enter a credit card number on the account, even if you only use free-tier resources. This is how Amazon charges you should you request more resources than the free tier provides. To avoid additional charges, do not launch any instances other than the one we specify.
Optionally, you may redeem extra benefits as a student, including $100 in AWS credits.
Start instance
Navigate to the AWS Management Console. Select the “Services” dropdown menu, then “EC2”. An EC2 “instance” is a virtual machine running on Amazon AWS hardware.
Click launch an instance. It may take a few minutes to initialize before it’s running.
Select the “Ubuntu Server 22.04.01 [or 20.04] LTS” Amazon Machine Image (AMI).
Select the “t2.micro” instance type. You should see “free tier eligible”. When you create an instance, AWS automatically creates user “ubuntu” for you on the instance.
Create a key pair for user “ubuntu”. The private key will be automatically downloaded for you once the key pair is created. You’ll use this private key later to ssh
to the instance. Click on Create new key pair
:
Then enter “eecs441” as the key pair name and click Create key pair
:
Allow SSH, HTTP, and HTTPS and click Edit
on the top right:
Verify that three security group rules have been added. After this, click on Add security group rule.
Edit the “Security group rule 4 (TCP, 0)” section, and add “Custom TCP” for port 8000 to allow Django traffic in and out of your instance. [Thanks to Z. Liang and R. Nagavardhan ‘23.]
Click “Launch Instance”.
Instance status
Navigate to the AWS Management Console. Select the “Services” dropdown menu, then “EC2”. Click “Instances”. Select one of the instances and view its status and Public DNS.
In the remainder of this spec, and this term, we will refer to your “IPv4 Public IP” as YOUR_SERVER_IP
(in the image it’s 3.141.23.144
).
ssh to instance
On your development host (laptop
):
If AWS gave you an eecs441.cer instead of eecs441.pem, just use eecs441.cer everywhere you see eecs441.pem in this spec.
MacOS on Terminal:
# 👇👇👇👇👇👇👇👇👇👇
laptop$ cd YOUR_LABS_FOLDER
laptop$ mv ~/Downloads/eecs441.pem eecs441.pem
laptop$ chmod 400 eecs441.pem
laptop$ ssh -i eecs441.pem ubuntu@YOUR_SERVER_IP
# 👆👆👆👆👆👆👆👆👆
Windows on PowerShell [thanks to Jad Beydoun (F21) for use of icacls]:
# 👇👇👇👇👇👇👇👇👇👇
PS laptop> cd YOUR_LABS_FOLDER
PS laptop> mv ~\Downloads\eecs441.pem eecs441.pem
PS laptop> icacls eecs441.pem /grant "$($env:username):(r)" /inheritance:r
PS laptop> ssh -i eecs441.pem ubuntu@YOUR_SERVER_IP
# 👆👆👆👆👆👆👆👆👆
On WSL
If you prefer to run an Ubuntu shell instead of Windows PowerShell, on Ubuntu’s terminal create /etc/wsl.conf:
laptop$ sudo vi /etc/wsl.conf
and put the following content in it:
[automount]
options = "metadata"
Exit all Ubuntu shells such that Ubuntu is not running and its icon is not showing in your dock (or sign out and sign back in to your Windows account), restart your Ubuntu terminal, and continue with the steps below:
# 👇👇👇👇👇👇👇👇👇
laptop$ cd YOUR_LABS_FOLDER
# 👇👇👇👇👇👇👇👇👇👇👇👇👇
laptop$ mv /mnt/c/Users/YOUR_WINDOWS_USERNAME/Downloads/eecs441.pem eecs441.pem
laptop$ chmod 400 eecs441.pem
laptop$ ssh -i eecs441.pem ubuntu@YOUR_SERVER_IP
# 👆👆👆👆👆👆👆👆👆
In both cases, what the above does:
- change working directory to YOUR_LABS_FOLDER,
- move the private ssh key you created and downloaded earlier into YOUR_LABS_FOLDER,
- set its permissions to read-only, and
- ssh to your AWS instance as user “ubuntu” using the downloaded private key. (Make sure your instance is running. See Instance status.)
Stop instance
DO NOT STOP YOUR INSTANCE. Please leave your EC2 instance running for grading purposes. Stopping your instance will change its allotted IP address and undo some of the customizations you’ve done following this spec. When we’re done with all the labs, after the last lab has been graded, in about 2.5 months, and if you don’t need your instance for your course project, you can stop your instance to avoid using up your AWS credits.
The AWS free credit refreshes every month. So don’t fret if you get an email from AWS near the end of a month saying you’ve used up 85% of your free credit. It should reset when the new month rolls around.
Check your Instance status.
Right-click on your instance and select Instance State > Stop.
You should now see that your instance is stopped.
Appendix
Command line tools
To administer AWS EC2 instance from the Ubuntu command line, install the following:
server$ sudo apt install cloud-utils awscli
Useful commands:
server$ ec2metadata
server$ aws configure
server$ aws ec2 help
The command ec2metadata
shows the instance’s public-ip
and public-hostname
.
The command aws configure
asks for AWS Access Key ID
, which can be obtained from:
server$ aws iam list-access-keys
It also asks for AWS Secret Access Key
, which is shown only at creation time at the IAM console.
The Default region name
is listed in the public-hostname
following the public-ip
.
The command aws ec2
is the main interface to configure ec2. The help
sub-command lists all the sub-commands such as describe-security-groups
, from which one can obtain the group name/id needed to run sub-command authorize-security-group-ingress
, for example.
To add an IPv6 CIDR block, use --ip-permissions, e.g.:
server$ aws ec2 authorize-security-group-ingress --group-id YOUR_GROUP_ID --ip-permissions IpProtocol=tcp,FromPort=8000,ToPort=8000,Ipv6Ranges=[{CidrIpv6=::/0}]
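For instance, the IPv4 counterpart of the rule above, opening port 8000 to all IPv4 addresses as we did in the console earlier, could be added along these lines; this is only a sketch, and YOUR_GROUP_ID is whatever describe-security-groups reports for your instance’s security group:
server$ aws ec2 describe-security-groups --query "SecurityGroups[*].[GroupId,GroupName]" --output table
server$ aws ec2 authorize-security-group-ingress --group-id YOUR_GROUP_ID --protocol tcp --port 8000 --cidr 0.0.0.0/0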
GCP
Google Cloud Platform has a free-tier, with free credits that are easy to qualify for.
Login to Google account
You’ll need a personal Google account to use the Google Cloud Platform. Do not use your umich email address. The following steps are adapted from Google Cloud’s Quickstart using a Linux VM and customized for this course. For example, we choose an E2 instance that is eligible for the free tier.
Create project
Go to the Google Cloud Platform Project Selector page
and create a project. Click AGREE AND CONTINUE if you agree to Google’s Terms of Service.
Create a project by clicking on the Create Project
button.
Give your project a name unique to you and click Create
.
Add billing
Add a billing method to your project.
The side menu may have additional items pre-pinned; however, the two items we need, Billing and Compute Engine, are easily identifiable. Please consult the teaching staff if you can’t find any of the menu items referenced in this spec.
When you fill out your billing information, select “individual” as account type. Make sure you see something like this:
Add a credit or debit card. If your back end qualifies for free-tier (it should), this card will not be charged. Select START MY FREE TRIAL
. Return to the project console.
Enable Compute Engine API
Visit GCP’s Compute Engine API site and select ENABLE
.
Create VM instance
Return to the console. Hover over Compute Engine
on the left navigation bar and select VM Instances
.
Select CREATE INSTANCE
.
Review the free-tier options at the Google Cloud Free Program page by scrolling to the section titled “Free Tier usage limits”. Look under the “Compute Engine” section and check the regions eligible for the free tier. Free-tier usage is governed by time used. Currently, an E2 instance in Oregon, Iowa, or South Carolina is eligible for the free tier for up to the total number of hours in a month.
Give the instance a name and carefully select a region that is free-tier eligible with an e2-micro configuration.
The monthly estimate does not factor in the free tier. If you follow the steps here, your account should not be billed.
Scroll down until you see the Boot disk section. Click “CHANGE” under Boot disk and configure it to Ubuntu 22.04.01 (or 20.04) LTS. Be sure to select STANDARD PERSISTENT DISK. Any other Boot disk type option will cost you.
After you’ve chosen “Ubuntu 22.04.01 [or 20.04] LTS” and “Standard persistent disk”, click the blue SELECT
button.
Back in the “Machine configuration” page, scroll down further, pass the “Boot disk” section, to get to the “Firewall” section. In the “Firewall” section, allow both HTTP and HTTPS traffic. You should see two boxes like this:
Press “CREATE” to create the instance. Wait for the instance to initialize.
When the loading animations are done, write down the external IP address shown on the screen. In the remainder of this spec, and this term, we will refer to your “external IP” as YOUR_SERVER_IP
(in the image it’s 34.138.61.201
).
You’ll never need the internal IP (and GCP doesn’t
provide any public DNS for your instance).
Next select the triple dots on your E2. Select “Network Details”.
Select “Firewall”. We have to change one more firewall setting to allow us to test the web server we’ll be setting up later.
Create a firewall rule.
Give your rule a name. Scroll down to “Targets”. Enter “http-server” into the “Target tags” box. Enter 0.0.0.0/0 into the source IPv4 ranges box. Enter port 8000 into the tcp box. Press “CREATE”.
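If you have the gcloud CLI installed and configured for your project (not required for this lab), a roughly equivalent rule could be created from the command line; the rule name allow-chatter-8000 below is an arbitrary name of our choosing:
laptop$ gcloud compute firewall-rules create allow-chatter-8000 --allow=tcp:8000 --source-ranges=0.0.0.0/0 --target-tags=http-server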
ssh to instance
The back-end specs in this course will assume that you have a user with username “ubuntu” created on your instance.
The specs further assume that you’re doing all your back-end work, building your server, under user “ubuntu”.
You’re free to build under a different username, e.g., your Google account name; however, you will then have to map between the instructions in the specs and your setup. More importantly, we will NOT be able to help you debug your back-end server should you need it. To build your back end as user ubuntu, please do the following:
Windows on WSL
If you prefer to run an Ubuntu shell instead of Windows PowerShell, on Ubuntu’s terminal create /etc/wsl.conf:
laptop$ sudo vi /etc/wsl.conf
and put the following content in it:
[automount]
options = "metadata"
Exit all Ubuntu shells such that Ubuntu is not running and its icon is not showing in your dock (or sign out and sign back in to your Windows account), restart your Ubuntu terminal, and continue with the steps below.
To access your Windows folder from your WSL shell:
# 👇👇👇👇👇👇👇👇👇👇👇👇👇
laptop$ ls /mnt/c/Users/YOUR_WINDOWS_USERNAME/
First generate a public/private key pair for user “ubuntu” in a safe place you can easily remember, for example YOUR_LABS_FOLDER:
# 👇👇👇👇👇👇👇👇👇👇
laptop$ cd YOUR_LABS_FOLDER
laptop$ ssh-keygen -C ubuntu
When ssh-keygen prompts you for the file in which to save the key, enter “eecs441.pem”. It will then prompt you for a passphrase. Leave it empty: hit return (or enter) twice. Your identification (private key) will be saved in eecs441.pem and your public key in eecs441.pem.pub. You can view the content of your public key, for posting to Google below, with:
laptop$ cat eecs441.pem.pub
Go to GCP Metadata page, open the SSH KEYS
tab and click on EDIT
.
On the edit page, click + ADD ITEM
, copy and paste the content of your “eecs441.pem.pub” to the empty box that
+ ADD ITEM
brought up, and hit the blue SAVE
button.
Your SSH KEYS
tab should now list “ubuntu” under Username
with its corresponding public key:
To ssh
to your GCP instance as user “ubuntu” using your private key, eecs441.pem
, you must first set its permissions to read-only. In the following, YOUR_SERVER_IP
always refers to the external IP address you’ve noted down earlier.
laptop$ chmod 400 eecs441.pem
laptop$ ssh -i eecs441.pem ubuntu@YOUR_SERVER_IP
# 👆👆👆👆👆👆👆👆👆
Windows on PowerShell
[Thanks to Jad B. ‘F21 for use of icacls]
PS laptop> icacls eecs441.pem /grant "$($env:username):(r)" /inheritance:r
PS laptop> ssh -i eecs441.pem ubuntu@YOUR_SERVER_IP
# 👆👆👆👆👆👆👆👆👆
Stop instance
DO NOT STOP YOUR INSTANCE. Please leave your E2 instance running for grading purposes. Stopping your instance will change its allotted IP address and undo some of the customizations you’ve done following this spec. When we’re done with all the labs, after the last lab has been graded, in about 2.5 months, and if you don’t need your instance for your course project, you can stop your instance to avoid using up your GCP credits.
GCP should have given you a minimum of 90 days and $300 of credit upon signing up. That is, if your E2 runs for more than 3 months and is no longer eligible for the free tier after that (which should not happen anyway), you will be billed a small amount.
Head to your E2 dashboard. Select “Compute Engine”.
When you are completely done with your E2, delete it to ensure you are not charged. A day or two later, ensure that there are no charges for your E2 at all.
Updating packages
Every time you ssh to your server, you will see something like:
N updates can be installed immediately.
If N is not 0, run the following:
server$ sudo apt update
server$ sudo apt upgrade
Failure to update your packages could lead to the lab back end not performing correctly and also makes you vulnerable to security hacks.
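To see which packages the login banner is counting before you upgrade, you can list them first (apt may warn that its CLI output format is not guaranteed to be stable; that’s fine for an interactive check):
server$ apt list --upgradable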
If you see *** System restart required ***
when you ssh to your server, please run:
server$ sync
server$ sudo reboot
Your ssh session will be ended at the server. Wait a few minutes for the system to reboot before you ssh to your server again.
Clone your 441
repo
Clone your 441
GitHub repo to enable pushing your
back-end files for submission:
- First, on your browser, navigate to your 441 GitHub repo
- Click on the green Code button and copy the URL to your clipboard by clicking the clipboard icon next to the URL
- Then on your back-end server:
server$ cd ~
server$ git clone <paste the URL you copied above> 441
If you haven’t already, you will need to create a personal access token to use Git over HTTPS.
If all goes well, your 441
repo should be cloned to ~/441
. Check that:
server$ ls ~/441
shows the content of your 441
git repo, including your chatter
lab front end.
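If you’d rather not paste your personal access token on every git push from the server, you can optionally have git remember it. Note that the store helper saves the token in plaintext in ~/.git-credentials, so weigh the convenience against the risk:
server$ git config --global credential.helper store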
Preparing for HTTPS
Starting in 2017, Apple has required apps to use HTTPS, the secure version of HTTP. Android followed suit in Aug. 2018, defaulting to blocking all cleartext (HTTP) traffic.
Obtaining a public key
To support HTTPS, we first obtain a public key signed by a Certification Authority (CA). Obtaining such a certificate, however, requires a host with a fully qualified domain name (FQDN), such as www.eecs.umich.edu, and our hosted server does not have an FQDN without extra setup. Instead, we have decided to be our own CA and to generate and use a self-signed certificate in this course. A self-signed certificate can only be used during development.
Starting with iOS 13 (and macOS 10.15 Catalina), Apple added some security requirements that all server certificates must comply with. To support both iOS and Android clients, a back-end server must thus comply with these security requirements also.
-
To generate a self-signed certificate that satisfies the new requirements, you first need to add them to the openssl configuration file on the server:
server$ cd /etc/ssl
server$ sudo cp openssl.cnf selfsigned.cnf
-
Open selfsigned.cnf (with sudo):
server$ sudo vi selfsigned.cnf
When asked to open and/or edit a file on your back-end server, use your favorite editor, such as nano or vi (or vim or nvim). In this and all subsequent labs, we will assume vi because it has the shortest name 😊. You can replace vi with your favorite editor. Nano has on-screen help and may be easier to pick up.
has on-screen help and may be easier to pick up. -
In the selfsigned.cnf file, search for the label v3_ca and in the [v3_ca] section add the following two lines:
extendedKeyUsage = serverAuth
# 👇👇👇👇👇👇👇👇👇
subjectAltName = IP:YOUR_SERVER_IP # or DNS:YOUR_SERVER_FQDN
Due to a bug in openssl, it is CRUCIAL that the above two lines be added in the [v3_ca] section and not outside of it or in any other section of the file.
DNS instead of IP
If your server has a fully qualified domain name (FQDN, e.g., eecs441.eecs.umich.edu, and not the public DNS AWS assigned you), you can use it instead, tagging it with DNS instead of IP. If you specify your IP address as the subjectAltName, you can only access your server using its IP address, not by its FQDN, and vice versa.
Next, search for
copy_extensions
inselfsigned.cnf
and uncomment it:# Extension copying option: use with caution. copy_extensions = copy
-
Now create a self-signed key and certificate pair with OpenSSL using the following command:
server$ sudo openssl req -x509 -days 100 -nodes -newkey rsa:2048 -config selfsigned.cnf -keyout private/selfsigned.key -out certs/selfsigned.cert
You will be asked to fill out a series of prompts, which will look something like:
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:MI
Locality Name (eg, city) []:AA
Organization Name (eg, company) [Internet Widgits Pty Ltd]:UM
Organizational Unit Name (eg, section) []:CSE
# 👇👇👇👇👇👇👇👇👇
Common Name (e.g. server FQDN or YOUR name) []:YOUR_SERVER_IP
Email Address []:admin@your_domain.com
The most important information is
Common Name (e.g. server FQDN or YOUR name)
. Use YOUR_SERVER_IP address. -
Verify that the generated certificate has the right entries:
server$ sudo openssl x509 -text -in certs/selfsigned.cert -noout
It must have the following lines:
X509v3 extensions:
    X509v3 Extended Key Usage:
        TLS Web Server Authentication
    X509v3 Subject Alternative Name:
        IP Address:YOUR_SERVER_IP # or DNS:YOUR_SERVER_FQDN
Congrats! You have generated a public key and put it inside a self-signed certificate!
DER for front-end
For your front-end app to communicate with your back end, you’d need to install your self-signed certificate above on your device (or emulator or simulator). Some versions of the front-end OS require the certificate to be in binary (DER) format. To convert the certificate to the DER format, do:
server$ cd ~/441/
server$ sudo openssl x509 -inform PEM -outform DER -in /etc/ssl/certs/selfsigned.cert -out selfsigned.crt
server$ sudo chown ubuntu selfsigned.crt
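As a quick sanity check, you can have openssl parse the converted DER file; it should print the same subjectAltName and Extended Key Usage entries you verified earlier:
server$ openssl x509 -inform DER -in ~/441/selfsigned.crt -text -noout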
PostgreSQL
We will be using the PostgreSQL relational database management system (RDBMS) to store chatt
s posted by the front end. First we need to install PostgreSQL (and curl
and wget
):
server$ sudo apt update
server$ sudo apt install libpq-dev postgresql postgresql-contrib curl wget
Once PostgreSQL is installed:
-
Log into an interactive Postgres (
psql
) session as userpostgres
:server$ sudo -u postgres psql
You may receive the message: “could not change directory to “/root”: Permission denied”. You can safely ignore this message.
Your command-line prompt should change to
postgres=#
. - Check the version of your PostgreSQL:
SELECT version();
If the result shows a PostgreSQL version lower than 13, run [thanks to Karan A. ‘F24]:
CREATE EXTENSION pgcrypto;
-
Create a database user for your project. Make sure to select a secure password.
CREATE USER chatter WITH PASSWORD 'chattchatt';
TIP: Forgetting to do this is a common cause of getting HTTP error code 500 Internal Server Error.
All SQL commands must end with a ;.
Create a database for your project and change its owner to
chatter
:CREATE DATABASE chatterdb; ALTER DATABASE chatterdb OWNER TO chatter;
-
Connect to the database you just created:
\connect chatterdb
\connect may be shortened to \c.
Your command-line prompt should now change to chatterdb=#.
We next use SQL command to create a
chatts
table in thechatterdb
database. The table consists of three columns:username
,message
, andtime
, withtime
being automatically filled in by the database when an entry is added:CREATE TABLE chatts (username varchar(255) not null, message varchar(255) not null, id UUID not null, time timestamp with time zone DEFAULT CURRENT_TIMESTAMP(0));
-
Give user chatter access to administer the new database, including querying and inserting new data:
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO chatter;
- Insert and view a sample entry:
INSERT INTO chatts VALUES ('testuser1', 'Hello world', gen_random_uuid());
SELECT * from chatts;
You should see something like:
 username  |   message   |                  id                  |            time
-----------+-------------+--------------------------------------+----------------------------
 testuser1 | Hello world | 20f713af-015d-48c8-8b09-3036cf104134 | 2024-08-03 10:54:58.362108
(1 row)
TIP: Trying to send a message longer than 255 characters is another common cause of getting HTTP error code 500 Internal Server Error.
To delete all entries from a table:
TRUNCATE TABLE chatts;
-
You can issue the \dt command to list all tables:
\dt
        List of relations
 Schema |  Name  | Type  |  Owner
--------+--------+-------+----------
 public | chatts | table | postgres
(1 row)
-
When you are finished, exit PostgreSQL:
\q
or hit
Ctl-d
(^d
).
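Before moving on, you can optionally confirm from the regular shell that the chatter role can reach chatterdb with the password you chose, since these are exactly the credentials your back end will use. A sketch, assuming the 'chattchatt' password from above:
server$ PGPASSWORD=chattchatt psql -h localhost -U chatter -d chatterdb -c 'SELECT count(*) FROM chatts;'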
Chatter’s API
Chatter is a simple CRUD app. We’ll use the chatts table just created to hold submitted chatts. We associate a username with each message in a chatt.
To start with, chatter
has only two APIs:
- getchatts: use HTTP GET to query the database and return all found chatts
- postchatt: use HTTP POST to post a chatt as a JSON object
Chatter
does not provide “replace” (HTTP PUT) and “delete” (HTTP DELETE) functions.
The protocol handshakes:
url          <- request                          -> response
/getchatts/  <- HTTP GET {}                      -> { array of chatts } 200 OK
/postchatt/  <- HTTP POST { username, message }  -> {} 200 OK
Data formats
The getchatts
API will send back all accumulated chatts in the form of a JSON object with the key "chatts"
and the value an array of string arrays. Each string array consists of four elements “username”, “message”, “id”, and “timestamp”. For example:
{
"chatts": [["username0", "message0", "id0", "timestamp0"],
["username1", "message1", "id1", "timestamp1"],
...
]
}
Each element of the string array may have a value of JSON null
or the empty string (""
).
To post a chatt
with the postchatt
API, the front-end client sends a JSON object
consisting of "username"
and "message"
. For example:
{
"username": "ubuntu",
"message": "Hello world!"
}
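Once your back end is up and running (see the stack-specific instructions below), you can exercise both APIs from your laptop with curl. This is only a sketch, assuming the self-signed certificate from earlier (hence --insecure) and that YOUR_SERVER_IP is reachable:
laptop$ curl --insecure -X POST https://YOUR_SERVER_IP/postchatt/ -H 'Content-Type: application/json' -d '{"username": "testuser", "message": "Hello from curl"}'
laptop$ curl --insecure https://YOUR_SERVER_IP/getchatts/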
Web server framework
We provide instructions on setting up the Chatter
back-end service using three different back-end stacks. Please click the one you want to use to reveal the instructions.
Which back-end stack to use?
🔸 Go is a popular back-end language due to its easy learnability. The atreugo web framework used is built on the Fast HTTP alternative to Go’s standard net/http library. Fast HTTP makes zero memory allocation on the fast path and the atreugo-based stack ends up in the top 5% of the TechEmpower Web Framework Benchmarks (TFB).
As with most benchmarks, TFB entries do not always reflect common, casual usage, and its results should be taken with a pinch of salt. Personally though, given a choice, I would not choose to develop in Go.
🔸 If you plan to use any ML-related third-party libraries in your project’s back end, a Python-based back-end stack could make integration with such libraries easier. For production use outside of this course, be aware that in terms of performance the Django stack comes in near the bottom (88%) of the TFB ranking. It is also more involved to set up, as you can see from the instructions below.
If you plan to build your back end server in Go, know that both students and GSIs have reported intermittent connection issues running the Go backend on AWS. If you must build in Go, we recommend that you use GCP.
🔸 Rust does static type checking and data-flow analysis, resulting in a language that allows you to write safe and performant code, two goals hitherto perceived to be antithetical to each other. Alone amongst the three choices here, Rust does not rely on garbage collection for memory management. The axum web framework is a coherent framework built on the tokio asynchronous stack. This setup ranks in the top 2% of the TFB list. The back end running on mada.eecs.umich.edu is the Rust version. However, Rust does have a reputation for being hard to learn and frustrating to use, especially if the user is not well versed in the intricacies of memory usage scoping and trait conformance checking.
If you plan to build your back end server in Rust, know that GCP gives you 2GB more disk space than AWS, which you’ll need to build Rust dependencies.
Should you decide to switch from one back-end stack to another during the term, be sure to disable the previous one completely before enabling the new one or you won’t be able to start the new one due to the HTTP and HTTPS ports being already used:
server$ sudo systemctl disable nginx gunicorn chatterd
server$ sudo systemctl stop nginx gunicorn chatterd
Note: in this and subsequent labs, we will assume your folders/directories are named using the “canonical” names listed here. For example, we will always refer to the project directory as ~/441/chatterd
, without further explanations, from this lab to the last lab. If you prefer to use your own naming scheme, you’re welcome to do so, but be aware that you’d have to map your naming scheme to the canonical one in all the labs—plus we may not be able to grade your labs correctly, requiring back and forth to sort things out.
Go with atreugo
Install Go
-
ssh to your server and download the latest version of Go. Check Go’s Downloads page for the current latest version; as of this writing, it is 1.23.0.
server$ cd /tmp
server$ wget https://go.dev/dl/go1.23.0.linux-amd64.tar.gz
The version of Go distributed with the Ubuntu package manager apt is not the latest and is incompatible with our solution, which requires Go version 1.18 or later.
Go to Go’s download and install page.
-
Skip the first section on Go download; you’ve already downloaded Go.
Click on the
Linux
tab in the second (Go install
) section of the instructions. - Follow the instructions.
To update to a later version of Go, follow the instructions in Managing Go installations—which would have you manually delete the existing Go folder (usually /usr/local/go/
), so don’t put any custom files in there.
chatterd module
First create and change into a directory where you can keep your Go module files:
server$ mkdir ~/441/chatterd
server$ cd ~/441/chatterd
Create a Go module called chatterd
to house the Chatter
back end:
server$ go mod init chatterd
# output:
go: creating new go.mod: module chatterd
Create a file called main.go
:
server$ vi main.go
Put the following lines in your main.go
:
package main
import(
"log"
"chatterd/router"
"chatterd/views"
)
func main() {
views.New()
go func() { log.Fatal(router.Redirect().ListenAndServe()) }()
log.Fatal(router.New().ListenAndServe())
}
We imported two Go packages, chatterd/router and chatterd/views, initialized the views package (which sets up the database connection pool), and instantiated two routers. One router serves as the HTTPS web server by calling its ListenAndServe() function. The other router redirects HTTP to HTTPS and runs on a separate goroutine.
chatterd/router package
We set up url routing in the router
package. Create the directory router
and add the file router.go
to this directory:
server$ cd ~/441/chatterd
server$ mkdir router
server$ vi router/router.go
Put the following in router.go
:
package router
import (
"chatterd/views"
"net/http"
"github.com/savsgio/atreugo/v11"
)
type Route struct {
HTTPMethod string
URLPath string
URLHandler atreugo.View
}
var routes = []Route {
{"GET", "/getchatts/", views.GetChatts},
{"POST", "/postchatt/", views.PostChatt},
}
func New() *atreugo.Atreugo {
router := atreugo.New(atreugo.Config{
Addr: ":443",
TLSEnable: true,
CertKey: "/etc/ssl/private/selfsigned.key",
CertFile: "/etc/ssl/certs/selfsigned.cert",
//Prefork: true,
Reuseport: true,
})
router.RedirectTrailingSlash(true)
for _, route := range routes {
router.Path(route.HTTPMethod, route.URLPath, route.URLHandler)
}
return router
}
func Redirect() *atreugo.Atreugo {
return atreugo.New(atreugo.Config{
Addr: ":80",
Reuseport: true,
NotFoundView: func(c *atreugo.RequestCtx) error {
views.LogHTTP(http.StatusPermanentRedirect, c)
return c.RedirectResponse("https://"+string(c.Host())+string(c.RequestURI()), http.StatusPermanentRedirect)
},
})
}
The code above sets up the router to serve Chatter
’s two APIs, getchatts
and postchatt
. It routes HTTP GET requests with path /getchatts/
to the GetChatts()
function of the views
package and HTTP POST requests with path /postchatt/
to the PostChatt()
function of the views
package.
It also created two atreugo
instantiation functions: New()
creates a router to serve HTTPS requests following the two paths above, while Redirect()
creates a router that permanently redirects all HTTP requests to the HTTPS server.
We have disabled Prefork above. If enabled, 8 servers will be preforked by default. To control the number of preforked servers, set the shell environment variable GOMAXPROCS prior to starting the server, for example:
server# export GOMAXPROCS=3
This will prefork 3 instances of the HTTPS server and 3 instances of the HTTP redirect server.
chatterd/views package
We implement the URL path API handlers in the views
package.
First create the package directory:
server$ mkdir views
Create and edit a file called views.go
:
server$ vi views/views.go
with the following imports:
package views
import (
"context"
"encoding/json"
"log"
"net/http"
"time"
"github.com/savsgio/atreugo/v11"
"github.com/jackc/pgx/v4/pgxpool"
)
and struct
definitions:
type Chatt struct {
Username string `json:"username"`
Message string `json:"message"`
Id string `json:"id"`
Timestamp time.Time `json:"timestamp"`
}
var emptyJSON = struct {
Empty *string `json:"-"`
}{}
We set up the backend with a log function and a database connection pool:
func LogHTTP(sc int, c *atreugo.RequestCtx) {
log.Println("[ATR] |", sc, `|`, c.RemoteAddr().String(), `|`, string(c.Method()), string(c.RequestURI()))
}
var ctx = context.Background()
var chatterDB *pgxpool.Pool
func New() {
var err error
const (
psqlUser = "chatter"
psqlPasswd = "chattchatt"
psqlDB = "chatterdb"
)
chatterDB, err = pgxpool.Connect(ctx, "host=localhost user="+psqlUser+" password="+psqlPasswd+" dbname="+psqlDB)
if err != nil {
panic(err)
}
}
The New()
function instantiates a new views
by allocating a pool of open connections to our PostgreSQL chatterdb
database. Maintaining a pool of open connections avoids the cost of opening and closing a connection on every database operation. The psqlPasswd
here must match the one you used when setting up Postgres earlier.
The URL handler GetChatts()
uses the pool to query the database for stored chatt
s and returns them to the client in the expected JSON format. Similarly, PostChatt()
receives a posted chatt
in the expected JSON format, unmarshalls the Chatt
struct, and inserts it into the database through the pool. The UUID and time stamp of each chatt are automatically generated at insertion time.
func GetChatts(c *atreugo.RequestCtx) error {
var chattArr [][]any
var chatt Chatt
rows, err := chatterDB.Query(ctx, `SELECT username, message, id, time FROM chatts ORDER BY time DESC`)
if err != nil {
panic(err)
}
defer rows.Close()
for rows.Next() {
rows.Scan(&chatt.Username, &chatt.Message, &chatt.Id, &chatt.Timestamp)
chattArr = append(chattArr, []any{chatt.Username, chatt.Message, chatt.Id, chatt.Timestamp})
}
LogHTTP(http.StatusOK, c)
return c.JSONResponse(map[string][][]any{"chatts": chattArr}, http.StatusOK)
}
func PostChatt(c *atreugo.RequestCtx) error {
var chatt Chatt
if err := json.Unmarshal(c.Request.Body(), &chatt); err != nil {
LogHTTP(http.StatusUnprocessableEntity, c)
return c.JSONResponse([]byte(err.Error()), http.StatusUnprocessableEntity)
}
_, err := chatterDB.Exec(ctx, `INSERT INTO chatts (username, message, id) VALUES ($1, $2, gen_random_uuid())`, chatt.Username, chatt.Message)
if err != nil {
LogHTTP(http.StatusInternalServerError, c)
return c.JSONResponse([]byte(err.Error()), http.StatusInternalServerError)
}
LogHTTP(http.StatusOK, c)
return c.JSONResponse(emptyJSON, http.StatusOK)
}
Build and run
To build your server:
server$ go get
server$ go build
Go is a compiled language, like C/C++ and unlike Python, which is an interpreted language. This means you must run go build every time you make changes to your code for the changes to show up in your executable.
To run your server:
server$ sudo ./chatterd
If you have Prefork enabled and want to prefork 3 servers each for HTTPS and HTTP redirect, do instead:
server$ sudo GOMAXPROCS=3 ./chatterd
You can test your implementation following the instructions in the Testing Chatter
APIs section.
References
-
Working with Go
- Golang tutorial series: gentle and clear, though a bit out of date in parts now.
- Running multiple HTTP servers in Go
- Making a RESTful JSON API in Go
- Golang Json Marshal Example
- HTTP Status Code
- Go: Format a time or date
- Dipping Your Feet Into Golang Servers with Fiber
- atreugo
Python with Django
The following is based on DigitalOcean’s How To Set Up Django with Postgres, Nginx, and Gunicorn on Ubuntu 20.04 and
How to Create a Self-Signed SSL Certificate for Nginx in Ubuntu 18.04, though the instructions have been customized to the specifics of our Chatter
project, especially when it comes to directory and variable naming, for the sake of narrative consistency across all our labs.
Install Python
We will first use the apt
package manager to install the following:
- Python 3.8 or later (up to 3.12.3 tested)
- Nginx (latest version)
Then we’ll install the following using the Python package manager pip
:
- Django 4.1 or later (up to 5.1 tested)
- Gunicorn (latest version)
server$ sudo apt update
server$ sudo apt install python3-pip python3-dev python3-venv nginx
server$ sudo ln -s /usr/bin/python3 /usr/bin/python
Confirm that you’ve installed python version 3.8 or later:
server$ python --version
# output:
Python 3.8 # or later
Troubleshooting Python
-
If your shell doesn’t recognize the command or the output doesn’t say
Python 3.8
or later, you’d need to switch your python to a later version. If you don’t know how to switch to a different version of python, try this tutorial. -
If you get any error message, try:
server$ sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 1
# output:
update-alternatives: using /usr/bin/python3 to provide /usr/bin/python (python) in auto mode
server$ sudo update-alternatives --list python
# output:
/usr/bin/python3
server$ python --version
# output:
Python 3.8 # or later
Python virtual environment
Next, set up a Python virtual environment for easier package management.
-
Create and change into a directory where you can keep your project files:
server$ mkdir ~/441/chatterd
server$ cd ~/441/chatterd
-
Within the project directory, create a Python virtual environment:
server$ python -m venv env
This will create a directory called env within your chatterd directory. Inside, it will install a local version of Python and a local version of pip. We can use this to install and configure an isolated Python environment for our project.
Activate your virtual environment:
server$ source env/bin/activate
Your prompt should change to indicate that you are now operating within a Python virtual environment. It will look something like this:
(env) ubuntu@YOUR_SERVER_IP:~/441/chatterd$
Henceforth we will represent the prompt as just (env):chatterd$
to indicate being inside the python virtual environment. -
Within your activated virtual environment, install django, gunicorn, and the psycopg PostgreSQL adaptor using the local instance of pip:
(env):chatterd$ pip install django gunicorn "psycopg[binary]"
You should now have all of the software packages needed to start a Django project.
You can check whether Django is installed and which version is installed with:
server$ python -m django --version
# output:
4.1 # or later
Django web framework
-
Create a Django project:
(env):chatterd$ django-admin startproject routing ~/441/chatterd
At this point your project directory (~/441/chatterd) should have the following content:
- ~/441/chatterd/env/: the virtual environment directory we created earlier.
- ~/441/chatterd/manage.py: the Django project management script.
- ~/441/chatterd/routing/: the Django project package. This should contain the __init__.py, asgi.py, settings.py, urls.py, and wsgi.py files.
-
-
Edit the project settings:
(env):chatterd$ vi routing/settings.py
At the top of the file add:
import os
Next locate the ALLOWED_HOSTS directive. This defines a list of addresses or domain names that clients may use to connect to the Django server instance. Any incoming request with a Host header that is not in this list will raise an exception. Django requires that you set this to prevent a certain class of security vulnerability.
In the square brackets, list the IP addresses or domain names associated with your Django server. Each item must be listed in single quotes, with entries separated by commas. A few commented-out examples are provided in the snippet below. An entry that begins with a period serves an entire domain and its subdomains.
Note: Be sure to include ‘localhost’ as one of the options, for testing. You can also add ‘127.0.0.1’, the IP address indicating localhost. Some online examples may use it instead of ‘localhost’.
. . .
# The simplest case: just add the domain name(s) and IP addresses of your Django server
# ALLOWED_HOSTS = [ 'example.com', '203.0.113.5']
# To respond to 'example.com' and any subdomains, start the domain with a dot
# ALLOWED_HOSTS = ['.example.com', '203.0.113.5']
# ALLOWED_HOSTS = ['your_server_domain_or_IP', 'second_domain_or_IP', . . ., 'localhost']
# 👇👇👇👇👇👇👇👇👇
ALLOWED_HOSTS = ['YOUR_SERVER_IP', 'localhost', '127.0.0.1']
YOUR_SERVER_IP will be your external IP address. Don’t list your internal IP address or your DNS.
It is a common bug not to replace YOUR_SERVER_IP with your external IP address in the above file.
Next find the DATABASES configuration. The default configuration in the file is for a SQLite database. We want to use a PostgreSQL database for our project, so we need to change the settings to our PostgreSQL database information. We need to give the database name, the database username, the database user’s password, and then specify that the database is located on the local computer. The PASSWORD here must match the one you used when setting up Postgres earlier. You can leave the PORT setting as an empty string:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'chatterdb',
        'USER': 'chatter',
        'PASSWORD': 'chattchatt',
        'HOST': 'localhost',
        'PORT': '',
    }
}
Scroll to the bottom of the file and add a setting indicating where the static files should be placed. The Nginx web server is optimized to serve static files fast, calling your python code only when needed to serve dynamic content. Here we’re telling Django to put static files for Nginx in a directory called static in the base project directory (~/441/chatterd/static/):
STATIC_URL = '/static/'
STATIC_ROOT = BASE_DIR / 'static' # added line
Save and close the
settings.py
file when you’re done. -
We can now migrate Django’s administrative database schema to our PostgreSQL database using the management script (the expected output is shown):
(env):chatterd$ ./manage.py makemigrations
# output:
No changes detected
(env):chatterd$ ./manage.py migrate
# output:
Operations to perform:
  Apply all migrations: admin, auth, contenttypes, sessions
Running migrations:
  Applying contenttypes.0001_initial... OK
  ...
-
Create an administrative user for the project:
(env):chatterd$ ./manage.py createsuperuser
You can use your uniqname@umich.edu, and choose and confirm a password.
-
Collect all static content into the static directory we configured:
(env):chatterd$ ./manage.py collectstatic
# output:
130 static files copied to '/home/ubuntu/441/chatterd/static'.
Testing Django
If you’re on AWS, first allow access to port 8000 which we’ll be using for testing (this step is not needed for GCP):
(env):chatterd$ sudo ufw allow 8000
To run and test your development server, on either platform:
(env):chatterd$ ./manage.py runserver 0.0.0.0:8000
In your web browser on your laptop
, visit http://YOUR_SERVER_IP:8000/
.
You should see the default Django index page:
You can also test from the server host using curl, but you’ll see only the HTML source of the graphical page:
server$ curl http://localhost:8000/
# output:
<!doctype html>
<html lang="en-us" dir="ltr">
<head>
    <meta charset="utf-8">
    <title>The install worked successfully! Congratulations!</title>
...
When you are satisfied that Django is working, hit Ctl-C
in server
’s terminal window to shut down the development server.
Testing Django with Gunicorn
-
The last thing we want to do before leaving our virtual environment is test Gunicorn to make sure that it can serve the application. We can do this by entering our project directory and using gunicorn to load the project’s WSGI (Web Server Gateway Interface) module:
(env):chatterd$ gunicorn --bind 0.0.0.0:8000 routing.wsgi
This will start Gunicorn on the same interface that the Django development server was running on. You should see output similar to this:
[2022-07-21 21:43:14 -0400] [32224] [INFO] Starting gunicorn 20.1.0
[2022-07-21 21:43:14 -0400] [32224] [INFO] Listening at: http://0.0.0.0:8000 (32224)
[2022-07-21 21:43:14 -0400] [32224] [INFO] Using worker: sync
[2022-07-21 21:43:14 -0400] [32227] [INFO] Booting worker with pid: 32227
-
When you are done testing, hit
Ctl-C
in theserver
’s terminal window to stop Gunicorn. -
We’re done configuring Django. Exit the virtual environment:
(env):chatterd$ deactivate
Setup Gunicorn
We use the Unix tool systemd
to run Gunicorn whenever a connection attempt is made (by Nginx) to a socket we associate with Gunicorn. The socket will be created by systemd
at Gunicorn service start time. We set up Gunicorn service configuration to have it started by systemd
automatically on system boot.
-
Start by creating and opening a
systemd
socket configuration file for Gunicorn with sudo privileges:server$ sudo vi /etc/systemd/system/gunicorn.socket
-
Inside, we create a [Unit] section to describe the socket, a [Socket] section to specify the socket location, and an [Install] section to make sure the socket is created at the right time:
[Unit]
Description=gunicorn socket

[Socket]
ListenStream=/run/gunicorn.sock

[Install]
WantedBy=sockets.target
Save and close the file when you are finished.
-
Next, create and open a systemd service file for Gunicorn, with sudo privileges. The service file name must match the socket file name, except for the extension:
server$ sudo vi /etc/systemd/system/gunicorn.service
-
Enter the following into your gunicorn.service file:
[Unit]
Description=gunicorn daemon
Requires=gunicorn.socket
After=network.target

[Service]
User=ubuntu
Group=www-data
WorkingDirectory=/home/ubuntu/441/chatterd
ExecStart=/home/ubuntu/441/chatterd/env/bin/gunicorn \
          --access-logfile - \
          --workers 3 \
          --bind unix:/run/gunicorn.sock \
          routing.wsgi:application

[Install]
WantedBy=multi-user.target
With that, our Gunicorn service configuration is done. Save and close the file.
-
We can now start and enable the Gunicorn socket. This will create the socket file at /run/gunicorn.sock now and at boot. When a connection is made to that socket, systemd will automatically start the gunicorn.service to handle it:
server$ sudo systemctl start gunicorn.socket
server$ sudo systemctl enable gunicorn.service
server$ sudo systemctl start gunicorn
If you subsequently make changes to the /etc/systemd/system/gunicorn.service file, you’d need to reload the daemon to reinitialize the service configuration and restart the Gunicorn process:
server$ sudo systemctl daemon-reload
server$ sudo systemctl restart gunicorn
We can confirm that the operation was successful by checking for the socket file.
server$ file /run/gunicorn.sock
# output:
/run/gunicorn.sock: socket
and Gunicorn’s status:
server$ systemctl status gunicorn
# output:
● gunicorn.service
     Loaded: loaded (/etc/systemd/system/gunicorn.service; disabled; vendor preset: enabled)
     Active: active (running) since Wed 2022-05-27 16:47:07 UTC; 18s ago
     👆👆👆👆👆👆👆👆👆👆
     . . .
Confirm that Gunicorn’s status reported on the third line is
Active: active (running)
.
TIP: sudo systemctl status gunicorn is your BEST FRIEND in debugging Django. If you get an HTTP error code 500 Internal Server Error or if you just don’t know whether your HTTP request has made it to the server, the first thing to do is run sudo systemctl status gunicorn on your server and study its output. It also shows error messages from your python code, including any debug printouts from your code. The command systemctl status gunicorn is by far the most useful go-to tool for diagnosing Django back-end server problems.
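If the status output scrolls past what you need, the same log (including your Python tracebacks and any print() output) can be browsed with journalctl, for example:
server$ sudo journalctl -u gunicorn -n 100 --no-pager
server$ sudo journalctl -u gunicorn -f    # follow the log live; Ctl-C to stop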
Nginx to Gunicorn
Now that Gunicorn is set up, we configure Nginx to pass traffic to it.
-
Start by creating and opening a web site configuration file we’ll call chatterd in Nginx’s sites-available directory:
server$ sudo vi /etc/nginx/sites-available/chatterd
-
Inside, open up a new server block. We specify that the server will listen on the default HTTPS port 443, for both IPv4 and IPv6. We use the private key and certificate you created earlier to serve HTTPS traffic:
server {
    listen 443 ssl;
    listen [::]:443 ssl;   # add support for IPv6

    ssl_certificate /etc/ssl/certs/selfsigned.cert;
    ssl_certificate_key /etc/ssl/private/selfsigned.key;
    ssl_protocols TLSv1.2;
    ssl_ciphers ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384;

    location = /favicon.ico { access_log off; log_not_found off; }
    location /static/ {
        root /home/ubuntu/441/chatterd;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;
    }
}
-
So that visitors to your web site who accidentally enter http instead of https won’t be confronted with an error message, we automatically redirect them to https permanently. In your web site configuration file above, create a second server block after the one above. Remember to replace YOUR_SERVER_IP with your external IP:
# . . .
server {
    listen 80;
    listen [::]:80;   # IPv6
    # 👇👇👇👇👇👇👇👇👇
    server_name YOUR_SERVER_IP;
    return 308 https://$server_name$request_uri;   # permanent redirect
}
-
Save and close the file. Now, we can enable the file by linking it to the sites-enabled directory:
server$ sudo ln -s /etc/nginx/sites-available/chatterd /etc/nginx/sites-enabled
-
Test your Nginx configuration for syntax errors:
server$ sudo nginx -t
# output:
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
-
If no errors are reported, go ahead and restart Nginx:
server$ sudo systemctl restart nginx
Be sure to do this every time you make changes to the chatterd server configuration file.
To check that your nginx is running:
server$ sudo systemctl status nginx
-
Finally, we need to open up our firewall to normal traffic on port 80:
server$ sudo ufw allow 'Nginx Full'
- To test, you can use curl:
laptop$ curl --insecure https://YOUR_SERVER_IP/
or, from your Chrome browser on your laptop, browse to https://YOUR_SERVER_IP/. It will warn you that “Your connection is not private”. Click the Advanced button. Then bravely click Proceed to YOUR_SERVER_IP (unsafe). You should see the default Django rocket page again.
On macOS, Safari won’t allow you to visit the site; you have to use Chrome.
-
You can verify that your redirect from HTTP to HTTPS functions correctly by accessing http://YOUR_SERVER_IP/ (note the use of http, not https). Again, you must do this from a Chrome browser or curl --insecure.
Troubleshooting
If this last step does not show your application, consult DigitalOcean’s tutorial and search for “Troubleshooting Nginx and Gunicorn” or consult the teaching staff.
Summary: configuration files
In summary, here are the configuration files for the three packages we rely on for a Python-based back-end stack:
Nginx: web server that listens on port 80
file: /etc/nginx/sites-enabled/chatterd
, if modified run:
server$ sudo nginx -t
server$ sudo systemctl restart nginx
Gunicorn: serves Django project
file: /etc/systemd/system/gunicorn.service
, if modified run:
server$ sudo systemctl daemon-reload
server$ sudo systemctl restart gunicorn
Django: framework to route HTTP requests to your python code
directory: ~/441/chatterd/routing/
, in particular urls.py
(see below). If modified run:
server$ sudo systemctl restart gunicorn
Congratulations! Your server is all set up! Now to implement Django’s views
, which comprises our Chatter
back-end code.
Chatter back end
Start by creating the model-view-controller (MVC) framework expected by Django for all python projects:
server$ cd ~/441/chatterd
server$ source env/bin/activate
(env):chatterd$ ./manage.py startapp app
(env):chatterd$ deactivate
This will create a directory ~/441/chatterd/app
with the necessary python files in it.
In your ~/441/chatterd
project directory, you should now have two directories that were
created by Django:
-
~/441/chatterd/routing/
created withstartproject
earlier. It contains the Django web framework for your app. We will be modifyingsettings.py
andurls.py
in this directory. -
~/441/chatterd/app/
that we just created withstartapp
. It contains your app’s domain/business logic and views and controllers. We will be modifyingviews.py
in this directory.
These are distinct directories and both must be retained (don’t delete or merge them!).
urls.py
We set up URL path API routing in the routing/urls.py
file. Open and edit the file to add the following import line below the two existing ones:
from app import views
Next add to the contents of urlpatterns
array:
path('getchatts/', views.getchatts, name='getchatts'),
path('postchatt/', views.postchatt, name='postchatt'),
The code above sets up Django’s router to serve Chatter
’s two APIs, getchatts
and postchatt
. It routes HTTP GET requests with path getchatts/
to the getchatts()
function of the views
module and HTTP POST requests with path postchatt/
to the postchatt()
function of the views
module.
The Chatter
APIs, getchatts
and postchatt
will be implemented in ~/441/chatterd/app/views.py
.
views.py
Add to ~/441/chatterd/app/views.py
:
from django.http import JsonResponse, HttpResponse
from django.db import connection
from django.views.decorators.csrf import csrf_exempt
import json
def getchatts(request):
if request.method != 'GET':
return HttpResponse(status=404)
with connection.cursor() as cursor:
cursor.execute('SELECT username, message, id, time FROM chatts ORDER BY time DESC;')
rows = cursor.fetchall()
response = {}
response['chatts'] = rows
return JsonResponse(response)
In getchatts(), we use the database cursor to retrieve chatts. The cursor in psycopg is automatically closed when its with block exits. Once you have retrieved all the rows from the database, you insert them into the response dictionary to be returned to the front end.
For postchatt()
, by default, Django wants to see CSRF (cross-site request forgery) cookies for posting. Since we’re not implementing csrf
, we ask for exemption. In views.py
add:
@csrf_exempt
def postchatt(request):
if request.method != 'POST':
return HttpResponse(status=404)
json_data = json.loads(request.body)
username = json_data['username']
message = json_data['message']
with connection.cursor() as cursor:
cursor.execute('INSERT INTO chatts (username, message, id) VALUES '
'(%s, %s, gen_random_uuid());', (username, message))
return JsonResponse({})
For more Python-PostgreSQL interaction, see Passing parameters to SQL queries.
As before, every time you make changes to either app/views.py or routing/urls.py, you need to restart Gunicorn:
server$ sudo systemctl restart gunicorn
Leave your nginx
and gunicorn
running until you have received your lab grade.
References
- Nginx+Gunicorn+Django+PostgreSQL setup
- AWS Services You Should Know When Deploying Your Django App
- Django introduction
- To read more about how nginx, gunicorn, and django work together:
Rust with axum
Install Rust
Note that GCP gives you 2 GB more disk space than AWS, which allows Rust to build dependencies without complaining of running out of space.
ssh
to your server and install Rust:
server$ sudo apt install gcc # cargo depends on gcc's linker
server$ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
server$ rustup update
If you see:
rustup: Command not found.
try logging out of your server and ssh’ing back in. If the problem persists and you need help updating your PATH shell environment variable, please see the teaching staff.
The command rustup update
is also how you can subsequently update your installation of Rust to a new version.
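To confirm the toolchain is installed and on your PATH, check the versions; any recent stable release should work for this lab:
server$ rustc --version
server$ cargo --version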
chatterd package
First create and change into a directory where you can keep your Rust chatterd
package files:
server$ cd ~/441
server$ cargo new chatterd
# output:
Created binary (application) `chatterd` package
This will create the ~/441/chatterd/
directory for you. Change to this directory and edit the file Cargo.toml
to list all the 3rd-party libraries (crates
in Rust-speak) we will be using.
server$ cd chatterd
server$ vi Cargo.toml
In Cargo.toml
, add the following below the [dependencies]
tag:
axum = "0.7.5"
axum-server = { version = "0.7.1", features = ["tls-rustls"] }
bb8 = "0.8.3"
bb8-postgres = "0.8.1"
chrono = { version = "0.4.38", features = ["serde"] }
postgres = { version = "0.19.7", features = ["with-chrono-0_4", "with-uuid-1"] }
serde = { version = "1.0.197", features = ["derive"] }
serde_json = "1.0.117"
tokio = { version = "1.37.0", features = ["full"] }
tokio-postgres = "0.7.10"
tracing = "0.1.40"
tracing-subscriber = { version = "0.3.18", features = ["env-filter"] }
uuid = { version = "1.10.0", features = ["v4", "macro-diagnostics", "serde"] }
In ~/441/chatterd/src/
a file main.rs
has also been created for you. Edit the file:
server$ vi src/main.rs
and replace the existing lines in main.rs
with the following:
#![allow(non_snake_case)]
use axum::{
    extract::Host,
    handler::HandlerWithoutStateExt,
    http::{StatusCode, Uri},
    response::Redirect,
    routing::{get, post},
    BoxError,
    Router,
};
use axum_server::{
    tls_rustls::RustlsConfig,
    Server,
};
use bb8::Pool;
use bb8_postgres::PostgresConnectionManager;
use std::{net::SocketAddr};
use tokio_postgres::NoTls;
use tracing_subscriber::{layer::SubscriberExt, util::SubscriberInitExt};

pub mod handlers;

#[tokio::main]
async fn main() {
    tracing_subscriber::registry()
        .with(tracing_subscriber::EnvFilter::new("chatterd=trace"))
        .with(tracing_subscriber::fmt::layer())
        .init();

    // run http to https redirect
    tokio::spawn(http_redirect());

    // setup connection pool for PostgreSQL
    let pgmanager = PostgresConnectionManager::new_from_stringlike(
        "host=localhost user=chatter password=chattchatt dbname=chatterdb",
        NoTls,
    )
    .unwrap();
    let pgpool = Pool::builder().build(pgmanager).await.unwrap();

    let router = Router::new()
        .route("/getchatts/", get(handlers::getchatts))
        .route("/postchatt/", post(handlers::postchatt))
        .with_state(pgpool); // must always be last line in Router set up

    // port number the HTTPS server will bind to:
    let addr = SocketAddr::from(([0, 0, 0, 0], 443));
    tracing::debug!("https server listening on {}", addr);

    // certificate and private key used with HTTPS
    let certkey = RustlsConfig::from_pem_file(
        "/etc/ssl/certs/selfsigned.cert",
        "/etc/ssl/private/selfsigned.key",
    )
    .await
    .unwrap();

    // run the HTTPS server
    axum_server::bind_rustls(addr, certkey)
        .serve(router.into_make_service_with_connect_info::<SocketAddr>())
        .await
        .unwrap();
}
After listing all our imports, we export a module handlers, which we will define later.
In main(), we enable logging (tracing) and spawn an asynchronous function to redirect all HTTP requests to our HTTPS server. Next we set up a pool of open connections to our PostgreSQL chatterdb database. Maintaining a pool of open connections avoids the cost of opening and closing a connection on every database operation. The password used in creating pgmanager must match the one you used when setting up Postgres earlier. The connection pool is passed to both URL path API handlers as Router State.
The code above also sets up the axum server to route Chatter’s two APIs, getchatts and postchatt. It routes HTTP GET requests with path /getchatts/ to the getchatts() function of the handlers module and HTTP POST requests with path /postchatt/ to the postchatt() function of the handlers module.
It then starts the axum_server bound to the default HTTPS port, with the given certificate and private key, and the provided URL path API routing.
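As an illustration of this routing pattern only (the paths and handlers below are made up and are not part of chatterd), a minimal, self-contained sketch of an axum router looks like this:
// Sketch only: each .route() call pairs a URL path with a method router
// (get() or post()) wrapping an async handler function.
use axum::{routing::{get, post}, Router};

async fn hello() -> &'static str {
    "hello" // any type implementing IntoResponse can be returned
}

async fn echo(body: String) -> String {
    body // echoes the request body back to the client
}

fn demo_router() -> Router {
    Router::new()
        .route("/hello/", get(hello)) // HTTP GET /hello/ -> hello()
        .route("/echo/", post(echo))  // HTTP POST /echo/ -> echo()
}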
Earlier, we spawned an asynchronous function to redirect all HTTP requests to this HTTPS server. Here’s the code to perform the redirection (from axum’s example-tls-rustls). Add it to the end of main.rs:
async fn http_redirect() {
    fn make_https(host: String, uri: Uri) -> Result<Uri, BoxError> {
        let mut parts = uri.into_parts();
        parts.scheme = Some(axum::http::uri::Scheme::HTTPS);
        if parts.path_and_query.is_none() {
            parts.path_and_query = Some("/".parse().unwrap());
        }
        parts.authority = Some(host.parse()?);
        Ok(Uri::from_parts(parts)?)
    }

    let redirect = move |Host(host): Host, uri: Uri| async move {
        match make_https(host, uri) {
            Ok(uri) => Ok(Redirect::permanent(&uri.to_string())),
            Err(error) => {
                tracing::warn!(%error, "failed to convert URI to HTTPS");
                Err(StatusCode::BAD_REQUEST)
            }
        }
    };

    let addr = SocketAddr::from(([0, 0, 0, 0], 80));
    tracing::debug!("http redirect listening on {}", addr);

    Server::bind(addr)
        .serve(redirect.into_make_service())
        .await
        .unwrap();
}
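Once your server is up (see Build and run below), you can sanity-check the redirect from your laptop. Since Redirect::permanent is used, you should see a 308 status pointing at the corresponding HTTPS URL (the exact header formatting may vary slightly):
laptop$ curl -sI http://YOUR_SERVER_IP/getchatts/
# expected output (approximately):
HTTP/1.1 308 Permanent Redirect
location: https://YOUR_SERVER_IP/getchatts/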
handlers module
We implement the URL path API handlers in the module handlers.rs:
server$ vi src/handlers.rs
Add the following contents:
#![allow(non_snake_case)]
use axum::{
    extract::{ConnectInfo, Json, State},
    http::{Method, StatusCode, Uri},
};
use bb8::Pool;
use bb8_postgres::PostgresConnectionManager;
use chrono::{DateTime, Local};
use serde::{Deserialize, Serialize};
use serde_json::{json, Value};
use std::{net::SocketAddr, str};
use tokio_postgres::NoTls;
use uuid::Uuid;

type PGPool = Pool<PostgresConnectionManager<NoTls>>;

#[derive(Debug, Serialize, Deserialize)]
pub struct Chatt {
    username: String,
    message: String,
    id: Option<String>,
    timestamp: Option<DateTime<Local>>,
}

pub async fn getchatts(
    State(pgpool): State<PGPool>,
    ConnectInfo(clientIP): ConnectInfo<SocketAddr>,
    method: Method,
    uri: Uri,
) -> Json<Value> {
    let chatterDB = pgpool
        .get()
        .await
        .unwrap();

    let mut chattArr: Vec<Vec<Option<String>>> = Vec::new();
    for row in chatterDB
        .query(
            "SELECT username, message, id, time FROM chatts ORDER BY time DESC",
            &[],
        )
        .await
        .unwrap()
    {
        chattArr.push(vec![
            row.get(0),
            row.get(1),
            Some(row.get::<usize, Uuid>(2).to_string()),
            Some(row.get::<usize, DateTime<Local>>(3).to_string()),
        ]);
    }

    tracing::debug!(
        " {:?} | {:?} | {:?} {:?}",
        StatusCode::OK,
        clientIP,
        method,
        uri.path()
    );
    Json(json!({ "chatts": chattArr }))
}

pub async fn postchatt(
    State(pgpool): State<PGPool>,
    ConnectInfo(clientIP): ConnectInfo<SocketAddr>,
    method: Method,
    uri: Uri,
    Json(chatt): Json<Chatt>,
) -> (StatusCode, Json<Value>) {
    let chatterDB = pgpool
        .get()
        .await
        .unwrap();

    let dbStatus = chatterDB
        .execute(
            "INSERT INTO chatts (username, message, id) VALUES ($1, $2, gen_random_uuid())",
            &[&chatt.username, &chatt.message],
        )
        .await
        .map_or_else(
            |err| (StatusCode::INTERNAL_SERVER_ERROR, Json(json!(err.to_string()))),
            |_| (StatusCode::OK, Json(json!({}))),
        );

    tracing::debug!(
        " {:?} | {:?} | {:?} {:?}",
        dbStatus.0,
        clientIP,
        method,
        uri.path()
    );
    dbStatus
}
The handler getchatts() uses the connection pool to query the database for stored chatts and returns them to the client in the expected JSON format. Similarly, postchatt() receives a posted chatt in the expected JSON format, has it deserialized into the Chatt struct, and inserts it into the database through the connection pool. The UUID and time stamp of each chatt are automatically generated at insertion time.
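For example, a client posting the (made-up) JSON body below supplies only username and message; because id and timestamp are Option fields, serde deserializes the missing fields to None, and the database fills them in at insertion time:
{ "username": "rusty", "message": "Hello from axum!" }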
Build and run
To build your server:
server$ cargo build --release
server$ ln -s target/release/chatterd chatterd
The first time around, it will take some time to download and build all the 3rd-party crates. Be patient.
Build release version?
We would normally build for development without the --release flag, but due to the limited disk space on the AWS virtual host, cargo build for the debug version often runs out of space. The release version at least doesn’t keep debug symbols around.
Linking error with cargo build
When running cargo build --release, if you see:
error: linking with cc failed: exit status: 1
note: collect2: fatal error: ld terminated with signal 9 [Killed]
below a long list of object files, try running cargo build --release again. It usually works the second time around, when there is less remaining linking to do. If the error persists, please talk to the teaching staff.
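Signal 9 typically means the linker was killed by the kernel’s out-of-memory killer, which can happen on a small virtual host. Before retrying, you can check how much memory and disk space remain:
server$ free -h   # available memory
server$ df -h ~   # free space on the file system holding your home directory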
To run your server:
server$ sudo ./chatterd
You can test your implementation following the instructions in the Testing Chatter APIs section.
References
- The Rust Programming Language: the standard and best intro to Rust.
- axum
- axum_server
- axum_server::tls_rustls
- axum examples
- http::StatusCode: see the list of Associated Constants on the left menu.
- axum::extract
- axum::Extension
- Serde JSON
- Postgres with Rust
- postgres::types::FromSql
- chrono::DateTime
- http::Uri
Testing Chatter APIs
There are several ways to test HTTP POST. You can use a REST API client with a graphical interface or you can use a command-line tool.
with a REST API client
To test HTTP POST graphically, we could use a REST API client such as Insomnia, Postman, or, if you use VSCode, the Thunder Client extension.
Which REST API client to use?
Postman was the earliest REST API client but is, unfortunately, showing its age. Mainly, it has not been updated to support HTTP/2, which is used by both Android and iOS. We strongly encourage you to use Insomnia instead of Postman in this course. If you prefer to stay in VSCode, Thunder Client is equally serviceable.
When you first launch Insomnia, it will ask you to log in and set up E2EE to keep your data safe. If you don’t need to keep your data in their back end, just click on the Use the local Scratch Pad option (screenshot).
To test with Insomnia, first click the Preferences gear at the extreme lower left corner, scroll down to the Request/Response section, and uncheck Validate Certificates (screenshot). Your certificate wasn’t signed by a trusted certification authority; it was self-signed.
- You should see three panes in Insomnia, with the middle pane showing a dropdown with GET selected.
- Click on GET to show the drop-down menu and select POST.
- Enter https://YOUR_SERVER_IP/postchatt/ in the field next to POST. Remember to replace YOUR_SERVER_IP with your server’s external IP.
- Below that there’s a button with the title Body. Click on Body and, of the options in the submenu that come up, select JSON.
- You can now enter the following in the box under the submenu:
{ "username": "Insomnia", "message": "Are you sleeping?" }
and click the big purple Send button. If everything works as expected, the right pane of Insomnia should have a header with a green button saying 200 OK, and the pane itself should simply display {} under Preview.
- On the left pane of Insomnia, you can create a new request by clicking on the small plus-sign icon and selecting HTTP Request in the drop-down menu (screenshot). In the newly created request (middle pane), you can do a GET with https://YOUR_SERVER_IP/getchatts/ and click the big purple Send button. It should return something like:
{ "chatts": [ [ "Insomnia", "Are you sleeping?", "9249b958-6e46-44b2-8004-9bccf0e8f1c1", "2024-07-22T17:33:25.947" ] ] }
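Independently of any REST API client, you can also confirm which certificate your server presents (and that it is indeed your self-signed one) with openssl; the exact wording of the output varies with the openssl version:
laptop$ openssl s_client -connect YOUR_SERVER_IP:443 -showcerts </dev/null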
with a command-line tool
curl
To test HTTP POST (or HTTP PUT or other) requests with curl:
laptop$ curl -X POST -d '{ "username": "Curly", "message": "Hello World" }' --insecure https://YOUR_SERVER_IP/postchatt/
The --insecure option tells curl not to verify your self-signed certificate.
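If your server rejects the request instead (axum’s Json extractor, for instance, expects a Content-Type: application/json header, which curl’s -d option does not set by default), add the header explicitly:
laptop$ curl -X POST -H 'Content-Type: application/json' -d '{ "username": "Curly", "message": "Hello World" }' --insecure https://YOUR_SERVER_IP/postchatt/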
To retrieve the posted chatt:
laptop$ curl --insecure https://YOUR_SERVER_IP/getchatts/
HTTPie
You can also use HTTPie instead of curl to test on the command line:
laptop$ echo '{ "username": "weepie", "message": "Yummy!" }' | http --verify=no POST https://YOUR_SERVER_IP/postchatt/
# output:
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 2
Content-Type: application/json
Date: Wed, 22 Jul 2022 17:45:53 GMT
Server: nginx/1.14.0 (Ubuntu)
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
{}
The --verify=no option tells HTTPie not to verify your self-signed certificate.
You can also use HTTPie to test getchatts:
laptop$ http --verify=no https://YOUR_SERVER_IP/getchatts/
# output:
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 116
Content-Type: application/json
Date: Wed, 22 Jul 2022 17:46:32 GMT
Server: nginx/1.14.0 (Ubuntu)
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
{
    "chatts": [
        [
            "weepie",
            "Yummy!",
            "d6e5f8a6-e3cc-4501-9359-517af9f64cda",
            "2024-07-22T17:45:53.177"
        ]
    ]
}
Automatic chatterd restart
This section is applicable only to Go- or Rust-based back-end servers. You can skip this section and proceed to Submission guidelines if you have a Python-based server.
Once you have tested your chatterd, to run it automatically on system reboot or on failure, first create the service configuration file:
server$ sudo vi /etc/systemd/system/chatterd.service
and put the following in the file:
[Unit]
Description=EECS441 chatterd
Requires=postgresql.service
After=network.target
StartLimitIntervalSec=0
[Service]
Type=simple
Restart=on-failure
RestartSec=1
User=root
Group=www-data
ExecStart=/home/ubuntu/441/chatterd/chatterd
[Install]
WantedBy=multi-user.target
To test the service configuration file, run:
server$ sudo systemctl start chatterd
server$ systemctl status chatterd
# first 3 lines of output:
● chatterd.service - EECS441 chatterd
Loaded: loaded (/etc/systemd/system/chatterd.service; disabled; vendor preset: enabled)
Active: active (running) since Thu 2022-08-25 01:28:56 EDT; 2min 30s ago
. . . 👆👆👆👆👆👆👆👆👆👆
The last line should say Active: active (running).
To have the system start chatterd automatically upon reboot, run:
server$ sudo systemctl enable chatterd
server$ systemctl status chatterd
# first 2 lines of output:
● chatterd.service - EECS441 chatterd
Loaded: loaded (/etc/systemd/system/chatterd.service; enabled; vendor preset: enabled)
. . . 👆👆👆👆👆
The second field inside the parentheses in the second line should now say “enabled”.
To view chatterd’s console output, run with sudo:
server$ sudo systemctl status chatterd
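If you need more history than the few lines systemctl status shows, the full log is kept by journald, for example:
server$ sudo journalctl -u chatterd -n 100   # last 100 lines
server$ sudo journalctl -u chatterd -f       # follow the log live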
If you subsequently edit the chatterd service configuration file, run:
server$ sudo systemctl daemon-reload
before starting the service again.
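Likewise, after rebuilding the chatterd binary itself, restart the running service so it picks up the new executable:
server$ sudo systemctl restart chatterd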
To turn off auto restart:
server$ sudo systemctl disable chatterd
That’s all we need to do to prepare the back end. Before you return to work on your front end, wrap up your work here by submitting your files to GitHub.
Submitting your back end
We will only grade files committed to the main branch. If you use multiple branches, please merge them all to the main branch for submission.
Navigate to your 441 folder:
server$ cd ~/441/
If you have a Python-based back end, run:
server$ cp /etc/nginx/sites-available/chatterd chatterd/etc-chatterd
Otherwise, run:
server$ cp /etc/systemd/system/chatterd.service chatterd/etc-chatterd
Then run:
server$ git add selfsigned.crt chatterd
You can check git status by running:
server$ git status
You should see the newly added files.
Commit changes to the local repo:
server$ git commit -am "chatter back end completed"
and push them to the remote GitHub repo:
server$ git push
If git push fails due to changes made to the remote repo by your lab partner, you must run git pull first. Then you may have to resolve any conflicts before you can git push again.
Go to the GitHub website to confirm that your back-end files have been uploaded to your GitHub repo.
Leave your chatterd or nginx and gunicorn running until you have received your lab grade.
You can now return to complete the front end: Android | iOS.
References
Setup
- Ubuntu setup
- GCP instructions to set up ssh access with a public key pair
- chmod in WSL
- Creating a Linux service with systemd
Intro
Security
- What is HTTPS
- Everything about HTTPS and SSL
- New self-signed SSL Certificate for iOS 13
- Self-Signed SSL Certificate for Nginx
- Connecting mobile apps (iOS and Android) to backends for development with SSL
- How to do the CHMOD 400 Equivalent Command on Windows
Prepared for EECS 441 by Tiberiu Vilcu, Wendan Jiang, Alexander Wu, Benjamin Brengman, Ollie Elmgren, Luke Wassink, Mark Wassink, Nowrin Mohamed, Chenglin Li, Yibo Pi, and Sugih Jamin | Last updated September 21st, 2024 |