πŸ¦€ REST API

April 4, 2022 · by CryptoPatrick · Engineering, Rust

At the time of writing this post (2023-08-10) Axum sits at version 0.6.20.

1. Create a new Rust project:

cargo new rest_api
cd rest_api

2. Import dependencies:

[package]
name = "rest_api"
version = "0.1.0"
edition = "2021"
authors = ["CryptoPatrick"]


[dependencies]
axum = "0.6.20" # Web Framework
sqlx = { version = "0.7.1", features = ["json", "postgres", "runtime-tokio-native-tls"] }
tokio = { version = "1.30.0", features = ["macros", "rt-multi-thread"] } # Async Runtime
serde = { version = "1.0.183", features = ["derive"] } # (De)serialization
serde_json = "1.0.104" # (De)serialization json
anyhow = "1.0.72" # Error handling (used in main.rs, so a regular dependency)

Use cargo build to download and compile all the dependencies.

Gotcha: In order to use the #[tokio::main] procedural macro we need to enable either tokio’s full feature set, or at least the macros and rt-multi-thread features: cargo add tokio --features macros,rt-multi-thread.

3. Wire up Axum:

We use Axum’s “Hello World!” example as our jumping-off point, and add a simple route:

use std::net::SocketAddr;
use axum::{
    response::IntoResponse,
    routing::{get, post},
    Router,
};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // build our application with a couple of routes
    let app = Router::new()
        .route("/", get(root))
        .route("/ping", post(ping));

    // run it with hyper on localhost:3000
    let addr = SocketAddr::from(([0, 0, 0, 0], 3000));
    axum::Server::bind(&addr)
        .serve(app.into_make_service())
        .await
        .unwrap();

    // We return Ok(()) since anyhow demands it.
    Ok(())
}

async fn root() {}
async fn ping() -> impl IntoResponse {
    "Server was pinged.".to_owned()
}

4. Build the crate to pull all dependencies, then run it:

cargo build
cargo run

5. Check that the server is running by curl:ing the status codes:

# If these curls return a status code of 200, then we're OK.
# Note that /ping is registered as a POST route, so we tell curl to POST:
curl -o /dev/null -s -w "%{http_code}\n" http://localhost:3000
curl -o /dev/null -s -w "%{http_code}\n" -X POST http://localhost:3000/ping

6. Refactor by improving our project structure:

Let’s add files and folders until we have the following project structure:

# We use the unix tree command with the -I flag to ignore the target and .git directories.
➜ tree rest_api --gitignore -a -I target -I '.git'
rest_api
β”œβ”€β”€ .env
β”œβ”€β”€ .gitignore
β”œβ”€β”€ Cargo.lock
β”œβ”€β”€ Cargo.toml
β”œβ”€β”€ sql
β”œβ”€β”€ src
β”‚Β Β  β”œβ”€β”€ controllers
β”‚Β Β  β”‚Β Β  └── issue.rs
β”‚Β Β  β”œβ”€β”€ errors.rs
β”‚Β Β  β”œβ”€β”€ main.rs
β”‚Β Β  β”œβ”€β”€ models
β”‚Β Β  β”‚Β Β  └── issue.rs
β”‚Β Β  └── views
└── tests

Our REST API will follow a simple Model View Controller pattern.

7. Database Setup

We want our API to be resilient, which in today’s terms translates to being Cloud Native. The two qualities we’re going for are high availability and fault tolerance, meaning our application runs distributed across multiple machines and scales up and down in response to user demand or hardware failures.

Okay, how?

For starters, our application needs to be stateless in order to avoid losing data in the event of hardware failure - so we need a database. Ideally, we want to check our code for errors at compile-time (not runtime).

We use the Rust crate sqlx because it:

  • provides compile-time checks
  • is asynchronous, leveraging all available CPUs
  • enables us to write raw SQL, making our code highly portable
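The compile-time checks come from sqlx’s query macros, which validate our SQL against a live database while compiling. A minimal sketch of the idea (our handlers below use the runtime-checked query_as instead; the macro requires DATABASE_URL to be set at build time and the issue table from our migration to exist):

```rust
// A hypothetical helper: sqlx::query! verifies this SQL and its column
// types against the database schema at compile time, so a typo in the
// table or column name becomes a build error instead of a runtime one.
async fn all_issue_ids(pool: &sqlx::PgPool) -> anyhow::Result<Vec<i32>> {
    let rows = sqlx::query!("SELECT id FROM issue")
        .fetch_all(pool)
        .await?;
    Ok(rows.into_iter().map(|r| r.id).collect())
}
```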

Migrations

Designing and modifying our database through a graphical database-management tool can cause challenges when collaborating with others. Migrations enable us to do everything in code and, above all, to share that code with others so that they can generate the exact same database setup that we have. So, let there be migrations.

The tool we’re using is the nifty CLI sqlx-cli.

We install it with cargo install sqlx-cli; step 7.3 shows the exact command we use for Postgres.

We will store our environment variables, including DATABASE_URL, in an .env file (see step 7.2). Before that, make sure Docker is installed, since we’ll run Postgres in a container:

# Make sure we have Docker installed
docker -v
Docker version 24.0.2, build cb74dfc
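Checking the version isn’t enough — we also need a running Postgres instance to connect to. A minimal sketch of starting one in Docker, using the credentials, port, and database name we put in .env (admin/admin, port 4000, issues_db; the container name and image tag are our own choices, adjust to taste):

```shell
# Start a disposable Postgres container matching our .env values.
# Host port 4000 is mapped to Postgres' default port 5432 in the container.
docker run --name issues_db \
  -e POSTGRES_USER=admin \
  -e POSTGRES_PASSWORD=admin \
  -e POSTGRES_DB=issues_db \
  -p 4000:5432 \
  -d postgres:15
```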

7.1 Create the model

//! src/models/issue.rs
use serde::{Deserialize, Serialize};

#[derive(sqlx::FromRow, Deserialize, Serialize)]
pub struct Issue {
    pub id: i32,
    pub issue: String,
}

#[derive(sqlx::FromRow, Deserialize, Serialize)]
pub struct NewIssue {
    pub issue: String,
}

7.2 Store environment variables in ./.env

# In the root of our project, create a file .env to store our db variables:
touch .env

Add the following to .env:

DATABASE_USER="admin"
DATABASE_PASSWORD="admin"
DATABASE_HOST="127.0.0.1"
DATABASE_PORT=4000
DATABASE_NAME="issues_db"
# Our main.rs reads this file directly, without shell-style variable
# expansion, so we write the full URL explicitly:
DATABASE_URL="postgres://admin:admin@127.0.0.1:4000/issues_db"

7.3 Install sqlx-cli

# In the project root.
# Install sqlx-cli tool for Postgres specifically:
cargo install sqlx-cli --no-default-features --features native-tls,postgres

7.4 Use sqlx-cli to create a database with DATABASE_URL

# sqlx will use the DATABASE_URL specified in ./.env to create our Postgres db.
sqlx database create

7.5 Add a migration schema

The following command will create two new files under ./migrations/: <timestamp>_issue.up.sql and <timestamp>_issue.down.sql.

# The -r flag is used to create a reversible migration.
# This means both *up.sql and *down.sql files will be created.
sqlx migrate add -r issue

7.6 Write our SQL in the migration files:

In the up migration (<timestamp>_issue.up.sql), which was created in the previous step, we create the table:

CREATE TABLE issue (
    id SERIAL PRIMARY KEY,
    issue VARCHAR(255) NOT NULL
);

In the down migration (<timestamp>_issue.down.sql), we reverse it:

DROP TABLE issue;

7.7 Run the migration

sqlx migrate run
# We should get a confirmation that all went well:
Applied <timestamp>/migrate issue

# To revert a migration we use the command:
# sqlx migrate revert

7.8 Connect our database to main.rs

We use a database connection pool so connections to Postgres are reused instead of being opened per request. We use std::fs to read our DATABASE_URL from the .env file and pass it to sqlx::PgPoolOptions.
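Since .env now contains several variables, we can’t simply split the whole file on '='; we scan for the DATABASE_URL line instead. That parsing step can be sketched and tested in isolation:

```rust
/// Extract the value of DATABASE_URL from the contents of a .env file.
/// We scan line by line, so any other variables in the file are ignored,
/// and we strip the surrounding quotes we used in .env.
fn parse_database_url(env: &str) -> Option<String> {
    env.lines()
        .find_map(|line| line.strip_prefix("DATABASE_URL="))
        .map(|value| value.trim_matches('"').to_string())
}

fn main() {
    let env = "DATABASE_USER=\"admin\"\nDATABASE_URL=\"postgres://admin:admin@127.0.0.1:4000/issues_db\"";
    assert_eq!(
        parse_database_url(env).as_deref(),
        Some("postgres://admin:admin@127.0.0.1:4000/issues_db")
    );
}
```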

//! ./src/main.rs
use std::fs;
use std::net::SocketAddr;

use anyhow::Context;
use axum::response::IntoResponse;
use axum::{
    routing::{get, post},
    Extension, Router,
};
use sqlx::postgres::PgPoolOptions;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // .env may hold several variables, so scan for the DATABASE_URL line.
    let env = fs::read_to_string(".env").context("Unable to read .env")?;
    let database_url = env
        .lines()
        .find_map(|line| line.strip_prefix("DATABASE_URL="))
        .context("DATABASE_URL not found in .env")?
        .trim_matches('"');

    let pool = PgPoolOptions::new()
        .max_connections(50)
        .connect(database_url)
        .await
        .context("Unable to connect to db.")?;

    let app = Router::new()
        .route("/", get(root))
        .route("/ping", post(ping))
        // Make the pool available to every handler as an Extension.
        .layer(Extension(pool));

    // run it with hyper on localhost:3000
    let addr = SocketAddr::from(([0, 0, 0, 0], 3000));
    axum::Server::bind(&addr)
        .serve(app.into_make_service())
        .await?;

    // We return Ok(()) since anyhow demands it.
    Ok(())
}

async fn root() {}
async fn ping() -> impl IntoResponse {
    "Server was pinged.".to_owned()
}

8. Controllers

In ./src/controllers/issue.rs we are going to implement our entire CRUD. From axum we need:

  • Extension
  • Path (from axum::extract)
  • Json
  • StatusCode
  • IntoResponse

GET all

We want to return all issues stored in our database.
Our function all_issues receives our database pool through Axum’s Extension extractor.

First, we create an SQL statement to SELECT everything (*) from the issue table.
Next, we use sqlx’s query_as to run the statement against our database, fetching every row with fetch_all.
Finally, we return a status code together with the Json:ified result as an IntoResponse type.

use axum::extract::Path; // used by the handlers further down
use axum::http::StatusCode;
use axum::response::IntoResponse;
use axum::{Extension, Json};
use sqlx::PgPool;

use crate::models::issue;

pub async fn all_issues(
    Extension(pool): Extension<PgPool>,
) -> impl IntoResponse {
    let sql = "SELECT * FROM issue";
    let issues = sqlx::query_as::<_, issue::Issue>(sql)
        .fetch_all(&pool)
        .await
        .unwrap();

    (StatusCode::OK, Json(issues))
}

GET by id

We use fetch_one instead of fetch_all, and read the id from the URL with the Path extractor (from axum::extract).

pub async fn issue(
    Path(id): Path<i32>,
    Extension(pool): Extension<PgPool>,
) -> impl IntoResponse {
    let issue: issue::Issue = sqlx::query_as("SELECT * FROM issue WHERE id=$1")
        .bind(id)
        .fetch_one(&pool)
        .await
        .unwrap();

    (StatusCode::OK, Json(issue))
}

POST

In Axum 0.6, an extractor that consumes the request body, like Json, must be the last argument, so Extension comes first. We use RETURNING so the client gets the freshly created row, including its id. (CustomError is our error type from src/errors.rs; we assume it has a BadRequest variant.)

pub async fn new_issue(
    Extension(pool): Extension<PgPool>,
    // Json consumes the request body, so it must be the last extractor.
    Json(new): Json<issue::NewIssue>,
) -> Result<(StatusCode, Json<issue::Issue>), CustomError> {
    if new.issue.is_empty() {
        return Err(CustomError::BadRequest);
    }

    let created: issue::Issue =
        sqlx::query_as("INSERT INTO issue (issue) VALUES ($1) RETURNING id, issue")
            .bind(&new.issue)
            .fetch_one(&pool)
            .await
            .map_err(|_| CustomError::InternalServerError)?;

    Ok((StatusCode::CREATED, Json(created)))
}

PUT
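No handler is shown here, but PUT follows the same pattern as POST. A minimal sketch, reusing the imports from above (update_issue is our own name, and we assume CustomError has an IssueNotFound variant):

```rust
pub async fn update_issue(
    Path(id): Path<i32>,
    Extension(pool): Extension<PgPool>,
    // Json must stay the last extractor since it consumes the body.
    Json(payload): Json<issue::NewIssue>,
) -> Result<(StatusCode, Json<issue::Issue>), CustomError> {
    // UPDATE the row and return its new state; an unknown id makes
    // fetch_one fail, which we surface as a not-found error.
    let updated: issue::Issue =
        sqlx::query_as("UPDATE issue SET issue=$1 WHERE id=$2 RETURNING id, issue")
            .bind(&payload.issue)
            .bind(id)
            .fetch_one(&pool)
            .await
            .map_err(|_| CustomError::IssueNotFound)?;

    Ok((StatusCode::OK, Json(updated)))
}
```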

DELETE

We first confirm that the row exists, then delete it. We need json! and Value from serde_json, and we assume an IssueNotFound variant in CustomError:

// Needs: use serde_json::{json, Value};
pub async fn delete_issue(
    Path(id): Path<i32>,
    Extension(pool): Extension<PgPool>,
) -> Result<(StatusCode, Json<Value>), CustomError> {
    // Make sure the row exists before trying to delete it.
    let _find: issue::Issue = sqlx::query_as("SELECT * FROM issue WHERE id=$1")
        .bind(id)
        .fetch_one(&pool)
        .await
        .map_err(|_| CustomError::IssueNotFound)?;

    sqlx::query("DELETE FROM issue WHERE id=$1")
        .bind(id)
        .execute(&pool)
        .await
        .map_err(|_| CustomError::IssueNotFound)?;

    Ok((StatusCode::OK, Json(json!({"msg": "Issue Deleted"}))))
}
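None of these handlers are reachable until they are registered in main.rs. A hedged sketch of the wiring, placed after the pool is created (the /issues paths are our own choice; in axum 0.6, path parameters use the /:id syntax):

```rust
// Assumes: mod controllers; plus
// use controllers::issue::{all_issues, issue, new_issue, delete_issue};
let app = Router::new()
    .route("/", get(root))
    .route("/ping", post(ping))
    .route("/issues", get(all_issues).post(new_issue))
    .route("/issues/:id", get(issue).delete(delete_issue))
    // Hand every handler a clone of the pool.
    .layer(Extension(pool));
```

We can then exercise the API with curl, for example:
curl -X POST -H "Content-Type: application/json" -d '{"issue": "First issue"}' http://localhost:3000/issues
curl http://localhost:3000/issues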

'I write to understand as much as to be understood.' β€”Elie Wiesel
(c) 2024 CryptoPatrick