Flood API is a RESTful service that provides river level and rainfall data for flood monitoring.
It was built for the DIY Flood API challenge from Learn by Doing: it implements the provided OpenAPI contract, reads from an optimized PostgreSQL database, and keeps data retrieval fast through targeted performance optimizations.
The API allows users to query historical river levels and rainfall measurements from various stations in Northumberland, UK, collected over more than two years from Defra's flood monitoring API.
When a request is made to the API:
- The client sends a GET request to one of the endpoints with optional query parameters for filtering and pagination.
- The API handler parses the parameters and validates them.
- It queries the PostgreSQL database using optimized, type-safe queries generated by sqlc.
- The service layer retrieves the data, applying any date filters and pagination.
- The results are returned as JSON, sorted chronologically.
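As a rough sketch of that flow, here is a minimal handler in the style described above; the `RiverService` interface, the `Reading` struct, and the helper are illustrative assumptions, not the project's actual code:

```go
package api

import (
	"context"
	"encoding/json"
	"net/http"
	"strconv"
	"time"
)

// Reading and RiverService are illustrative stand-ins for the project's types.
type Reading struct {
	Timestamp string  `json:"timestamp"`
	Level     float64 `json:"level"`
}

type RiverService interface {
	ListRiverReadings(ctx context.Context, start time.Time, page, size int) ([]Reading, error)
}

// GetRiver parses and validates the query parameters, delegates to the
// service layer, and writes the readings back as JSON.
func GetRiver(svc RiverService) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		q := r.URL.Query()

		var start time.Time
		if s := q.Get("start"); s != "" {
			t, err := time.Parse("2006-01-02", s)
			if err != nil {
				http.Error(w, "invalid start date", http.StatusBadRequest)
				return
			}
			start = t
		}
		page := intOrDefault(q.Get("page"), 1)
		size := intOrDefault(q.Get("pagesize"), 12)

		readings, err := svc.ListRiverReadings(r.Context(), start, page, size)
		if err != nil {
			http.Error(w, "internal error", http.StatusInternalServerError)
			return
		}

		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(map[string]any{"readings": readings})
	}
}

// intOrDefault falls back to def when the value is absent or not a positive integer.
func intOrDefault(s string, def int) int {
	n, err := strconv.Atoi(s)
	if err != nil || n < 1 {
		return def
	}
	return n
}
```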
Performance is enhanced through database schema optimizations, indexing, and efficient querying to handle large datasets quickly.
The data includes:
- River Readings: Timestamped water level measurements for the River Rede at Rede Bridge.
- Rainfall Readings: Timestamped rainfall measurements from various stations in Northumberland.
Stations available (as per OpenAPI enum):
- acomb-codlaw-hill
- allenheads-allen-lodge
- alston
- catcleugh
- chirdon
- garrigill-noonstones-hill
- haltwhistle
- hartside
- hexham-firtrees
- kielder-ridge-end
- knarsdale
Timestamps are in the format "YYYY-MM-DD HH:MM:SS", levels are floating-point numbers >= 0.
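In Go, that timestamp format corresponds to the reference layout `2006-01-02 15:04:05`; a minimal sketch of validating one reading's fields (illustrative, not the project's code):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// The API's "YYYY-MM-DD HH:MM:SS" timestamps map onto this Go layout.
	const layout = "2006-01-02 15:04:05"

	ts, err := time.Parse(layout, "2022-01-01 00:00:00")
	if err != nil {
		panic(err)
	}

	level := 0.15 // levels are floating-point and must be >= 0
	if level < 0 {
		panic("negative level")
	}
	fmt.Println(ts, level)
}
```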
Data is originally from Defra's flood monitoring API, with optimizations applied to the database schema for better query performance.
Example requests:

```bash
# Get river readings starting from a specific date, page 1, size 10
curl -X GET "http://localhost:9001/river?start=2022-01-01&page=1&pagesize=10" -H "accept: application/json"

# Get rainfall readings for a station, default pagination
curl -X GET "http://localhost:9001/rainfall/catcleugh" -H "accept: application/json"
```
Example response:

```json
{
  "readings": [
    {
      "timestamp": "2022-01-01 00:00:00",
      "level": 0.15
    }
  ]
}
```
For rainfall, the response includes "station" in each reading.
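For instance, a rainfall response would take roughly this shape (values illustrative):

```json
{
  "readings": [
    {
      "timestamp": "2022-01-01 00:00:00",
      "station": "catcleugh",
      "level": 0.2
    }
  ]
}
```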
Based on the OpenAPI specification (`openapi/flood-api.yaml`):

- `GET /river`
  Retrieves river level readings sorted chronologically.
  Query parameters:
  - `start` (optional, date in YYYY-MM-DD format): start date for data.
  - `page` (optional, integer, default 1): page number.
  - `pagesize` (optional, integer, default 12): number of measurements per page.
  Response: JSON array of river readings with timestamp and level.
- `GET /rainfall/{station}`
  Retrieves rainfall readings for a specific measuring station, sorted chronologically.
  Path parameter:
  - `station` (required, string): name of the station (e.g., "catcleugh").
  Query parameters: same as `/river`.
  Response: JSON array of rainfall readings with timestamp, station, and level.
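The `page`/`pagesize` pair maps naturally onto a SQL LIMIT/OFFSET; a minimal sketch of that translation (the helper name is ours, not necessarily the project's):

```go
// limitOffset converts the 1-based page/pagesize query parameters into the
// LIMIT and OFFSET values used by the database query.
func limitOffset(page, pagesize int) (limit, offset int) {
	if page < 1 {
		page = 1
	}
	if pagesize < 1 {
		pagesize = 12 // default page size from the OpenAPI spec
	}
	return pagesize, (page - 1) * pagesize
}
```

Page 3 with the default page size, for example, yields `LIMIT 12 OFFSET 24`.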
This is a Go application. To build and run:

- Ensure you have Go installed.
- Run `go mod tidy` to install dependencies.
- Apply database migrations if necessary (migrations are in the `migrations/` directory).
- Start the server: `make run`

The API will be available at `http://localhost:9001` (configurable).
Note: The project uses PostgreSQL in the current implementation (see `internal/repository/postgres/`).
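As a sketch of how the listen address could be made configurable (the `FLOOD_API_ADDR` variable is a hypothetical example, not a documented setting of this project):

```go
package main

import (
	"log/slog"
	"net/http"
	"os"
)

func main() {
	// FLOOD_API_ADDR is a hypothetical override; the default matches the README.
	addr := os.Getenv("FLOOD_API_ADDR")
	if addr == "" {
		addr = ":9001"
	}

	slog.Info("starting flood-api", "addr", addr)
	if err := http.ListenAndServe(addr, nil); err != nil {
		slog.Error("server stopped", "err", err)
		os.Exit(1)
	}
}
```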
The project includes unit and integration tests. Use the Makefile to run them:

- Run unit tests: `make test`
- Run unit tests with verbose output: `make test-verbose`
- Run unit tests with coverage report: `make test-coverage` (generates coverage.html)
- Run integration tests (requires Docker/Colima running): `make test-integration`
- Run integration tests with verbose output: `make test-integration-verbose`
- Run all tests (unit + integration): `make test-all`

For integration tests, ensure Docker is available. On macOS with Colima, start it with `colima start` if needed.
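To give a flavour of the unit-test style, here is a hedged, table-driven sketch exercising the illustrative `limitOffset` helper from earlier (not a test taken from the project):

```go
package api

import "testing"

// TestLimitOffset checks the page/pagesize-to-LIMIT/OFFSET translation.
func TestLimitOffset(t *testing.T) {
	cases := []struct {
		name                  string
		page, pagesize        int
		wantLimit, wantOffset int
	}{
		{"defaults", 1, 12, 12, 0},
		{"second page", 2, 10, 10, 10},
		{"invalid page falls back to 1", 0, 12, 12, 0},
		{"invalid pagesize falls back to 12", 1, 0, 12, 0},
	}
	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			limit, offset := limitOffset(tc.page, tc.pagesize)
			if limit != tc.wantLimit || offset != tc.wantOffset {
				t.Errorf("limitOffset(%d, %d) = (%d, %d), want (%d, %d)",
					tc.page, tc.pagesize, limit, offset, tc.wantLimit, tc.wantOffset)
			}
		})
	}
}
```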
The challenge suggests analyzing and optimizing the database for better performance, such as modifying the schema or migrating to another database solution. Current optimizations include performance benchmarks and schema improvements (see `migrations/`).
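Go's built-in benchmarking is the natural way to measure such optimizations; a minimal sketch, assuming a hypothetical `newTestRepo` helper and repository method (the names are ours, not the project's):

```go
package postgres

import (
	"context"
	"testing"
	"time"
)

// BenchmarkListRiverReadings times one page of the river query end to end.
func BenchmarkListRiverReadings(b *testing.B) {
	repo := newTestRepo(b) // hypothetical helper wiring up a test database
	start := time.Date(2022, 1, 1, 0, 0, 0, 0, time.UTC)

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		if _, err := repo.ListRiverReadings(context.Background(), start, 12, 0); err != nil {
			b.Fatal(err)
		}
	}
}
```

Run with `go test -bench=. ./...` and compare results before and after a schema change.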
- For database interactions, the project uses sqlc for type-safe queries (see the sketch below).
- Logging is handled with slog.
- Refer to `challenge.md` for the original challenge details.
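For a flavour of the sqlc workflow: queries live in annotated SQL files, and sqlc generates a type-safe Go method per query. The query below is an illustrative guess at what the river query might look like, not a copy of the project's SQL:

```sql
-- name: ListRiverReadings :many
-- Illustrative guess: the table and column names are assumptions.
SELECT timestamp, level
FROM river_readings
WHERE timestamp >= sqlc.arg(start)
ORDER BY timestamp
LIMIT sqlc.arg(page_size) OFFSET sqlc.arg(page_offset);
```

From this, sqlc emits a `ListRiverReadings(ctx, params)` method with typed arguments, which the service layer calls.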