πŸ’¬ Chat Service

A high-performance, scalable real-time chat service built with Go. Features WebSocket support, message persistence, file uploads, and push notifications.

   ____ _           _     ____                  _          
  / ___| |__   __ _| |_  / ___|  ___ _ ____   _(_) ___ ___ 
 | |   | '_ \ / _` | __| \___ \ / _ \ '__\ \ / / |/ __/ _ \
 | |___| | | | (_| | |_   ___) |  __/ |   \ V /| | (_|  __/
  \____|_| |_|\__,_|\__| |____/ \___|_|    \_/ |_|\___\___|

✨ Features

  • Real-time Messaging - WebSocket-based real-time communication
  • Room Types - Support for Private chats, Groups, and Channels
  • Message Types - Text messages, file attachments, replies, and forwards
  • File Uploads - S3-compatible storage with presigned URLs
  • Push Notifications - Firebase Cloud Messaging integration
  • High Availability - PostgreSQL replication with PgPool load balancing
  • Message Queue - RabbitMQ cluster for reliable message delivery
  • Caching - Redis for connection management and caching
  • JWT Authentication - Secure token-based authentication

πŸ—οΈ Architecture

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                              NGINX (Load Balancer)                          β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                      β”‚
                    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                    β”‚                                   β”‚
              β”Œβ”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”                       β”Œβ”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”
              β”‚  HTTP API β”‚                       β”‚ WebSocket β”‚
              β””β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”˜                       β””β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”˜
                    β”‚                                   β”‚
                    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                      β”‚
                    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                    β”‚         Chat Service (Go)         β”‚
                    β”‚   - Fiber HTTP Framework          β”‚
                    β”‚   - Hexagonal Architecture        β”‚
                    β”‚   - Wire Dependency Injection     β”‚
                    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                      β”‚
          β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
          β”‚                           β”‚                           β”‚
    β”Œβ”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”               β”Œβ”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”               β”Œβ”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”
    β”‚  PgPool   β”‚               β”‚  RabbitMQ β”‚               β”‚   Redis   β”‚
    β”‚           β”‚               β”‚  Cluster  β”‚               β”‚           β”‚
    β””β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”˜               β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜               β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
          β”‚
    β”Œβ”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
    β”‚              PostgreSQL               β”‚
    β”‚   β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”
    β”‚   β”‚ Primary β”‚  β”‚Replica 1β”‚  β”‚Replica 2β”‚
    β”‚   β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

πŸ“‹ Prerequisites

Required Software

  • Go (see go.mod for the required version)
  • Docker & Docker Compose (to run the infrastructure services)
  • Make (for the build targets used below)
  • OpenSSL (to generate secrets in the Quick Start)

Required Infrastructure Services

| Service | Purpose | Required |
|---------|---------|----------|
| PostgreSQL | Primary data storage with streaming replication | βœ… Yes |
| Redis | WebSocket connection state, chunk upload tracking, distributed locks | βœ… Yes |
| RabbitMQ | Message queue cluster for real-time message delivery | βœ… Yes |
| S3-Compatible Storage | File uploads (AWS S3, MinIO, Arvan Cloud, etc.) | βœ… Yes |
| Firebase | Push notifications for offline users | βšͺ Optional |

S3 Storage Setup

This service requires an S3-compatible object store for file uploads. You can use:

  • AWS S3 - Amazon's object storage
  • MinIO - Self-hosted S3-compatible storage
  • Arvan Cloud - Iranian cloud provider (used in development)
  • DigitalOcean Spaces - S3-compatible object storage

Configure in .env:

S3_ACCESS_KEY=your_access_key
S3_SECRET_KEY=your_secret_key
S3_ENDPOINT=https://your-s3-endpoint.com
S3_BUCKET_NAME=your-bucket-name
S3_REGION=default

πŸš€ Quick Start

1. Clone the Repository

git clone https://github.com/mehdi124/chat-service.git
cd chat-service

2. Configure Environment

# Copy the example environment file
cp .env.example .env

# Edit .env with your configuration
# IMPORTANT: Update all placeholder values with secure credentials

Generate Secure Secrets

# Generate JWT secret
openssl rand -base64 32

# Generate Redis password
openssl rand -hex 24

# Generate RabbitMQ Erlang cookie
openssl rand -hex 32

# Generate API key
openssl rand -hex 16

3. Start Infrastructure Services

# Start all infrastructure services (PostgreSQL, Redis, RabbitMQ, etc.)
docker-compose up -d

4. Install Dependencies & Build

# Install Go dependencies and development tools
make install

# Generate Wire dependency injection code
make wire

# Build the application
go build -o main .

5. Run Database Migrations

# Run migrations with your config file
./main migrate up -c .env

6. Start the Application

# Run the chat service
./main app run -c .env

The service will be available at:

  • HTTP API: http://localhost:8000/api/v1
  • WebSocket: ws://localhost:8000/ws/v1

🐳 Docker Deployment

Build Docker Image

docker build -t chat-service:latest .

Run with Docker Compose

The docker-compose.yaml includes all necessary services:

| Service | Description | Ports |
|---------|-------------|-------|
| PgPool | PostgreSQL connection pooler | 9999 |
| PostgreSQL Primary | Main database | - |
| PostgreSQL Replica 1-2 | Read replicas | - |
| Redis | Session/cache store | 6379 |
| RabbitMQ 1-3 | Message queue cluster | - |
| HAProxy | RabbitMQ load balancer | 5672, 15672 |
| Nginx | HTTP/WebSocket proxy | 80, 443 |

πŸ“ Project Structure

chat-service/
β”œβ”€β”€ cmd/                    # CLI commands
β”‚   β”œβ”€β”€ app.go             # Application runner
β”‚   β”œβ”€β”€ migrate.go         # Database migration commands
β”‚   β”œβ”€β”€ root.go            # Root command setup
β”‚   └── token.go           # Token generation utilities
β”œβ”€β”€ config/                 # Configuration loading
β”œβ”€β”€ core/
β”‚   └── app/               # Application bootstrap & DI
β”œβ”€β”€ database/
β”‚   └── migration/         # SQL migration files
β”œβ”€β”€ infra/                  # Infrastructure adapters
β”‚   β”œβ”€β”€ logger.go          # Logging setup
β”‚   β”œβ”€β”€ pg.go              # PostgreSQL connection
β”‚   β”œβ”€β”€ rabbitmq.go        # RabbitMQ connection
β”‚   β”œβ”€β”€ redis.go           # Redis connection
β”‚   β”œβ”€β”€ s3.go              # S3 storage client
β”‚   └── server.go          # HTTP server setup
β”œβ”€β”€ internal/
β”‚   β”œβ”€β”€ chat/
β”‚   β”‚   β”œβ”€β”€ adapter/
β”‚   β”‚   β”‚   β”œβ”€β”€ inbound/   # HTTP, WebSocket, RabbitMQ handlers
β”‚   β”‚   β”‚   └── outbound/  # Repository implementations
β”‚   β”‚   └── core/
β”‚   β”‚       β”œβ”€β”€ domain/    # Domain models & value objects
β”‚   β”‚       β”œβ”€β”€ port/      # Inbound & Outbound port interfaces
β”‚   β”‚       └── service/   # Business logic services
β”‚   └── shared/            # Shared utilities & middleware
β”œβ”€β”€ volumes/               # Docker volume configurations
β”œβ”€β”€ docker-compose.yaml    # Infrastructure services
β”œβ”€β”€ Dockerfile             # Application container
β”œβ”€β”€ Makefile              # Build automation
└── .env.example          # Environment template

πŸ”§ CLI Commands

# Run the application
./main app run -c <config-file>

# Database migrations
./main migrate up -c <config-file>      # Apply migrations
./main migrate down -c <config-file>    # Rollback migrations
./main migrate seed -c <config-file>    # Seed database

Generate Test Tokens (Development Only)

# Generate test JWT tokens for development
./main token init -c .env

⚠️ WARNING: The token init command generates tokens for hardcoded test user IDs and is intended for development and testing only. Do NOT use this in production. For production environments, implement a proper user authentication flow through your application's login endpoint.

πŸ“‘ API Endpoints

Authentication

All API requests must be authenticated with a JWT Bearer token in the Authorization header:

Authorization: Bearer <jwt_token>

API Documentation

Full API documentation is available in OpenAPI 3.0 format:

πŸ“„ docs/openapi.yaml

You can view it with any OpenAPI-compatible viewer, such as Swagger UI or Redoc.

Room Types

  • PRIVATE - One-to-one direct messages
  • GROUP - Multi-user group chat (max 50 members by default)
  • CHANNEL - Broadcast channel (max 100 members by default)

Message Types

  • TEXT - Plain text message
  • FILE - File attachment
  • REPLIED - Reply to another message
  • FORWARDED - Forwarded message

πŸ”Œ WebSocket Connection

Connect to the WebSocket endpoint for real-time messaging. Authentication is done via JWT token in the query parameter.

const token = 'your_jwt_token';
const ws = new WebSocket(`ws://localhost:8000/ws/v1/chat?token=${token}`);

// Handle incoming messages (MessagePack encoded)
ws.onmessage = async (event) => {
  const buffer = await event.data.arrayBuffer();
  const message = msgpack.decode(new Uint8Array(buffer));
  console.log('Received:', message);
};

Message Encoding Format

WebSocket messages use the MessagePack binary format for efficient serialization; MessagePack is more compact and faster to encode and decode than JSON.

Install MessagePack Library

# JavaScript/Node.js
npm install @msgpack/msgpack

# Go (already included)
# github.com/vmihailenco/msgpack/v5

Request Structure (Client β†’ Server)

import { encode } from '@msgpack/msgpack';

// Send a text message
const request = {
  Type: 'message',           // 'message' | 'seen' | 'ping'
  RoomID: 'room-uuid',       // Target room UUID
  ReceiverID: 'user-uuid',   // For private messages (optional if RoomID provided)
  Content: 'Hello World',    // Message content or filename
  ContentType: 'text',       // 'text' | 'image' | 'video' | 'audio' | 'file'
  MessageType: 'direct',     // 'direct' | 'replied' | 'forwarded'
  Sign: 'unique-sign-123',   // Unique client-side message identifier
  ParentMessageID: ''        // For replies/forwards
};

ws.send(encode(request));

Response Structure (Server β†’ Client)

import { decode } from '@msgpack/msgpack';

// A decoded response (via decode()) has this shape:
{
  Type: 'response',      // 'response' | 'new_message' | 'seen' | 'pong'
  Success: true,
  Error: '',
  Data: {
    ID: 'message-uuid',
    RoomID: 'room-uuid',
    SenderID: 'user-uuid',
    Content: 'Hello World',
    ContentType: 'text',
    Status: 'sent',
    CreatedAt: '2024-01-01T00:00:00Z',
    // ... additional fields
  }
}

Request Types

| Type | Description |
|------|-------------|
| `message` | Send a new message |
| `seen` | Mark messages as seen in a room |
| `ping` | Keep connection alive (receives `pong`) |

Go Example (Server-side)

import "github.com/vmihailenco/msgpack/v5"

// Decode incoming message
var req ChatRequest
if err := msgpack.Unmarshal(msg, &req); err != nil {
    return err
}

// Encode response
response := ChatResponse{
    Type:    "response",
    Success: true,
    Data:    message,
}
encoded, err := msgpack.Marshal(response)
if err != nil {
    return err
}

πŸ“€ Chunked File Upload

The service supports multipart/chunked file uploads for large files using S3's multipart upload API.

Upload Flow

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”                    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”                    β”Œβ”€β”€β”€β”€β”€β”€β”€β”                    β”Œβ”€β”€β”€β”€β”
β”‚  Client  β”‚                    β”‚  Server  β”‚                    β”‚ Redis β”‚                    β”‚ S3 β”‚
β””β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”˜                    β””β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”˜                    β””β”€β”€β”€β”¬β”€β”€β”€β”˜                    β””β”€β”¬β”€β”€β”˜
     β”‚                               β”‚                              β”‚                          β”‚
     β”‚  1. Send chunk 1              β”‚                              β”‚                          β”‚
     β”‚  (FileID, ChunkIndex=1,       β”‚                              β”‚                          β”‚
     β”‚   TotalChunk=N)               β”‚                              β”‚                          β”‚
     │──────────────────────────────>β”‚                              β”‚                          β”‚
     β”‚                               β”‚  2. CreateMultipartUpload    β”‚                          β”‚
     β”‚                               │─────────────────────────────────────────────────────────>β”‚
     β”‚                               β”‚                              β”‚       UploadID           β”‚
     β”‚                               β”‚<─────────────────────────────────────────────────────────│
     β”‚                               β”‚  3. Store UploadID           β”‚                          β”‚
     β”‚                               │─────────────────────────────>β”‚                          β”‚
     β”‚                               β”‚  4. UploadPart (chunk 1)     β”‚                          β”‚
     β”‚                               │─────────────────────────────────────────────────────────>β”‚
     β”‚                               β”‚                              β”‚       ETag               β”‚
     β”‚                               β”‚<─────────────────────────────────────────────────────────│
     β”‚                               β”‚  5. Store ETag               β”‚                          β”‚
     β”‚                               │─────────────────────────────>β”‚                          β”‚
     β”‚  6. Repeat for chunks 2..N-1  β”‚                              β”‚                          β”‚
     │──────────────────────────────>β”‚                              β”‚                          β”‚
     β”‚                               β”‚                              β”‚                          β”‚
     β”‚  7. Send final chunk N        β”‚                              β”‚                          β”‚
     │──────────────────────────────>β”‚                              β”‚                          β”‚
     β”‚                               β”‚  8. Get all ETags            β”‚                          β”‚
     β”‚                               │─────────────────────────────>β”‚                          β”‚
     β”‚                               β”‚<─────────────────────────────│                          β”‚
     β”‚                               β”‚  9. CompleteMultipartUpload  β”‚                          β”‚
     β”‚                               │─────────────────────────────────────────────────────────>β”‚
     β”‚                               β”‚                              β”‚                          β”‚
     β”‚  10. Broadcast to room        β”‚                              β”‚                          β”‚
     β”‚<──────────────────────────────│                              β”‚                          β”‚

Upload Requirements

  • Minimum chunk size: 5 MB (S3 requirement for multipart uploads)
  • Maximum file size: 20 MB (configurable via S3_MAX_UPLOAD_SIZE)
  • Supported formats: Images, Videos, Audio, Documents, Archives

Upload Example (JavaScript)

async function uploadFile(file, messageId, token) {
  const CHUNK_SIZE = 5 * 1024 * 1024; // 5MB minimum
  const totalChunks = Math.ceil(file.size / CHUNK_SIZE);
  
  for (let i = 0; i < totalChunks; i++) {
    const start = i * CHUNK_SIZE;
    const end = Math.min(start + CHUNK_SIZE, file.size);
    const chunk = file.slice(start, end);
    
    const formData = new FormData();
    formData.append('File', chunk);
    formData.append('ChunkIndex', String(i + 1)); // 1-based index
    formData.append('TotalChunk', String(totalChunks));
    
    const res = await fetch(`/api/v1/files/${messageId}/upload`, {
      method: 'POST',
      headers: { 'Authorization': `Bearer ${token}` },
      body: formData
    });
    if (!res.ok) {
      throw new Error(`Chunk ${i + 1} upload failed: ${res.status}`);
    }
  }
}

Concurrent Upload Handling

The service uses Redis distributed locks to handle concurrent chunk uploads safely:

  • Lock acquisition prevents duplicate UploadID creation
  • Each chunk's ETag is tracked in Redis
  • Upload completes automatically when all chunks are received
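The lock pattern can be sketched as follows. This uses an in-memory stand-in so the example is self-contained and is for illustration only — in the service the lock lives in Redis (typically a `SET key value NX` with a TTL) so that every instance observes it; the type and key names are assumptions:

```go
package main

import (
	"fmt"
	"sync"
)

// Locker mimics the acquire-if-absent (NX) semantics the service
// gets from Redis. In-memory stand-in for illustration only.
type Locker struct {
	mu    sync.Mutex
	locks map[string]bool
}

func NewLocker() *Locker { return &Locker{locks: map[string]bool{}} }

// TryLock acquires the lock for key if no one else holds it.
func (l *Locker) TryLock(key string) bool {
	l.mu.Lock()
	defer l.mu.Unlock()
	if l.locks[key] {
		return false
	}
	l.locks[key] = true
	return true
}

// Unlock releases the lock for key.
func (l *Locker) Unlock(key string) {
	l.mu.Lock()
	defer l.mu.Unlock()
	delete(l.locks, key)
}

func main() {
	locks := NewLocker()
	key := "upload:lock:file-uuid" // hypothetical key format

	if locks.TryLock(key) {
		// First chunk handler for this file: create the multipart
		// upload, store the UploadID, then release the lock.
		fmt.Println("created multipart upload")
		locks.Unlock(key)
	} else {
		// Another handler is initializing; reuse its UploadID.
		fmt.Println("upload already initialized")
	}
}
```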

⚑ RabbitMQ Load Balancing & Scalability

Consistent Hash-Based Queue Distribution

The service uses FNV-1a consistent hashing to distribute messages across multiple RabbitMQ queues, enabling horizontal scaling and ordered message delivery per sender.

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                         Message Publishing                               β”‚
β”‚                                                                          β”‚
β”‚   User sends message β†’ Hash(UserID) β†’ Queue N β†’ Consumer β†’ WebSocket   β”‚
β”‚                                                                          β”‚
β”‚   Example with 10 queues:                                               β”‚
β”‚   β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”     β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”     β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”              β”‚
β”‚   β”‚ User A  │────>β”‚ hash % 10 = 3│────>β”‚ chat.queue.3    β”‚              β”‚
β”‚   β”‚ User B  │────>β”‚ hash % 10 = 7│────>β”‚ chat.queue.7    β”‚              β”‚
β”‚   β”‚ User C  │────>β”‚ hash % 10 = 3│────>β”‚ chat.queue.3    β”‚ (same queue) β”‚
β”‚   β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜     β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜     β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜              β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Capacity Calculation

With the default configuration of 10 queues per service instance:

| Metric | Value |
|--------|-------|
| Queues per instance | 10 |
| Messages/sec per queue | ~1,000 |
| Throughput per instance | ~10,000 msg/sec |
| Concurrent WebSocket connections | ~10,000 per instance |

To handle 100,000 concurrent users:

  • Deploy 10 service instances behind a load balancer
  • Each instance handles ~10,000 connections
  • Total queue count: 100 (10 queues Γ— 10 instances)
  • Configure RABBITMQ_TOTAL_QUEUE=10 per instance

Scaling Strategy

                    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                    β”‚  Load Balancer  β”‚
                    β”‚    (Nginx)      β”‚
                    β””β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                             β”‚
        β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
        β”‚                    β”‚                    β”‚
   β”Œβ”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”         β”Œβ”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”         β”Œβ”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”
   β”‚Instance1β”‚         β”‚Instance2β”‚         β”‚Instance3β”‚
   β”‚10 queuesβ”‚         β”‚10 queuesβ”‚         β”‚10 queuesβ”‚
   β””β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”˜         β””β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”˜         β””β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”˜
        β”‚                    β”‚                    β”‚
        β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                             β”‚
                    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”€β”€β”€β”
                    β”‚ RabbitMQ Clusterβ”‚
                    β”‚  (3 nodes HA)   β”‚
                    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Benefits of This Architecture

  1. Ordered Delivery: Messages from the same user always go to the same queue
  2. Horizontal Scaling: Add more instances to handle more users
  3. Fault Tolerance: RabbitMQ cluster ensures no message loss
  4. Load Distribution: FNV-1a hash provides even distribution

πŸ§ͺ Testing

# Run all tests
make test

# Run tests with coverage
make test-cover

# Run tests with race detector
make test-race

πŸ” Security Considerations

  1. Environment Variables: Never commit .env files. Use .env.example as a template.
  2. JWT Secrets: Generate strong, random secrets for JWT signing.
  3. Database Passwords: Use strong, unique passwords for all database users.
  4. SSL/TLS: Enable SSL in production for all connections.
  5. API Keys: Rotate API keys regularly.
  6. Firebase Credentials: Keep service account keys secure and never expose them publicly.

πŸ“Š Monitoring

RabbitMQ Management UI

Access at http://localhost:15672 (default: guest/guest in development)

PostgreSQL

Connect via PgPool at localhost:9999

🀝 Contributing

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add some amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

πŸ“ License

This project is open source and available under the MIT License.

πŸ™ Acknowledgments

  • Fiber - Fast HTTP framework
  • Wire - Compile-time dependency injection
  • Bun - SQL-first ORM for Go
  • Viper - Configuration management
  • Cobra - CLI framework

Made with ❀️ in Go