This project demonstrates a complete AWS infrastructure setup using Terraform, featuring a scalable web application architecture with ECS Fargate, Application Load Balancer, VPC networking, and monitoring capabilities.
The infrastructure creates a production-ready environment with the following components:
- VPC with public and private subnets across 2 availability zones
- Internet Gateway for public internet access
- NAT Gateway for private subnet internet access
- Route Tables for proper traffic routing
- ECS Fargate Cluster for container orchestration
- ECS Service running nginx web server
- Application Load Balancer for traffic distribution
- Target Group with health checks for the ECS service
- S3 Bucket with intelligent tiering lifecycle policies
- CloudWatch Logs for centralized logging
- CloudWatch Alarms for CPU utilization monitoring
- Auto Scaling policies for dynamic resource management
- Multi-AZ Deployment: High availability across availability zones
- Auto Scaling: CPU-based scaling policies (1-4 tasks)
- Security Groups: Proper network isolation and access control
- ARM64 Support: Optional Graviton processor support
- Lifecycle Management: S3 storage optimization
- Monitoring: Comprehensive logging and alerting
- Terraform >= 1.0.0
- AWS CLI configured with appropriate credentials
- Git for version control
- AWS account with appropriate permissions
- IAM user/role with the following permissions:
- VPC management
- ECS service management
- S3 bucket creation
- CloudWatch monitoring
- IAM role creation
```bash
# Install Terraform (Windows with Chocolatey)
choco install terraform
# Or download from https://www.terraform.io/downloads.html

# Configure AWS credentials
aws configure
# Enter your AWS Access Key ID, Secret Access Key, and default region
```

Clone the repository and enter the project directory:

```bash
git clone <repository-url>
cd infra-assignment
```

Edit terraform.tfvars to match your requirements:
```hcl
project_name = "my-project"
env          = "dev"
region       = "us-east-1"
use_arm      = false # Set to true for ARM64/Graviton
```

Initialize, plan, and apply:

```bash
terraform init
terraform plan -out plan.tfplan
terraform apply "plan.tfplan"
```

After successful deployment, get the ALB DNS name:

```bash
terraform output alb_dns
```

```
infra-assignment/
├── main.tf            # Root module configuration
├── variables.tf       # Variable definitions
├── terraform.tfvars   # Variable values
├── output.tf          # Output values
├── provider.tf        # AWS provider configuration
├── modules/
│   ├── vpc/           # VPC and networking resources
│   ├── ecs/           # ECS cluster and service
│   ├── s3/            # S3 bucket and lifecycle policies
│   └── cloudwatch/    # Monitoring and alerting
└── README.md          # This file
```
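The root main.tf wires these modules together. A hedged sketch of that composition follows; the module input and output names here are assumptions based on the tree above, not this repo's exact code:

```hcl
# Root main.tf (illustrative): compose the four modules.
module "vpc" {
  source       = "./modules/vpc"
  project_name = var.project_name
  env          = var.env
}

module "ecs" {
  source             = "./modules/ecs"
  project_name       = var.project_name
  vpc_id             = module.vpc.vpc_id
  public_subnet_ids  = module.vpc.public_subnet_ids
  private_subnet_ids = module.vpc.private_subnet_ids
  use_arm            = var.use_arm
}

module "s3" {
  source       = "./modules/s3"
  project_name = var.project_name
}

module "cloudwatch" {
  source       = "./modules/cloudwatch"
  project_name = var.project_name
  cluster_name = module.ecs.cluster_name
  service_name = module.ecs.service_name
}
```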
- CIDR Block: 10.0.0.0/16
- Public Subnets: 2 subnets across different AZs
- Private Subnets: 2 subnets across different AZs
- NAT Gateway: Single NAT gateway for cost optimization
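The layout above could be expressed roughly as follows; this is an illustrative sketch and the resource names are assumptions, not this repo's exact code:

```hcl
data "aws_availability_zones" "available" {
  state = "available"
}

resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true
}

# Two public and two private subnets carved from the VPC CIDR,
# spread across the first two availability zones.
resource "aws_subnet" "public" {
  count                   = 2
  vpc_id                  = aws_vpc.main.id
  cidr_block              = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index)
  availability_zone       = data.aws_availability_zones.available.names[count.index]
  map_public_ip_on_launch = true
}

resource "aws_subnet" "private" {
  count             = 2
  vpc_id            = aws_vpc.main.id
  cidr_block        = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index + 2)
  availability_zone = data.aws_availability_zones.available.names[count.index]
}

# Single NAT gateway in the first public subnet (cost optimization).
resource "aws_eip" "nat" {
  domain = "vpc"
}

resource "aws_nat_gateway" "main" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public[0].id
}
```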
- Launch Type: Fargate (serverless)
- CPU: 256 units (0.25 vCPU)
- Memory: 512 MB
- Container: nginx:latest
- Port: 80 (HTTP)
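A task definition matching these values might look like the sketch below; the resource names and role reference are assumptions. It also shows one way the optional ARM64/Graviton toggle could be applied via `runtime_platform`:

```hcl
# Illustrative Fargate task definition (names are placeholders).
resource "aws_ecs_task_definition" "web" {
  family                   = "${var.project_name}-web"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 256 # 0.25 vCPU
  memory                   = 512 # MB
  execution_role_arn       = aws_iam_role.ecs_task_exec.arn

  # Optional Graviton support, toggled by the use_arm variable.
  runtime_platform {
    operating_system_family = "LINUX"
    cpu_architecture        = var.use_arm ? "ARM64" : "X86_64"
  }

  container_definitions = jsonencode([{
    name      = "nginx"
    image     = "nginx:latest"
    essential = true
    portMappings = [{
      containerPort = 80
      protocol      = "tcp"
    }]
  }])
}
```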
- Min Capacity: 1 task
- Max Capacity: 4 tasks
- Scaling Policy: CPU utilization > 70%
- Evaluation Periods: 2
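The scaling behavior above (1-4 tasks, scale when average CPU exceeds 70%) could be expressed roughly as follows; resource names are assumptions, not this repo's exact code:

```hcl
# Illustrative scalable target for the ECS service's desired count.
resource "aws_appautoscaling_target" "ecs" {
  service_namespace  = "ecs"
  resource_id        = "service/${aws_ecs_cluster.main.name}/${aws_ecs_service.web.name}"
  scalable_dimension = "ecs:service:DesiredCount"
  min_capacity       = 1
  max_capacity       = 4
}

# Step-scaling policy, typically invoked by a CloudWatch CPU alarm
# through that alarm's alarm_actions.
resource "aws_appautoscaling_policy" "scale_up" {
  name               = "${var.project_name}-scale-up"
  policy_type        = "StepScaling"
  service_namespace  = aws_appautoscaling_target.ecs.service_namespace
  resource_id        = aws_appautoscaling_target.ecs.resource_id
  scalable_dimension = aws_appautoscaling_target.ecs.scalable_dimension

  step_scaling_policy_configuration {
    adjustment_type         = "ChangeInCapacity"
    cooldown                = 60
    metric_aggregation_type = "Average"
    step_adjustment {
      metric_interval_lower_bound = 0
      scaling_adjustment          = 1
    }
  }
}
```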
- Lifecycle Rule: Transition to STANDARD_IA after 30 days
- Versioning: Disabled by default
- Encryption: Server-side encryption enabled
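These bucket settings could be sketched as follows; the bucket name and resource labels are placeholders, not this repo's exact code:

```hcl
resource "aws_s3_bucket" "main" {
  bucket = "${var.project_name}-${var.env}-bucket" # placeholder name
}

# Server-side encryption enabled by default.
resource "aws_s3_bucket_server_side_encryption_configuration" "main" {
  bucket = aws_s3_bucket.main.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

# Transition all objects to STANDARD_IA after 30 days.
resource "aws_s3_bucket_lifecycle_configuration" "main" {
  bucket = aws_s3_bucket.main.id
  rule {
    id     = "tiering"
    status = "Enabled"
    filter {}
    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }
  }
}
```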
- ECS service CPU utilization
- ECS service memory utilization
- ALB request count and latency
- CPU High: Triggers when average CPU > 70%
- Alarm Actions: Currently empty (can be configured for SNS notifications)
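A CPU-high alarm matching these settings might look like the sketch below; resource names are assumptions, and `alarm_actions` is left empty as described above:

```hcl
# Illustrative alarm: average ECS service CPU > 70% for 2 periods.
resource "aws_cloudwatch_metric_alarm" "cpu_high" {
  alarm_name          = "${var.project_name}-cpu-high"
  namespace           = "AWS/ECS"
  metric_name         = "CPUUtilization"
  statistic           = "Average"
  comparison_operator = "GreaterThanThreshold"
  threshold           = 70
  evaluation_periods  = 2
  period              = 60

  dimensions = {
    ClusterName = aws_ecs_cluster.main.name
    ServiceName = aws_ecs_service.web.name
  }

  alarm_actions = [] # e.g. an SNS topic ARN for notifications
}
```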
- ECS Logs: `/ecs/{project-name}` with 14-day retention
- ALB Logs: can be enabled for detailed access logging
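The log group itself is a small resource; an illustrative version with the 14-day retention described above (resource name is an assumption):

```hcl
resource "aws_cloudwatch_log_group" "ecs" {
  name              = "/ecs/${var.project_name}"
  retention_in_days = 14
}
```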
If you encounter "Resource already exists" errors:
```bash
# Option 1: Destroy and recreate (clean slate)
terraform destroy -auto-approve
terraform apply

# Option 2: Import existing resources
terraform import module.ecs.aws_iam_role.ecs_task_exec <role-name>
```

If you see count-related errors:
- Ensure all modules are properly referenced
- Check that security group IDs are correctly passed
- Verify the target group `target_type` is set to "ip" for Fargate
- Check that security group rules allow traffic between the ALB and ECS tasks
- Ensure private subnets have NAT gateway access
```bash
# Check Terraform state
terraform show

# Validate configuration
terraform validate

# Format code
terraform fmt

# Check plan without applying
terraform plan
```

- Public subnets only contain the ALB and NAT Gateway
- ECS tasks run in private subnets
- Security groups restrict traffic appropriately
- No direct internet access for ECS tasks
- ECS tasks use minimal required permissions
- Execution role follows least privilege principle
- No hardcoded credentials in code
- S3 bucket encryption enabled by default
- VPC endpoints can be added for private S3 access
- CloudTrail can be enabled for audit logging
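For private S3 access, a gateway VPC endpoint is the usual mechanism. A hedged sketch (not part of this repo by default; the route table reference is an assumption):

```hcl
# Illustrative S3 gateway endpoint so ECS tasks in private subnets
# can reach S3 without traversing the NAT gateway.
resource "aws_vpc_endpoint" "s3" {
  vpc_id            = aws_vpc.main.id
  service_name      = "com.amazonaws.${var.region}.s3"
  vpc_endpoint_type = "Gateway"
  route_table_ids   = [aws_route_table.private.id]
}
```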
- Single NAT Gateway (vs. one per AZ)
- S3 lifecycle policies for storage tiering
- Fargate spot instances can be enabled
- CloudWatch log retention limited to 14 days
- Enable Fargate spot instances for non-critical workloads
- Implement S3 Intelligent Tiering
- Use CloudWatch Contributor Insights
- Consider Savings Plans for long-term commitments
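Enabling Fargate Spot for non-critical workloads could look like the sketch below (not enabled by default here; the cluster reference and weights are assumptions):

```hcl
# Illustrative capacity provider mix: keep one task on on-demand
# Fargate, weight additional tasks toward cheaper Fargate Spot.
resource "aws_ecs_cluster_capacity_providers" "main" {
  cluster_name       = aws_ecs_cluster.main.name
  capacity_providers = ["FARGATE", "FARGATE_SPOT"]

  default_capacity_provider_strategy {
    capacity_provider = "FARGATE"
    base              = 1
    weight            = 1
  }
  default_capacity_provider_strategy {
    capacity_provider = "FARGATE_SPOT"
    weight            = 3
  }
}
```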
- Auto-scaling based on CPU utilization
- Load balancer distributes traffic
- Multi-AZ deployment for high availability
- Adjust CPU and memory in ECS task definition
- Modify auto-scaling thresholds
- Tune health check parameters
- HTTPS/SSL termination
- WAF integration for security
- Route 53 DNS management
- Backup and disaster recovery
- CI/CD pipeline integration
- Custom CloudWatch dashboards
- SNS notifications for alarms
- X-Ray tracing integration
- Enhanced logging and metrics
This project is licensed under the MIT License - see the LICENSE file for details.
Note: This infrastructure is designed for development and testing environments. For production use, additional security measures, monitoring, and compliance configurations should be implemented.