Problem
When managing a personal project like Kitesurf.ninja on a tight budget, it’s tempting to jump straight into development, focusing on building features without a clear long-term plan. As developers, we love to create, but without structure or foresight, a project can quickly become difficult to manage, scale, or sustain.
This was the case with Kiteninja. The initial setup in Node.js on AWS, using managed services like an Elastic Load Balancer (ELB), worked well for a while but soon revealed its limitations. Costs spiraled beyond what was reasonable for a learning project, and as the app grew in complexity, the architecture became a bottleneck for scalability and performance.
Beyond the technical hurdles, the lack of a formal structure made extending and managing the project increasingly challenging. Recognizing these issues, I realized it wasn’t just about developing for the sake of it but about developing with purpose. This led to a deliberate decision to rewrite the project in Spring Boot. The transition introduced a more organized architecture and cost-efficient practices, with tools like Terraform and EC2 User Data enabling automated provisioning and deployment.
This shift wasn’t just a technical upgrade—it was a lesson in the importance of thoughtful planning, intentional design, and balancing the joy of building with the discipline of strategy.
Solution
To address the challenges and inefficiencies of the initial setup, I shifted focus from simply building features to building with purpose. The goal was not just to optimize costs and improve performance but to establish a solid foundation for future scalability and maintainability. This intentional approach led to several key changes:
- Transitioning to Spring Boot: By adopting a seven-layer architecture, the project gained much-needed structure. This improved both the organization of the codebase and its ability to scale as new features were introduced.
- Leveraging Terraform for Infrastructure as Code: Automating infrastructure provisioning and deployment processes brought consistency and repeatability to the setup. This step underscored the value of planning ahead to reduce manual effort and potential errors.
- Optimizing Cloud Expenses: Replacing managed AWS services with self-hosted alternatives reduced monthly costs by 39%, demonstrating that thoughtful trade-offs can yield both financial and operational benefits.
- Adopting DevOps Practices: Implementing Infrastructure as Code (IaC) and using EC2 User Data for automated server provisioning simplified deployments, allowing the project to scale while maintaining efficiency.
These deliberate changes not only resolved the project’s immediate challenges but also offered a deeper lesson: development is most effective when guided by purpose and planning. The transformation enhanced the project’s structure, scalability, and cost-effectiveness while providing valuable opportunities to grow technical skills.
Rewriting Kiteninja in Spring Boot
Why Spring Boot?
The transition to Spring Boot was motivated by several factors:
- Cost Savings: Reducing reliance on managed AWS services like ELB while consolidating infrastructure saved over 39% in monthly expenses.
- Improved Organization: Spring Boot enabled the adoption of a seven-layer architecture to better separate concerns and improve maintainability.
- Future Scalability: Spring Boot’s multi-threaded nature provided the groundwork for handling concurrent requests efficiently as the application grows.
Seven-Layer Architecture
One of the most significant improvements in the rewrite was adopting a seven-layer architecture, which brought structure and maintainability to the codebase.
- Controller Layer Example
One of the most interesting parts of transitioning from Node.js to Spring Boot was learning how to handle location-based queries efficiently. I recently implemented location-based pagination in Spring Boot, and it turned out to be a really fun part to work on:

```java
@GetMapping("/list")
public ResponseEntity<LocationResponse> getLocations(
        @RequestParam int limit,
        @RequestParam int offset,
        @RequestParam String sort,
        @RequestParam String order,
        @RequestParam double lat,
        @RequestParam double lon) {
    LocationResponse response = locationService.getAllLocations(limit, offset, sort, order, lat, lon);
    return ResponseEntity.ok(response);
}
```

It was a fun challenge figuring out how to port the logic from Node.js to Spring Boot while still handling location queries in a smooth and efficient way. I was able to use JPA to interact with the database, and this approach not only made the app more self-sufficient—it no longer needs a costly service like the Google Maps API—but also gave me better control over the data and how it is processed.
- Service Layer
While I'm not sharing examples of every layer here, the key takeaway is that the Service Layer is where the core logic resides—it ties everything together. It fetches or modifies data through the repository layer and ensures that business logic is handled consistently throughout the application. You can think of it as the heart of your application's functionality.
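To make that concrete, here is a minimal sketch of the service-layer pattern. The class and method names are hypothetical, not taken from the Kiteninja codebase, and the repository is reduced to a plain interface so the example stands alone:

```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical data-access abstraction; in the real app this would be
// a Spring Data JPA repository interface.
interface LocationRepository {
    List<String> findAllNames();
}

// Service layer: owns the business logic and delegates data access
// to the repository layer.
class LocationService {
    private final LocationRepository repository;

    LocationService(LocationRepository repository) {
        this.repository = repository;
    }

    // The business rule (sort, then cap to the page size) lives here,
    // not in the controller or the repository.
    List<String> listNames(int limit) {
        return repository.findAllNames().stream()
                .sorted()
                .limit(limit)
                .collect(Collectors.toList());
    }
}
```

Because the service only depends on the repository interface, it can be exercised with an in-memory stub, which is a large part of why this layering pays off.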
- Repository Layer
Handles database operations, leveraging Spring Data JPA and raw SQL for custom queries:

```java
// The actual query and method signature are elided here; the names are illustrative.
@Query(value = "SELECT ... FROM locations WHERE ...", nativeQuery = true)
List<Location> findNearbyLocations(double lat, double lon);
```
- DTO (Data Transfer Object) Layer
The DTO layer defines lightweight objects specifically designed for transferring data between layers of the application. Unlike entities that represent database tables, DTOs are tailored for the specific needs of API responses or service interactions. This separation has several key benefits:
- Prevents Sensitive Data Leakage: By explicitly defining which fields are included in a DTO, you ensure that only the necessary and intended data is exposed. This is especially crucial for compliance with regulations like HIPAA, where sensitive information must be safeguarded against unauthorized access or unintentional disclosure.
- Improves Maintainability: Decoupling the data structure of your API or service responses from the underlying database schema allows for more flexibility. Changes to the database do not directly impact external-facing data contracts.
- Supports Data Validation and Transformation: DTOs allow you to validate and transform data as it moves between layers, ensuring that only clean and compliant data reaches the consumer.
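As a minimal illustration of the first point (the types and field names here are hypothetical, not from the actual app), a DTO explicitly whitelists what the API may return:

```java
// Hypothetical entity with a field we never want to expose over the API.
class UserEntity {
    String email;        // sensitive: internal use only
    String displayName;  // safe to expose
    UserEntity(String email, String displayName) {
        this.email = email;
        this.displayName = displayName;
    }
}

// DTO: explicitly lists the only fields the API is allowed to return.
class UserDto {
    private final String displayName;
    private UserDto(String displayName) {
        this.displayName = displayName;
    }
    // Mapping lives in one place, so the API contract stays decoupled
    // from the database schema.
    static UserDto from(UserEntity entity) {
        return new UserDto(entity.displayName);
    }
    String displayName() { return displayName; }
}
```

Even if a column is later added to the entity, nothing new leaks out until the DTO mapping is deliberately updated.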
- Entity Layer
Maps database tables to Java objects using JPA annotations.
- Validation Layer
Ensures input data integrity through custom validators and annotations.
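The kind of range check such a validator enforces can be sketched in plain Java. In Spring this logic would sit inside a `ConstraintValidator` behind a custom annotation; the class below is an illustrative stand-in, not the project's actual validator:

```java
// Plain-Java version of the check a hypothetical @ValidCoordinates
// annotation might enforce on incoming request parameters.
final class CoordinateValidator {
    private CoordinateValidator() {}

    static boolean isValid(double lat, double lon) {
        // Latitude must fall in [-90, 90], longitude in [-180, 180].
        return lat >= -90.0 && lat <= 90.0
            && lon >= -180.0 && lon <= 180.0;
    }
}
```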
- Utility Layer
Contains reusable helper functions or services, such as distance calculations or API integration logic for external weather services.
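For example, the distance calculation mentioned above is typically a Haversine (great-circle) computation. A self-contained sketch, with a method name of my own choosing rather than whatever Kiteninja actually uses:

```java
// Utility layer example: great-circle distance between two
// latitude/longitude points, in kilometers.
final class GeoUtils {
    private static final double EARTH_RADIUS_KM = 6371.0;

    private GeoUtils() {}

    static double haversineKm(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return EARTH_RADIUS_KM * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
    }
}
```

Keeping this in a utility class means both the service layer and any background jobs can reuse it without duplicating the math.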
Cost Optimization
The rewrite also introduced a cost-optimized architecture by replacing managed AWS services with self-hosted alternatives:
- Removed ELB:
Spring Boot’s built-in SSL support replaced the need for an Elastic Load Balancer.
Used Certbot with Let’s Encrypt for free SSL certificates.
- Consolidated Infrastructure:
Hosted both the backend and MySQL database on a single EC2 instance, eliminating the need for RDS.
Upgraded the instance size to handle the additional workload while still reducing costs.
- Streamlined Development Environments:
Used local Dockerized setups for development, consolidating Dev and Test environments into a single shared instance.
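For reference, serving HTTPS directly from Spring Boot (the setup that replaced the ELB) is driven by a keystore in `application.properties`. The paths, password, and alias below are placeholders, assuming the Let's Encrypt certificate has been converted to PKCS12:

```properties
server.port=443
server.ssl.key-store-type=PKCS12
server.ssl.key-store=/etc/letsencrypt/live/example.com/keystore.p12
server.ssl.key-store-password=changeit
server.ssl.key-alias=tomcat
```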
Cost Breakdown
Infrastructure as Code with Terraform
Using Terraform, I automated the provisioning of the EC2 instance and its associated resources, ensuring a consistent and repeatable setup.
Terraform Highlights
- EC2 Instance Provisioning:
Defined the instance, security groups, and key pair declaratively.
```hcl
resource "aws_instance" "kiteninja" {
  ami           = "ami-12345678"
  instance_type = "t3.small"
  key_name      = "kiteninja-key"
  user_data     = file("user-data.sh")
}
```
- Security Groups:
Configured security rules for HTTP, HTTPS, and SSH access.
- Repeatable Setup:
Version-controlled Terraform files allowed easy rollback and collaboration.
Automating Setup with EC2 User Data
The User Data script automated the deployment of the codebase and installation of dependencies, reducing manual effort and improving consistency.
Example Script
```bash
#!/bin/bash
sudo apt update -y && sudo apt upgrade -y
sudo apt install openjdk-17-jdk -y
sudo apt install git -y
cd /home/ubuntu
git clone https://github.com/yourusername/kiteninja.git
cd kiteninja
./gradlew build
java -jar build/libs/kiteninja.jar
```
Lessons Learned
- Spring Boot’s Multi-Threaded Power:
Spring Boot’s thread-per-request model handles concurrent and CPU-bound work more naturally than Node.js’s single-threaded event loop.
Learning thread safety will be critical for fully leveraging this feature.
- Automation is Key:
Using Terraform and EC2 User Data streamlined deployment and provisioning, making the process repeatable and reliable.
- Cost-Effectiveness Requires Trade-Offs:
Removing managed services like ELB and RDS saved money but introduced manual management tasks (e.g., database backups).
Plans For Future Enhancements
- Automated Backups: Schedule EBS snapshots to ensure reliable and consistent data protection.
- Horizontal Scaling with Containers: Containerize the application with Docker and explore Kubernetes for improved scalability and deployment.
- Multi-Threaded Enhancements: Use @Async and background processing to handle heavy computations and enhance responsiveness.
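Under the hood, Spring's @Async schedules the annotated method on an executor; the intended pattern can be sketched with plain `CompletableFuture`, no Spring required (the task below is a hypothetical stand-in for a heavy computation):

```java
import java.util.concurrent.CompletableFuture;

// Offload a heavy computation to a background thread so the
// request-handling thread stays free. With Spring, an @Async method
// returning CompletableFuture behaves much like this, except the
// framework supplies the executor.
final class ForecastTasks {
    private ForecastTasks() {}

    static CompletableFuture<Integer> scoreConditionsAsync(int windKnots) {
        // Runs on the common fork-join pool; a stand-in for something
        // expensive like aggregating forecast data.
        return CompletableFuture.supplyAsync(() -> Math.min(100, windKnots * 4));
    }
}
```

A caller can fire off several of these and combine the results later, which is exactly the responsiveness win @Async is meant to deliver.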
Conclusion and Lessons Learned
Rewriting Kitesurf.ninja in Spring Boot solved challenges of scalability, organization, and cost-efficiency while offering valuable learning opportunities. The transition introduced modern DevOps practices, such as automating deployments with Terraform and EC2 User Data, and improved the backend’s structure with a layered architecture.
Key takeaways include the need to deepen knowledge of multi-threading in Spring Boot, such as thread pools and async processing, to handle future scalability. Configuring SSL with Certbot and Keystore and automating renewals was a significant milestone, though there’s more to explore in advanced TLS configurations. Automating infrastructure deployments simplified workflows, with opportunities to enhance efficiency through advanced Terraform techniques. Finally, cutting managed service costs introduced manual maintenance tasks like database backups, highlighting the importance of balancing cost savings with operational simplicity.
This journey underscored the value of robust frameworks, cost-conscious design, and automation tools, providing a foundation for continued growth. Thank you for exploring this experience and its lessons.