In this article, we explore decoupling business logic from HTTP request handling. Even when HTTP handling code runs correctly in production, challenges often arise in scaling it to handle many requests and in keeping two distinct aspects of the code easy to change independently: HTTP request/response handling and the business logic implementation. We can address these challenges by separating request handling from business logic processing. This article provides a practical example of achieving that separation using the worker pool pattern, a design pattern proven by years of community experience. The implementation leverages Go's concurrency mechanisms, including channels, WaitGroups, and context, to manage scalability and maintainability.
Decoupling HTTP request handling from business logic can be technically challenging. One challenge is designing a solution that effectively separates the two concerns. Another is ensuring the solution is adaptable to change and scalable. A third lies in implementing the solution, and a fourth in debugging it. Regardless of the programming language, concurrency mechanisms will inevitably be involved. This adds a significant layer of complexity, as designing, developing, and debugging code that runs in parallel is neither easy nor trivial. This article, along with the provided code, presents a straightforward design and implementation of a working system that can serve as a reference for tackling these challenges.
package main

import (
	"context"
	"fmt"
	"math/rand"
	"net/http"
	"os"
	"os/signal"
	"sync"
	"syscall"
	"time"
)
// Job represents a processing unit with a response channel
type Job struct {
	ID      string
	Payload struct{} // Placeholder for real request data
	Result  chan int
}

var (
	jobQueue = make(chan Job, 100) // Buffered job channel
	workerWg sync.WaitGroup        // Worker synchronization
)
func heavyJob() int {
	sum := 0
	for i := 0; i < 50_000_000; i++ {
		sum += rand.Intn(10)
	}
	return sum
}
// Worker pool implementation
func worker(ctx context.Context) {
	defer workerWg.Done()
	for {
		select {
		case job := <-jobQueue:
			result := heavyJob()
			job.Result <- result
		case <-ctx.Done():
			return
		}
	}
}
func handler(w http.ResponseWriter, r *http.Request) {
	// Buffered so the worker's send never blocks, even if we time out below
	resultChan := make(chan int, 1)
	jobID := fmt.Sprintf("job-%d", time.Now().UnixNano())
	job := Job{
		ID:     jobID,
		Result: resultChan,
	}
	// Submit job to worker pool; give up if the queue stays full
	select {
	case jobQueue <- job:
		// Success
	case <-time.After(100 * time.Millisecond):
		http.Error(w, "Server busy", http.StatusServiceUnavailable)
		return
	}
	// Wait for result with timeout
	select {
	case res := <-resultChan:
		fmt.Fprintf(w, "Result: %d", res)
	case <-time.After(30 * time.Second):
		http.Error(w, "Processing timed out", http.StatusGatewayTimeout)
	}
}
func main() {
	rand.Seed(time.Now().UnixNano())
	// Cancel this context on SIGINT/SIGTERM to signal all workers to finish
	ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
	defer stop()
	// Start worker pool
	for i := 0; i < 5; i++ {
		workerWg.Add(1)
		go worker(ctx)
	}
	http.HandleFunc("/", handler)
	fmt.Println("Server starting on :8080...")
	server := &http.Server{Addr: ":8080"}
	go func() {
		if err := server.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			fmt.Printf("Server error: %v\n", err)
		}
	}()
	// Graceful exit: block until a shutdown signal arrives, stop accepting
	// new requests, then wait for all workers to finish
	<-ctx.Done()
	server.Shutdown(context.Background())
	workerWg.Wait()
}
var (
	jobQueue = make(chan Job, 100) // Buffered job channel
	workerWg sync.WaitGroup        // Worker synchronization
)
jobQueue is where handlers post jobs and where workers pick them up for processing. This queue essentially acts as the boundary between the HTTP handlers and the business logic. workerWg is a WaitGroup used by the main function to implement a graceful exit: only once all the workers have signaled on this WaitGroup that they are done can the main function exit. This ensures that main will not quit while worker goroutines are still running.
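In the program above, the Payload field is an empty struct, but in a real system it would carry whatever input the business logic needs. The following is a minimal, self-contained sketch of the same queue-as-boundary idea with a hypothetical string payload and a single worker; the names job and queue here are illustrative, not part of the article's program:

package main

import "fmt"

// job carries the input for the business logic and a channel for the answer.
type job struct {
	payload string
	result  chan string
}

func main() {
	queue := make(chan job, 10) // The boundary between producer and consumer

	// A single worker: reads jobs, runs the "business logic", replies.
	go func() {
		for j := range queue {
			j.result <- "processed: " + j.payload
		}
	}()

	// A producer (standing in for an HTTP handler) posts a job and waits.
	res := make(chan string, 1)
	queue <- job{payload: "hello", result: res}
	fmt.Println(<-res) // prints "processed: hello"
}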
func heavyJob() int {
	sum := 0
	for i := 0; i < 50_000_000; i++ {
		sum += rand.Intn(10)
	}
	return sum
}
This function simulates the business logic: a CPU-bound task that takes a significant amount of time to complete.
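If the real business logic runs for a long time, it is often worth making it cancellable as well. One possible variant, hypothetical and not part of the original program, periodically checks the worker's context so a shutdown can interrupt a job mid-computation. It is meant to drop into the program above, which already imports context and math/rand:

// heavyJobCtx is a hypothetical cancellable variant of heavyJob: it checks
// ctx periodically and stops early if the context has been cancelled.
func heavyJobCtx(ctx context.Context) (int, error) {
	sum := 0
	for i := 0; i < 50_000_000; i++ {
		if i%1_000_000 == 0 { // Avoid checking the context on every iteration
			select {
			case <-ctx.Done():
				return 0, ctx.Err()
			default:
			}
		}
		sum += rand.Intn(10)
	}
	return sum, nil
}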
func worker(ctx context.Context) {
	defer workerWg.Done()
	for {
		select {
		case job := <-jobQueue:
			result := heavyJob()
			job.Result <- result
		case <-ctx.Done():
			return
		}
	}
}
This is a worker implementation following the worker pool pattern. The worker pool pattern in Go is a concurrency design that uses a fixed number of goroutines (workers) to process tasks from a shared job queue, ensuring efficient resource utilization. By limiting the number of concurrent goroutines, it prevents goroutine exhaustion, which occurs when too many goroutines are created, leading to excessive memory usage and context-switching overhead. The pattern scales through controlled parallelism: the number of workers can be adjusted to match the workload or the system's capacity. It also balances CPU and memory usage by keeping the number of active workers proportional to the available resources, for example one worker per CPU core for computation-heavy tasks, or a higher worker count for I/O-bound operations. Finally, tasks are picked up from the job queue in the order they are enqueued, which keeps execution predictable and orderly (though with multiple workers the completion order is not guaranteed). This is particularly useful for batch processing and high-throughput systems.
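Since the ideal pool size depends on the workload, a common starting point for CPU-bound jobs like heavyJob is one worker per core. A hypothetical adjustment to the loop in main, assuming an extra "runtime" import, could look like this:

// Hypothetical pool sizing: one worker per CPU core for CPU-bound jobs.
numWorkers := runtime.NumCPU()
for i := 0; i < numWorkers; i++ {
	workerWg.Add(1)
	go worker(ctx)
}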