# Bonus Challenge: RPC vs REST

**Weight:** +10% Extra Credit

Compare your gRPC implementation with a REST API alternative.
## Objectives
- Implement the same calculator service using REST/HTTP
- Measure and compare performance characteristics
- Analyze trade-offs between RPC and REST
## Requirements

### 1. REST API Implementation

Create a REST API version of your calculator using Flask or FastAPI:
```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
import uvicorn

app = FastAPI()

class BinaryOperation(BaseModel):
    a: float
    b: float

class Result(BaseModel):
    value: float

@app.post("/api/add")
async def add(op: BinaryOperation) -> Result:
    return Result(value=op.a + op.b)

@app.post("/api/subtract")
async def subtract(op: BinaryOperation) -> Result:
    return Result(value=op.a - op.b)

@app.post("/api/multiply")
async def multiply(op: BinaryOperation) -> Result:
    return Result(value=op.a * op.b)

@app.post("/api/divide")
async def divide(op: BinaryOperation) -> Result:
    if op.b == 0:
        raise HTTPException(status_code=400, detail="Division by zero")
    return Result(value=op.a / op.b)

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)
```
Requirements:
- ✅ Same operations as gRPC version
- ✅ JSON request/response format
- ✅ Proper HTTP status codes
- ✅ Error handling
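For reference, the JSON bodies these endpoints exchange are small and self-describing. A quick stdlib-only sketch of the request/response shapes (field names taken from the models above):

```python
import json

# Request body a client POSTs to /api/add (matches BinaryOperation)
request_body = json.dumps({"a": 10.0, "b": 5.0})

# Response body the server returns (matches Result)
response_body = json.dumps({"value": 15.0})

print(request_body)
print(response_body)
```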
### 2. REST Client

Create a client using the `requests` library:
```python
import requests

class RESTCalculatorClient:
    def __init__(self, base_url='http://localhost:8000'):
        self.base_url = base_url
        self.session = requests.Session()  # reuse one TCP connection

    def add(self, a, b):
        response = self.session.post(
            f'{self.base_url}/api/add',
            json={'a': a, 'b': b},
            timeout=2.0,
        )
        response.raise_for_status()
        return response.json()['value']
```
## Performance Comparison

### Benchmark Test

Write a benchmark script that measures per-request latency:
```python
import time
import statistics

def benchmark_operation(client, operation, iterations=1000):
    """
    Benchmark an operation.

    Args:
        client: gRPC or REST client
        operation: Zero-argument function to call
        iterations: Number of iterations

    Returns:
        dict with latency statistics in milliseconds
    """
    latencies = []
    for _ in range(iterations):
        start = time.perf_counter()
        try:
            operation()
            latency = (time.perf_counter() - start) * 1000  # ms
            latencies.append(latency)
        except Exception as e:
            print(f"Error: {e}")  # failed calls are excluded from the stats

    return {
        'mean': statistics.mean(latencies),
        'median': statistics.median(latencies),
        'p95': statistics.quantiles(latencies, n=20)[18],   # 95th percentile
        'p99': statistics.quantiles(latencies, n=100)[98],  # 99th percentile
        'min': min(latencies),
        'max': max(latencies),
    }
```
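To sanity-check the harness before pointing it at a live server, you can benchmark a local stand-in operation; `fake_add` below is a placeholder for something like `lambda: client.add(10, 5)` (percentile fields omitted here for brevity):

```python
import time
import statistics

def benchmark_operation(client, operation, iterations=1000):
    # Same harness as above, trimmed to the essentials
    latencies = []
    for _ in range(iterations):
        start = time.perf_counter()
        operation()
        latencies.append((time.perf_counter() - start) * 1000)  # ms
    return {
        'mean': statistics.mean(latencies),
        'median': statistics.median(latencies),
        'min': min(latencies),
        'max': max(latencies),
    }

def fake_add():
    # Stand-in for a real RPC call, e.g. client.add(10, 5)
    return 10 + 5

stats = benchmark_operation(None, fake_add, iterations=100)
print(f"mean={stats['mean']:.4f} ms  median={stats['median']:.4f} ms")
```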
### Metrics to Compare
| Metric | gRPC | REST | Winner |
|---|---|---|---|
| Latency (mean) | ? ms | ? ms | ? |
| Latency (p95) | ? ms | ? ms | ? |
| Throughput | ? req/s | ? req/s | ? |
| Payload Size | ? bytes | ? bytes | ? |
| Connection Overhead | ? | ? | ? |
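For a single-threaded client issuing requests back-to-back, throughput can be estimated directly from mean latency (this assumption breaks down once you add concurrency, but it is how the sample numbers later in this document relate):

```python
def sequential_throughput(mean_latency_ms):
    """Requests per second when each request waits for the previous one."""
    return 1000.0 / mean_latency_ms

# Using the example mean latencies from the sample results section
print(round(sequential_throughput(1.2)))  # gRPC -> 833 req/s
print(round(sequential_throughput(2.8)))  # REST -> 357 req/s
```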
## Analysis Report

Write a 2-3 page comparison covering:
### 1. Performance Analysis

**Latency:**
- Which is faster? By how much?
- Why do you think there's a difference?
- Does it matter for your use case?

**Throughput:**
- How many requests per second can each handle?
- What's the bottleneck?

**Payload Size:**
- Compare request/response sizes
- Use network tools to measure:
```bash
# Capture gRPC traffic (lo0 is the loopback interface on macOS; use lo on Linux)
tcpdump -i lo0 -w grpc.pcap port 50051

# Capture REST traffic
tcpdump -i lo0 -w rest.pcap port 8000
```
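You can also compare payload sizes without packet captures. The sketch below measures the JSON request body and, as a rough stand-in for a binary wire format, packs the same two floats with `struct` (real protobuf encoding uses field tags and varints, so its exact size differs, but it is similarly compact):

```python
import json
import struct

# JSON body the REST endpoint receives
json_payload = json.dumps({"a": 10.0, "b": 5.0}).encode("utf-8")

# Fixed-width binary packing of the same two doubles, network byte order.
# This is NOT the protobuf wire format, just an illustration of binary
# encoding overhead vs. JSON text.
binary_payload = struct.pack("!dd", 10.0, 5.0)

print(len(json_payload), "bytes as JSON")
print(len(binary_payload), "bytes packed")  # 16 bytes (2 x 8-byte doubles)
```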
### 2. Developer Experience

**API Definition:**
- gRPC: Protocol Buffers (.proto files)
- REST: OpenAPI/Swagger, or informal

**Code Generation:**
- gRPC: Auto-generated client/server stubs
- REST: Manual implementation

**Type Safety:**
- gRPC: Strongly typed
- REST: JSON (runtime validation needed)
### 3. Trade-offs Analysis
| Factor | gRPC Advantage | REST Advantage |
|---|---|---|
| Performance | Binary protocol, HTTP/2 | - |
| Browser Support | Limited (needs grpc-web) | Native support |
| Human Readability | - | JSON is readable |
| Tooling | - | More mature ecosystem |
| Streaming | Built-in bi-directional | Complex (SSE, WebSocket) |
| Learning Curve | Steeper | Easier to start |
### 4. Use Case Recommendations

**When to use gRPC:**
- ✅ Microservice-to-microservice communication
- ✅ Performance-critical applications
- ✅ Bi-directional streaming needed
- ✅ Polyglot environments (multiple languages)

**When to use REST:**
- ✅ Public APIs for third-party developers
- ✅ Browser-based clients
- ✅ Simple CRUD operations
- ✅ Caching with HTTP headers
## Deliverables

📦 **Code:**

- `rest_server.py` - REST API implementation
- `rest_client.py` - REST client
- `benchmark.py` - Performance comparison script
📊 **Report (2-3 pages):**
Include:
- Performance comparison table with measurements
- Analysis of why one might be faster
- Trade-offs discussion
- Use case recommendations
📈 **Visualizations:**
Create charts showing:
- Latency distribution (histogram)
- Throughput over time
- Payload size comparison
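If a plotting library is not handy, even a crude text histogram of latency buckets makes the distribution visible. A minimal stdlib sketch (bucket width chosen arbitrarily):

```python
from collections import Counter

def text_histogram(latencies_ms, bucket_ms=0.5):
    """Return histogram lines for fixed-width latency buckets."""
    buckets = Counter(int(l / bucket_ms) for l in latencies_ms)
    lines = []
    for b in sorted(buckets):
        lo = b * bucket_ms
        lines.append(f"{lo:4.1f}-{lo + bucket_ms:.1f} ms | {'#' * buckets[b]}")
    return lines

# Example latencies (milliseconds)
for line in text_histogram([1.1, 1.2, 1.3, 1.8, 2.1, 2.6, 4.5]):
    print(line)
```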
📹 **Demo Video (1 minute):**
Show:
- Both servers running
- Benchmark script running
- Results comparison
## Grading Rubric
| Criterion | Points | Description |
|---|---|---|
| REST Implementation | 3 | Working REST API with all operations |
| Benchmark Script | 3 | Measures latency, throughput correctly |
| Performance Data | 2 | Valid measurements from both systems |
| Analysis Report | 2 | Insightful comparison and trade-offs |
| Total | +10 | Bonus Points |
## Sample Results

Example output only; your results will vary with hardware and implementation details:
```
=== Benchmark Results ===
Operation: Add(10, 5) - 1000 iterations

gRPC:
  Mean latency:   1.2 ms
  Median latency: 1.1 ms
  P95 latency:    2.1 ms
  Throughput:     833 req/s

REST:
  Mean latency:   2.8 ms
  Median latency: 2.6 ms
  P95 latency:    4.5 ms
  Throughput:     357 req/s

Winner: gRPC (2.3x faster)
```
## Tips

### Fair Comparison
- Run both on same machine
- Use same Python version
- Measure multiple times and average
- Warm up before measuring (first requests are slower)
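The warm-up advice can be baked into the harness by discarding the first few measurements; a sketch:

```python
import time

def timed_calls(operation, iterations=100, warmup=10):
    """Time `operation`, discarding the first `warmup` calls
    (connection setup and caches inflate the early samples)."""
    samples = []
    for i in range(warmup + iterations):
        start = time.perf_counter()
        operation()
        elapsed_ms = (time.perf_counter() - start) * 1000
        if i >= warmup:  # keep only post-warm-up samples
            samples.append(elapsed_ms)
    return samples

samples = timed_calls(lambda: sum(range(1000)), iterations=50, warmup=5)
print(len(samples))  # 50
```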
### Common Mistakes
- Not disabling debug mode in Flask/FastAPI
- Including connection setup in latency measurements
- Testing on localhost (no real network latency)
### Production Considerations
In production, other factors matter:
- Load balancing complexity
- Monitoring and observability
- Security (TLS setup)
- API versioning strategy
## Advanced Bonus Ideas

### Streaming Performance
Compare streaming capabilities:
- gRPC: Native server streaming
- REST: Server-Sent Events (SSE)
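On the REST side, SSE is just a long-lived HTTP response whose body is a sequence of `data:` frames. The frame layout can be sketched with the stdlib alone (the endpoint name mentioned in the comment is hypothetical):

```python
import json

def sse_frame(payload: dict) -> str:
    """Encode one Server-Sent Events frame: 'data: <json>' plus a blank line."""
    return f"data: {json.dumps(payload)}\n\n"

# What a hypothetical /api/stream-results endpoint would emit per result
frame = sse_frame({"value": 15.0})
print(repr(frame))
```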
### Load Testing

Use tools like:
- `ghz` for gRPC load testing
- `wrk` or `locust` for REST load testing
### Network Conditions

Test under different conditions:

```bash
# Add 50 ms of latency on the loopback interface (Linux)
sudo tc qdisc add dev lo root netem delay 50ms

# Add 1% packet loss as well (use `change`, since a qdisc is already installed)
sudo tc qdisc change dev lo root netem delay 50ms loss 1%

# Remove the emulation when done
sudo tc qdisc del dev lo root netem
```