Let me be honest — I used to think error handling was boring. Just wrap things in try-catch and move on, right? Wrong. That mindset cost us hours of debugging at 2 AM.
After dealing with enough production fires, I started collecting patterns that actually work. These aren't theoretical best practices; they're the things that stopped the midnight PagerDuty calls.
The Result Object Pattern
Most of us throw exceptions for errors. But there's a cleaner way: return a result object that explicitly represents both success and failure.
```javascript
// Instead of throwing exceptions everywhere
function getUser(id) {
  if (!id) return { ok: false, error: 'Invalid ID' };
  const user = db.find(id);
  if (!user) return { ok: false, error: 'User not found' };
  return { ok: true, data: user };
}

// Call it like this
const result = getUser(userId);
if (!result.ok) {
  return res.status(400).json({ error: result.error });
}
const user = result.data;
```
This might feel verbose at first. But here’s the thing — you always know exactly what can go wrong. No hidden exceptions flying up the stack.
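If the verbosity bothers you, a pair of tiny constructor helpers cuts most of it. This is just a sketch of a convention; `ok`, `fail`, and `parseAge` are names I'm inventing here, not part of any library:

```javascript
// Hypothetical helpers -- a convention, not a library API
const ok = (data) => ({ ok: true, data });
const fail = (error) => ({ ok: false, error });

// Example validator using the helpers
function parseAge(input) {
  const age = Number(input);
  if (Number.isNaN(age)) return fail('Age must be a number');
  if (age < 0) return fail('Age cannot be negative');
  return ok(age);
}
```

Call sites stay the same as before: check `.ok`, then read `.data` or `.error`.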
Centralized Error Logging
We built a simple wrapper around our logger that captures:
- The error message
- Stack trace
- Request ID (so we can trace the flow)
- User context (if logged in)
- Timestamp
Before this, we’d get bug reports with zero context. “The app broke.” Thanks, helpful.
```javascript
function logError(err, context = {}) {
  logger.error({
    // spread context first so callers can't clobber the core fields
    ...context,
    message: err.message,
    stack: err.stack,
    requestId: context.requestId,
    userId: context.userId,
    timestamp: new Date().toISOString()
  });
}
```
Graceful Degradation
Here’s a real story. Our payment service went down. Instead of crashing the whole checkout, we let users continue without payment and queued it for later.
Was it perfect? No. Did it prevent a P0 incident? Yes.
```javascript
async function processOrder(order) {
  try {
    await paymentService.charge(order);
    return { status: 'paid' };
  } catch (err) {
    // Log but don't crash
    logError(err, { orderId: order.id, critical: true });
    await queue.add('payment-retry', { orderId: order.id });
    // Let the order proceed; payment will be retried from the queue
    return { status: 'pending-payment' };
  }
}
```
What I Learned
Error handling isn’t about catching everything. It’s about knowing what matters and handling it intentionally.
We went from 20+ daily alerts to maybe 2-3. Not because we fixed all bugs, but because we started handling expected errors gracefully.
The biggest shift was treating errors as expected outcomes, not exceptional ones. Network calls fail. Databases time out. Users submit bad data. Plan for it.
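One way to make that shift concrete in code is to tag expected failures so handlers can tell them apart from genuine bugs. A sketch of the idea; the class name and `isOperational` flag are our convention here, not a standard API:

```javascript
// Expected ("operational") failures get their own error type
class OperationalError extends Error {
  constructor(message, code) {
    super(message);
    this.name = 'OperationalError';
    this.code = code;
    this.isOperational = true;
  }
}

function toResponse(err) {
  if (err.isOperational) {
    // Expected failure: respond cleanly, keep the process alive
    return { status: 400, body: { error: err.message, code: err.code } };
  }
  // Unexpected failure: a bug -- rethrow so it gets noticed
  throw err;
}
```

Bad user input becomes `new OperationalError('Email is invalid', 'EVALIDATION')` and turns into a 400; a null dereference still blows up loudly, which is exactly what you want.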
If you’re starting fresh, pick one pattern from above and try it. Don’t rewrite everything at once.
