
Nroho Backend – Laravel Platform for Carpooling & Routing

This repo describes the backend I built for the Nroho carpooling app – a Laravel/MongoDB system for matching riders and drivers, orchestrating trips in real time, and handling all the operational "glue" around that.

The actual source lives in a private repository because it contains:

  • real‑world integration details (KYC provider, messaging, storage),
  • fairly opinionated matching, fraud, and reward logic that I don't want to open‑source verbatim.

That said, I'm very happy to walk through the real code live in an interview, whiteboard specific flows, or discuss trade‑offs in detail.


What this backend does (high‑level)

At a high level, the Nroho backend covers:

  • Trip & ride lifecycle

    • Drivers publish rides (origin/destination, time, capacity).
    • Riders create "interests" for routes/time windows.
    • The system coordinates joins/leaves, enforces capacity, and manages the state of each passenger on a trip.
  • Demand/supply matching

    • When a ride or an interest is created, we match one side to the other.
    • We track invitations, accept/decline, and "loose matches" over time, so users see relevant options without stale results.
  • User state, KYC & risk

    • Users can upload ID/driver license/face scans.
    • A KYC pipeline runs on top of an external microservice and feeds back structured decisions into the user profile.
    • Certain behaviors and inconsistencies feed into a lightweight fraud/risk score.
  • Wallet, rewards & incentives

    • Each user has a points wallet.
    • Points are awarded on ride completion, with multipliers depending on distance and quality of the trip.
    • Fraud analysis can reduce or deny rewards when something looks wrong.
  • Notifications & engagement

    • Push notifications via FCM, plus email/SMS hooks.
    • "Smart" ride suggestions based on a user's historical routes and searches, not just fire‑and‑forget broadcasts.
  • Analytics & operations

    • Demand/supply analysis across routes and time windows.
    • Operational dashboards for queues, jobs, and KYC/ride data quality.

Tech stack in short

  • Framework: Laravel 11 (PHP 8.x)
  • Database: MongoDB for domain data; Redis for queues & caching
  • Queues: Redis + Laravel Horizon, with separate critical, high, default, low queues
  • Scheduling: Laravel console commands for reminders, cleanups, analytics, marketing runs
  • Clients: Mobile (drivers/riders), plus web admin / operations

Design highlight 1 – Document‑centric aggregates for rides & interests

A big design choice was to lean into MongoDB documents as aggregates for the core domain:

  • A Ride document contains:

    • global ride state (created, started, completed, canceled),
    • an embedded customers[] array where each element tracks a passenger's status over time (pending, accepted, driver_arrived, picked, dropped_off, etc.),
    • an update_trail array that stores "what changed, when, and why".
  • A RideInterest document represents "I want a ride":

    • it tracks the interest's lifecycle (created, matched, accepted_invite, booked_a_match, expired),
    • embeds matched_rides[] and invitations[] with their own mini state machines and audit trails.

Most state changes are expressed as single Mongo updates with array filters (e.g., update one customer's status, log a trail entry, adjust counters). That gives:

  • strong invariants around capacity and passenger state,
  • a very clear, append‑only story of how a ride evolved,
  • and queries that line up naturally with how you think about a trip (one document per trip, not 10 tables).
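The real models are Laravel/MongoDB, but the "single update with array filters" pattern is language-neutral. As an illustrative sketch (in Python; the field names `customers`, `update_trail`, and the status values are assumptions, not the real schema), one atomic passenger-status update could be assembled like this:

```python
from datetime import datetime, timezone

def build_passenger_status_update(ride_id, customer_id, new_status, reason):
    """Build one atomic MongoDB update that flips a single embedded
    passenger's status and appends an audit-trail entry.

    Field names (customers, update_trail, ...) are illustrative."""
    now = datetime.now(timezone.utc)
    filter_doc = {
        "_id": ride_id,
        "status": "started",              # guard: ride must be live
        "customers.id": customer_id,      # guard: passenger must exist
    }
    update_doc = {
        "$set": {
            "customers.$[c].status": new_status,
            "customers.$[c].updated_at": now,
        },
        "$push": {"update_trail": {
            "at": now,
            "actor": customer_id,
            "change": f"customer.status -> {new_status}",
            "why": reason,
        }},
    }
    array_filters = [{"c.id": customer_id}]   # target exactly one element
    return filter_doc, update_doc, array_filters

# The three values would then feed straight into a driver call, e.g.:
#   rides.update_one(filter_doc, update_doc, array_filters=array_filters)
```

Because the guard conditions live in the filter, a stale or invalid transition simply matches zero documents instead of corrupting state.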

Good interview topic: modeling live trip state and passenger timelines in a document store, and why that can be easier to reason about than purely relational tables in this domain.


Design highlight 2 – Time‑driven ride lifecycle & reminders

On top of those aggregates, there is a scheduler‑driven lifecycle:

  • Each ride stores timestamps like:

    • when pre‑reminders were sent ("starting soon"),
    • when it became "can start",
    • when it's considered stale or overdue,
    • when it was auto‑completed.
  • A set of indexed fields on the Ride model are tuned for:

    • scanning "rides that should get a reminder right now",
    • finding rides that are stale and should auto‑cancel,
    • auto‑completing rides that were started but never manually completed.
  • A recurring console command (e.g., rides:process-reminders) runs every minute and:

    • sends pre‑start / can‑start / overdue notifications,
    • nudges both driver and riders with ETA/follow‑up reminders during the ride,
    • auto‑completes or auto‑cancels rides when they fall outside acceptable windows.

All of that runs off background jobs and scheduled commands, so request handlers stay thin and responsive.
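The per-minute scan boils down to a handful of indexed queries over ride timestamps. A minimal sketch (Python; the field names and the 30-minute/2-hour windows are assumptions for illustration, not the real thresholds):

```python
from datetime import datetime, timedelta, timezone

# Windows are illustrative assumptions, not the production values.
PRE_REMINDER_WINDOW = timedelta(minutes=30)
STALE_AFTER = timedelta(hours=2)

def reminder_scan_filters(now=None):
    """Mongo filters a per-minute rides:process-reminders command could
    run. Field names (starts_at, pre_reminder_sent_at, ...) are assumed."""
    now = now or datetime.now(timezone.utc)
    return {
        # "starting soon": due within the window, reminder not yet sent
        "pre_reminder": {
            "status": "created",
            "starts_at": {"$lte": now + PRE_REMINDER_WINDOW},
            "pre_reminder_sent_at": None,
        },
        # never started long after the planned time -> auto-cancel
        "auto_cancel": {
            "status": "created",
            "starts_at": {"$lte": now - STALE_AFTER},
        },
        # started but never manually completed -> auto-complete
        "auto_complete": {
            "status": "started",
            "started_at": {"$lte": now - STALE_AFTER},
        },
    }
```

Each filter lines up with a compound index on (status, timestamp), which is what keeps a once-a-minute full scan cheap.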

Good interview topic: designing a time‑based state machine for trips that doesn't require a separate workflow engine, just careful data modeling + scheduling.


Design highlight 3 – Routing‑aware fraud & reward pipeline

When a ride completes, a pipeline runs to score the trip and award points fairly:

  • The system calls a routing service (OSRM‑style) to:

    • compute actual road distance and duration between key points,
    • cross‑check intended route vs. what actually happened.
  • A fraud/risk component:

    • looks at how much of the intended route was covered,
    • checks that pickups/dropoffs make sense within tolerances,
    • outputs a score and a multiplier to apply on rewards (from "full reward" down to "none").
  • The wallet service:

    • uses that multiplier + distance to award driver and rider points,
    • records transactions and aggregates (e.g., total km driven, completed rides),
    • can flag users for manual review when repeated anomalies show up.
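The multiplier step can be sketched as a pure function (Python; the 0.9/0.6 thresholds and the per-km rate are assumptions for illustration, not the production tuning):

```python
def reward_multiplier(coverage_ratio, pickup_ok, dropoff_ok):
    """Map route coverage and pickup/dropoff sanity checks onto a
    reward multiplier. Thresholds here are illustrative assumptions."""
    if not (pickup_ok and dropoff_ok):
        return 0.0                       # anomaly: deny the reward
    if coverage_ratio >= 0.9:
        return 1.0                       # full reward
    if coverage_ratio >= 0.6:
        return 0.5                       # partial reward
    return 0.0                           # too little of the route covered

def award_points(distance_km, base_rate_per_km, multiplier):
    """Points credited to the wallet for one completed ride."""
    return round(distance_km * base_rate_per_km * multiplier)
```

Keeping this logic side-effect free makes it trivial to unit-test against recorded trips and to re-tune thresholds without touching the wallet code.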

There is explicit attention to resilience:

  • timeouts and graceful fallbacks when the routing service is down,
  • no critical path is blocked because an external API is slow.
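The fallback pattern is simple: try the routing service, and on any failure substitute a padded great-circle estimate so reward processing still completes. A sketch (Python; `routing_call` stands in for the real OSRM client, which is an assumption here):

```python
import math

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) pairs, in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2)
         * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def road_distance_km(origin, dest, routing_call, pad=1.3):
    """Ask the routing service for road distance; on any failure fall
    back to a padded straight-line estimate so the reward pipeline is
    never blocked. The real client would also enforce a request
    timeout; the 1.3 road-padding factor is an assumption."""
    try:
        return routing_call(origin, dest)
    except Exception:
        return haversine_km(origin, dest) * pad   # graceful degradation
```

The padded fallback slightly over-estimates distance, which is the safer direction when the alternative is blocking ride completion on a flaky dependency.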

Good interview topic: plugging routing/telemetry into a reward system, and how to keep that robust while still being meaningful from a fraud‑prevention standpoint.


Design highlight 4 – Smart notifications built on learned behavior

Instead of just "notify everyone in the city", Nroho's backend learns a bit about where and when each user tends to travel:

  • It maintains per‑user route profiles, derived from:

    • ride interests,
    • searches (e.g., "from X to Y around 7am"),
    • self‑declared "frequent routes".
  • For each new ride, a multi‑tier matching engine:

    • first considers explicit, active intents (fresh ride interests),
    • then looks at historical behavior and route patterns,
    • and scores potential notifications by relevance.
  • A central decider applies guardrails:

    • activity tiers (active vs semi‑active vs cold users),
    • per‑user and per‑route cooldowns,
    • hourly/daily/weekly caps to avoid spam.
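The decider's guardrails can be sketched as one gate function (Python; the tier names, caps, cooldown, and the `user` dict fields are all assumptions for illustration):

```python
from datetime import datetime, timedelta, timezone

# Caps and cooldowns are illustrative assumptions, not production values.
HOURLY_CAP = 2
DAILY_CAP = 5
ROUTE_COOLDOWN = timedelta(hours=6)

def should_notify(user, route_key, now=None):
    """Run one candidate notification through tier, cooldown, and cap
    guardrails. `user` is a plain dict here; field names are assumed."""
    now = now or datetime.now(timezone.utc)
    if user["tier"] == "cold":
        return False                          # don't ping cold users
    last = user["route_last_notified"].get(route_key)
    if last is not None and now - last < ROUTE_COOLDOWN:
        return False                          # per-route cooldown
    sent = user["sent_at"]
    if sum(1 for t in sent if now - t < timedelta(hours=1)) >= HOURLY_CAP:
        return False                          # hourly cap
    if sum(1 for t in sent if now - t < timedelta(days=1)) >= DAILY_CAP:
        return False                          # daily cap
    return True
```

Because every rejection reason is an explicit branch, each decision can be logged with its reason, which is what makes later tuning of caps and cooldowns data-driven.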

Most of this runs asynchronously in the background:

  • ride creation stays fast,
  • matching + notification targeting run on Redis queues under Horizon,
  • decisions and outcomes are logged for later tuning.

Good interview topic: moving from "blast everyone" to a more intelligent, routing‑aware notification system, and handling the scalability and abuse‑prevention angles as the user base grows.


Design highlight 5 – Async‑first integrations and operations

Anything that can be slow or flaky is offloaded to queues:

  • KYC document / face analysis calls an external microservice.
  • Push notifications (FCM), SMS, and some email flows are queued.
  • Analytics, route profile rebuilds, and marketing broadcasts run as jobs.

Operationally:

  • Queues are split by priority (critical, high, default, low) and supervised by Horizon.
  • There are scheduled commands for:
    • reminders and expirations,
    • cleanup,
    • periodic analytics and marketing pushes.
  • Error tracking and request tracing are wired in, so it's easy to follow what happens when something goes wrong in production‑like scenarios.

Good interview topic: queue design, back‑pressure, how to make third‑party integrations "fail soft", and what to monitor in a system that dispatches people in the real world.


Engineering quality – testing, logging, and operations

  • Automated tests & coverage

    • Breadth: dozens of PHPUnit tests spanning auth, ride flows (CreateRideTest, StartRideTest, CompleteRideTest, ProcessRideRemindersTest), booking, wallet (WalletServiceTest, WalletCalculationServiceTest), fraud (FraudDetectionServiceTest), smart notifications (SmartNotificationDeciderTest, ExpandedRideMatchingServiceTest), chat, admin filters, KYC, and more.
    • CI pipeline (GitHub Actions):
      • Spins up a real MongoDB 7 replica set and Redis in CI.
      • Installs PHP 8.4 + extensions, runs artisan test with coverage (Clover + Cobertura), publishes JUnit results and a coverage summary, and uploads to Codecov.
      • Builds frontend assets with pnpm/Vite so feature tests run against a real build, not stubs.
  • Structured, privacy‑safe logging

    • Central logging standard: all logs are structured JSON with consistent fields and correlation IDs.
    • A LogEvent constants class enforces event‑style names (ride.created, ride.interest.matched, auth.login.failed, kyc.id.approved, …) instead of free‑text messages.
    • PII‑safe by design: a RedactionProcessor runs on every log record and redacts passwords, tokens, credit‑card‑like patterns, KYC identifiers, face scans, etc., plus pattern‑matching for JWTs and Base64 secrets. Policy is to never log full request/response payloads; only whitelisted fields and IDs.
    • End‑to‑end traceability: request context and CorrelationIdMiddleware attach request_id, user_id, path, controller, and correlation IDs to both HTTP and queued jobs, so a single user action can be traced through controllers, services, and workers. Errors are automatically enriched in Sentry (route, method, user, tags).
  • Monitoring & observability

    • Queues & jobs: Redis + Laravel Horizon with a documented queue architecture: critical / high / default / low queues, wait‑time thresholds, auto‑scaling supervisors, and Mongo‑backed failed job storage.
    • Monitoring stack: Prometheus + Grafana + Loki with exporters for node, cAdvisor, Nginx, Redis, MongoDB, MinIO; dashboards for host/container health, Mongo index/ops, Nginx request rates, and Loki log queries (e.g. by request_id or tags/level). Alerts for high CPU/memory, error rate, slow response times.
    • Error tracking: Sentry wired with sampling and environment‑specific config; alert rules (e.g. auth failure spikes, KYC stalls) are documented for ops.
  • DevOps, environments & CI/CD

    • Testing environment: a dedicated testing stack (app_testing / queue_testing) with sanitized Mongo dumps from production, index rebuilds, and cache isolation; a separate domain routed to the testing containers through the shared Nginx, plus /health endpoints and resource budgeting.
    • Production deployment: GitHub Actions builds Docker images (app, nginx, mongo‑backup), pushes to Docker Hub, syncs compose and Docker config to the server, runs docker compose for app, Horizon, scheduler, nginx, performs health checks (nginx config test + app /health), and stores current-sha / last-successful-sha for rollback. A separate rollback job can revert to the last successful deployment if the deploy step fails.
    • Security: a non‑blocking Trivy scan job runs on the repo for OS/library CVEs.
    • Operational runbooks: documented procedures for troubleshooting queues, monitoring Horizon wait times, and inspecting logs/metrics when something degrades.
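The RedactionProcessor idea can be sketched compactly (in Python for brevity; the real one is a Monolog processor in PHP, and these key names and regexes are simplified assumptions):

```python
import re

# Patterns and key names are illustrative; the real processor is richer.
SENSITIVE_KEYS = {"password", "token", "id_number", "face_scan"}
JWT_RE = re.compile(r"\beyJ[\w-]+\.[\w-]+\.[\w-]+\b")
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,19}\b")

def redact(record):
    """Return a copy of a structured log record with sensitive keys
    masked outright and token-like values scrubbed from strings."""
    out = {}
    for key, value in record.items():
        if key in SENSITIVE_KEYS:
            out[key] = "[REDACTED]"
        elif isinstance(value, str):
            value = JWT_RE.sub("[REDACTED_JWT]", value)
            out[key] = CARD_RE.sub("[REDACTED_PAN]", value)
        else:
            out[key] = value
    return out
```

Running this on every record (rather than trusting call sites to sanitize) is what makes the "never log full payloads" policy enforceable rather than aspirational.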

How I'm happy to use this in an interview

If you're evaluating me for a backend / Laravel role:

  • I can walk through specific flows live (trip lifecycle, matching, KYC, wallet, notifications).
  • I'm happy to whiteboard data models and sequence diagrams for the parts that are most relevant to your product (e.g., routing and dispatch for snow removal, fleet health, driver engagement).
  • Under an appropriate agreement, I can give guided read‑only tours of selected parts of the private repo so we can talk concretely about the code and its trade‑offs.

If you'd like me to focus preparation on a particular area (matching, routing, queues, KYC, observability, etc.), let me know and I'll bring examples from the Nroho backend.
