Performance Tester
Remote
Full Time
Experienced

Overview
We’re seeking a hands-on Performance Test Engineer to design and execute the end-to-end performance strategy for an ad-serving platform (Akka-based Java microservices) targeting sub-50ms response times at 4–5 million concurrent users. You’ll build the test harness, model real-world traffic, execute large-scale distributed load tests, and translate findings into actionable tuning guidance.
Responsibilities:
- Own the performance test strategy & plan (load, stress, spike, soak, scalability, failover).
- Model traffic for ad-supported streaming (burstiness, fan-out, cache hit/miss, cold-start, geo distribution, p95/p99/p999).
- Build automated load frameworks & scripts (preferably Locust/Python; JMeter where appropriate), parameterizing test data, correlations, and think-time (see the sketch after this list).
- Orchestrate distributed load generation (cloud workers, containerized runners) to simulate 4–5M concurrent users at scale.
- Integrate with observability/APM (metrics, logs, traces) to correlate system bottlenecks across app, JVM/GC, Akka dispatchers, network, caches, and databases.
- Produce capacity models and SLA/SLO dashboards; run performance gates in CI/CD.
- Partner with DevOps & developers to recommend tuning (thread pools, connection pools, GC, autoscaling, cache strategies, DB indexes/queries).
- Document test design, scenarios, results, and clear remediation plans.
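
For illustration only, a minimal sketch of the kind of Locust script this role would build and extend; the endpoint path, slot names, and pacing values below are hypothetical, not part of the actual platform:

```python
from locust import HttpUser, task, between
import random

class AdRequestUser(HttpUser):
    # Think-time between requests (hypothetical pacing values)
    wait_time = between(0.05, 0.2)

    @task
    def request_ad(self):
        # Parameterized request data; slot names and endpoint are illustrative
        slot = random.choice(["preroll", "midroll", "postroll"])
        self.client.get("/ads", params={"slot": slot}, name="/ads?slot=[slot]")
```

In practice this would be run distributed (one master, many workers) and fed realistic synthetic data rather than the hard-coded values shown here.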
Skills & Tools:
- Load tools: Locust (Python), JMeter; nice to have: k6, Gatling.
- Scripting & automation: Python (core), Bash; infra spin-up via Terraform/Docker/Kubernetes for load farms.
- Metrics/Tracing: CloudWatch, OpenTelemetry, Prometheus/Grafana; log analysis pipelines.
- Familiarity with Java service behaviors (Maven/Gradle pipelines, JVM/GC basics); Akka concepts are a plus.
Qualifications:
- 3–5+ years in performance engineering for large-scale, low-latency distributed systems; streaming/ad-tech exposure is a plus.
- Demonstrated success hitting strict SLAs (p95/p99 latency) under millions of users/RPS.
- Strong Python and test-automation skills; ability to build maintainable, reusable test frameworks.
- Experience designing realistic workload models, synthetic data generation, and distributed load execution in cloud.
- Analytical mindset; communicates crisply with stakeholders and converts data into prioritized recommendations.
Details:
- Location: Remote (preference for candidates based in India).
- Schedule: Must join US morning calls (Eastern Time) as needed.
- Start: 1–3 weeks from offer.
- Term: Through end of January (extension likely).