Nue is hiring: Senior Database Engineer
Type: Full Time, Remote
Senior Database Engineer – DevSecOps
Location: Remote / Hybrid — Canada
Level: Senior (L6–L7)
Reports to: Senior Director of Engineering, Platform & Infrastructure
Why you'll love this Nue.io opportunity!
We're looking for a highly motivated Senior Database Engineer to take ownership of Nue's database infrastructure, scalability, performance, and reliability. You'll be the dedicated expert responsible for ensuring our databases are fast, resilient, fault-tolerant, and ready to scale 10x.
This is not a traditional DBA role. We run managed databases on AWS (RDS, NeonDB) and are actively migrating toward a sharded, highly scalable, fault-tolerant architecture. You'll work at the intersection of infrastructure engineering and data platform design — automating operations, driving performance, and partnering with product engineering teams to ensure our data layer supports rapid growth.
If you're passionate about cloud-native database engineering, performance at scale, and building systems that just work — this is the role for you.
Responsibilities
- Own and operate Nue's database fleets across AWS RDS & NeonDB (PostgreSQL), with a focus on reliability, availability, and cost efficiency.
- Lead the migration from our current RDS instances to fully managed PostgreSQL solutions (Aurora/NeonDB), defining the migration strategy, testing plan, and rollout approach.
- Design and drive the sharding strategy for our multi-tenant SaaS platform, defining shard key selection, tenant placement, cross-shard query patterns, and rebalancing procedures to support 10x growth.
- Drive database performance — proactively identify and resolve slow queries, optimize indexing strategies, tune connection pooling, and ensure transactional integrity across our multi-tenant SaaS platform.
- Design and implement database monitoring and alerting at both the infrastructure and application levels — covering slow queries, replication lag, connection saturation, CPU/memory utilization, and storage growth.
- Architect replication strategies including read/write separation, streaming replication, and failover strategies to optimize read-heavy workloads, reduce primary database load, and ensure high availability.
- Plan for scale — conduct load testing and capacity planning to answer critical questions like "How many more customers can our database support?" and "How do we prepare for 10x growth?"
- Build and maintain backup, restore, and disaster recovery procedures — ensuring RPO/RTO targets are met and regularly tested.
- Ensure zero-downtime database migrations — design backward and forward compatible schema changes that support continuous deployment.
- Collaborate with product engineering teams (Billing, Lifecycle, Data & Integrations, DevEx) to review schema designs, optimize queries, and establish database best practices.
- Leverage AI-powered tooling throughout all aspects of the role — from migration planning and query optimization to monitoring, runbook automation, and operational efficiency.
- Strengthen security posture — manage encryption at rest and in transit, access controls, secrets rotation, and audit logging for database systems in alignment with SOC 2 Type II compliance.
- Participate in on-call rotations and own the resolution of database-related production incidents.
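For a concrete flavor of the zero-downtime migration responsibility above: backward- and forward-compatible schema changes typically follow the expand/contract pattern — add the new structure, backfill, dual-write while old and new code coexist, then drop the old structure once nothing reads it. A minimal sketch using Python's built-in sqlite3 (the `customers` table and column names are hypothetical, for illustration only):

```python
import sqlite3

# Expand/contract sketch: migrate customers.name -> customers.full_name
# without ever breaking deployed code that still reads the old column.
# All table and column names here are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO customers (name) VALUES ('Ada'), ('Grace')")

# Step 1 (expand): add the new column; old readers/writers are unaffected.
conn.execute("ALTER TABLE customers ADD COLUMN full_name TEXT")

# Step 2 (backfill): copy existing data into the new column.
conn.execute("UPDATE customers SET full_name = name WHERE full_name IS NULL")

# Step 3 (dual-write): application code writes both columns until every
# deployed version reads full_name (simulated here with one insert).
conn.execute("INSERT INTO customers (name, full_name) VALUES ('Edsger', 'Edsger')")

# Step 4 (contract): only after all readers use full_name does a later
# release drop the old column.
rows = conn.execute("SELECT full_name FROM customers ORDER BY id").fetchall()
print([r[0] for r in rows])  # ['Ada', 'Grace', 'Edsger']
```

Each step is independently deployable, which is what makes the change compatible with continuous deployment: no release ever requires the schema and the code to flip at the same instant.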
What We're Looking For
- 7+ years of experience in backend or infrastructure engineering, with at least 3 years focused on database engineering or database reliability in production SaaS environments.
- Deep hands-on experience with PostgreSQL — including query optimization, EXPLAIN analysis, indexing strategies, partitioning, vacuuming, and replication.
- Strong experience operating fully managed databases on AWS — RDS, Aurora, and/or serverless PostgreSQL (NeonDB, Aurora Serverless v2, Supabase, etc.).
- Experience designing and operating sharded database architectures and replication topologies at scale.
- Proven ability to plan and execute large-scale database migrations with minimal downtime.
- Experience with database monitoring and observability tools and frameworks (CloudWatch, pganalyze, HyperDX, Datadog, or similar).
- Solid understanding of Infrastructure as Code (Terraform/OpenTofu, CloudFormation) for managing database infrastructure.
- Demonstrated ability to leverage AI-assisted development tools (e.g., Copilot, Cursor, Claude) to accelerate engineering workflows and operational tasks.
- Experience with connection pooling (PgBouncer, RDS Proxy) and read replica architectures.
- Familiarity with CI/CD pipelines and schema migration tools (Flyway, Liquibase, or similar).
- Strong scripting skills (Bash, Python) for automation and operational tooling.
- Comfortable working in a fast-paced startup environment with a small, high-impact team.
- Excellent communication and collaboration skills — able to partner with engineering teams, explain trade-offs, and drive consensus.
- Bachelor's degree in Computer Science, Engineering, or equivalent practical experience.
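As an illustration of the sharded-architecture experience above: tenant-to-shard routing in a multi-tenant system often reduces to a stable hash of the tenant ID. A toy sketch (the shard count and function name are invented for this example, not part of Nue's stack):

```python
import hashlib

NUM_SHARDS = 8  # hypothetical shard count

def shard_for_tenant(tenant_id: str) -> int:
    """Map a tenant to a shard with a stable cryptographic hash, so
    placement does not vary with Python's per-process hash randomization."""
    digest = hashlib.sha256(tenant_id.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

# The same tenant always lands on the same shard, across processes and hosts:
assert shard_for_tenant("acme-corp") == shard_for_tenant("acme-corp")
print(shard_for_tenant("acme-corp"))
```

Modulo hashing like this makes rebalancing expensive (changing `NUM_SHARDS` remaps most tenants), which is exactly why production designs reach for directory tables or consistent hashing — the kind of trade-off this role would own.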
Preferred Skills
- Experience with MySQL/MariaDB in addition to PostgreSQL.
- Experience with NoSQL databases (e.g., DynamoDB) and in-memory data stores (Redis, Memcached).
- Familiarity with event streaming platforms (Kafka) and how they interact with database and caching systems.
- Knowledge of multi-tenant database architectures — schema-per-tenant, row-level isolation, connection management at scale.
- Experience with SOC 2, GDPR, or other compliance frameworks as they relate to data management.
- Domain knowledge in revenue operations, billing, or financial systems.
- Enthusiasm for AI tooling and prompt engineering to enhance database operations and development workflows.
- Experience with GitLab CI/CD pipelines.
- Deep, hands-on experience with AI coding agents and developer tooling (e.g., Copilot, Cursor, Claude, Codex), with established intuition for where they add productivity and leverage.
What We Offer
- Competitive salary and benefits package.
- Equity participation in a high-growth company with significant upside potential.
- The opportunity to be the first dedicated database engineer — high ownership, high impact.
- A collaborative and supportive team environment that encourages personal and professional growth.
- Flexible remote/hybrid work arrangement within Canada.
- The chance to work on challenging infrastructure problems at a company scaling rapidly in the revenue lifecycle space.
What Success Looks Like
- Migration to fully managed, highly scalable PostgreSQL database infrastructure is complete: planned, tested, and rolled out with zero data loss and minimal downtime, replacing our current RDS instances.
- A sharding strategy is in production — Shard key selection, tenant placement, and rebalancing procedures are defined and operational, positioning the data layer to support 10x customer growth.
- Read/write architecture is optimized — Streaming replication, read replicas, and failover strategies are in place, reducing primary database load and ensuring high availability targets are consistently met.
- Database performance is proactively managed, not reactive — Monitoring, alerting, and observability (slow queries, replication lag, connection saturation, resource utilization) are fully instrumented, with issues caught and resolved before they impact customers.
- AI is embedded across database operations — AI-powered tooling is actively used for migration planning, query optimization, monitoring, runbook automation, and day-to-day operational workflows — not as an afterthought, but as a force multiplier.
- Disaster recovery is tested and trusted — Backup, restore, and DR procedures meet defined RPO/RTO targets and are validated through regular drills.
- Engineering teams treat you as a partner — Product engineering squads (Billing, Lifecycle, Data & Integrations, DevEx) proactively engage you on schema design, query optimization, and database best practices.
Apply for this job
Please let Nue know you discovered this position on TRYremote so we can keep providing you with quality remote tech jobs.
