Sonatus

Staff Cloud Backend Engineer

At a Glance

Location
United States
Experience
9+ years
Posted
February 18, 2026

Key Requirements

Required Skills

Data Engineering, Docker, Go, Java, Kafka, Kubernetes, Python, SQL, Scala

Domain Knowledge

  • Engineering

Requirements

9+ years of experience in backend engineering, data engineering, or a similar role, with a strong focus on building and optimizing data pipelines.

Proven experience with big data technologies (e.g., Apache Kafka, Apache Pulsar) and cloud-native data processing frameworks.

Expertise in designing and implementing large-scale, distributed data pipelines.

Strong knowledge of backend development and integration, with proficiency in programming languages such as Go (preferred), Python, Java, or Scala.

Experience with SQL and NoSQL databases.

Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes) for deploying and managing data services.

Compensation & Benefits

Sonatus is a tight-knit team aligned around a unified vision. You can expect a strong engineering-oriented culture that focuses on building the best products and solutions for our customers. We embrace equality and diversity in all regards because respect is ingrained in our every fiber. Other benefits Sonatus offers include:

Stock option plan

Health care plan (Medical, Dental & Vision)

Retirement plan (401(k), IRA)

Life Insurance (Basic, Voluntary & AD&D)

Unlimited paid time off (Vacation, Sick & Public Holidays)

Responsibilities

We are looking for a highly skilled and experienced Staff Cloud Backend Engineer to lead the design, development, and optimization of our Cloud Backend.

In this role, you will focus on creating robust, scalable, and efficient backend systems that process and manage large volumes of data.

You will work closely with cross-functional teams to ensure that data flows seamlessly across the organization, enabling high-quality data-driven decisions.

This role is critical in shaping our data infrastructure and ensuring it can scale with the demands of our growing data ecosystem.

Design, develop, and maintain highly scalable and efficient data pipelines that process and transform large datasets from various sources.

Build and optimize data ingestion frameworks to handle real-time and batch data processing with minimal latency and high reliability.