StackAdapt

Senior Database Administrator


At a Glance

Location
Canada
Work Regime
Remote
Posted
March 24, 2026

Key Requirements

Required Skills

AI, AWS, ETL, Java, Linux, Python, SQL, Snowflake

Domain Knowledge

  • Advertising
  • Education
  • Marketing

Benefits & Perks

Health Insurance

Equity awards and a comprehensive benefits package.

Requirements

Deep experience performing ETL design and development via custom coding (SQL, Python, Spark, Java, etc.) as well as using ETL tools (e.g., Coalesce, Informatica, DataStage, Talend) and other tools such as Data Quality tools, Metadata Manager, etc.
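For context, the custom-coded ETL work described above can be sketched in Python. This is a minimal, hypothetical extract-transform-load pass over CSV text — all names and the data-quality rule are illustrative, not the company's actual pipeline:

```python
import csv
import io

def extract(csv_text):
    """Extract: parse raw CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows):
    """Transform: normalize types and drop rows that fail a quality check."""
    out = []
    for row in rows:
        try:
            spend = float(row["spend"])
        except (KeyError, ValueError):
            continue  # data-quality rule: discard unparseable rows
        out.append({"campaign": row["campaign"].strip().lower(), "spend": spend})
    return out

def load(rows, target):
    """Load: append cleaned rows to the target store (a plain list here)."""
    target.extend(rows)
    return len(rows)

raw = "campaign,spend\nBrand A,100.5\nBrand B,bad\nBrand C,42\n"
warehouse = []
loaded = load(transform(extract(raw)), warehouse)
```

In a real deployment the `load` target would be a warehouse table rather than a list, but the three-stage shape is the same.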

Extensive hands-on professional experience working with Snowflake as a Database Administrator (DBA) and a clear understanding of Snowflake’s Access Control framework

Comfortable with cloud-hosted solutions, especially AWS, with experience deploying Secrets Manager, KMS, S3, EC2, Linux, Cross-Account Access, etc., in a scaled environment

Understanding of data warehousing architecture fundamentals (e.g., Kimball vs. Inmon, Medallion Architecture, 3NF data models, reference data models, dimensional models, conformed dimensions, SCDs, etc.).
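Of the fundamentals listed, slowly changing dimensions (SCDs) lend themselves to a concrete illustration. A minimal Type 2 upsert in Python, with assumed column names (`customer_id`, `city`, `version`, `current`) that are purely illustrative:

```python
def scd2_upsert(dim, key, attrs):
    """Type 2 SCD update: expire the current row for `key`,
    then append a new row with an incremented version."""
    new_version = 1
    for row in dim:
        if row["customer_id"] == key and row["current"]:
            row["current"] = False          # close out the old version
            new_version = row["version"] + 1
    dim.append({"customer_id": key, **attrs,
                "version": new_version, "current": True})

# A customer moves cities; history is preserved, not overwritten.
dim = [{"customer_id": 7, "city": "Toronto", "version": 1, "current": True}]
scd2_upsert(dim, 7, {"city": "Vancouver"})
```

A Type 1 dimension would simply overwrite `city` in place; Type 2 keeps both rows so historical facts still join to the attributes that were true at the time.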

Experience in orchestrating data operations via tools such as Apache Airflow, Cron, Astronomer etc.
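Orchestrators like Apache Airflow execute tasks in dependency order. The core idea can be sketched without Airflow itself, using the standard library's topological sorter over a small made-up task graph:

```python
from graphlib import TopologicalSorter

# Hypothetical daily pipeline: ingest -> transform -> {load, audit}.
# Each key maps a task to the set of tasks it depends on.
deps = {
    "transform": {"ingest"},
    "load": {"transform"},
    "audit": {"transform"},
}

def run(deps):
    """Execute tasks in an order that respects every dependency."""
    order = list(TopologicalSorter(deps).static_order())
    for task in order:
        pass  # a real orchestrator would invoke the task's callable here
    return order

order = run(deps)
```

Airflow adds scheduling, retries, and observability on top, but the DAG-ordering guarantee shown here is what all of these tools provide.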

Responsibilities

This role will be responsible for all operational database administration duties, data engineering, and data operations within the Data Lake and Enterprise Data Warehouse (Snowflake) ecosystem, including, but not limited to, the following:

Take the lead on our daily database administration, ensuring our data environment stays healthy and reliable.

Partner closely with our Staff EDW Architect to bring new visions to life, designing high-quality data artifacts that follow industry best practices.

Turn business needs into smart, reusable ETL solutions that grow alongside our company.

Design end-to-end pipelines, using your expertise with data models and architecture diagrams to build automated ingestion and transformation pipelines that keep our data moving on schedule.

Analyze and produce artifacts such as Source-to-Target Mapping documents.
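A Source-to-Target Mapping document can itself be expressed as data. A hedged sketch, with invented source and target field names, of applying such a mapping to one record:

```python
# Hypothetical STM: source column -> (target column, transform function)
mapping = {
    "cust_nm": ("customer_name", str.title),
    "rev_usd": ("revenue", float),
}

def apply_mapping(source_row, mapping):
    """Produce a target row from a source row per the mapping document."""
    return {tgt: fn(source_row[src]) for src, (tgt, fn) in mapping.items()}

row = apply_mapping({"cust_nm": "acme corp", "rev_usd": "1200.50"}, mapping)
```

Keeping the mapping declarative like this means the STM document and the transformation code can be generated from a single source of truth.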