Senior Data Engineer

We are seeking a skilled Senior Data Engineer with at least 4 years of experience to join our data team. The successful candidate will be responsible for designing, building, and maintaining scalable data pipelines, optimizing data architecture, and ensuring the integrity and quality of data across various systems.

Key Responsibilities:

  • Design, build, and maintain scalable ETL (Extract, Transform, Load) processes to ingest and transform data from various sources.
  • Optimize and troubleshoot existing data pipelines to improve performance and reliability.
  • Collaborate with data scientists, analysts, and stakeholders to understand data requirements and implement efficient data solutions.
  • Develop and maintain data models, schemas, and databases that support data storage, processing, and retrieval.
  • Ensure data architecture aligns with best practices for scalability, security, and performance.
  • Work closely with DevOps teams to deploy and manage data infrastructure in cloud environments (e.g., AWS, Azure, Google Cloud).
  • Implement data quality checks and validation processes to ensure data accuracy and consistency.
  • Monitor data pipelines for errors and inconsistencies, and proactively address issues.
  • Establish and enforce data governance policies and practices to ensure data integrity and compliance with industry standards.
  • Mentor and guide junior data engineers in best practices for data engineering.
  • Collaborate with cross-functional teams, including product managers, to deliver high-quality data solutions.
  • Participate in code reviews, design discussions, and technical decision-making processes.

Qualifications:

  • Minimum of 4 years of experience in data engineering or a related field.
  • Proven experience with designing and implementing scalable data pipelines and ETL processes.
  • Experience with data warehousing solutions (e.g., Redshift, BigQuery, Snowflake).

Skills:

  • Proficiency in programming languages such as Python, Java, or Scala.
  • Strong knowledge of SQL and experience with relational databases.
  • Experience with big data technologies (e.g., Hadoop, Spark) and cloud platforms (e.g., AWS, Azure, Google Cloud).
  • Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes) is a plus.
  • Excellent problem-solving and analytical skills.
  • Strong communication skills, with the ability to explain complex technical concepts to non-technical stakeholders.
  • Ability to work independently and as part of a team in a fast-paced environment.