
Data Engineer

About Fynite

Fynite enables business process automation through advanced AI/ML. Its cloud-native platform connects to 500+ data sources and integrates seamlessly with your ERP system, providing instant end-to-end visibility, forecasting, and expert dialogues through conversational AI.

Our advanced loss analysis tool uses in-memory databases and distributed computing to handle large datasets and perform complex calculations in real time, enabling its AI-powered brain to gain insights faster.

Our patent-pending technology accelerates the machine learning design process, enabling rapid implementation of use cases such as Dynamic Pricing and Risk Management, detecting issues before they occur and recommending the best course of action for efficient resolution.
 

Role Overview

We are looking for a Data Engineer to build and maintain scalable data pipelines that power our AI-agent-based workflow automation. You will be responsible for the ingestion, transformation, and storage of data from a variety of internal and external sources, including APIs, databases, and file systems.

This is a critical role on our data infrastructure team, with flexible hours and a focus on impact over effort.

 

Responsibilities

  • Design, build, and manage robust ETL/ELT pipelines using AWS services (Glue, S3, Redshift); see the sketch after this list.
  • Integrate data from multiple sources including APIs, MySQL, PostgreSQL, and cloud storage.
  • Work closely with data scientists and product engineers to ensure reliable data flows.
  • Optimize data models, warehouse structures, and query performance.
  • Ensure data quality, consistency, and accuracy across environments.
  • Automate data validation, transformation, and monitoring workflows.
  • Collaborate on deploying pipelines using GitHub Actions, Docker, or Airflow.
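
To give a flavor of the pipeline work above, here is a minimal ETL sketch in Python. The bucket names, keys, and transformation steps are hypothetical placeholders, and credentials are assumed to come from the standard AWS environment or an IAM role; a production version would typically run as a Glue job or an Airflow task with proper error handling.

    # Minimal S3 -> transform -> S3 staging sketch (hypothetical names).
    import io

    import boto3
    import pandas as pd

    s3 = boto3.client("s3")

    def extract(bucket: str, key: str) -> pd.DataFrame:
        """Pull a raw CSV object from S3 into a DataFrame."""
        obj = s3.get_object(Bucket=bucket, Key=key)
        return pd.read_csv(io.BytesIO(obj["Body"].read()))

    def transform(df: pd.DataFrame) -> pd.DataFrame:
        """Basic cleanup: drop duplicate rows and normalize column names."""
        df = df.drop_duplicates()
        df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
        return df

    def load(df: pd.DataFrame, bucket: str, key: str) -> None:
        """Write the cleaned data to a staging prefix as CSV."""
        buf = io.StringIO()
        df.to_csv(buf, index=False)
        s3.put_object(Bucket=bucket, Key=key, Body=buf.getvalue().encode("utf-8"))

    if __name__ == "__main__":
        raw = extract("example-raw-bucket", "orders/2024-01-01.csv")
        load(transform(raw), "example-staging-bucket", "orders/clean/2024-01-01.csv")

From the staging prefix, the data would typically be loaded into Redshift with a COPY command, which is far faster against columnar storage than row-by-row inserts.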
     

Requirements

  • 2 years of hands-on experience in data engineering or backend data processing.
  • Education: Bachelor's degree (Master's in computer science or computer engineering preferred).
  • Strong command of Python and SQL, and experience working with AWS (especially Redshift, Glue, and S3).
  • Experience with relational databases such as PostgreSQL and MySQL, and with document stores like Amazon DocumentDB.
  • Knowledge of data warehouse best practices and columnar storage optimization; see the sketch after this list.
  • Familiarity with CI/CD pipelines, version control, and containerization (e.g., Docker).
  • Strong debugging and performance tuning skills.
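
As context for the columnar storage point above, here is a sketch of how distribution and sort keys might be declared on a Redshift table using psycopg2. The table, columns, and connection parameters are all hypothetical; the right keys depend entirely on the cluster's join and filter patterns.

    # Hypothetical Redshift DDL: DISTKEY co-locates rows that join on
    # customer_id; a SORTKEY on event_date lets range-restricted scans
    # skip blocks.
    import psycopg2

    DDL = """
    CREATE TABLE IF NOT EXISTS events (
        event_id    BIGINT NOT NULL,
        customer_id BIGINT NOT NULL,
        event_date  DATE   NOT NULL,
        payload     VARCHAR(4096)
    )
    DISTSTYLE KEY
    DISTKEY (customer_id)
    SORTKEY (event_date);
    """

    conn = psycopg2.connect(
        host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder
        port=5439,
        dbname="analytics",
        user="etl_user",
        password="change-me",  # placeholder; use a secrets manager in practice
    )
    try:
        with conn, conn.cursor() as cur:
            cur.execute(DDL)
    finally:
        conn.close()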
     

Preferred Skills

  • Experience with correlation analysis, pattern recognition, and complex ML models.
  • Familiarity with API integration, real-time data ingestion, or streaming (e.g., AWS Glue streaming jobs).
  • Exposure to machine learning data pipelines or model training workflows.
     

What We Offer

  • Competitive hourly compensation.
  • Remote-first culture with a high-impact engineering team.
  • Opportunity to work on AI-agent-based workflow automation and data infrastructure.