
Data Engineer

  • Remote
    • Prague, Czechia
  • €40 - €45 per hour
  • Jimmy Technologies

If you are passionate about Data Engineering, cloud-native architectures, and AI applications, this role with our client offers an exciting opportunity to work on impactful projects!

Job description

We are looking for Data Engineers for several of our consulting, manufacturing, logistics, and US tax and auditing projects. The role involves designing data models, developing data pipelines, optimizing performance, and ensuring data quality for operational reporting.

This is a remote-first position for engineers based in Europe.

Responsibilities

Data Modeling & ETL Development

  • Design data models.

  • Develop SQL code to define data structures and transform data from staging to marts.

  • Create Source to Target Mappings (STMs) for ETL specifications.

  • Evaluate data sources, assess quality, and determine the best integration approach.

  • Develop strategies for integrating data from various sources and data warehouses.

  • Optimize and maintain the data pipeline for SDCM, ECM, and DCM, flattening JSON into Snowflake tables (a minimal Python sketch of this kind of flattening follows this list).

  • Work with data vault modeling (a plus).

  • Implement changes to the JSON flattening process based on business needs.
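
To make the JSON-flattening responsibility above concrete, here is a minimal Python sketch of turning a nested JSON record into the flat column/value pairs a relational table expects. The field names and the underscore naming convention are illustrative assumptions, not the client's actual SDCM/ECM/DCM schema, and the Snowflake load step itself is omitted.

```python
import json
from typing import Any


def flatten(record: dict[str, Any], parent_key: str = "", sep: str = "_") -> dict[str, Any]:
    """Recursively flatten a nested JSON object into a single-level dict
    whose keys can map one-to-one to columns in a table."""
    flat: dict[str, Any] = {}
    for key, value in record.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            flat.update(flatten(value, new_key, sep))
        else:
            flat[new_key] = value
    return flat


if __name__ == "__main__":
    # Hypothetical payload for illustration only.
    raw = '{"order": {"id": 42, "customer": {"name": "Acme", "country": "CZ"}}}'
    print(flatten(json.loads(raw)))
    # -> {'order_id': 42, 'order_customer_name': 'Acme', 'order_customer_country': 'CZ'}
```

In practice the same flattening is often pushed down into Snowflake itself with LATERAL FLATTEN over a VARIANT column; the Python route is sketched here because the requirements below call out Python JSON handling.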

Data Quality & Performance Optimization

  • Write and execute unit tests to ensure code accuracy (a pytest-style sketch follows this list).

  • Optimize the performance of the data pipeline and fix data quality issues.

  • Implement active monitoring for both data pipelines and data quality.
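
As one example of the unit-testing practice mentioned above, a pytest-style sketch against the hypothetical flatten() helper from the previous example might look like the following; the module name and test cases are illustrative only.

```python
# Run with `pytest`; assumes the flatten() helper from the earlier sketch
# lives in a (hypothetical) module called flatten_json.
from flatten_json import flatten


def test_flatten_nested_object():
    record = {"order": {"id": 1, "customer": {"name": "Acme"}}}
    assert flatten(record) == {"order_id": 1, "order_customer_name": "Acme"}


def test_flatten_keeps_scalar_fields():
    assert flatten({"status": "shipped"}) == {"status": "shipped"}


def test_flatten_empty_record():
    assert flatten({}) == {}
```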

Collaboration & Agile Development

  • Gather requirements, set targets, define interface specifications, and conduct design sessions.

  • Work closely with data consumers to ensure proper integration.

  • Adapt and learn in a fast-paced project environment.

Work Conditions

  • Start Date: ASAP

  • Location: Remote

Job requirements

  • Strong SQL skills for ETL, data modeling, and performance tuning.

  • Experience with Snowflake and Databricks.

  • Proficiency in Python, especially for handling and flattening complex JSON structures.

  • Hands-on experience with Cloud DWH architecture (AWS S3, Databricks, AWS Redshift).

  • Exposure to DBT (Data Build Tool) development.

  • Familiarity with Airflow (a minimal DAG sketch follows this list).

  • Understanding of software engineering and testing practices within an Agile environment.

  • Excellent teamwork and communication skills.

  • Proficiency in English, with strong written and verbal communication skills.

  • Ability to build efficient, high-performance data pipelines for both real-time and batch processing.

  • A track record of improving data quality and optimizing data integration workflows.
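
For the Airflow familiarity listed above, a minimal DAG sketch is shown below. The DAG id, schedule, and the placeholder callable are assumptions for illustration (using the Airflow 2.4+ `schedule` keyword); they do not describe the client's actual pipelines.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_and_flatten(**context):
    # Placeholder: in a real pipeline this would pull raw JSON (e.g. from S3),
    # flatten it, and stage the result for loading into the warehouse.
    pass


with DAG(
    dag_id="flatten_json_to_snowflake",   # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                    # Airflow 2.4+ keyword
    catchup=False,
) as dag:
    PythonOperator(
        task_id="extract_and_flatten",
        python_callable=extract_and_flatten,
    )
```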
