
Data Engineer

  • Remote
  • Prague, Czechia
  • €40–€45 per hour
  • Jimmy Technologies

If you are passionate about Data Engineering, cloud-native architectures, and AI applications, this role with our client offers an exciting opportunity to work on impactful projects!

Job description

We are looking for Data Engineers to join the team of our Fortune 50 client, which is building an agentic system that operates across large-scale enterprise data. The project's main objective is to expose enterprise data as a Neo4j-backed semantic graph optimized for agentic reasoning.

This is a remote-first position for engineers based in Europe, Turkey, or the Middle East, with required overlap with US working hours (2–6 PM CET).

Responsibilities

Data Modeling & ETL Development

  • Build ETL pipelines from the Microsoft Fabric Data Lake into Neo4j (a minimal loading sketch follows this list).

  • Design graph data models.

  • Transform raw structured and unstructured data into clean, well-modeled graph inputs (nodes, edges, metadata).

  • Create Source-to-Target Mappings (STMs) for ETL specifications.

  • Implement automated ingestion patterns, incremental (delta) updates, and streaming/CDC workflows.
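
How such a load step could look in practice: the sketch below uses the official Neo4j Python driver to UNWIND a batch of rows into MERGE statements. It is a minimal illustration only; the connection details, the Customer and Order labels, and the property names are hypothetical, and a real pipeline would read batches from the Fabric Data Lake rather than an in-memory list.

    from neo4j import GraphDatabase  # official Neo4j Python driver

    # Hypothetical connection details; a real pipeline would read batches
    # from the Fabric Data Lake (e.g. via a Spark or pandas reader).
    URI = "neo4j://localhost:7687"
    AUTH = ("neo4j", "password")

    # UNWIND turns a list parameter into rows; MERGE upserts nodes and edges.
    LOAD_ORDERS = """
    UNWIND $rows AS row
    MERGE (c:Customer {id: row.customer_id})
      SET c.name = row.customer_name
    MERGE (o:Order {id: row.order_id})
      SET o.total = row.total
    MERGE (c)-[:PLACED]->(o)
    """

    def load_batch(rows):
        with GraphDatabase.driver(URI, auth=AUTH) as driver:
            with driver.session() as session:
                # execute_write runs the work in a managed, retried transaction
                session.execute_write(
                    lambda tx: tx.run(LOAD_ORDERS, rows=rows).consume()
                )

    load_batch([{"customer_id": 1, "customer_name": "Acme",
                 "order_id": 10, "total": 99.5}])

Because MERGE is an upsert, re-running the same batch is safe, which is the property incremental (delta) updates rely on.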

Collaboration & Agile Development

  • Gather requirements, set targets, define interface specifications, and conduct design sessions.

  • Work closely with data consumers to ensure proper integration.

  • Adapt and learn in a fast-paced project environment.

Work Conditions

  • Start Date: ASAP

  • Location: Remote

  • Working hours: overlap with US time zones required (2–6 PM CET)

  • Contract: long-term, contract-based role (6+ months)

Job requirements

  • Strong SQL skills for ETL, data modeling, and performance tuning.

  • Experience with Neo4j.

  • Proficiency in Python, especially for handling and flattening complex JSON structures (see the sketch after this list).

  • Hands-on experience with Microsoft Fabric, Synapse, ADF, or similar cloud data stacks.

  • Knowledge of Cypher, APOC, and graph modeling.

  • Familiarity with GraphRAG, retrieval systems, or RAG hybrids.

  • Understanding of software engineering and testing practices within an Agile environment.

  • Experience with Data as Code: version control, small and regular commits, unit tests, CI/CD, and packaging; familiarity with containerization tools such as Docker (must have) and Kubernetes (a plus).

  • Excellent teamwork and communication skills.

  • Proficiency in English, with strong written and verbal communication skills.

  • Ability to build efficient, high-performance data pipelines for both real-time and batch processing.
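
As a concrete illustration of the JSON-flattening work referenced above, a minimal pure-Python sketch might look like the following. The record shape and the dotted-key convention are assumptions for illustration; pandas.json_normalize is a common alternative when tabular output is needed.

    def flatten(obj, prefix="", sep="."):
        """Recursively flatten nested dicts/lists into dotted-key pairs."""
        flat = {}
        if isinstance(obj, dict):
            for key, value in obj.items():
                flat.update(flatten(value, f"{prefix}{key}{sep}", sep))
        elif isinstance(obj, list):
            for i, value in enumerate(obj):
                flat.update(flatten(value, f"{prefix}{i}{sep}", sep))
        else:
            flat[prefix.rstrip(sep)] = obj
        return flat

    # Hypothetical record shape, for illustration only.
    record = {"order": {"id": 10, "items": [{"sku": "A1", "qty": 2}]}}
    print(flatten(record))
    # {'order.id': 10, 'order.items.0.sku': 'A1', 'order.items.0.qty': 2}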

Nice to have:

  • Knowledge of cryptography and its application to enterprise data modeling in regulated industries (Banking, Finance, Ops).

  • Experience with semantic models, ontologies, or knowledge engineering.
