Data Engineer

Requisition ID: 9494
Job Location(s): Hoofddorp, NH, NL, 2132 HZ
Time in Office: Hybrid

Overview

Crocs is seeking an experienced Data Engineer to design, implement, and maintain scalable data pipelines, a decoupled data infrastructure, dynamic transformations, and a comprehensive orchestration layer using a combination of Snowflake, DBT, PySpark, Airflow, Azure Data Lake, GitHub, and other tools as needed.
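
To make the stack above concrete, here is a minimal sketch (purely illustrative, not part of the actual role) of the kind of orchestration this combination implies: an Airflow DAG that lands raw data and then triggers a dbt run against the warehouse. The DAG name, commands, and paths are hypothetical.

    from datetime import datetime
    from airflow import DAG
    from airflow.operators.bash import BashOperator

    # Hypothetical daily pipeline: land raw files, then run dbt transformations.
    with DAG(
        dag_id="edw_daily_load",          # illustrative name, not an actual Crocs pipeline
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        ingest = BashOperator(
            task_id="ingest_to_lake",
            bash_command="python /opt/pipelines/ingest_orders.py",  # hypothetical script
        )
        transform = BashOperator(
            task_id="dbt_run",
            bash_command="dbt run --project-dir /opt/dbt/edw",      # hypothetical project dir
        )
        ingest >> transform  # run transformations only after ingestion succeeds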

In this role, you will solve unique and complex problems at a rapid pace, using the latest technologies to build highly scalable solutions. As part of the Enterprise Data Platform team, you will help advance the adoption of data-driven insights and advanced AI analytics across multiple business domains within the Crocs enterprise.

What You'll Do

  • Data Modeling – Design, implement, and maintain scalable data models that support the Enterprise Data Warehouse (EDW) and analytical workloads, following best practices and the constraints imposed by the respective technologies.
  • ETL/ELT – Design and implement efficient, scalable, and easy-to-manage data movement processes supporting both batch and near-real-time data streams (see the sketch after this list).
  • CI/CD – Automate code integration, testing, and deployment using Git to ensure fast, reliable, and consistent delivery of data pipelines and ETL code.
  • Engineering Best Practices – Adhere to standard engineering methodologies, including test-driven development, agile project management, and continuous integration pipelines.
  • Documentation – Create and maintain accurate and complete documentation of the pipelines you develop.
  • Interest in Learning – Stay current with developments in the BI and analytics space and demonstrate an active interest in data science and machine learning.
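
As a purely illustrative example of the batch side of that ETL/ELT work, the sketch below aggregates raw order files from a data lake with PySpark. The storage container, paths, and column names are assumptions for illustration, not details of the actual role.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders_daily").getOrCreate()

    # Read raw landing files from the lake (container and path are hypothetical).
    orders = spark.read.parquet("abfss://raw@datalake.dfs.core.windows.net/orders/")

    # Simple batch transformation: daily order totals per country
    # (column names are assumed for illustration).
    daily = (
        orders
        .withColumn("order_date", F.to_date("order_ts"))
        .groupBy("order_date", "country")
        .agg(F.sum("amount").alias("total_amount"))
    )

    # Write to a curated zone, partitioned for downstream analytical workloads.
    daily.write.mode("overwrite").partitionBy("order_date").parquet(
        "abfss://curated@datalake.dfs.core.windows.net/orders_daily/"
    )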

What You'll Bring to the Table

  • Bachelor’s degree in computer science, information technology, engineering, mathematics, or an equivalent technical field.
  • 3+ years in Data Engineering roles.
  • 1+ years of direct development in Snowflake.
  • Strong proficiency in SQL, Python, and PySpark.
  • Solid experience using Git version control in a data environment.
  • Experience designing data models according to best practices.
  • Experience working with GitHub and GitHub Actions.
  • Proficiency in Apache Airflow, preferred.
  • Experience working with DBT, preferred.
  • Snowflake certifications, preferred.
  • Prior experience working in Azure cloud platform, preferred.

The Company is an Equal Opportunity Employer committed to a diverse and inclusive work environment.

All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or any other protected classification.

Job Category: Corporate