Remote

Data Engineer

Equiliem
United States, Oregon
Nov 21, 2024
We are seeking an experienced Senior Data Engineer to join our team. As a Senior Data Engineer, you will play a critical role in designing, building, and maintaining our big data infrastructure, ensuring the scalability, reliability, and performance of our data systems.

Job Summary:

We are looking for a highly skilled Senior Data Engineer with a strong background in big data engineering, cloud computing, and software development. The ideal candidate will have a proven track record of designing and implementing scalable data solutions using AWS, Spark, and Python. The candidate should have hands-on experience with Databricks, optimizing Spark applications, and building ETL pipelines. Experience with CI/CD, unit testing, and big data problem-solving is a plus.

Key Responsibilities:

* Design, build, and maintain large-scale data pipelines using AWS EMR, Spark, and Python

* Develop and optimize Spark applications and ETL pipelines for performance and scalability

* Collaborate with product managers and analysts to design and implement data models and data warehousing solutions

* Work with cross-functional teams to integrate data systems with other applications and services

* Ensure data quality, integrity, and security across all data systems

* Develop and maintain unit test cases for data pipelines and applications

* Implement CI/CD pipelines for automated testing and deployment

* Collaborate with the DevOps team to ensure seamless deployment of data applications

* Stay up to date with industry trends and emerging technologies in big data and cloud computing

Requirements:

* At least 5 years of experience in data engineering, big data, or a related field

* Proficiency in Spark, including Spark Core, Spark SQL, and Spark Streaming

* Experience with AWS EMR, including cluster management and job optimization

* Strong skills in Python, including data structures, algorithms, and software design patterns

* Hands-on experience with Databricks, including Databricks Lakehouse (advantageous)

* Experience with optimizing Spark applications and ETL pipelines for performance and scalability

* Good understanding of data modeling, data warehousing, and data governance

* Experience with CI/CD tools such as Jenkins, GitLab, or CircleCI (advantageous)

* Strong understanding of software development principles, including unit testing and test-driven development

* Ability to design and implement scalable data solutions that meet business requirements

* Strong problem-solving skills, with the ability to debug complex data issues

* Excellent communication and collaboration skills, with the ability to work with cross-functional teams

Nice to Have:

* Experience with Databricks Lakehouse

* Knowledge of data engineering best practices and design patterns

* Experience with agile development methodologies, such as Scrum or Kanban

Comments for Suppliers:

The team is looking for a Data Engineer with experience in big data infrastructure and system migration.
