Responsibilities:
- Develop, optimize, and maintain ETL pipelines
- Work with Apache Hadoop, Spark, Kafka, and Hive
- Ensure data quality, security, and governance
- Collaborate with data architects and analysts
Requirements:
- Experience with big data tools and ETL frameworks
- Proficiency in Python and SQL; experience with cloud environments (AWS, Azure, or GCP) is a plus
More Information:
- Qualifications: Bachelor's degree