About the Role:
As a Data Engineer, you will play a key role in designing, building, and optimizing data pipelines and systems that support large-scale data processing. You will collaborate with cross-functional teams to implement robust data architecture and ensure the integrity, efficiency, and scalability of our data solutions.
Responsibilities:
- Develop and maintain efficient data pipelines and workflows to support data ingestion, transformation, and delivery.
- Work closely with analysts, data scientists, and stakeholders to understand data requirements and translate them into technical solutions.
- Design and implement scalable data solutions and architectures, especially within data warehouse environments.
- Optimize SQL queries and develop Python-based scripts for robust data processing.
- Contribute to data modeling efforts for analytics and business intelligence.
- Monitor, troubleshoot, and improve the performance of data pipelines.
- Support integration with cloud-based platforms such as Alicloud, GCP, or AWS.
Requirements:
- A Bachelor's degree in Computer Science, Information Technology, Data Science, or a related field.
- At least 2 years of experience in Data Engineering or a related role.
- Strong understanding of Data Engineering principles and data architecture.
- Hands-on experience implementing data solutions within data pipeline and/or data warehouse architectures, including data ingestion and workflow scheduling.
- Proficient in SQL and Python for data processing tasks.
- Strong analytical and problem-solving skills with the ability to translate business needs into scalable solutions.
- Knowledge of data modeling in data warehouses is highly preferred.
- Prior experience with cloud platforms such as Alicloud, GCP, or AWS is a plus.
- Familiarity with Machine Learning Engineering is a nice-to-have.