What you will do:
- Develop and optimize data pipelines using SQL/Python
- Implement data transformations for analytics readiness
- Monitor and troubleshoot pipeline failures
- Maintain data warehouse/lake environments
- Execute data quality checks and validation
- Document data flows and lineage
- Work with analytics teams on data requirements
- Support senior engineers on complex tasks
- Participate in code reviews and knowledge sharing
What you will bring:
- Up to 4 years of experience in Data Engineering or a related technical field.
- Proven track record of designing, building, and maintaining scalable data pipelines.
- Technical Skills:
  - SQL (advanced querying, optimization)
  - Python (Pandas, PySpark) or Java/Scala
  - ETL and orchestration tools (Airflow, dbt, SSIS)
  - Cloud data platforms (Snowflake, BigQuery, Redshift)
- Strong analytical and problem-solving mindset.
- High attention to data accuracy and quality.
- Curious and eager to learn new technologies and tools.
- Committed to quality-focused execution and continuous improvement.
- Proactive in identifying and resolving data-related issues.
- A collaborative team player with a strong sense of ownership.