Job Description
- Design, develop, and maintain ETL processes using IBM DataStage.
- Perform data extraction, transformation, and loading from various sources to data warehouses or data lakes.
- Work closely with data analysts, data architects, and business users to ensure data quality and availability.
- Optimize data pipelines for performance and scalability.
- Manage data flow and integration between the Hadoop ecosystem and other systems.
Requirements
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- 4+ years of experience as a Data Engineer in enterprise or banking environments.
- Strong hands-on experience with IBM DataStage and ETL pipeline design.
- Proficient in SQL and data modeling concepts.
- Familiar with the Hadoop ecosystem (HDFS, Hive, Spark, etc.).
- Good analytical, problem-solving, and communication skills.
- Able to work onsite and collaborate with cross-functional teams.