Job Requirements:
- A genuine, early-stage work ethic and enthusiasm.
- 3+ years of experience in data engineering or a related field.
- Experience architecting efficient SQL data structures.
- Experience with data storage solutions such as PostgreSQL/Apache Druid/Apache Pinot.
- Expertise in ETL processes, data transformation, and data integration.
- Strong programming skills in at least one programming language, such as Python.
- Familiarity with cloud platforms such as AWS or Google Cloud Platform.
- Willingness to learn new technologies and think outside the box.
- Strong troubleshooting skills.
- Excellent communication skills, including the ability to collaborate across borders and departments.
- Passion for continuous learning and an insatiable curiosity that drives tough questions, new knowledge, and new solutions.
- Preferably, experience working in a start-up.
- Preferably, basic knowledge of finance and accounting.
Job Description:
- Implement and maintain data architecture, data pipelines, and data storage solutions.
- Collaborate with data scientists, analysts, and other stakeholders to design and implement data-driven solutions.
- Develop and maintain ETL, data integration, and data transformation processes.
- Monitor and maintain data quality, accuracy, and integrity.
- Identify opportunities to improve data infrastructure and processes, and implement improvements in a timely manner.
- Develop and maintain documentation for data infrastructure, processes, and systems.
- Stay up to date with the latest trends and technologies in data engineering (Airflow, Great Expectations, dbt, Apache Superset).