Responsibilities:
- Design, develop, and maintain data pipelines and ETL processes using Python and YAML-based configurations (see the first sketch after this list).
- Manage and optimize relational and analytical databases, primarily PostgreSQL; ClickHouse experience is preferred.
- Deploy and manage containerized applications using Podman; experience orchestrating services with Kubernetes is preferred.
- Apply advanced database practices such as partitioning, indexing, sharding, and routine housekeeping to ensure performance and scalability (a partitioning sketch follows this list).
- Monitor and ensure data integrity, consistency, and security in compliance with banking data governance standards.
- Apply knowledge of networking fundamentals, including reverse proxies, SSH tunneling, and secure communication setups (a tunneling sketch follows this list).
- Collaborate with data analysts, developers, and system administrators to deliver reliable and auditable data solutions for banking clients.
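
To illustrate the first responsibility, here is a minimal sketch of a YAML-driven ETL step in Python. The file name and config keys (source_query, target_table) are hypothetical, not a schema this team actually uses; PyYAML is assumed as the parser.

    # pipeline.py -- minimal sketch of a YAML-driven ETL step.
    import yaml  # PyYAML, assumed parser

    def load_pipeline(path: str) -> dict:
        # Read a pipeline definition from a YAML file.
        with open(path) as f:
            return yaml.safe_load(f)

    if __name__ == "__main__":
        # pipeline.yml and its keys are illustrative placeholders.
        cfg = load_pipeline("pipeline.yml")
        print(cfg["source_query"], "->", cfg["target_table"])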
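For the partitioning item above, a sketch of generating monthly range-partition DDL for PostgreSQL declarative partitioning (available since PostgreSQL 10). The transactions table and booked_at column are illustrative placeholders.

    # make_partitions.py -- emit DDL for monthly range partitions.
    from datetime import date

    def month_partitions(year: int):
        # One child table per calendar month of the given year.
        for m in range(1, 13):
            start = date(year, m, 1)
            end = date(year + 1, 1, 1) if m == 12 else date(year, m + 1, 1)
            yield (f"CREATE TABLE transactions_{start:%Y_%m} "
                   f"PARTITION OF transactions "
                   f"FOR VALUES FROM ('{start}') TO ('{end}');")

    if __name__ == "__main__":
        for ddl in month_partitions(2025):
            print(ddl)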
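And for the networking item, a sketch of opening an SSH local port forward from Python so a pipeline can reach a database behind a bastion host. All host names and ports are placeholders; the function shells out to the standard OpenSSH client (ssh -N -L).

    # tunnel.py -- local port forward via the OpenSSH client.
    import subprocess

    def open_tunnel(bastion, remote_host, remote_port, local_port):
        # Equivalent to: ssh -N -L <local>:<remote_host>:<remote_port> <bastion>
        return subprocess.Popen(
            ["ssh", "-N", "-L",
             f"{local_port}:{remote_host}:{remote_port}", bastion])

    if __name__ == "__main__":
        # Placeholder hosts/ports for illustration only.
        proc = open_tunnel("bastion.example.com", "db.internal", 5432, 15432)
        # ... connect a PostgreSQL client to localhost:15432 here ...
        proc.terminate()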
Requirements:
- Minimum of a D3 degree in Computer Science.
- At least one year of experience in the same field.
- Strong knowledge of PostgreSQL administration and performance tuning (see the sketch after this list).
- Proficiency in Python for data manipulation, automation, or ETL development.
- Strong background working with Linux-based environments.
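
As a sketch of the tuning requirement above: create an index, then inspect the query plan with EXPLAIN ANALYZE. psycopg2 is assumed as the driver; the DSN, table, and column are placeholders.

    # explain_index.py -- index-then-explain tuning loop (sketch).
    import psycopg2

    DSN = "dbname=bank user=etl host=localhost"  # placeholder DSN

    with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
        # Add an index on the filter column, then check the plan.
        cur.execute("CREATE INDEX IF NOT EXISTS idx_tx_booked_at "
                    "ON transactions (booked_at);")
        cur.execute("EXPLAIN ANALYZE SELECT count(*) FROM transactions "
                    "WHERE booked_at >= '2025-01-01';")
        for (line,) in cur.fetchall():
            print(line)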
Preferred experience:
- Exposure to banking data models, financial transaction systems, or core banking platforms (Temenos T24 preferred).
- Experience with ClickHouse or other columnar analytical databases.
- Working experience with Kubernetes for container orchestration.