Company: Kuda
Industry: Banking / Financial Services
Deadline: Not specified
Experience: 7 years
Location: Cape Town
Province: Western Cape
Field: Data, Business Analysis and AI, ICT / Computer
Role Overview
- We are expanding our reach and seeking a visionary Senior Data Engineer to spearhead our data engineering efforts, driving innovation and growth. With a passion for data-driven decision-making, you will play a pivotal role in shaping the future of banking for millions.
Roles and responsibilities
- Design, develop, and optimise large-scale data ingestion, transformation, and processing pipelines for structured, semi-structured, and unstructured data.
- Lead the integration of multi-cloud and hybrid data platforms (e.g., Azure SQL, Google BigQuery, on-premises SQL Server).
- Define and enforce data architecture standards to ensure scalability, security, and optimal performance.
- Leverage Dataform to manage SQL-based transformations, version control, testing, and deployment of analytics datasets in BigQuery.
- Introduce and manage real-time streaming solutions (e.g., Kafka, Pub/Sub, or Dataflow) in conjunction with batch data pipelines.
Data Quality & Governance
- Establish data quality frameworks with automated validation, anomaly detection, and reconciliation checks.
- Collaborate with Data Governance teams to maintain data catalogues, metadata management, and lineage tracking.
- Implement security, privacy, and compliance standards (such as GDPR, NDPR, and ISO 27001) within data pipelines.
- Mentor junior and mid-level data engineers, providing technical guidance and career development support.
- Partner with Data Science and BI teams to deliver data products for predictive modelling, experimentation, and self-service analytics.
- Act as a subject matter expert in cross-functional projects, advising on technical trade-offs and best practices.
- Research and adopt emerging data engineering technologies and methodologies.
- Drive automation in data workflows to reduce manual intervention and operational risk.
- Optimise data storage and compute costs through partitioning, clustering, and workload management.
Requirements
- 7+ years of experience in data engineering, with a proven track record of leading teams.
- Expert-level SQL skills, including advanced query optimisation and performance tuning.
- Proven experience with:
- Microsoft SQL Server / Azure SQL DB / Azure Managed Instance
- Google BigQuery & Google Cloud Platform
- dbt Cloud or Dataform for data modelling, testing, and deployment
- Data ingestion tools (e.g., Airbyte, Azure Data Factory, or Fivetran)
- Strong programming skills in Python (preferred) and at least one additional language (Java, Scala, or Go).
- Experience with streaming architectures (Kafka, Pub/Sub, Spark Streaming, or Flink).
- Familiarity with infrastructure-as-code tools (Terraform, Pulumi, or Deployment Manager).
- Strong understanding of modern data architectures (Medallion, Data Mesh, Lakehouse).
- Hands-on experience with CI/CD for data pipelines and containerisation (Docker, Kubernetes).
- Proficient in Agile delivery methodologies.
Preferred:
- Knowledge of machine learning pipelines (Vertex AI, MLflow, or SageMaker).
- Prior experience working in FinTech or other regulated industries.
- Exposure to Looker or similar BI tools.

