Company: BETSoftware
Industry: ICT / Telecommunication
Deadline: Jan 30, 2026
Job Type: Full Time
Experience: 1 – 2 years
Location: Western Cape
Field: Data, Business Analysis and AI, ICT / Computer
Skill Set
- SQL
- Hadoop
- MS SQL Server
- data engineering
- data warehousing
- Python, Java, or Scala
- Analytical skills
- Machine Learning
Responsibilities
Data Engineering:
- Design and manage high-throughput, low-latency data pipelines using distributed computing frameworks.
- Build scalable ETL/ELT workflows using tools like Airflow and Spark (a minimal sketch follows this list).
- Work with containerised environments (e.g., Kubernetes, OpenShift) and real-time data platforms (e.g., Apache Kafka, Flink).
- Ensure efficient data ingestion, transformation, and integration from multiple sources.
- Maintain data integrity, reliability, and governance across systems.
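By way of illustration only (not part of the role description): a minimal sketch of the kind of Airflow-orchestrated Spark ETL workflow referred to above. The DAG id, schedule, file paths, and column names are hypothetical, and the `schedule` argument assumes Airflow 2.4 or later.

```python
# Illustrative sketch only: a daily Airflow DAG that runs a PySpark ETL job.
# The DAG id, paths, schedule, and column names are hypothetical examples.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def run_spark_etl():
    # Imported inside the task so the Airflow scheduler does not need PySpark.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("daily_events_etl").getOrCreate()

    # Ingest raw events, derive a date column, aggregate, and load.
    events = spark.read.parquet("/data/raw/events")  # hypothetical path
    daily = (
        events.withColumn("event_date", F.to_date("event_ts"))
        .groupBy("event_date", "customer_id")
        .agg(F.count("*").alias("event_count"))
    )
    daily.write.mode("overwrite").partitionBy("event_date").parquet(
        "/data/curated/daily_events"  # hypothetical path
    )
    spark.stop()


with DAG(
    dag_id="daily_events_etl",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
):
    PythonOperator(task_id="run_spark_etl", python_callable=run_spark_etl)
```

In practice the Spark job would usually be submitted to a cluster (for example via a Spark-submit operator) rather than run in-process; the PythonOperator form above just keeps the sketch self-contained.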
Data Analysis and Modelling:
- Apply statistical and machine learning techniques to analyse complex data sets, identifying patterns, trends, and actionable insights that drive business strategy and operational efficiency.
- Develop predictive models, recommendation systems, and optimisation algorithms to solve business challenges and enhance operational efficiency (a brief modelling sketch follows this list).
- Transform raw data into meaningful features that improve model performance, and translate business challenges into analytical problems with data-driven solutions.
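As a hedged illustration of the modelling and feature-engineering duties above: a simple predictive-modelling sketch using pandas and scikit-learn. The data source, column names, and target ("churned") are hypothetical.

```python
# Illustrative sketch only: feature engineering plus a simple predictive
# model. The data source, columns, and target ("churned") are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_parquet("/data/curated/customer_activity.parquet")  # hypothetical

# Feature engineering: turn raw activity into model-ready features.
df["events_per_day"] = df["total_events"] / df["active_days"].clip(lower=1)
df["recency_days"] = (
    pd.Timestamp.today() - pd.to_datetime(df["last_event_ts"])
).dt.days

features = ["events_per_day", "recency_days", "avg_spend"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["churned"], test_size=0.2, random_state=42
)

model = GradientBoostingClassifier().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```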
Designing and Planning Data Engineering Solutions:
- Design and implement testing frameworks to measure the impact of business interventions (a simple impact-measurement sketch follows this list).
- Design and implement scalable, high-performance big data applications that support analytical and operational workloads.
- Assist in evaluations and recommend best-fit technologies for real-time and batch data processing.
- Ensure that data solutions are optimised for performance, security, and scalability.
- Develop and maintain data models, schemas, and architecture blueprints for relational and big data environments.
- Ensure seamless data integration from multiple sources, leveraging Kafka for real-time streaming and event-driven architecture.
- Facilitate system design and review, ensuring compatibility with existing and future systems.
- Optimise data workflows, ETL/ELT pipelines, and distributed storage strategies.
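To illustrate the first item in this list, a minimal sketch of measuring an intervention's impact with a two-sample Welch t-test via SciPy. The data here is synthetic; in practice the two arrays would hold a real business metric for the control and treatment groups.

```python
# Illustrative sketch only: a two-sample Welch t-test as one building block
# for measuring the impact of a business intervention. The arrays below are
# synthetic stand-ins for a real control/treatment metric.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=10.0, scale=2.0, size=5_000)
treatment = rng.normal(loc=10.3, scale=2.0, size=5_000)

t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
lift = treatment.mean() / control.mean() - 1

print(f"lift={lift:.2%}, t={t_stat:.2f}, p={p_value:.4f}")
```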
Technical Development and Innovation:
- Keep abreast of technological advancements in data science, data engineering, machine learning and AI.
- Continuously evaluate and experiment with new tools, libraries, and platforms to ensure that the team is using the most effective technologies.
- Work on end-to-end data science and data engineering projects that support strategic goals, including requirements gathering, technical deliverable planning, output quality, and stakeholder management.
- Continuously research, develop, and implement innovative ideas and improved methods, systems, and work processes that lead to higher quality and better results.
- Build and maintain Kafka-based streaming applications for real-time data ingestion, processing, and analytics (a minimal consumer sketch follows this list).
- Design and implement data processing and ingestion applications for data lakes and data warehouses.
- Utilise advanced SQL/Spark query optimisation techniques, indexing strategies, partitioning, and materialised views to enhance performance.
- Work extensively with relational databases (PostgreSQL, MySQL, SQL Server) and big data technologies (Hadoop, Spark).
- Design and implement data architectures that efficiently handle structured and unstructured data at scale.
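A minimal sketch of the kind of real-time ingestion consumer mentioned above, assuming the kafka-python client; the topic name, broker address, consumer group, and event fields are all hypothetical.

```python
# Illustrative sketch only: a minimal real-time Kafka consumer, assuming the
# kafka-python client. Topic, broker, group id, and fields are hypothetical.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "customer-events",                   # hypothetical topic
    bootstrap_servers="localhost:9092",  # hypothetical broker
    group_id="analytics-ingest",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # Downstream processing (transform, enrich, persist) would happen here.
    print(event.get("event_type"), event.get("customer_id"))
```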
Resourceful and Improving:
- Find innovative ways, within established processes, to overcome challenges, leveraging available tools, data, and methodologies effectively.
- Continuously seek out new techniques, best practices and emerging trends in Data Science, AI, and machine learning.
- Actively contribute to team learning by sharing insights, tools and approaches that improve overall performance.
Qualifications
Job Specification:
- At least 1 year in a technical role with experience in data warehousing and data engineering.
- Proficiency in programming languages such as Python, Java, or Scala for data processing.
- Proficiency in SQL for data processing using MS SQL Server or PostgreSQL.
- 1-2 years’ experience across the data science workflow will be advantageous.
- 1-2 years of proven experience as a data scientist, with expertise in machine learning, statistical analysis and data visualisation will be advantageous.
- Experience with big data technologies such as Hadoop, Spark, Hive, and Airflow will be advantageous.
- Expertise in SQL/Spark performance tuning, database optimisation, and complex query development will be advantageous.
- Experience in .NET programming (C#, C++, Java) and design patterns will be advantageous.
Living Our Spirit
- Adaptability & Resilience: Embrace change with flexibility, positivity, and a proactive mindset. Thrive in dynamic, fast-paced environments by adjusting to evolving priorities and technologies.
- Decision-Making & Accountability: Make timely, data-informed decisions involving the team to ensure transparency and alignment. Confidently justify choices based on thorough analysis and sound judgment.
- Innovation & Continuous Learning: Actively pursue new tools, techniques, and best practices in Data Science, AI, and engineering. Share insights openly to foster team growth and continuously improve performance.
- Collaboration & Inclusion: Foster open communication and create a supportive, inclusive environment where diverse perspectives are valued. Empower team members to share ideas, seek help, and give constructive feedback freely.
- Leadership & Growth: Lead authentically with integrity and openness. Support team members through mentorship, skill development, and creating a safe space for honest feedback and innovation. Celebrate successes and embrace challenges as growth opportunities.
Apply Before: 12 December 2025
