Data Engineer

Johannesburg, GP, South Africa

Job Description

Requisition Details & Talent Acquisition Contact



136570 - Busi Radebe
Job Post End Date: 12 March 2025

Job Family

Marketing, Communication, and Data Analytics

Career Stream

Data Analysis

Leadership Pipeline

Manager of Others & Self Professional

Job Purpose



The purpose of the Data Engineer (with Cloud Certification) role in the Group Marketing Data Analytics team is to apply data expertise and related technologies, in line with the Nedbank Data Architecture Roadmap, to drive technical thought leadership across the enterprise. The role involves managing data analysts, delivering fit-for-purpose data products, and supporting data initiatives that enhance analytics, machine learning, and artificial intelligence capabilities for marketing. Data Engineers in this team play a key role in advancing Nedbank's data infrastructure to enable a data-driven marketing organization, including creating data pipelines, ingestion processes, provisioning, streaming, self-service, and APIs tailored to marketing data needs.

Job Responsibilities



Data Analysts:
Manage several data analysts, including their performance management and delivery.

Data Maintenance & Quality:
Maintain, improve, clean, and manipulate data in operational and analytics databases.

Data Infrastructure:

Build and manage scalable, optimized, secure, and reliable data infrastructure using technologies such as:
  • Infrastructure and Databases: DB2, PostgreSQL, MSSQL, HBase, NoSQL
  • Data Lake Storage: Azure Data Lake Storage Gen2
  • Cloud Solutions: SAS, Azure Databricks, Azure Data Factory, HDInsight
  • Data Platforms: SAS, Ab Initio, Denodo, Netezza, Azure Cloud

Collaborate with Information Security, the CISO, and Data Governance to ensure data privacy and security.

Data Pipeline Development:
  • Build and maintain data pipelines for ingestion, provisioning, streaming, and APIs.
  • Integrate data from multiple sources (Golden Sources, Trusted Sources, Writebacks).
  • Load data into the Nedbank Data Warehouse (Data Reservoir, Atomic Data Warehouse, Enterprise Data Mart).
  • Provision data to Lines of Business Marts, Regulatory Marts, and Compliance Marts through self-service data virtualization.
  • Ensure consistency in data transformation for reporting and analysis.
  • Utilize big data tools like Hadoop, streaming tools like Kafka, and data replication tools like IBM InfoSphere.
  • Utilize data integration tools such as Ab Initio, Azure Data Factory, and Azure Databricks (a brief illustrative sketch follows this list).
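For illustration only, a minimal sketch of one such ingestion step. All paths, column names, and the storage account are hypothetical; it assumes an Azure Databricks (PySpark) environment writing to Azure Data Lake Storage Gen2, which is one plausible combination from the stack listed above.

# Illustrative sketch only: a minimal batch ingestion step of the kind this role owns.
# Paths, schema, and names are hypothetical, not Nedbank's actual implementation.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("marketing_ingestion").getOrCreate()

# Ingest raw campaign events from a hypothetical landing zone.
raw = (
    spark.read
    .option("header", "true")
    .csv("abfss://landing@examplelake.dfs.core.windows.net/campaign_events/")
)

# Standardize types and add load metadata so transformations stay consistent
# for downstream reporting and analysis.
curated = (
    raw.withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("load_date", F.current_date())
       .dropDuplicates(["event_id"])
)

# Provision to a curated zone of the data lake, partitioned for downstream marts.
(
    curated.write
    .mode("append")
    .partitionBy("load_date")
    .parquet("abfss://curated@examplelake.dfs.core.windows.net/campaign_events/")
)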

Data Modelling:
Collaborate with Data Modelers to create data models and schemas for the Data Reservoir, Data Lake, Atomic Data Warehouse, and Enterprise Data Marts.

Automation & Monitoring:
Automate data pipelines and monitor their performance for efficiency and effectiveness.
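For illustration only, a minimal sketch of how a scheduled pipeline step might be instrumented for monitoring. In practice this role would also lean on platform tooling (for example, Azure Data Factory run monitoring or Databricks job alerts); the function and step names below are hypothetical.

# Illustrative sketch only: log duration and outcome of a pipeline step so runs
# can be tracked. Names are hypothetical.
import logging
import time
from functools import wraps

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("pipeline")

def monitored(step_name):
    """Record how long a step took and whether it succeeded."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            try:
                result = fn(*args, **kwargs)
                log.info("step=%s status=success duration=%.1fs",
                         step_name, time.monotonic() - start)
                return result
            except Exception:
                log.exception("step=%s status=failed duration=%.1fs",
                              step_name, time.monotonic() - start)
                raise
        return wrapper
    return decorator

@monitored("ingest_campaign_events")
def ingest_campaign_events():
    time.sleep(0.1)  # placeholder for the actual ingestion logic

ingest_campaign_events()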

Collaboration:
Work closely with Data Analysts, Software Engineers, Data Modelers, Data Scientists, Scrum Masters, and Data Warehouse teams to deliver end-to-end data solutions that bring value to the business.

Data Quality & Governance:
Implement data quality checks to maintain data accuracy, consistency, and security throughout the data lifecycle.
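For illustration only, a minimal sketch of such checks. The columns, rules, and path are hypothetical, and it assumes a PySpark environment; real checks would be governed by the bank's Data Governance standards.

# Illustrative sketch only: simple accuracy, completeness, and consistency checks
# applied before data is provisioned downstream. Names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()
df = spark.read.parquet("abfss://curated@examplelake.dfs.core.windows.net/campaign_events/")

total = df.count()
checks = {
    # Accuracy: the primary key must be unique.
    "unique_event_id": df.select("event_id").distinct().count() == total,
    # Completeness: mandatory fields may not be null.
    "no_null_customer_id": df.filter(F.col("customer_id").isNull()).count() == 0,
    # Consistency: event timestamps must not be in the future.
    "no_future_events": df.filter(F.col("event_ts") > F.current_timestamp()).count() == 0,
}

failed = [name for name, passed in checks.items() if not passed]
if failed:
    # Failing fast keeps bad data out of downstream marts.
    raise ValueError(f"Data quality checks failed: {failed}")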

Performance Optimization:
Ensure the optimal performance of data warehouses, data integration patterns, and real-time/batch jobs.
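For illustration only, two common optimizations for batch jobs over warehouse-scale tables, sketched in PySpark with hypothetical table names: broadcasting a small dimension to avoid shuffling a large fact table, and pruning partitions and columns early so less data is scanned.

# Illustrative sketch only: broadcast join plus early partition/column pruning.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("perf_tuning").getOrCreate()

events = spark.read.parquet("/data/curated/campaign_events/")   # large fact table
segments = spark.read.parquet("/data/reference/segments/")      # small dimension

# 1. Broadcast the small dimension so the join avoids a full shuffle of the fact.
joined = events.join(broadcast(segments), on="segment_id", how="left")

# 2. Filter on the partition column and select only needed columns early.
recent = (
    joined
    .filter(F.col("load_date") >= "2025-01-01")
    .select("event_id", "customer_id", "segment_name", "event_ts")
)

recent.write.mode("overwrite").parquet("/data/marts/recent_segmented_events/")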

API Development:
Develop APIs for efficient data delivery and ensure seamless integration between data consumers and providers.
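For illustration only, a minimal sketch of a data-delivery endpoint. The endpoint, model, and in-memory data are hypothetical, and it assumes FastAPI; a production service would read from a provisioned mart and sit behind the bank's security controls.

# Illustrative sketch only: a read-only API exposing campaign metrics to
# downstream consumers through a stable contract. Names are hypothetical.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Marketing Data API")

class CampaignMetrics(BaseModel):
    campaign_id: str
    impressions: int
    clicks: int

# Stand-in for a database lookup against a provisioned mart.
_METRICS = {
    "CAMP-001": CampaignMetrics(campaign_id="CAMP-001", impressions=120_000, clicks=3_400),
}

@app.get("/campaigns/{campaign_id}/metrics", response_model=CampaignMetrics)
def get_campaign_metrics(campaign_id: str) -> CampaignMetrics:
    """Deliver metrics for one campaign, or 404 if it is unknown."""
    metrics = _METRICS.get(campaign_id)
    if metrics is None:
        raise HTTPException(status_code=404, detail="campaign not found")
    return metrics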

People Specification


Essential Qualifications - NQF Level
  • Matric / Grade 12 / National Senior Certificate
  • Advanced Diplomas / National 1st Degrees

Preferred Qualification

BCom, BSc, or BEng in related fields.

Essential Certifications


Cloud (Azure/AWS), DevOps, or Data Engineering certifications.

Preferred Certifications

  • Azure Data Factory
  • Azure Synapse Analytics
  • Event Hub
  • Microsoft Fabric
  • AZ-900 (Microsoft Azure Fundamentals)
  • Data Science certifications (e.g., Coursera, Udemy, SAS Data Scientist, Microsoft Data Scientist)

Minimum Experience Level

Total Experience Required: 3 - 6 years.



Type of Experience:



  • Ability to work independently within a squad.
  • Experience designing, building, and maintaining data warehouses and data lakes.
  • Hands-on experience with big data technologies (Hadoop, Spark, Hive).
  • Programming experience with Python, Java, and SQL.
  • Knowledge of relational and NoSQL databases.
  • Experience with cloud platforms such as AWS, Azure, and GCP.
  • Familiarity with data visualization tools.
  • Analytical and problem-solving skills.

Technical / Professional Knowledge
  • Cloud Data Engineering (Azure, AWS, Google) - Intermediate, Proficiency Level 3
  • Data Warehousing - Progressive, Proficiency Level 4
  • Databases (PostgreSQL, MS SQL, IBM DB2, HBase, MongoDB) - Progressive, Proficiency Level 4
  • Programming (Python, Java, SQL) - Progressive, Proficiency Level 4
  • Data Analysis and Data Modelling - Intermediate, Proficiency Level 3
  • Data Pipelines and ETL tools (Ab Initio, ADB, ADF, SAS ETL) - Intermediate, Proficiency Level 3
  • Agile Delivery - Progressive, Proficiency Level 4
  • Problem-Solving Skills - Progressive, Proficiency Level 4

Behavioural Competencies

  • Decision Making
  • Influencing
  • Communication
  • Innovation
  • Technical/Professional Knowledge and Skills
  • Building Partnerships
  • Continuous Learning

Please contact the Nedbank Recruiting Team at +27 860 555 566

Beware of fraud agents! Do not pay money to get a job.

MNCJobs.co.za will not be responsible for any payment made to a third party. All Terms of Use are applicable.



Job Detail

  • Job Id
    JD1398375
  • Industry
    Not mentioned
  • Total Positions
    1
  • Job Type:
    Full Time
  • Salary:
    Not mentioned
  • Employment Status
    Permanent
  • Job Location
    Johannesburg, GP, South Africa
  • Education
    Not mentioned