PBT Group has a requirement for an intermediate DevOps Data Engineer.
Duties:
Design, build and operationalize large-scale enterprise data solutions and applications using one or more AWS data and analytics services in combination with third-party tools: Glue, Step Functions, Kafka CC, PySpark, DynamoDB, Delta Lake (Delta.io), Redshift, Lambda and Python.
Analyze, re-architect and re-platform on-premises data warehouses to data platforms on the AWS cloud using AWS or third-party services and Kafka CC.
Design and build production data pipelines from ingestion to consumption within a big data architecture, using Java, PySpark, Scala and Kafka CC.
Design and implement data engineering, ingestion and curation functions on the AWS cloud using AWS-native services or custom programming.
Perform detailed assessments of current-state data platforms and create an appropriate transition path to the AWS cloud.
Design, implement and support an analytical data infrastructure providing ad-hoc access to large datasets and computing power.
Interface with other technology teams to extract, transform and load data from a wide variety of data sources using SQL, AWS big data technologies and Kafka CC.
Create and support real-time data pipelines built on AWS technologies including Glue, Lambda, Step Functions, PySpark, Athena and Kafka CC.
Continually research the latest big data and visualization technologies to provide new capabilities and increase efficiency.
Work closely with team members to drive real-time model implementations for the monitoring and alerting of risk systems.
Collaborate with other tech teams to implement advanced analytics algorithms that exploit our rich datasets for statistical analysis, prediction, clustering and machine learning.
Help continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.
Required Skills:
Advanced working knowledge of data engineering and experience with modern data practices, including Delta.io, CDC management and data load practices.
Experience building and operating highly available, distributed systems for the extraction, ingestion and processing of large datasets.
Experience working with distributed systems as they pertain to data storage and computing.
Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
Strong analytic skills related to working with unstructured datasets.
Ability to build processes supporting data transformation, data structures, metadata, dependency and workload management.
A successful history of manipulating, processing and extracting value from large, disconnected datasets.
Working knowledge of message queuing, stream processing and highly scalable big data stores.
Strong project management and organizational skills.
Experience supporting and working with cross-functional teams in a dynamic environment.
Experience in a Data Engineer or similar role.
Experience with big data tools is a must: Delta.io, PySpark, Kafka, etc.
Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
Experience with data pipeline and workflow management tools: Step Functions, Glue workflows, etc.
Experience with AWS cloud services: EC2, EMR, RDS and Redshift.
Required Qualifications / Training:
Relevant data warehouse and BI solution training is essential.
B.Sc. or related degree is advantageous.
5+ years' programming experience.
In order to comply with the POPI Act, we require your permission to maintain your personal details on our database for future career opportunities. By completing and returning this form, you give PBT your consent.