Basic experience/understanding of AWS Components (in order of importance):
Glue
CloudWatch
SNS
Athena
S3
Kinesis (Kinesis Data Streams, Kinesis Data Firehose)
Lambda
DynamoDB
Step Functions
Parameter Store
Secrets Manager
CodeBuild/CodePipeline
CloudFormation
Business Intelligence (BI) Experience
Technical data modelling and schema design (not drag-and-drop)
Kafka
AWS EMR
Redshift
Basic experience in Networking and troubleshooting network issues.
Knowledge of the Agile Working Model.
ADVANTAGEOUS SKILLS
Demonstrated expertise in data modelling and Oracle SQL.
Exceptional analytical skills for analysing large, complex data sets.
Ability to perform thorough testing and data validation to ensure the accuracy of data transformations.
Strong written and verbal communication skills, with precise documentation.
Self-driven team player with the ability to work independently and multitask.
Experience working with enterprise collaboration tools such as Confluence and JIRA.
Experience developing technical documentation and artefacts.
Knowledge of data formats such as Parquet, Avro, JSON, XML, and CSV.
Experience working with Data Quality Tools such as Great Expectations.
Experience developing and working with REST APIs is a bonus.
Experience building data pipelines using AWS Glue, AWS Data Pipeline, or similar platforms.
Familiarity with data stores such as AWS S3, and AWS RDS or DynamoDB.
Solid understanding of, and experience with, various software design patterns.
Experience preparing specifications from which programs are designed, coded, tested, and debugged.
Strong organizational skills.
ROLE:
Data Engineers are responsible for building and maintaining big data pipelines using GROUP Data Platforms.
Data Engineers are custodians of data and must ensure that data is shared in line with the information classification requirements on a need-to-know basis.