Job Summary

You will help build and maintain scalable data pipelines and related systems in a research-focused environment. You will be responsible for designing, developing, testing, and deploying data solutions that meet business requirements and align with the scientific goals. You will collaborate with research scientists, internal IT, and other stakeholders to ensure data quality, reliability, accessibility, security, and governance, as follows:

- Design, develop, and maintain the end-to-end technical aspects of all data pipelines required to support the research scientists and data managers
- Support ETL processes, including data ingestion, transformation, validation, and integration, using various tools and frameworks
- Optimize data performance, scalability, and security
- Provide technical guidance and support to data analysts and research scientists
- Design data integrations and data quality frameworks
- Collaborate with the rest of the IT department to help develop the strategy for the long-term scientific Big Data platform architecture
- Document and effectively communicate data engineering processes and solutions
- Make use of, and help define, the cutting-edge technology, processes, and tools needed to drive technology within the science and research data management departments

Minimum Qualifications:

- Bachelor's degree or higher in Computer Science, IT, Engineering, Mathematics, or a related field
- An industry-recognized IT certification or technology qualification, such as database and data-related certifications
This is a technical role, so there is a strong focus on technical skills and experience.

Minimum Experience:

- 7+ years' experience in Data Engineering, High Performance Computing, Data Warehousing, or Big Data Processing
- Strong experience with high performance computing environments, including Unix, Docker, Kubernetes, Hadoop, Kafka, NiFi, or Spark, or with cloud-based big data processing environments such as Amazon Redshift, Google BigQuery, and Azure Synapse Analytics
- At least 5 years of advanced experience and very strong proficiency in UNIX, Linux, and Windows
- Knowledge of various data-related programming, scripting, or data engineering tools such as Python, R, Julia, T-SQL, PowerShell, etc.

Knowledge and Abilities:

- Strong experience working with various relational database technologies such as MS SQL, MySQL, and PostgreSQL, as well as NoSQL databases such as MongoDB, Cassandra, etc.
- Experience with Big Data technologies such as Hadoop, Spark, and Hive
- Experience with data pipeline tools such as Airflow, Spark, Kafka, or Dataflow
- Experience working with containerization is advantageous
- Experience with data quality and testing tools such as Great Expectations, dbt, or DataGrip is advantageous
- Experience working with cloud-based Big Data technologies (AWS, Azure, etc.) is advantageous
- Experience with data warehouse and data lake technologies such as BigQuery, Redshift, or Snowflake is advantageous
- Strong experience designing end-to-end data pipelines
- Strong knowledge of data modeling, architecture, and governance principles
- Strong Linux administration skills
- Programming skills in various languages advantageous
- Strong data security and compliance experience
- Excellent communication, collaboration, and problem-solving skills
- Ability to work independently and as part of a cross-functional team
- Interest and enthusiasm for medical scientific research and its applications
SUMMARY:

There is a large emphasis on the technical element of this role (experience working with and designing hardware clusters from the ground up). The IT department already has a range of technical skills and experience, but they are looking for someone very strong, with the relevant past working experience, to help design these clusters and understand the technology in play for advanced scientific computational requirements. This goes beyond the hardware element, though: you will also need strong Linux, data pipeline, and coding/scripting skills. Although this is not a developer role, you should be comfortable with a certain level of coding/scripting and data analysis packages, along with strong virtualization skills (VMware, Hyper-V, OpenStack, KVM, etc.). You will therefore be the technical link with the scientists and will help build platforms (hardware, software, cloud, etc.) and then also manage the data and data pipelines on these systems. This includes the compliance, performance, and security of the data.

Carlysle Human Capital
Recruiter