Big Data Engineer



About Our Client

The Client is a large organization within the utility industry.

Job Description

Support and maintain existing ETL processes that ingest data from various sources (such as relational databases, APIs, SFTP sites, etc.), transform it, and load it into S3 or Redshift.

Work closely with internal teams to capture and analyze data and process requirements, and implement them as needed. Engage with third-party vendors to implement and integrate new solutions. Design and implement a uniform, consistent ETL framework and migrate existing ETL processes onto this platform.

Build data models, such as a single customer view, to facilitate descriptive and predictive analytics. Design and develop a centralized monitoring tool for infrastructure performance and processes, capable of raising alarms and notifications and executing automated actions. Support the AWS infrastructure and continuously improve its reliability and availability while optimizing cost and performance.

Assess and implement new technologies and processes.

The Successful Applicant

3 years' experience in a similar role. Proficient in Python.

Experience programming in Java. Experience writing and tuning complex SQL queries. Experience working with relational databases (such as Oracle, MS SQL, MySQL).

Ability to design and develop ETL workflows. Knowledge of the complexities and challenges of implementing ETL and data flow processes for Big Data. Excellent understanding of and extensive experience with AWS services (such as S3, EC2, CloudWatch, Kinesis, Redshift, Athena …). Experience with the Linux operating system and Linux shell scripting.

What's on Offer

Huge growth opportunity. Work for a large, reputable organization.



20 Oct 2019

