Data Sourcing Engineering Senior Specialist, Telstra
Employment Type: Permanent
Closing Date: 24 May 2021, 11:59pm
Job Title: Data Sourcing Engineering Senior Specialist

Job Description

We're Australia's leading telecommunications and technology company. With a presence in more than 20 countries, we're creating a global footprint, which for you means incredible work opportunities and experiences to develop and grow your career.

Our Networks & IT team

With our world-class network covering the Australian population and connecting businesses internationally, you'll have exposure to exciting innovations in the IT industry, including cloud computing, IoT, and virtualisation.
As technologies advance, so will your career, which means an agile approach will be critical to your success.

The role with us

As a key member of the Data Sourcing chapter, you will work on a variety of problems related to ingesting large volumes of data at high speed from a wealth of different sources, including but not limited to: network elements, log files, telemetry, and application instrumentation via Prometheus. Utilizing a varied toolset consisting of Python, Scala, PHP, Perl, Elixir, and Erlang, you will be responsible for maintaining and supporting a host of both internally and externally hosted data collection applications within a medium-sized team distributed across Australia.
All software is deployed to a combination of on-premises servers and cloud platforms (AWS and Azure). To be successful in this role you will need to understand both batch and streaming technologies, with an in-depth understanding of data manipulation using data frames, Parquet, Protobufs, and JSON. Previous experience with platforms such as Kafka, Hadoop, NiFi, and Spark will be advantageous but is not strictly required.
You will need detailed experience developing software in a Linux-based environment with one or more of the above languages, using best practices such as unit and property testing, along with extensive skills in using and optimizing relational databases such as MySQL or PostgreSQL. We utilize tools such as Docker, Kubernetes, NixOS, Terraform, and Puppet for automation, and use Grafana, Influx, and VictoriaMetrics in concert with Prometheus Alert Manager and Nagios for alarming, instrumentation, and dashboards. Practicing a true DevOps culture, you will be responsible for maintaining CI/CD pipelines, using tools such as Git, Bitbucket, Bamboo, and Jenkins.
You will need strong skills in software development, as you will be challenged by large-scale, custom-built data pipelines that must operate at maximum scale to achieve the desired throughput. A background in distributed system design, event streaming, and ETL is therefore required.

Day-to-day duties include, but are not limited to:

- Participating in agile scrum ceremonies such as sprint planning, backlog grooming, estimation, and review
- Supporting new development activities and improving application performance
- Ensuring technical debt is managed in line with group-level OKRs and Tels....