We are pleased to announce the following position in the Digital IT Department within the Technology Division. In keeping with our current business needs, we are looking for a person who meets the criteria indicated below.
Reporting to the Manager – Data Engineering, the position holder will be responsible for software design and development, testing, troubleshooting, and third-line support, as well as research and development.
Job Responsibilities
- Perform the technical aspects of big data development for assigned applications, including design, prototype development, and coding assignments.
- Build analytics software, using consistent development practices, to deliver data to end users for exploration, advanced analytics, and visualizations supporting day-to-day business reporting.
- Plan and deliver highly scalable distributed big data systems using open source technologies including, but not limited to, Apache Kafka, Apache NiFi, HBase, Cassandra, Hive, MongoDB, PostgreSQL, and Redis.
- Code, test, and document scripts for managing different data pipelines and the big data cluster.
- Receive escalated, technically complex, mission-critical issues, and maintain ownership of each issue until it is completely resolved.
- Troubleshoot incidents hands-on: formulate theories, test hypotheses, and narrow down possibilities to find the root cause.
- Develop tools and scripts to automate troubleshooting activities.
- Drive further improvements in the big data platform, tooling and processes.
- Upgrade products/services and apply patches as necessary.
- Maintain backups of, and restore, the ETL and reports repositories, as well as other system binaries and source code.
- Build tools for yourself and others to increase efficiency and to make hard or repetitive tasks easy and quick.
- Develop machine learning algorithms and libraries for problem solving and AI operations.
- Research and provide input on design approach, performance and base functionality improvements for various software applications.
Qualifications
- Highly proficient in more than one modern language, e.g. Java/C#/Node.js/Python/Scala.
- Experience with relational data stores as well as one or more NoSQL data stores (e.g., MongoDB, Cassandra).
- Experience building stream-processing systems using solutions such as Storm or Spark Streaming.
- Demonstrated proficiency with data structures, algorithms, distributed computing, and ETL systems.
- Experience with various messaging systems, such as Kafka or RabbitMQ.
- Good knowledge of and experience with big data frameworks such as Apache Hive and Spark.
- Working knowledge of and experience with SQL scripting.
- Experience in deploying and managing Machine Learning models at scale is an added advantage.
- Hands-on implementation and delivery of Apache Spark workloads in an Agile working environment is an added advantage.
How To Apply
If you feel that you are up to the challenge and possess the necessary qualifications and experience, kindly proceed to https://www.safaricom.co.ke/careers/, search for the role, and apply. Remember to update your candidate profile on the recruitment portal before clicking the Apply button, and to attach your resume.
Deadline for application: 24th December 2020