Information Technology · Full-Time · Junior-level (1-2 yrs)
Job Description
About the role
You will be working as part of the Data Engineering team on automated data processing pipelines. The majority of your tasks will involve the development and maintenance of various ETL pipelines designed to make data available for internal and external consumers. This includes receiving data from partners, validating it, and ensuring it is ready for customer scoring processes. Your work will also support internal and external reporting, revenue assurance, and data archiving, all of which are essential for scaling the company's daily operations.
Key Responsibilities
Develop and maintain data processing pipelines.
Participate in the design, implementation, and release of data science initiatives.
Deliver high-quality, testable, secure, readable, and documented software.
Collaborate effectively in English with local and remote peers across different teams.
Assist in the evaluation of new technologies and initiatives.
Requirements
Minimum Qualifications and Experience
BA/BSc/HND degree.
At least 2 years of professional experience.
Working proficiency with Python or any other programming/scripting language.
Working proficiency with SQL or R.
Advanced working proficiency with any RDBMS, including deployment and performance optimization.
Experience with Git, automated testing, and CI/CD.
Familiarity with the Linux environment as both a user and a developer (Debian experience is a plus).
Ideal Qualifications
Deep understanding of the PostgreSQL database system.
Experience with software development best practices, system monitoring, and issue debugging.
Experience with scaling systems and with continuous integration.