Responsible for building and maintaining scalable data pipelines and processing systems used to extract data. Responsibilities: Develop and maintain data pipelines, ETL processes, and data integration workflows for fault-tolerant data processing systems. Implement best practices for big data processing, storage, and analysis. Develop and maintain documentation on data processes, infrastructure, and workflows. Monitor the performance of mining-related systems and data. Requirements: Experience with ETL processes and tools. Strong programming skills.
Do you have a passion for building robust and scalable data processing systems? Join our dynamic team. Responsibilities: Work with cross-functional teams to architect, build, and enhance data processing pipelines. Utilize your expertise in SQL and Python to ensure efficient data flow and transformation. Design and implement ETL processes that extract, transform, and load data. Experience: Designing, building, and maintaining data processing systems, with a focus on scaling and performance. Strong proficiency in SQL and Python.
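The SQL/Python ETL work described above can be sketched in miniature. This is an illustrative sketch only: the sample rows, the `scores` table, and the in-memory SQLite target are assumptions for demonstration, not part of the role.

```python
import sqlite3

def extract():
    # Extract: in practice this would read from a source system or file.
    return [("alice", "42"), ("bob", "17"), ("carol", "")]

def transform(rows):
    # Transform: drop incomplete records and cast the score to an integer.
    return [(name, int(score)) for name, score in rows if score]

def load(rows, conn):
    # Load: write the cleaned rows into the target table.
    conn.execute("CREATE TABLE IF NOT EXISTS scores (name TEXT, score INTEGER)")
    conn.executemany("INSERT INTO scores VALUES (?, ?)", rows)
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    load(transform(extract()), conn)
    print(conn.execute("SELECT COUNT(*), SUM(score) FROM scores").fetchone())
    # -> (2, 59)
```

A production pipeline would swap each stage for real connectors (database reads, validation, a warehouse load), but the extract/transform/load separation stays the same.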
Own the end-to-end technical aspects of all data pipelines. Support ETL processes, including data ingestion and transformation. Document and effectively communicate data engineering processes and solutions. Requirements: Bachelor's degree or higher; background in High-Performance Computing, Data Warehousing, and Big Data Processing. Strong experience working with various relational databases and with Hadoop, Kafka, NiFi, or Spark, or cloud-based big data processing environments such as Amazon Redshift and Google BigQuery.
Client data. Training results reports. Data analysis, process analysis, project support. Profile: Honours degree. We offer: R neg.
Understand their database architecture, data flow, and data processes; the role involves managing these efficiently. Responsibilities: Design and develop scalable data pipelines and ETL processes to collect, process, and integrate data. Maintain data infrastructure, including database servers, data warehouses, and data processing frameworks. Troubleshoot and resolve data-related issues. Evaluate tools and frameworks to improve data infrastructure and processing capabilities. Document data pipelines. A focus on database management, data analysis, and producing ETL processes at scale is essential.
Report writing. Fuel consumption monitoring. Data processing. Minimum two years' experience with Microsoft.
Builds machine learning models and utilises distributed data processing and analysis methodologies.
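Distributed data processing of the kind mentioned above can be sketched, at small scale, with Python's multiprocessing: split the data into chunks, compute a partial statistic per worker, then combine the partials. The data and the chunk-mean statistic are illustrative assumptions; a production system would use a cluster framework such as Spark rather than local processes.

```python
from multiprocessing import Pool

def chunk_mean(chunk):
    # Each worker computes a partial result over its own chunk.
    return sum(chunk) / len(chunk), len(chunk)

def distributed_mean(data, n_chunks=4):
    # Split the data, process chunks in parallel, then combine partials
    # into the overall mean (a weighted average of chunk means).
    size = max(1, len(data) // n_chunks)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(processes=len(chunks)) as pool:
        partials = pool.map(chunk_mean, chunks)
    total = sum(mean * n for mean, n in partials)
    count = sum(n for _, n in partials)
    return total / count

if __name__ == "__main__":
    print(distributed_mean(list(range(1, 101))))  # -> 50.5
```

The split/partial/combine shape is the same map-reduce pattern the larger frameworks implement across machines.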
Understand business strategy and policies. Data analysis, process analysis; analytical. Profile: Honours / Masters. Willing to travel.
Purpose of role: financial data management and data processing. Create financial models. Management Information.