RESTful APIs. Good understanding of the Git version control system. Ability to write clean, well-documented code.
Experience with physics and reality modelling, 3D graphics, hardware control, motion feedback systems, audio simulation, or networking.
Support ETL processes, including data ingestion, transformation, validation, and integration, and effectively communicate data engineering processes and solutions. Bachelor's degree or higher. Experience in Performance Computing, Data Warehousing, and Big Data Processing. Strong experience working with various relational databases; streaming frameworks such as Kafka, NiFi, or Spark; and cloud-based big data processing environments like Amazon Redshift and Google BigQuery.
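The ETL stages named above (ingestion, transformation, validation, integration) can be sketched as a minimal Python pipeline. This is purely illustrative; the record fields, validation rules, and in-memory "warehouse" are assumptions, not anything specified by the posting:

```python
# Minimal ETL sketch: ingest -> transform -> validate -> integrate.
# All field names and rules below are illustrative assumptions.

def ingest(rows):
    """Ingest raw records (a list of dicts stands in for a real source)."""
    return list(rows)

def transform(rows):
    """Normalize fields, e.g. trim whitespace and lower-case emails."""
    return [{**r, "email": r.get("email", "").strip().lower()} for r in rows]

def validate(rows):
    """Keep only records that pass basic integrity checks."""
    return [r for r in rows
            if "@" in r.get("email", "") and r.get("id") is not None]

def integrate(rows, warehouse):
    """Upsert validated records into a target store keyed by id."""
    for r in rows:
        warehouse[r["id"]] = r
    return warehouse

warehouse = {}
raw = [
    {"id": 1, "email": " Alice@Example.COM "},
    {"id": 2, "email": "not-an-email"},        # dropped by validate()
    {"id": None, "email": "bob@example.com"},  # dropped by validate()
]
integrate(validate(transform(ingest(raw))), warehouse)
```

In a production setting each stage would be a separate job or stream processor (e.g. a Kafka consumer feeding Spark), but the stage boundaries and the validate-before-integrate ordering are the same.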
Ensure security, scalability, and data integrity within all processes and solutions. Apply established database design methodologies and specifications. Streamline data import and integration processes. Advise on areas and sources that could be improved. Ability to clearly document and execute processes relating to data setup and technical roadmaps.
Design and implement internal process improvements: automating manual processes, optimizing for delivery, and re-designing infrastructure for greater scalability.
Collaborate with business analysis teams to assist with determining requirements. Mentor junior team members.
Take ownership of projects throughout the complete process. Ability to communicate with stakeholders.