Design, create, and modify extract/transform/load (ETL) pipelines in Azure Data Factory, ensuring efficient data flow from source to destination.
Ensure data accuracy and integrity throughout the ETL processes via data validation, cleansing, deduplication, and error handling, so that ingested data is reliable and usable.
Monitor ETL processes and optimize pipelines for speed and efficiency, addressing bottlenecks and ensuring the ETL system can handle the volume, velocity, and variety of the data.
Participate in data modeling, designing the data structures and schemas in the data warehouse to optimize query performance and align with business needs.
Work closely with Business Controlling, data engineers, and IT teams to understand data requirements and deliver the data infrastructure that supports business goals.
Provide technical support for ETL systems, troubleshooting issues and ensuring the continuous availability and reliability of data flows.
Ensure proper documentation of data sources, ETL processes, and data architecture.
Requirements:
BA/BS or equivalent degree in Computer Science, Business Analytics, Information Systems, or a related field.
At least 4 years of database experience in an enterprise production environment.
Working experience with Azure cloud and Azure Data Factory is a must; experience with Oracle Cloud is a plus.
Demonstrable proficiency in PL/SQL and SQL scripting.
Experience in data ingestion, cleansing, and data modeling.
Hands-on database administration skills with Oracle and PostgreSQL are a plus.
Good analytical and troubleshooting skills.
Proactive, self-motivated, and able to work in a dynamic environment.