Job Overview
The Data and AI teams at TE are part of the TE Information Solutions (TEIS) Organization and Corporate Technology and are responsible for driving organic growth by leveraging Data and AI. We are on an exciting journey to build and scale our Data and Analytics (D&A) practice.
Job Responsibilities
- Lead conversations with business and functional teams to understand their processes, tools, and data, and define and develop appropriate data acquisition methodologies for analytics projects.
- Analyze data requirements, complex source data, and data models, and determine the best methods for extracting, transforming, and loading data into staging, data warehouse, and other system integration projects.
- Serve as the key anchor for data extraction, preparation, and hosting processes, representing the engineering teams, and act as the data expert for engineering datasets used by data scientists across various use cases.
- Create master data files from disparate data sources by building data pipelines. Develop and test architecture for data extraction.
- Perform functional and stress testing on data models and ETL jobs to ensure that the design is optimized for performance, scale, and reusability.
- Develop and deploy ETL job workflows with reliable error/exception handling and rollback.
- Develop and maintain database solutions, optimizing storage and retrieval for high-performance machine learning applications.
- Manage foundational data administration tasks such as scheduling jobs, troubleshooting job errors, identifying issues with job windows, assisting with database backups and performance tuning.
- Support the full-stack development team with integration of machine learning models into the web applications.
- Document test case results and implementation steps.
- Ensure data quality throughout the entire ETL process, including audits and feedback loops to sources of truth.
- Create or update technical documentation for transition to support teams.
Job Requirements
Education/Experience
- B.S./M.S. in Computer Science, Computer Engineering, or a related field.
- 5+ years of experience building data models on Snowflake, Redshift, HANA, Teradata, Exasol, etc.
- 5+ years of experience working with ETL tools and technologies.
- 3+ years of experience working in a cloud environment, AWS preferred.
- 5+ years of experience in a programming language (Python, R, Scala, Java).
- Proficiency with modern software development practices such as Agile, source control, CI/CD, and project management and issue tracking with JIRA.
- Basic knowledge of supervised and/or unsupervised learning, feature engineering, generative AI models, and model training and deployment is a plus but not required.
We Value
- AWS Cloud certification is strongly preferred; willingness to obtain one after joining the team is required.
- Experience in working with engineering teams
- Strong interpersonal and communication skills
- Visualization tool experience, especially with Tableau
- Experience in working with data scientists
- Ability to work independently in a fast-paced environment
- Willingness to flex daily work schedule to accommodate time-zone differences for global team communications.
- Ability to manage multiple projects and work streams at one time and deliver results against project deadlines.
- Strong problem-solving capabilities; results-oriented; relies on fact-based logic for decision-making.