Description
Data Engineering Experience:
- Databricks: Expertise in managing and scaling Databricks environments for ETL, data science, and analytics use cases.
- AWS Cloud: Extensive experience with AWS services such as S3, Glue, Lambda, RDS, and IAM.
- IBM Skills: DB2, DataStage, Tivoli Workload Scheduler, UrbanCode.
- Programming Languages: Proficiency in Python and SQL.
- Data Warehousing & ETL: Experience with modern ETL frameworks and data warehousing techniques.
- DevOps & CI/CD: Familiarity with DevOps practices for data engineering, including infrastructure-as-code (e.g., Terraform, CloudFormation), CI/CD pipelines, and monitoring (e.g., CloudWatch, Datadog).
- Big Data: Familiarity with big data technologies such as Apache Spark and Hadoop.
- Test Automation: Experience with test automation.
- ETL/ELT: Experience with ETL/ELT tools and with creating common data sets across on-premises (IBM DataStage) and cloud data stores.
Leadership & Strategy: