Description
Location: Remote-first (UK-based)
Rate: Up to £550 p/d
Contract: 6-12 months (Outside IR35)
Tech Stack: Python, FastAPI, GCP, Apache Spark, Apache Beam, Google Cloud Dataflow
Company Overview:
We're working with a forward-thinking consultancy that helps top companies build and scale high-performance data platforms. They take an engineering-first approach, and more than half of their team consists of hands-on engineers. If you love working with large-scale data processing and cutting-edge cloud technologies, this one’s for you.
Responsibilities:
- Building data pipelines and ETL workflows that process huge datasets
- Designing, optimising, and maintaining high-throughput reporting solutions
- Working with Apache Spark for large-scale data processing
- Using Apache Beam and Google Cloud Dataflow to manage complex data workflows
- Developing and improving backend APIs to support data-heavy applications
Requirements:
- Strong Python skills – writing clean, efficient, and scalable code
- Experience with BigQuery, PostgreSQL, and Elasticsearch
- Hands-on experience with Google Cloud, Kubernetes, and Terraform
- Deep understanding of Apache Spark for large-scale data processing
- Knowledge of Apache Beam and Google Cloud Dataflow for data pipeline orchestration
- A team-first mindset with strong communication skills
Additional Information:
This is a contract role outside IR35; you must be UK-based and operate through a UK-registered company. Interested? Click Apply or reach out to Ionut Roghina for more details!