Job description
DATA ENGINEER (Spark, Kafka)
Start: ASAP
Duration: 6 months initially
Location: Hybrid, once per week in Windsor
Rate: inside IR35, paying up to £510 per day.
Responsibilities:
- Design, implement, and manage Kafka-based data pipelines, supporting real-time data processing
- Configure, deploy, and maintain Kafka clusters, ensuring high availability and scalability
- Monitor Kafka performance and troubleshoot issues to minimize downtime and ensure uninterrupted data flow
- Develop and maintain Kafka connectors (e.g. JDBC, MongoDB, and S3 connectors), along with topics and schemas, to streamline data ingestion from source systems
- Implement security measures to protect Kafka clusters and data streams
Skills required:
- Proven ability to design, build, and maintain reliable, scalable data pipelines, including data integration and data security
- Strong knowledge of data engineering tools and technologies (SQL, ETL, data warehousing)
- Experience with tools such as Azure Data Factory (ADF), Apache Kafka, and Apache Spark SQL
- Programming experience in Python, including PySpark.