Salesforce Data Engineer - FSC

Posted 21 January 2026
Salary 65-80/h
Location Meridian
Job type Contract
Discipline Enterprise Applications (SAP/Salesforce/MS Dynamics)
Reference 75962
Remote working Remote

Job description

We are hiring a Salesforce Data Engineer for one of our consulting clients.

This is a remote role (candidates can be based in the US or Canada) working EST hours.

Initially, the contract will be 20 hours/week for 3-5 months.

Candidates must have experience with Salesforce FSC (Financial Services Cloud) data models, Python, and data migration for Salesforce projects.


Overview
The Data Engineer builds and maintains the systems that power AI, analytics, and data-driven decision making. This role focuses on creating and orchestrating efficient data pipelines, organizing data for scale, and ensuring data is clean, secure, and ready for use across the business. The engineer enables clients to unlock insights, optimize performance, reduce waste, and move with confidence. The work supports BI, Operations, System Integrations, and AI practices by ensuring high-quality data is consistently available.

Primary Responsibilities
  • Design, build, and maintain data pipelines that support efficient collection, ingestion, storage, and processing
  • Implement modern data architectures such as data lakes, data warehouses, lakehouses, and data mesh platforms
  • Develop streaming data flows for near-real-time and low-latency use cases
  • Clean and prepare data to support analytics, reporting, and AI model readiness
  • Improve performance and reliability across data systems
  • Apply data governance and security best practices to safeguard customer information
  • Partner with technical and business teams to understand requirements and deliver effective solutions
  • Identify opportunities to streamline operations and reduce cost through smarter data design
  • Monitor and resolve issues to maintain dependable, resilient data operations
Required Qualifications
  • Experience building and maintaining data pipelines and scalable ETL/ELT frameworks
  • Strong foundation in relational and non-relational data systems
  • Strong data modeling skills
  • Working knowledge of data lake, data warehouse, and lakehouse patterns
  • Hands-on experience with both batch and streaming data pipelines
  • Proficiency in SQL, Python, and modern data engineering tools and libraries such as Pandas (see the sketch after this list)
  • Ability to design structured, scalable solutions for analytics and AI preparation
  • Familiarity with cloud platforms and distributed processing frameworks
  • Clear, concise communication skills
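
As a flavor of the migration-prep work described above, here is a minimal sketch of a Pandas cleanup step. It assumes a hypothetical CSV extract of FSC FinancialAccount records with invented column names; a real engagement would follow the client's actual FSC data model and loader tooling.

import pandas as pd

# Hypothetical extract of Salesforce FSC FinancialAccount records;
# all column names here are invented for illustration.
df = pd.read_csv("financial_accounts_extract.csv", dtype=str)

# Normalize text fields so duplicate detection is reliable.
df["Email__c"] = df["Email__c"].str.strip().str.lower()
df["Name"] = df["Name"].str.strip()

# Drop rows missing the external ID used to match records in the target org.
df = df.dropna(subset=["External_Id__c"])

# Deduplicate on the external ID, keeping the most recently modified row.
df["LastModifiedDate"] = pd.to_datetime(df["LastModifiedDate"], errors="coerce")
df = df.sort_values("LastModifiedDate").drop_duplicates(
    subset="External_Id__c", keep="last"
)

# Write a clean file ready for a bulk loader (e.g., Salesforce Data Loader).
df.to_csv("financial_accounts_clean.csv", index=False)
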
Preferred Qualifications
  • Experience with Databricks, Snowflake, Microsoft Synapse, Fabric, AWS Glue, DMS, or similar data platforms and technologies
  • Experience with open-source data platforms and tools such as Apache Spark, Airflow, Delta Lake, or Iceberg
  • Background supporting data migrations, API integrations, and machine learning or AI data requirements
  • Understanding of data governance, lineage, and secure data practices
  • Exposure to a data product mindset and domain oriented or data mesh approaches
Reach out to learn more!