SnapLogic Developer

Experience: 4 years

Posted: 3 days ago | Platform: LinkedIn

Work Mode: Remote

Job Type: Full Time

Job Description

Note: Looking for Immediate Joiners

Job Title: Data Engineering Specialist (Streaming & Integration)

Department: Data & Analytics

Reports To: Data Engineering Manager

Location: Remote (Global)

Employment Type: Full-Time

Overview:

We are seeking a highly skilled Data Engineering Specialist with deep expertise in real-time data streaming and integration platforms to join our growing data team. The ideal candidate will have hands-on experience with SnapLogic and Confluent Kafka, and will be responsible for designing, building, and maintaining robust, scalable data pipelines that enable real-time analytics, operational intelligence, and seamless integration across enterprise systems.

Key Responsibilities:

  • Design, develop, and maintain high-throughput, low-latency data pipelines using SnapLogic and Confluent Kafka (a minimal producer/consumer sketch follows this list).
  • Architect and implement event-driven systems using Kafka for real-time data ingestion, processing, and distribution across microservices and downstream analytics platforms.
  • Configure and manage SnapLogic integration workflows for secure, reliable, and automated data movement between SaaS, on-premise, and cloud applications.
  • Collaborate with data scientists, analysts, and application teams to understand data needs and deliver scalable integration solutions.
  • Optimize Kafka cluster performance, monitor stream health, and ensure data durability, consistency, and fault tolerance.
  • Implement data quality checks, schema evolution strategies, and observability using tools like Confluent Control Center, Grafana, and Prometheus.
  • Ensure security and compliance in data flows through encryption, access control, and audit logging.
  • Participate in agile ceremonies and contribute to technical documentation, release planning, and CI/CD practices.
  • Stay current with evolving trends in streaming data, integration platforms, and cloud-native data architectures.
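
As an illustration of the day-to-day work described above, here is a minimal produce/consume sketch in Python using the confluent-kafka client. The broker address, topic name, message payload, and consumer group are illustrative assumptions, not details of our stack.

    # Minimal Kafka produce/consume sketch (confluent-kafka client).
    # Broker, topic, and group id below are illustrative placeholders.
    from confluent_kafka import Producer, Consumer

    BROKER = "localhost:9092"    # assumed local broker
    TOPIC = "orders.events"      # hypothetical topic name

    def delivery_report(err, msg):
        # Invoked once per message to confirm delivery or surface an error.
        if err is not None:
            print(f"Delivery failed: {err}")
        else:
            print(f"Delivered to {msg.topic()} [partition {msg.partition()}]")

    producer = Producer({"bootstrap.servers": BROKER})
    producer.produce(TOPIC, key="order-42", value='{"status": "created"}',
                     callback=delivery_report)
    producer.flush()  # block until all queued messages are delivered

    consumer = Consumer({
        "bootstrap.servers": BROKER,
        "group.id": "analytics-consumers",  # hypothetical consumer group
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe([TOPIC])
    msg = consumer.poll(5.0)  # wait up to 5 seconds for one message
    if msg is not None and msg.error() is None:
        print(f"Consumed: {msg.value().decode('utf-8')}")
    consumer.close()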

Required Qualifications:

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
  • 4+ years of professional experience in data engineering with a focus on streaming data and integration.
  • Proven experience with Confluent Kafka: building producers/consumers, managing topics, handling partitioning, replication, and stream processing using Kafka Streams or KSQL (a topic-administration sketch follows this list).
  • Extensive hands-on experience with SnapLogic, including building, testing, and deploying integrations using the SnapLogic Integration Cloud.
  • Strong understanding of data modeling, ETL/ELT processes, and data pipeline orchestration.
  • Experience with cloud platforms (AWS, Azure, or GCP) and containerized environments (Docker, Kubernetes).
  • Proficiency in scripting languages (Python, Bash) and familiarity with infrastructure as code (Terraform, CloudFormation).
  • Knowledge of data security, governance, and compliance standards (e.g., GDPR, SOC 2).
  • Excellent communication skills and ability to work in a collaborative, remote-first environment.
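
To make the topic-management requirement concrete, here is a minimal topic-creation sketch in Python using confluent-kafka's AdminClient. The broker address, topic name, and partition/replica counts are illustrative assumptions.

    # Minimal topic-administration sketch (confluent-kafka AdminClient).
    # Broker, topic name, and counts below are illustrative placeholders.
    from confluent_kafka.admin import AdminClient, NewTopic

    admin = AdminClient({"bootstrap.servers": "localhost:9092"})  # assumed broker

    # Six partitions for consumer parallelism, three replicas for fault
    # tolerance (replication_factor cannot exceed the broker count).
    topic = NewTopic("orders.events", num_partitions=6, replication_factor=3)

    futures = admin.create_topics([topic])
    for name, future in futures.items():
        try:
            future.result()  # raises if creation failed (e.g., topic exists)
            print(f"Created topic {name}")
        except Exception as exc:
            print(f"Failed to create {name}: {exc}")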

