Staff Software Engineer

Experience

8 years


Posted: 6 days ago | Platform: LinkedIn


Work Mode

On-site

Job Type

Full Time

Job Description

Staff Software Engineer, Data Ingestion

The Staff Software Engineer, Data Ingestion will be a critical individual contributor responsible for designing, developing, and maintaining robust and scalable data pipelines. This role is at the heart of our data ecosystem, delivering new analytical software solutions that provide access to timely, accurate, and complete data for insights, products, and operational efficiency.

Key Responsibilities:
- Design, develop, and maintain high-performance, fault-tolerant data ingestion pipelines using Python.
- Integrate with diverse data sources (databases, APIs, streaming platforms, cloud storage, etc.).
- Implement data transformation and cleansing logic during ingestion to ensure data quality.
- Monitor and troubleshoot data ingestion pipelines, identifying and resolving issues promptly.
- Collaborate with database engineers to optimize data models for fast consumption.
- Evaluate and propose new technologies or frameworks to improve ingestion efficiency and reliability.
- Develop and implement self-healing mechanisms for data pipelines to ensure continuity.
- Define and uphold SLAs and SLOs for data freshness, completeness, and availability.
- Participate in the on-call rotation as needed for critical data pipeline issues.

Key Skills:
- 8+ years of experience, ideally with an engineering background, working in software product companies.
- Extensive Python Expertise: Extensive experience developing robust, production-grade applications in Python.
- Data Collection & Integration: Proven experience collecting data from various sources (REST APIs, OAuth, GraphQL, Kafka, S3, SFTP, etc.).
- Distributed Systems & Scalability: Strong understanding of distributed systems concepts, designing for scale, performance optimization, and fault tolerance.
- Cloud Platforms: Experience with major cloud providers (AWS or GCP) and their data-related services (e.g., S3, EC2, Lambda, SQS, Kafka, Cloud Storage, GKE).
- Database Fundamentals: Solid understanding of relational databases (SQL, schema design, indexing, query optimization). OLAP database experience (e.g., Hadoop) is a plus.
- Monitoring & Alerting: Experience with monitoring tools (e.g., Prometheus, Grafana) and setting up effective alerts.
- Version Control: Proficiency with Git.
- Containerization (Plus): Experience with Docker and Kubernetes.
- Streaming Technologies (Plus): Experience with real-time data processing using Kafka, Flink, or Spark Streaming.
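For candidates unsure what "fault-tolerant data ingestion in Python" means in practice, the sketch below illustrates the general pattern at a very small scale: pull records from a REST source with retries, apply a cleansing transform, and write to a sink. It is a minimal illustration only; the endpoint URL, function names, and retry settings are hypothetical and not part of this posting or any specific company stack.

import json
import time

import requests

SOURCE_URL = "https://api.example.com/v1/records"  # hypothetical endpoint
MAX_RETRIES = 3
BACKOFF_SECONDS = 2.0


def fetch_records(url: str) -> list[dict]:
    """Fetch one page of records, retrying transient failures with backoff."""
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            resp = requests.get(url, timeout=10)
            resp.raise_for_status()
            return resp.json()  # assumes the endpoint returns a JSON list of objects
        except requests.RequestException:
            if attempt == MAX_RETRIES:
                raise  # give up; scheduler/alerting is expected to handle repeated failure
            time.sleep(BACKOFF_SECONDS * attempt)  # simple linear backoff between retries
    return []


def clean(record: dict) -> dict:
    """Example cleansing step: drop null fields and normalise key names."""
    return {k.strip().lower(): v for k, v in record.items() if v is not None}


def ingest(url: str = SOURCE_URL, sink_path: str = "records.jsonl") -> int:
    """Run one ingestion pass and return the number of records written."""
    records = [clean(r) for r in fetch_records(url)]
    with open(sink_path, "a", encoding="utf-8") as sink:
        for record in records:
            sink.write(json.dumps(record) + "\n")
    return len(records)


if __name__ == "__main__":
    print(f"Ingested {ingest()} records")

In a production pipeline of the kind described above, the local file sink would typically be replaced by cloud storage or a message queue, and the retry/backoff logic by the pipeline framework's own mechanisms.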
