Jobs
Interviews

319 Data Ingestion Jobs - Page 5

Set up a Job Alert
JobPe aggregates listings so they are easy to find, but you apply directly on the original job portal.

4.0 - 7.0 years

9 - 13 Lacs

Bengaluru

Work from Office

Senior Data Engineer with a deep focus on data quality, validation frameworks, and reliability engineering. This role will be instrumental in ensuring the accuracy, integrity, and trustworthiness of data assets across our cloud-native infrastructure. The ideal candidate combines expert-level Python programming with practical experience in data pipeline engineering, API integration, and managing cloud-native workloads on AWS and Kubernetes.
Roles and Responsibilities:
- Design, develop, and deploy automated data validation and quality frameworks using Python.
- Build scalable and fault-tolerant data pipelines that support quality checks across data ingestion, transformation, and delivery.
- Integrate with REST APIs to validate and enrich datasets across distributed systems.
- Deploy and manage validation workflows using AWS services (EKS, EMR, EC2) and Kubernetes clusters.
- Collaborate with data engineers, analysts, and DevOps to embed quality checks into CI/CD and ETL pipelines.
- Develop monitoring and alerting systems for real-time detection of data anomalies and inconsistencies.
- Write clean, modular, and reusable Python code for automated testing, validation, and reporting.
- Lead root cause analysis for data quality incidents and design long-term solutions.
- Maintain detailed technical documentation of data validation strategies, test cases, and architecture.
- Promote data quality best practices and evangelize a culture of data reliability within the engineering teams.
Required Skills:
- Experience with data quality platforms such as Great Expectations, Collibra Data Quality, or similar tools.
- Proficiency in Docker and container lifecycle management.
- Familiarity with serverless compute environments (e.g., AWS Lambda, Azure Functions), Python, and PySpark.
- Relevant certifications in AWS, Kubernetes, or data quality technologies.
- Prior experience working in big data ecosystems and real-time data environments.
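For context, a minimal sketch of the kind of automated validation check this listing describes, using the classic Great Expectations API. The dataset, column names, and expectations are hypothetical, and API details vary across Great Expectations versions; this is an illustration, not the employer's framework.

```python
import pandas as pd
import great_expectations as ge  # pip install great_expectations (classic API)

# Pretend this frame was produced by an ingestion pipeline (hypothetical data)
orders = pd.DataFrame({
    "order_id": [1, 2, 3],
    "amount": [120.5, 99.0, 250.0],
    "currency": ["INR", "INR", "USD"],
})

dataset = ge.from_pandas(orders)

# Declarative expectations that could run inside a CI/CD or ETL step
dataset.expect_column_values_to_not_be_null("order_id")
dataset.expect_column_values_to_be_between("amount", min_value=0)
dataset.expect_column_values_to_be_in_set("currency", ["INR", "USD", "EUR"])

results = dataset.validate()
print("Validation passed:", results["success"])
```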

Posted 3 weeks ago

Apply

5.0 - 10.0 years

5 - 9 Lacs

Bengaluru

Work from Office

The Data Engineer is responsible for designing, architecting, and implementing robust, scalable, and maintainable data pipelines. The candidate will work directly with upstream stakeholders (application owners, data providers) and downstream stakeholders (data consumers, data analysts, data scientists) to define data pipeline requirements and implement solutions that serve downstream needs through APIs and materialized views. Day to day, the candidate works in conjunction with the Data Analysts on the aggregation and preparation of data, and interacts with security, continuity, and IT architecture teams to validate the designs and developments of IT assets. The role also involves working with BNP Paribas international teams.
Direct Responsibilities:
- Work on all stages from data ingestion to analytics, encompassing integration, transformation, warehousing, and maintenance.
- Design the architecture, orchestrate, deploy, and monitor reliable data processing systems.
- Implement batch and streaming data pipelines to ingest data into the data warehouse.
- Perform supporting activities (data architecture, data management, DataOps, security).
- Perform data transformation and modeling to convert data from OLTP to OLAP, speeding up data querying and aligning with business needs.
- Serve downstream stakeholders across the organization, whose improved access to standardized data will make them more effective at delivering use cases, building dashboards, and guiding decisions.
Technical Competencies:
- Mastery of data engineering fundamentals (data warehouse, data lake, data lakehouse).
- Mastery of Golang, Bash, SQL, and Python.
- Mastery of HTTP and REST API best practices.
- Mastery of batch and streaming data pipelines using Kafka.
- Mastery of code versioning with Git and best practices for continuous integration and delivery (CI/CD).
- Mastery of writing clean, tested code following software engineering best practices (readable, modular, reusable, extensible).
- Mastery of data modeling (3NF, Kimball, Vault).
- Knowledge of data orchestration using Airflow or Dagster.
- Knowledge of self-hosting and managing tools such as Metabase and DBT.
- Knowledge of cloud principles and infrastructure management (IAM, logging, Terraform, Ansible).
- Knowledge of data abstraction layers (object storage, relational, NoSQL, document, Trino, and graph databases).
- Knowledge of containerization and workload orchestration (Docker, Kubernetes, Artifactory).
- Background in working in an agile environment (knowledge of the methods and their limits).
Behavioural Skills:
- Communication skills - oral & written
- Attention to detail / rigor
- Adaptability
- Ability to synthesize / simplify
Transversal Skills:
- Ability to develop and adapt a process
- Ability to understand, explain and support change
- Analytical ability
- Ability to set up relevant performance indicators
- Ability to anticipate business / strategic evolution
Education Level: Bachelor's degree or equivalent
Experience Level: At least 5 years
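As an illustration of the streaming ingestion duties mentioned above, a small sketch of a Kafka consumer that lands events in a warehouse staging step. The topic, broker, and loader function are hypothetical, and error handling is omitted; this is not the employer's actual pipeline.

```python
import json
from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "orders.events",                       # hypothetical topic
    bootstrap_servers="localhost:9092",
    group_id="warehouse-ingestion",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

def load_to_staging(record: dict) -> None:
    """Placeholder for an INSERT into a warehouse staging table or an API call."""
    print("ingesting", record)

# Consume events continuously and hand each one to the loader
for message in consumer:
    load_to_staging(message.value)
```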

Posted 3 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

As an Ingestion Engineer at Saxon Global, you will be responsible for designing, developing, and optimizing data ingestion pipelines to integrate multiple sources into Databricks. Your expertise in CI/CD and Kubernetes will be crucial in implementing and maintaining efficient data workflows. Collaboration with Data Engineers and stakeholders is essential to streamline data ingestion strategies and ensure data integrity, security, and compliance throughout the process. Key Responsibilities: - Design, develop, and optimize data ingestion pipelines for integrating multiple sources into Databricks. - Implement and maintain CI/CD pipelines for data workflows. - Deploy and manage containerized applications using Kubernetes. - Collaborate with Data Engineers and stakeholders to streamline data ingestion strategies. - Troubleshoot and optimize ingestion pipelines for performance and scalability. Required Skills & Qualifications: - Proven experience in data ingestion and pipeline development. - Hands-on experience with CI/CD tools such as GitHub Actions, Jenkins, Azure DevOps, etc. - Strong knowledge of Kubernetes and container orchestration. - Experience with Databricks, Spark, and data lake architectures. - Proficiency in Python, Scala, or SQL for data processing. - Familiarity with cloud platforms like AWS, Azure, or GCP. - Strong problem-solving and analytical skills. Preferred Qualifications: - Experience with Infrastructure as Code tools like Terraform, Helm, etc. - Background in streaming data ingestion technologies such as Kafka, Kinesis, etc. - Knowledge of data governance and security best practices.
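For illustration only, a minimal sketch of incremental file ingestion into Databricks using Auto Loader, writing to a Delta table. Paths and table names are hypothetical, and it assumes the code runs in a Databricks notebook or job where a `spark` session is already provided.

```python
# Hypothetical landing path and checkpoint location
raw_path = "abfss://landing@account.dfs.core.windows.net/orders/"
checkpoint = "abfss://landing@account.dfs.core.windows.net/_checkpoints/orders/"

# Auto Loader discovers new files incrementally and infers/tracks the schema
stream = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", checkpoint)
    .load(raw_path)
)

(
    stream.writeStream
    .option("checkpointLocation", checkpoint)
    .trigger(availableNow=True)          # batch-style incremental run
    .toTable("bronze.orders_raw")        # hypothetical Delta table
)
```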

Posted 3 weeks ago

Apply

6.0 - 9.0 years

9 - 13 Lacs

Chennai

Work from Office

Experience: 6+ years as an Azure Data Engineer, including at least 1 end-to-end (E2E) implementation in Microsoft Fabric. Responsibilities: - Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses. - Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions. - Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment. - Collaborate with stakeholders to translate business needs into actionable data solutions. - Troubleshoot and optimize existing Fabric implementations for enhanced performance. Skills: - Solid foundational knowledge in data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized). - Design and implement scalable and efficient data pipelines using Data Factory (Data Pipeline, Data Flow Gen 2, etc.) in Fabric, PySpark notebooks, Spark SQL, and Python. This includes data ingestion, data transformation, and data loading processes. - Experience ingesting data from SAP systems such as SAP ECC/S4HANA/SAP BW will be a plus. - Nice to have: ability to develop dashboards or reports using tools like Power BI. Coding Fluency: - Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.
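As a rough sketch of the PySpark-notebook ingest-transform-load pattern this listing refers to, assuming a Microsoft Fabric (or any Spark) notebook where `spark` is available; file paths and table names are hypothetical.

```python
from pyspark.sql import functions as F

# Ingest raw CSV files from a hypothetical lakehouse path
raw = (
    spark.read.option("header", True)
    .csv("Files/landing/sales/*.csv")
)

# Transform: cast types, stamp the ingestion date, deduplicate
cleaned = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .withColumn("ingest_date", F.current_date())
       .dropDuplicates(["order_id"])
)

# Load into a Delta table in the lakehouse
cleaned.write.mode("overwrite").format("delta").saveAsTable("sales_clean")

# The same kind of aggregation expressed in Spark SQL
spark.sql(
    "SELECT order_id, SUM(amount) AS total FROM sales_clean GROUP BY order_id"
).show(5)
```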

Posted 3 weeks ago

Apply

3.0 - 8.0 years

9 - 14 Lacs

Bengaluru

Work from Office

We are seeking a motivated and skilled Data Scientist with 3 years of experience to join our dynamic team. The ideal candidate will have a strong foundation in machine learning, with a focus on implementing algorithms at scale. Additionally, knowledge of computer vision and natural language processing is ideal. Key Responsibilities: - Develop and implement machine learning models, offline batch models as well as real-time online and edge compute models - Analyze complex datasets and extract meaningful insights to drive business decisions - Collaborate with cross-functional teams to identify and solve business problems using data-driven approaches - Communicate findings and recommendations to stakeholders effectively Required Qualifications: - Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field - 3+ years of experience in a Data Scientist role - Strong proficiency in Python and SQL - Solid understanding of machine learning algorithms and statistical modeling techniques - Knowledge of Natural Language Processing (NLP) and Computer Vision (CV) concepts and algorithms - Hands-on experience implementing and deploying machine learning algorithms - Experience with data visualization tools and techniques - Strong analytical and problem-solving skills - Excellent communication skills, both written and verbal Preferred Qualifications: - Experience with PySpark and other big data processing frameworks - Knowledge of deep learning frameworks (e.g., TensorFlow, PyTorch) Technical Skills: - Programming Languages: Python (required), SQL (required), Java (basic knowledge preferred) - Machine Learning: Strong foundation in traditional ML algorithms, and a working knowledge of NLP and Computer Vision - Big Data: Deep knowledge of PySpark - Data Storage and Retrieval: Familiarity with databases/MLflow preferred - Mathematics: Strong background in statistics, linear algebra, and probability theory - Version Control: Git Soft Skills: - Excellent communication skills to facilitate interactions with stakeholders - Ability to explain complex technical concepts to non-technical audiences - Strong problem-solving and analytical thinking - Self-motivated and able to work independently as well as in a team environment - Curiosity and eagerness to learn new technologies and methodologies We're looking for a motivated individual who is passionate about data science and eager to take on challenging tasks. If you thrive in a fast-paced environment and are excited about leveraging cutting-edge technologies in machine learning to solve real-world problems, we encourage you to apply! PhonePe Full-Time Employee Benefits (not applicable for intern or contract roles): Insurance Benefits - Medical Insurance, Critical Illness Insurance, Accidental Insurance, Life Insurance; Wellness Program - Employee Assistance Program, Onsite Medical Center, Emergency Support System; Parental Support - Maternity Benefit, Paternity Benefit Program, Adoption Assistance Program, Day-care Support Program; Mobility Benefits - Relocation benefits, Transfer Support Policy, Travel Policy; Retirement Benefits - Employee PF Contribution, Flexible PF Contribution, Gratuity, NPS, Leave Encashment; Other Benefits - Higher Education Assistance, Car Lease, Salary Advance Policy. Working at PhonePe is a rewarding experience! Great people, a work environment that thrives on creativity, and the opportunity to take on roles beyond a defined job description are just some of the reasons you should work with us.
Read more about PhonePe on our blog: Life at PhonePe | PhonePe in the news
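A deliberately small sketch of the offline batch-modelling workflow alluded to above, using scikit-learn. The features and data are synthetic, so this is purely illustrative, not the team's actual modelling stack.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic features and labels standing in for a real dataset
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Train a simple batch model and evaluate it
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```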

Posted 3 weeks ago

Apply

7.0 - 12.0 years

16 - 20 Lacs

Pune

Work from Office

Job Title: Data Engineer (ETL, Big Data, Hadoop, Spark, GCP), AS
Location: Pune, India
Role Description: The Engineer is responsible for developing and delivering elements of engineering solutions to accomplish business goals. Awareness is expected of the important engineering principles of the bank. Root cause analysis skills are developed through addressing enhancements and fixes to products, building reliability and resiliency into solutions through early testing, peer reviews, and automating the delivery life cycle. The successful candidate should be able to work independently on medium to large sized projects with strict deadlines, should be able to work in a cross-application, mixed technical environment, and must demonstrate a solid hands-on development track record while working in an agile methodology. The role demands working alongside a geographically dispersed team. The position is required as part of the buildout of the Compliance tech internal development team in India. The overall team will primarily deliver improvements in compliance tech capabilities that are major components of the regular regulatory portfolio, addressing various regulatory commitments to mandated monitors.
What we'll offer you:
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Accident and term life insurance
Your key responsibilities:
- Analyze data sets and design and code stable, scalable data ingestion workflows, integrating them into existing workflows.
- Work with team members and stakeholders to clarify requirements and provide the appropriate ETL solution.
- Hands-on experience with various data sourcing in Hadoop as well as GCP.
- Ensure new code is tested both at unit level and system level; design, develop, and peer review new code and functionality.
- Operate as a member of an agile scrum team.
- Apply root cause analysis skills to identify bugs and issues behind failures.
- Support production support and release management teams in their tasks.
Your skills and experience:
- 7+ years of coding experience in reputed organizations
- Hands-on experience with Bitbucket and CI/CD pipelines
- Proficient in Hadoop, Python, Spark, SQL, Unix, and Hive
- Basic understanding of on-prem and GCP data security
- Hands-on development experience on large ETL / big data systems; GCP experience is a big plus
- Hands-on experience with Cloud Build, Artifact Registry, Cloud DNS, Cloud Load Balancing, etc.
- Hands-on experience with Dataflow, Cloud Composer, Cloud Storage, Dataproc, etc.
- Basic understanding of data quality dimensions such as consistency, completeness, accuracy, and lineage
- Business and systems knowledge gained in a regulatory delivery environment; banking, regulatory, and cross-product knowledge
- Passionate about test-driven development

Posted 3 weeks ago

Apply

6.0 - 9.0 years

9 - 13 Lacs

Bengaluru

Work from Office

About the job: Experience: 6+ years as an Azure Data Engineer, including at least 1 end-to-end (E2E) implementation in Microsoft Fabric. Responsibilities: - Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses. - Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions. - Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment. - Collaborate with stakeholders to translate business needs into actionable data solutions. - Troubleshoot and optimize existing Fabric implementations for enhanced performance. Skills: - Solid foundational knowledge in data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized). - Design and implement scalable and efficient data pipelines using Data Factory (Data Pipeline, Data Flow Gen 2, etc.) in Fabric, PySpark notebooks, Spark SQL, and Python. This includes data ingestion, data transformation, and data loading processes. - Experience ingesting data from SAP systems such as SAP ECC/S4HANA/SAP BW will be a plus. - Nice to have: ability to develop dashboards or reports using tools like Power BI. Coding Fluency: - Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.

Posted 3 weeks ago

Apply

6.0 - 9.0 years

9 - 13 Lacs

Mumbai

Work from Office

About the job: Role: Microsoft Fabric Data Engineer. Experience: 6+ years as an Azure Data Engineer, including at least 1 end-to-end (E2E) implementation in Microsoft Fabric. Responsibilities: - Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses. - Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions. - Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment. - Collaborate with stakeholders to translate business needs into actionable data solutions. - Troubleshoot and optimize existing Fabric implementations for enhanced performance. Skills: - Solid foundational knowledge in data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized). - Design and implement scalable and efficient data pipelines using Data Factory (Data Pipeline, Data Flow Gen 2, etc.) in Fabric, PySpark notebooks, Spark SQL, and Python. This includes data ingestion, data transformation, and data loading processes. - Experience ingesting data from SAP systems such as SAP ECC/S4HANA/SAP BW will be a plus. - Nice to have: ability to develop dashboards or reports using tools like Power BI. Coding Fluency: - Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.

Posted 3 weeks ago

Apply

6.0 - 10.0 years

9 - 13 Lacs

Kolkata

Work from Office

About the job: Role: Microsoft Fabric Data Engineer. Experience: 6+ years as an Azure Data Engineer, including at least 1 end-to-end (E2E) implementation in Microsoft Fabric. Responsibilities: - Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses. - Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions. - Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment. - Collaborate with stakeholders to translate business needs into actionable data solutions. - Troubleshoot and optimize existing Fabric implementations for enhanced performance. Skills: - Solid foundational knowledge in data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized). - Design and implement scalable and efficient data pipelines using Data Factory (Data Pipeline, Data Flow Gen 2, etc.) in Fabric, PySpark notebooks, Spark SQL, and Python. This includes data ingestion, data transformation, and data loading processes. - Experience ingesting data from SAP systems such as SAP ECC/S4HANA/SAP BW will be a plus. - Nice to have: ability to develop dashboards or reports using tools like Power BI. Coding Fluency: - Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.

Posted 3 weeks ago

Apply

8.0 - 10.0 years

30 - 32 Lacs

Hyderabad, Ahmedabad, Chennai

Work from Office

Dear Candidate, We are looking for a skilled Data Engineer to design and maintain data pipelines, ensuring efficient data processing and storage. If you have expertise in ETL, SQL, and cloud-based data platforms, we'd love to hear from you! Key Responsibilities: Design, develop, and maintain scalable data pipelines. Optimize data workflows for performance and efficiency. Work with structured and unstructured data sources. Implement data governance and security best practices. Collaborate with data scientists and analysts to support data-driven decisions. Ensure compliance with data privacy regulations (GDPR, CCPA). Required Skills & Qualifications: Proficiency in SQL, Python, or Scala for data processing. Experience with ETL tools (Informatica, Apache NiFi, AWS Glue). Hands-on experience with cloud data platforms (AWS, Azure, GCP). Knowledge of data warehousing (Snowflake, Redshift, BigQuery). Familiarity with Apache Spark, Kafka, or Hadoop for big data processing. Soft Skills: Strong problem-solving and analytical skills. Ability to work independently and in a team. Good communication skills to collaborate with stakeholders. Note: If interested, please share your updated resume and your preferred contact details. If shortlisted, our HR team will reach out to you. Kandi Srinivasa Reddy, Delivery Manager, Integra Technologies

Posted 3 weeks ago

Apply

6.0 - 10.0 years

6 - 10 Lacs

Greater Noida

Work from Office

SQL DEVELOPER: Design and implement relational database structures optimized for performance and scalability. Develop and maintain complex SQL queries, stored procedures, triggers, and functions. Optimize database performance through indexing, query tuning, and regular maintenance. Ensure data integrity, consistency, and security across multiple environments. Collaborate with cross-functional teams to integrate SQL databases with applications and reporting tools. Develop and manage ETL (Extract, Transform, Load) processes for data ingestion and transformation. Monitor and troubleshoot database performance issues. Automate routine database tasks using scripts and tools. Document database architecture, processes, and procedures for future reference. Stay updated with the latest SQL best practices and database technologies. Data Retrieval: SQL Developers must be able to query large and complex databases to extract relevant data for analysis or reporting. Data Transformation: They often clean, join, and reshape data using SQL to prepare it for downstream processes like analytics or machine learning. Performance Optimization: Writing queries that run efficiently is key, especially when dealing with big data or real-time systems. Understanding of Database Schemas: Knowing how tables relate and how to navigate normalized or denormalized structures is essential. QE: Design, develop, and execute test plans and test cases for data pipelines, ETL processes, and data platforms. Validate data quality, integrity, and consistency across various data sources and destinations. Automate data validation and testing using tools such as PyTest, Great Expectations, or custom Python/SQL scripts. Collaborate with data engineers, analysts, and product managers to understand data requirements and ensure test coverage. Monitor data pipelines and proactively identify data quality issues or anomalies. Contribute to the development of data quality frameworks and best practices. Participate in code reviews and provide feedback on data quality and testability. Strong SQL skills and experience with large-scale data sets. Proficiency in Python or another scripting language for test automation. Experience with data testing tools. Familiarity with cloud platforms and data warehousing solutions.
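To illustrate the QE half of this role, a minimal PyTest sketch that validates row counts and null constraints after an ETL load. The in-memory SQLite connection stands in for a real warehouse, and the table and column names are hypothetical.

```python
import sqlite3
import pytest

@pytest.fixture()
def conn():
    # Build a tiny stand-in table that an ETL job might have loaded
    c = sqlite3.connect(":memory:")
    c.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, amount REAL NOT NULL)")
    c.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 120.5), (2, 99.0)])
    yield c
    c.close()

def test_orders_not_empty(conn):
    (count,) = conn.execute("SELECT COUNT(*) FROM orders").fetchone()
    assert count > 0

def test_amount_has_no_nulls(conn):
    (nulls,) = conn.execute(
        "SELECT COUNT(*) FROM orders WHERE amount IS NULL"
    ).fetchone()
    assert nulls == 0
```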

Posted 3 weeks ago

Apply

7.0 - 12.0 years

9 - 14 Lacs

Bengaluru

Work from Office

Location: Bangalore/Hyderabad/Pune. Experience level: 7+ years. About the Role: We are seeking a highly skilled Snowflake Developer to join our team in Bangalore. The ideal candidate will have extensive experience in designing, implementing, and managing Snowflake-based data solutions. This role involves developing data architectures and ensuring the effective use of Snowflake to drive business insights and innovation. Key Responsibilities: Design and implement scalable, efficient, and secure Snowflake solutions to meet business requirements. Develop data architecture frameworks, standards, and principles, including modeling, metadata, security, and reference data. Implement Snowflake-based data warehouses, data lakes, and data integration solutions. Manage data ingestion, transformation, and loading processes to ensure data quality and performance. Collaborate with business stakeholders and IT teams to develop data strategies and ensure alignment with business goals. Drive continuous improvement by leveraging the latest Snowflake features and industry trends. Qualifications: Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field. 8+ years of experience in data architecture, data engineering, or a related field. Extensive experience with Snowflake, including designing and implementing Snowflake-based solutions. Must have exposure to working with Airflow. Proven track record of contributing to data projects and working in complex environments. Familiarity with cloud platforms (e.g., AWS, GCP) and their data services. Snowflake certification (e.g., SnowPro Core, SnowPro Advanced) is a plus.

Posted 3 weeks ago

Apply

4.0 - 6.0 years

6 - 8 Lacs

Bengaluru

Work from Office

About the Role: We are seeking a skilled and detail-oriented Data Migration Specialist with hands-on experience in Alteryx and Snowflake. The ideal candidate will be responsible for analyzing existing Alteryx workflows, documenting the logic and data transformation steps, and converting them into optimized, scalable SQL queries and processes in Snowflake. The ideal candidate should have solid SQL expertise and a strong understanding of data warehousing concepts. This role plays a critical part in our cloud modernization and data platform transformation initiatives. Key Responsibilities: Analyze and interpret complex Alteryx workflows to identify data sources, transformations, joins, filters, aggregations, and output steps. Document the logical flow of each Alteryx workflow, including inputs, business logic, and outputs. Translate Alteryx logic into equivalent SQL scripts optimized for Snowflake, ensuring accuracy and performance. Write advanced SQL queries and stored procedures, and use Snowflake-specific features like Streams, Tasks, Cloning, Time Travel, and Zero-Copy Cloning. Implement data ingestion strategies using Snowpipe, stages, and external tables. Optimize Snowflake performance through query tuning, partitioning, clustering, and caching strategies. Collaborate with data analysts, engineers, and stakeholders to validate transformed logic against expected results. Handle data cleansing, enrichment, aggregation, and business logic implementation within Snowflake. Suggest improvements and automation opportunities during migration. Conduct unit testing and support UAT (User Acceptance Testing) for migrated workflows. Maintain version control, documentation, and an audit trail for all converted workflows. Required Skills: Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field. Must have at least 4 years of hands-on experience in designing and developing scalable data solutions using the Snowflake Data Cloud platform. Extensive experience with Snowflake, including designing and implementing Snowflake-based solutions. 1+ years of experience with Alteryx Designer, including advanced workflow development and debugging. Strong proficiency in SQL, with 3+ years specifically working with Snowflake or other cloud data warehouses. Python programming experience focused on data engineering. Experience with data APIs and batch/stream processing. Solid understanding of data transformation logic such as joins, unions, filters, formulas, aggregations, pivots, and transpositions. Experience in performance tuning and optimization of SQL queries in Snowflake. Familiarity with Snowflake features like CTEs, Window Functions, Tasks, Streams, Stages, and External Tables. Exposure to migration or modernization projects from ETL tools (like Alteryx/Informatica) to SQL-based cloud platforms. Strong documentation skills and attention to detail. Experience working in Agile/Scrum development environments. Good communication and collaboration skills.
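As a hedged sketch of the Alteryx-to-Snowflake translation work described above: Alteryx-style join, filter, and summarize steps expressed as Snowflake SQL and executed through the Python connector. The connection parameters, databases, and table names are hypothetical, not a real environment.

```python
import snowflake.connector  # pip install snowflake-connector-python

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="***",   # hypothetical credentials
    warehouse="TRANSFORM_WH", database="ANALYTICS", schema="STAGING",
)

migrated_logic = """
    CREATE OR REPLACE TABLE ANALYTICS.CORE.DAILY_SALES AS
    SELECT o.order_date,
           c.region,
           SUM(o.amount) AS total_amount                              -- Summarize tool equivalent
    FROM STAGING.ORDERS o
    JOIN STAGING.CUSTOMERS c ON c.customer_id = o.customer_id         -- Join tool equivalent
    WHERE o.status = 'COMPLETE'                                       -- Filter tool equivalent
    GROUP BY o.order_date, c.region
"""

cur = conn.cursor()
cur.execute(migrated_logic)
cur.close()
conn.close()
```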

Posted 3 weeks ago

Apply

5.0 - 10.0 years

12 - 15 Lacs

Gurugram, Ahmedabad

Work from Office

We are seeking a highly skilled GCP Data Engineer with experience in designing and developing data ingestion frameworks, real-time processing solutions, and data transformation frameworks using open-source tools. The role involves operationalizing open-source data-analytic tools for enterprise use, ensuring adherence to data governance policies, and performing root-cause analysis on data-related issues. The ideal candidate should have a strong understanding of cloud platforms, especially GCP, with hands-on expertise in tools such as Kafka, Apache Spark, Python, Hadoop, and Hive. Experience with data governance and DevOps practices, along with GCP certifications, is preferred.

Posted 3 weeks ago

Apply

9.0 - 14.0 years

15 - 19 Lacs

Bengaluru

Work from Office

About the Role: We are looking for an Associate Architect with at least 9 years of experience to help scale and modernize Myntra's data platform. The ideal candidate will have a strong background in building scalable data platforms using a combination of open-source technologies and enterprise solutions. The role demands deep technical expertise in data ingestion, processing, serving, and governance, with a strategic mindset to scale the platform 10x to meet the ever-growing data needs across the organization. This is a high-impact role requiring innovation, engineering excellence, and system stability, with an opportunity to contribute to OSS projects and build data products leveraging available data assets. Key Responsibilities: Design and scale Myntra's data platform to support growing data needs across analytics, ML, and reporting. Architect and optimize streaming data ingestion pipelines using Debezium, Kafka (Confluent), Databricks Spark, and Flink. Lead improvements in data processing and serving layers, leveraging Databricks Spark, Trino, and Superset. Good understanding of open table formats like Delta and Iceberg. Scale data quality frameworks to ensure data accuracy and reliability. Build data lineage tracking solutions for governance, access control, and compliance. Collaborate with engineering, analytics, and business teams to identify opportunities and build / enhance self-serve data platforms. Improve system stability, monitoring, and observability to ensure high availability of the platform. Work with open-source communities and contribute to OSS projects aligned with Myntra's tech stack. Implement cost-efficient, scalable architectures for handling 10B+ daily events in a cloud environment. Qualifications: Education: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field. Experience: 9+ years of experience in building large-scale data platforms. Expertise in big data architectures using Databricks, Trino, and Debezium. Strong experience with streaming platforms, including Confluent Kafka. Experience in data ingestion, storage, processing, and serving in a cloud-based environment. Hands-on experience implementing data quality checks using Great Expectations. Deep understanding of data lineage, metadata management, and governance practices. Strong knowledge of query optimization, cost efficiency, and scaling architectures. Familiarity with OSS contributions and keeping up with industry trends in data engineering. Soft Skills: Strong analytical and problem-solving skills with a pragmatic approach to technical challenges. Excellent communication and collaboration skills to work effectively with cross-functional teams. Ability to lead large-scale projects in a fast-paced, dynamic environment. Passion for continuous learning, open-source collaboration, and building best-in-class data products.
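A minimal sketch of the streaming ingestion layer described above: Kafka events (for example CDC records from Debezium) consumed with Spark Structured Streaming and landed in a Delta table. Broker, topic, path, and table names are hypothetical, and it assumes a Databricks/Spark environment where `spark` is available with the Kafka and Delta connectors.

```python
from pyspark.sql import functions as F

# Read the Kafka topic as an unbounded stream
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders.cdc")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers binary key/value columns; cast them for downstream parsing
parsed = events.select(
    F.col("key").cast("string").alias("event_key"),
    F.col("value").cast("string").alias("payload"),   # JSON payload, parsed downstream
    F.col("timestamp").alias("event_time"),
)

# Append into a bronze Delta table with exactly-once checkpointing
(
    parsed.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/orders_cdc")
    .outputMode("append")
    .toTable("bronze.orders_cdc")
)
```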

Posted 3 weeks ago

Apply

9.0 - 14.0 years

11 - 16 Lacs

Bengaluru

Work from Office

About the Role: We are looking for an Associate Architect with at least 9 years of experience to help scale and modernize Myntra's data platform. The ideal candidate will have a strong background in building scalable data platforms using a combination of open-source technologies and enterprise solutions. The role demands deep technical expertise in data ingestion, processing, serving, and governance, with a strategic mindset to scale the platform 10x to meet the ever-growing data needs across the organization. This is a high-impact role requiring innovation, engineering excellence, and system stability, with an opportunity to contribute to OSS projects and build data products leveraging available data assets. Key Responsibilities: Design and scale Myntra's data platform to support growing data needs across analytics, ML, and reporting. Architect and optimize streaming data ingestion pipelines using Debezium, Kafka (Confluent), Databricks Spark, and Flink. Lead improvements in data processing and serving layers, leveraging Databricks Spark, Trino, and Superset. Good understanding of open table formats like Delta and Iceberg. Scale data quality frameworks to ensure data accuracy and reliability. Build data lineage tracking solutions for governance, access control, and compliance. Collaborate with engineering, analytics, and business teams to identify opportunities and build / enhance self-serve data platforms. Improve system stability, monitoring, and observability to ensure high availability of the platform. Work with open-source communities and contribute to OSS projects aligned with Myntra's tech stack. Implement cost-efficient, scalable architectures for handling 10B+ daily events in a cloud environment. Qualifications: Education: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field. Experience: 9+ years of experience in building large-scale data platforms. Expertise in big data architectures using Databricks, Trino, and Debezium. Strong experience with streaming platforms, including Confluent Kafka. Experience in data ingestion, storage, processing, and serving in a cloud-based environment. Hands-on experience implementing data quality checks using Great Expectations. Deep understanding of data lineage, metadata management, and governance practices. Strong knowledge of query optimization, cost efficiency, and scaling architectures. Familiarity with OSS contributions and keeping up with industry trends in data engineering. Soft Skills: Strong analytical and problem-solving skills with a pragmatic approach to technical challenges. Excellent communication and collaboration skills to work effectively with cross-functional teams. Ability to lead large-scale projects in a fast-paced, dynamic environment. Passion for continuous learning, open-source collaboration, and building best-in-class data products.

Posted 3 weeks ago

Apply

9.0 - 14.0 years

30 - 35 Lacs

Bengaluru

Work from Office

About the Role: We are looking for an Associate Architect with at least 9 years of experience to help scale and modernize Myntra's data platform. The ideal candidate will have a strong background in building scalable data platforms using a combination of open-source technologies and enterprise solutions. The role demands deep technical expertise in data ingestion, processing, serving, and governance, with a strategic mindset to scale the platform 10x to meet the ever-growing data needs across the organization. This is a high-impact role requiring innovation, engineering excellence, and system stability, with an opportunity to contribute to OSS projects and build data products leveraging available data assets. Key Responsibilities: Design and scale Myntra's data platform to support growing data needs across analytics, ML, and reporting. Architect and optimize streaming data ingestion pipelines using Debezium, Kafka (Confluent), Databricks Spark, and Flink. Lead improvements in data processing and serving layers, leveraging Databricks Spark, Trino, and Superset. Good understanding of open table formats like Delta and Iceberg. Scale data quality frameworks to ensure data accuracy and reliability. Build data lineage tracking solutions for governance, access control, and compliance. Collaborate with engineering, analytics, and business teams to identify opportunities and build / enhance self-serve data platforms. Improve system stability, monitoring, and observability to ensure high availability of the platform. Work with open-source communities and contribute to OSS projects aligned with Myntra's tech stack. Implement cost-efficient, scalable architectures for handling 10B+ daily events in a cloud environment. Education: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field. Experience: 9+ years of experience in building large-scale data platforms. Expertise in big data architectures using Databricks, Trino, and Debezium. Strong experience with streaming platforms, including Confluent Kafka. Experience in data ingestion, storage, processing, and serving in a cloud-based environment. Hands-on experience implementing data quality checks using Great Expectations. Deep understanding of data lineage, metadata management, and governance practices. Strong knowledge of query optimization, cost efficiency, and scaling architectures. Familiarity with OSS contributions and keeping up with industry trends in data engineering. Soft Skills: Strong analytical and problem-solving skills with a pragmatic approach to technical challenges. Excellent communication and collaboration skills to work effectively with cross-functional teams. Ability to lead large-scale projects in a fast-paced, dynamic environment. Passion for continuous learning, open-source collaboration, and building best-in-class data products.

Posted 3 weeks ago

Apply

5.0 - 10.0 years

10 - 14 Lacs

Nagpur

Work from Office

Project Role : Application Lead Project Role Description : Lead the effort to design, build and configure applications, acting as the primary point of contact. Must have skills : Microsoft Azure Data Services Good to have skills : NA. Minimum 5 year(s) of experience is required. Educational Qualification : 15 years full time education Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. Your typical day will involve collaborating with teams, making team decisions, and providing solutions to problems for your immediate team and across multiple teams. You will engage with multiple teams and contribute to key decisions, ensuring the successful performance of your team and delivering high-quality applications. Roles & Responsibilities: - Expected to be an SME - Collaborate and manage the team to perform - Responsible for team decisions - Engage with multiple teams and contribute on key decisions - Provide solutions to problems for their immediate team and across multiple teams - Manage and prioritize application development tasks - Ensure applications meet business process and application requirements - Perform code reviews and provide feedback to team members Professional & Technical Skills: - Must To Have Skills: Proficiency in Microsoft Azure Data Services - Experience with cloud-based application development - Strong understanding of data storage and management in Azure - Hands-on experience with Azure data services such as Azure SQL Database, Azure Cosmos DB, and Azure Data Lake Storage - Experience with data integration and ETL processes in Azure Additional Information: - The candidate should have a minimum of 5 years of experience in Microsoft Azure Data Services - This position is based at our Bengaluru office - A 15 years full-time education is required Qualification: 15 years full time education

Posted 3 weeks ago

Apply

3.0 - 8.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Project Role : Application Developer Project Role Description : Design, build and configure applications to meet business process and application requirements. Must have skills : Microsoft Azure Data Services Good to have skills : NA. Minimum 3 year(s) of experience is required. Educational Qualification : 15 years full time education Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. You will be responsible for ensuring the smooth functioning of applications and providing solutions to work-related problems. A typical day in this role involves collaborating with team members, analyzing business requirements, and developing and implementing application solutions. You will also actively participate in team discussions and contribute to providing solutions to work-related problems, becoming a subject matter expert in your field. Roles & Responsibilities: - Expected to perform independently and become an SME. - Required active participation/contribution in team discussions. - Contribute in providing solutions to work-related problems. - Collaborate with team members to analyze business requirements. - Design and develop applications based on business process and application requirements. - Configure applications to ensure smooth functioning and optimal performance. - Troubleshoot and debug application issues to ensure proper functionality. - Collaborate with cross-functional teams to integrate applications with other systems. - Stay updated with emerging technologies and industry trends to enhance application development processes. Professional & Technical Skills: - Must To Have Skills: Proficiency in Microsoft Azure Data Services. - Good To Have Skills: Experience with cloud-based application development. - Strong understanding of data storage and management in Azure. - Experience with Azure SQL Database, Azure Cosmos DB, and Azure Data Lake Storage. - Hands-on experience with Azure Data Factory and Azure Databricks. - Knowledge of Azure Functions and Azure Logic Apps for application integration. Additional Information: - The candidate should have a minimum of 3 years of experience in Microsoft Azure Data Services. - This position is based at our Bengaluru office. - A 15 years full-time education is required. Qualification: 15 years full time education

Posted 3 weeks ago

Apply

0.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Ready to shape the future of work? At Genpact, we don't just adapt to change - we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.
Inviting applications for the role of Senior Principal Consultant - Databricks Architect! In this role, the Databricks Architect is responsible for providing technical direction and leading a group of one or more developers to address a goal.
Responsibilities:
- Architect and design solutions to meet functional and non-functional requirements.
- Create and review architecture and solution design artifacts.
- Evangelize re-use through the implementation of shared assets.
- Enforce adherence to architectural standards/principles, global product-specific guidelines, usability design standards, etc.
- Proactively guide engineering methodologies, standards, and leading practices.
- Guide engineering staff and review as-built configurations during the construction phase.
- Provide insight and direction on roles and responsibilities required for solution operations.
- Identify, communicate, and mitigate risks, assumptions, issues, and decisions throughout the full lifecycle.
- Consider the art of the possible, compare architectural options based on feasibility and impact, and propose actionable plans.
- Demonstrate strong analytical and technical problem-solving skills.
- Analyze and operate at various levels of abstraction.
- Balance what is strategically right with what is practically realistic.
- Grow the Data Engineering business by helping customers identify opportunities to deliver improved business outcomes, designing and driving the implementation of those solutions.
- Grow and retain the Data Engineering team with appropriate skills and experience to deliver high-quality services to our customers.
- Support and develop our people, including learning & development, certification, and career development plans.
- Provide technical governance and oversight for solution design and implementation.
- Have the technical foresight to understand new technology and advancement.
- Lead the team in the definition of best practices and repeatable methodologies in Cloud Data Engineering, including data storage, ETL, data integration and migration, data warehousing, and data governance.
- Have technical experience in Azure, AWS, and GCP cloud data engineering services and solutions.
- Contribute to sales and pre-sales activities, including proposals, pursuits, demonstrations, and proof-of-concept initiatives.
- Evangelize the Data Engineering service offerings to both internal and external stakeholders.
- Develop whitepapers, blogs, webinars, and other thought leadership material.
- Develop go-to-market and service offering definitions for Data Engineering.
- Work with Learning & Development teams to establish appropriate learning and certification paths for the domain.
- Expand the business within existing accounts and help clients by building and sustaining strategic executive relationships, doubling up as their trusted business technology advisor.
- Position differentiated and custom solutions to clients based on market trends, the specific needs of the clients, and the supporting business cases.
- Build new data capabilities, solutions, assets, accelerators, and team competencies.
- Manage multiple opportunities through the entire business cycle simultaneously, working with cross-functional teams as necessary.
Qualifications we seek in you!
Minimum qualifications:
- Excellent technical architecture skills, enabling the creation of future-proof, complex global solutions.
- Excellent interpersonal communication and organizational skills, required to operate as a leading member of global, distributed teams that deliver quality services and solutions.
- Ability to rapidly gain knowledge of the organizational structure of the firm to facilitate work with groups outside of the immediate technical team.
- Knowledge and experience in the IT methodologies and life cycles that will be used.
- Familiarity with solution implementation/management, service/operations management, etc.
- Leadership skills; able to inspire others and persuade.
- Maintains close awareness of new and emerging technologies and their potential application for service offerings and products.
- Bachelor's degree or equivalency (CS, CE, CIS, IS, MIS, or engineering discipline) or equivalent work experience.
- Experience in a solution architecture role using service and hosting solutions such as private/public cloud IaaS, PaaS, and SaaS platforms.
- Experience in architecting and designing technical solutions for cloud-centric solutions based on industry standards using IaaS, PaaS, and SaaS capabilities.
- Must have strong hands-on experience with various cloud services such as ADF/Lambda, ADLS/S3, security, monitoring, and governance.
- Must have experience designing platforms on Databricks.
- Hands-on experience designing and building Databricks-based solutions on any cloud platform.
- Hands-on experience designing and building solutions powered by DBT models and integrating them with Databricks.
- Must be very good at designing end-to-end solutions on cloud platforms.
- Must have good knowledge of data engineering concepts and the related cloud services.
- Must have good experience in Python and Spark.
- Must have good experience in setting up development best practices.
- Intermediate-level knowledge of data modelling is required.
- Good to have knowledge of Docker and Kubernetes.
- Experience with claims-based authentication (SAML/OAuth/OIDC), MFA, RBAC, SSO, etc.
- Knowledge of cloud security controls including tenant isolation, encryption at rest, encryption in transit, key management, vulnerability assessments, application firewalls, SIEM, etc.
- Experience building and supporting mission-critical technology components with DR capabilities.
- Experience with multi-tier system and service design and development for large enterprises.
- Extensive, real-world experience designing technology components for enterprise solutions and defining solution architectures and reference architectures with a focus on cloud technologies.
- Exposure to infrastructure and application security technologies and approaches.
- Familiarity with requirements gathering techniques.
Preferred qualifications:
- Must have designed the end-to-end architecture of a unified data platform covering all aspects of the data lifecycle, from data ingestion through transformation, serving, and consumption.
- Must have excellent coding skills in either Python or Scala, preferably Python.
- Must have experience in the Data Engineering domain.
- Must have designed and implemented at least 2-3 projects end-to-end in Databricks.
- Must have experience with the various Databricks components: Delta Lake, dbConnect, db API 2.0, SQL Endpoint (Photon engine), Unity Catalog, Databricks workflows orchestration, security management, platform governance, and data security.
- Must have knowledge of new features available in Databricks and their implications, along with the various possible use cases.
- Must have followed various architectural principles to design the solution best suited to each problem.
- Must be well versed with the Databricks Lakehouse concept and its implementation in enterprise environments.
- Must have a strong understanding of data warehousing and the various governance and security standards around Databricks.
- Must have knowledge of cluster optimization and its integration with various cloud services.
- Must have a good understanding of how to create complex data pipelines.
- Must be strong in SQL and Spark SQL.
- Must have strong performance optimization skills to improve efficiency and reduce cost.
- Must have worked on designing both batch and streaming data pipelines.
- Must have extensive knowledge of the Spark and Hive data processing frameworks.
- Must have worked on any cloud (Azure, AWS, GCP) and the most common services, such as ADLS/S3, ADF/Lambda, CosmosDB/DynamoDB, ASB/SQS, and cloud databases.
- Must be strong in writing unit tests and integration tests.
- Must have strong communication skills and have worked with cross-platform teams.
- Must have a great attitude towards learning new skills and upskilling existing skills.
- Responsible for setting best practices around Databricks CI/CD.
- Must understand composable architecture to take fullest advantage of Databricks capabilities.
- Good to have REST API knowledge.
- Good to have an understanding of cost distribution.
- Good to have worked on a migration project to build a unified data platform.
- Good to have knowledge of DBT.
- Experience with DevSecOps, including Docker and Kubernetes.
- Software development full-lifecycle methodologies, patterns, frameworks, libraries, and tools.
- Knowledge of programming and scripting languages such as JavaScript, PowerShell, Bash, SQL, Java, Python, etc.
- Experience with data ingestion technologies such as Azure Data Factory, SSIS, Pentaho, and Alteryx.
- Experience with visualization tools such as Tableau and Power BI.
- Experience with machine learning tools such as MLflow, Databricks AI/ML, Azure ML, AWS SageMaker, etc.
- Experience in distilling complex technical challenges into actionable decisions for stakeholders and guiding project teams by building consensus and mediating compromises when necessary.
- Experience coordinating the intersection of complex system dependencies and interactions.
- Experience in solution delivery using common methodologies, especially SAFe Agile but also Waterfall, Iterative, etc.
- Demonstrated knowledge of relevant industry trends and standards.
Why join Genpact?
- Be a transformation leader - Work at the cutting edge of AI, automation, and digital innovation.
- Make an impact - Drive change for global enterprises and solve business challenges that matter.
- Accelerate your career - Get hands-on experience, mentorship, and continuous learning opportunities.
- Work with the best - Join 140,000+ bold thinkers and problem-solvers who push boundaries every day.
- Thrive in a values-driven culture - Our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.
Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
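For context, a hedged sketch of a Delta Lake upsert (MERGE), the kind of operation a Databricks Lakehouse design typically relies on when moving data from bronze to silver layers. The table names and join key are hypothetical, and the code assumes a Databricks/Spark session (`spark`) with the delta-spark package available.

```python
from delta.tables import DeltaTable

# Incremental batch of changed records (hypothetical bronze table)
updates = spark.read.format("delta").table("bronze.customers_incremental")

# Target curated table (hypothetical silver table)
target = DeltaTable.forName(spark, "silver.customers")

# Upsert: update matching rows, insert new ones
(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```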

Posted 3 weeks ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Mumbai

Work from Office

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating pipelines/workflows from source to target and implementing solutions that tackle the client's needs. Your primary responsibilities include: Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements. Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need it. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Must have 5+ years of experience in Big Data - Hadoop, Spark, Scala, Python, HBase, Hive. Good to have: AWS - S3, Athena, DynamoDB, Lambda, Jenkins, Git. Experience developing Python and PySpark programs for data analysis. Good working experience with Python to develop custom frameworks for generating rules (similar to a rules engine). Experience developing Python code to gather data from HBase and designing solutions implemented with PySpark. Experience using Apache Spark DataFrames/RDDs to apply business transformations and Hive Context objects to perform read/write operations. Preferred technical and professional experience: Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions. Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala.
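Purely as an illustration of the Hive-backed PySpark pattern this listing mentions: read a Hive table, apply DataFrame transformations, and write the result back. The database and table names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("hive-transform")
    .enableHiveSupport()     # lets spark.sql read/write Hive tables
    .getOrCreate()
)

# Read source data from a hypothetical Hive table
txns = spark.sql("SELECT * FROM raw_db.transactions")

# Business transformation: settled transactions aggregated per account per day
daily = (
    txns.filter(F.col("status") == "SETTLED")
        .groupBy("account_id", F.to_date("txn_ts").alias("txn_date"))
        .agg(F.sum("amount").alias("total_amount"))
)

# Write the curated result back to Hive
daily.write.mode("overwrite").saveAsTable("curated_db.daily_account_totals")
```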

Posted 4 weeks ago

Apply

8.0 - 12.0 years

6 - 14 Lacs

Pune, Bengaluru

Hybrid

Job Summary: We are looking for a highly skilled Cloud Engineer with a strong background in real-time and batch data ingestion and data processing, Azure DevOps, and the Azure cloud. The ideal candidate should have a deep understanding of streaming architectures and performance optimization techniques in cloud environments, preferably in the subsurface domain. Key Responsibilities: Automation experience is essential: scripting using PowerShell; ARM templates using JSON (PowerShell also acceptable); Azure DevOps with CI/CD; Site Reliability Engineering. Must be able to understand how the applications function. The ability to prioritize workload and operate across several initiatives simultaneously. Update and maintain the Kappa Automate database and its connectivity with the PI historian and data lake. Participate in troubleshooting, performance tuning, and continuous improvement of the Kappa Automate platform. Design and implement highly configurable deployment pipelines in Azure. Configure Delta Lake on Azure Databricks. Apply performance tuning techniques such as partitioning, caching, and cluster tuning. Work with various Azure storage types. Work with large volumes of structured and unstructured data, ensuring high availability and performance. Collaborate with cross-functional teams (data scientists, analysts, business users). Qualifications: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. 8+ years of experience in data engineering or a related role. Proven experience with Azure technologies.

Posted 4 weeks ago

Apply

9.0 - 11.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Educational qualification: Bachelor of Engineering, Bachelor of Technology, Bachelor of Science, Bachelor of Computer Applications, Master of Engineering, Master of Technology, Master of Science, Master of Computer Applications. Service Line: Engineering Services.
Responsibilities - Must-have skills: Expert-level knowledge of JavaScript, NodeJS, etc. Good exposure to C, C++, etc. Expert-level knowledge of frequently used data storage and SQL or NoSQL databases. Expert-level knowledge of software development, networking, and system design. Knowledge of Linux operating system internals. Experience with Linux embedded systems. Experience architecting complex, performant Linux system software. Capability to translate business requirements into architectural frameworks and system designs. Should have worked on system design and software development to deliver a high-performance Linux system application written in C, C++, JavaScript, NodeJS, etc. Experience in multimedia, Digital Television, and web content streaming technologies.
Good-to-have skills: Understanding of "RDK Central" software ecosystem designs, functional components, and principles. Good understanding of video encoding, streaming, and various media delivery methods. Good understanding of CA (Conditional Access) and DRM (Digital Rights Management) systems. Good understanding of end-to-end video management technologies. Experience in broadcast TV, IPTV, and OTT solutions. Good to have experience in Nagra CA (security design, SoC capabilities, certification process). Knowledge of Smart RCU integration (BLE), Linux bootloader, systemd, D-Bus, etc.
Additional Responsibilities: Good knowledge of software configuration management systems. Strong business acumen, strategy, and cross-industry thought leadership. Awareness of the latest technologies and industry trends. Logical thinking and problem-solving skills along with an ability to collaborate. Knowledge of two or three industry domains. Understanding of the financial processes for various types of projects and the various pricing models available. Client-interfacing skills. Knowledge of SDLC and agile methodologies. Project and team management.
Technical and Professional - Primary skills: Technology-Media-Settop Box, DVB; Technology-Media-Video Streaming; Technology-Open System-Linux. Preferred Skills: Technology-Open System-Linux; Technology-Media-Video Streaming; Technology-Media-Settop Box, DVB.

Posted 4 weeks ago

Apply

8.0 - 12.0 years

35 - 80 Lacs

Mumbai

Work from Office

Job Summary: As an SE (Solutions Engineer) in NetApp's Sales function, you will utilize strong customer-handling and technical competencies to set objectives and execute plans for winning sales campaigns. This challenging and high-visibility position provides a huge opportunity to grow in your career and cover the largest account base in the region. You will develop long-term strategies and shorter-term plans to meet aggressive performance goals with channel partners and internal stakeholders, including the Client Executive and the District Manager. You must be extremely results-driven, customer-focused, tech-savvy, and skilled at building internal relationships and external partnerships.
Job Requirements: Excellent verbal and written communication skills, including presentation skills. Proven experience in presales, designing and proposing technical solutions. Excellent presentation, relationship-building, and negotiating skills. Ability to work collaboratively with peers across functions including Marketing, Sales, Sales Operations, Customer Support, and Product Development. Strong understanding of data storage, data protection, disaster recovery, and competitive offerings in the marketplace. Understanding of cloud technologies is highly desirable. Ability to convey and analyze information clearly as needed to help customers make buying decisions. An excellent understanding of how technology products and solutions solve business problems. The ability to maintain relationships with key technical decision makers and CXOs within major accounts in the assigned territory.
Education: At least 15 years of overall experience with at least 10 years in presales. A Bachelor of Science degree in Engineering, Computer Science, or a related field is preferred; a graduate degree is mandatory.

Posted 1 month ago

Apply

5.0 - 10.0 years

16 - 20 Lacs

Kolkata, Mumbai, New Delhi

Work from Office

We are seeking a Senior Software Engineer to join our Site Reliability Engineering team, with a focus on Observability and Reliability. As a key member of our SRE team, you will play a critical role in ensuring the performance, stability, and availability of our applications and systems, with a focused approach to Application Performance Management and the observability and reliability of the platform. The Senior Software Engineer will be responsible for the design, implementation, and maintenance of our observability and reliability infrastructure, with a primary focus on the ELK stack (Elasticsearch, Logstash, and Kibana). The role involves configuring, fine-tuning, and automating alerts, integrating Elastic solutions with other tools and applications, generating reports, and optimizing the observability and monitoring systems.
Qualifications/Skills/Abilities (Minimum Requirements) - Formal Education: Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent experience). Experience (type and duration): 5+ years of experience in Site Reliability Engineering, Observability and Reliability, and DevOps.
Skills: Proficiency in configuring and maintaining the ELK stack (Elasticsearch, Logstash, Kibana) is mandatory. Strong scripting and automation skills, with expertise in Python, Bash, or similar languages. Experience with data structures using Elasticsearch indices. Experience in writing data ingestion pipelines using Logstash. Experience with infrastructure as code (IaC) and configuration management tools (e.g., Ansible, Terraform). Hands-on experience with cloud platforms (AWS preferred) and containerization technologies (e.g., Docker, Kubernetes). Telecom domain expertise is good to have but not mandatory. Strong problem-solving skills and the ability to troubleshoot complex issues in a production environment. Excellent communication and collaboration skills.
Accreditation/certifications/licenses: Relevant certifications (e.g., Elastic Certified Engineer) are a plus.
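As a hedged illustration of the Elasticsearch work described above (index data structures and log ingestion), the sketch below uses the official Python client, assuming an 8.x client and a local cluster; the host, index name, and mapping fields are hypothetical, and production ingestion would more likely flow through Logstash pipelines as the posting states.

```python
# Illustrative sketch: create an index mapping and ingest a log document.
# Host, index name, and fields are hypothetical; assumes elasticsearch-py 8.x.
from datetime import datetime, timezone
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

index = "app-logs"
if not es.indices.exists(index=index):
    es.indices.create(
        index=index,
        mappings={
            "properties": {
                "@timestamp": {"type": "date"},
                "level": {"type": "keyword"},
                "service": {"type": "keyword"},
                "message": {"type": "text"},
            }
        },
    )

# Index a single log event; Logstash or the bulk helpers would do this at scale
es.index(
    index=index,
    document={
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "level": "ERROR",
        "service": "checkout",
        "message": "payment gateway timeout",
    },
)
```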

Posted 1 month ago

Apply