
491 Data Pipeline Jobs - Page 9

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

8.0 - 13.0 years

2 - 30 Lacs

Hyderabad

Work from Office

Job Title: Senior Automation Engineer
Job Type: Full-time, Contractor

About Us: Our mission at micro1 is to match the most talented people in the world with their dream jobs. If you are looking to be at the forefront of AI innovation and work with some of the fastest-growing companies in Silicon Valley, we invite you to apply for a role. By joining the micro1 community, your resume will become visible to top industry leaders, unlocking access to the best career opportunities on the market.

Job Summary: We are seeking a detail-oriented and innovative Senior Automation Engineer to join our customer's team. In this critical role, you will design, develop, and execute automated tests to ensure the quality, reliability, and integrity of data within Databricks environments. If you are passionate about data quality, thrive in collaborative environments, and excel at both written and verbal communication, we'd love to meet you.

Key Responsibilities:
- Design, develop, and maintain robust automated test scripts using Python, Selenium, and SQL to validate data integrity within Databricks environments.
- Execute comprehensive data validation and verification activities to ensure accuracy and consistency across multiple systems, data warehouses, and data lakes.
- Create detailed and effective test plans and test cases based on technical requirements and business specifications.
- Integrate automated tests with CI/CD pipelines to facilitate seamless and efficient testing and deployment processes.
- Work collaboratively with data engineers, developers, and other stakeholders to gather data requirements and achieve comprehensive test coverage.
- Document test cases, results, and identified defects; communicate findings clearly to the team.
- Conduct performance testing to ensure data processing and retrieval meet established benchmarks.
- Provide mentorship and guidance to junior team members, promoting best practices in test automation and data validation.

Required Skills and Qualifications:
- Strong proficiency in Python, Selenium, and SQL for developing test automation solutions.
- Hands-on experience with Databricks, data warehouse, and data lake architectures.
- Proven expertise in automated testing of data pipelines, preferably with tools such as Apache Airflow, dbt test, or similar.
- Proficiency in integrating automated tests within CI/CD pipelines on cloud platforms (AWS, Azure preferred).
- Excellent written and verbal communication skills, with the ability to translate technical concepts for diverse audiences.
- Bachelor's degree in Computer Science, Information Technology, or a related discipline.
- Demonstrated problem-solving skills and a collaborative approach to teamwork.

Preferred Qualifications:
- Experience implementing security and data protection measures in data-driven applications.
- Ability to integrate user-facing elements with server-side logic for seamless data experiences.
- Demonstrated passion for continuous improvement in test automation processes, tools, and methodologies.
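A minimal sketch of the kind of automated data-validation test this role describes, written with pytest and the databricks-sql-connector package. The table names, environment variables, and key column are hypothetical placeholders, not details from the posting.

```python
import os

import pytest
from databricks import sql  # pip install databricks-sql-connector

SOURCE_TABLE = "raw.orders"      # hypothetical source table
TARGET_TABLE = "curated.orders"  # hypothetical curated table


@pytest.fixture(scope="module")
def cursor():
    # Connection details come from the environment in CI/CD.
    conn = sql.connect(
        server_hostname=os.environ["DATABRICKS_HOST"],
        http_path=os.environ["DATABRICKS_HTTP_PATH"],
        access_token=os.environ["DATABRICKS_TOKEN"],
    )
    yield conn.cursor()
    conn.close()


def scalar(cur, query):
    # Run a query and return the single value it produces.
    cur.execute(query)
    return cur.fetchone()[0]


def test_row_counts_match(cursor):
    # The curated table should hold exactly the rows the source holds.
    src = scalar(cursor, f"SELECT COUNT(*) FROM {SOURCE_TABLE}")
    tgt = scalar(cursor, f"SELECT COUNT(*) FROM {TARGET_TABLE}")
    assert src == tgt, f"row count drift: source={src}, target={tgt}"


def test_no_null_business_keys(cursor):
    # Basic integrity rule: the business key is never NULL after transform.
    nulls = scalar(
        cursor, f"SELECT COUNT(*) FROM {TARGET_TABLE} WHERE order_id IS NULL"
    )
    assert nulls == 0, f"{nulls} rows with NULL order_id"
```

Tests like these typically run inside the CI/CD pipeline after each deployment, which is the integration point the responsibilities above call out.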

Posted 1 month ago

Apply

5.0 - 8.0 years

25 - 40 Lacs

Noida, Gurugram, Delhi / NCR

Hybrid

Salary: 25 to 40 LPA
Experience: 5 to 11 years
Location: Gurgaon/Bangalore/Pune/Chennai
Notice: Immediate to 30 days

Key Responsibilities & Skillsets:

Common Skillsets:
- 5+ years of experience in analytics, PySpark, Python, Spark, SQL, and associated data engineering jobs.
- Must have experience managing and transforming big data sets using PySpark, Spark-Scala, NumPy, and pandas.
- Experience with presales.
- Experience in Gen AI POCs.
- Excellent communication and presentation skills.
- Experience managing Python code and collaborating with customers on model evolution.
- Good knowledge of database management and Hadoop/Spark, SQL, HIVE, and Python (expertise).
- Superior analytical and problem-solving skills.
- Able to work on a problem independently and prepare client-ready deliverables with minimal or no supervision.
- Good communication skills for client interaction.

Data Management Skillsets:
- Ability to understand data models and identify ETL optimization opportunities. Exposure to ETL tools is preferred.
- Strong grasp of advanced SQL functionality (joins, nested queries, and procedures).
- Strong ability to translate functional specifications/requirements into technical requirements.
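An illustrative PySpark transformation of the sort this role involves: cleaning a large event dataset and producing a daily aggregate. The paths and column names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-activity-agg").getOrCreate()

# Read the raw events (hypothetical S3 path) and normalize the timestamp.
events = (
    spark.read.parquet("s3://bucket/events/")
    .filter(F.col("event_ts").isNotNull())
    .withColumn("event_date", F.to_date("event_ts"))
)

# Aggregate to one row per day and country.
daily = events.groupBy("event_date", "country").agg(
    F.countDistinct("user_id").alias("active_users"),
    F.sum("revenue").alias("revenue"),
)

# Write a partitioned mart table for downstream analytics.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://bucket/marts/daily_activity/"
)
```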

Posted 1 month ago

Apply

5.0 - 10.0 years

25 - 30 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Develop and maintain data pipelines, ETL/ELT processes, and workflows to ensure the seamless integration and transformation of data. Architect, implement, and optimize scalable data solutions.

Required Candidate profile: Work closely with data scientists, analysts, and business stakeholders to understand requirements and deliver actionable insights. Partner with cloud architects and DevOps teams.

Posted 1 month ago

Apply

8.0 - 13.0 years

30 - 35 Lacs

Bengaluru

Work from Office

- Collaborate with key stakeholders to understand the specific data engineering requirements and objectives of the organization.
- Take ownership of the data engineering process and work closely with the team to ensure the successful implementation and maintenance of data pipelines and structures for smart plant initiatives.
- Align the data fabric for our initial pilot facilities, working in coordination with our business and vendor counterparts to effectively meet the data needs for all initiatives.
- Develop a comprehensive roadmap outlining the data points necessary to support various initiatives, such as digital twin, predictive capabilities, and reporting.
- Assist in the creation of a roll-out plan for additional facilities, including scaling documents, updating packages for our data fabric technologies, and refining data structures.
- Collaborate with other team members and external resources, delegating and overseeing tasks where necessary to speed up development.
- Stay up to date with the latest industry trends and best practices in data engineering, recommending and implementing improvements as appropriate.
- Implement AI

Posted 1 month ago

Apply

4.0 - 9.0 years

10 - 18 Lacs

Noida

Work from Office

Precognitas Health Pvt. Ltd., a fully owned subsidiary of Foresight Health Solutions LLC, is seeking a Data Engineer to build and optimize our data pipelines, processing frameworks, and analytics infrastructure that power critical healthcare insights.

Are you a bright, energetic, and skilled data engineer who wants to make a meaningful impact in a dynamic environment? Do you enjoy designing and implementing scalable data architectures and ML pipelines, automating ETL workflows, and working with cloud-native solutions to process large datasets efficiently? Are you passionate about transforming raw data into actionable insights that drive better healthcare outcomes? If so, join us! You'll play a crucial role in shaping our data strategy, optimizing data ingestion, and ensuring seamless data flow across our systems while leveraging the latest cloud and big data technologies.

Required Skills & Experience:
- 4+ years of experience in data engineering, data pipelines, and ETL/ELT workflows.
- Strong Python programming skills with expertise in NumPy, pandas, and data manipulation techniques.
- Hands-on experience with orchestration tools like Prefect, Apache Airflow, or AWS Step Functions for managing complex workflows.
- Proficiency in AWS services, including AWS Glue, AWS Batch, S3, Lambda, RDS, Athena, and Redshift.
- Experience with Docker containerization and Kubernetes for scalable and efficient data processing.
- Strong understanding of data processing layers, batch and streaming data architectures, and analytics frameworks.
- Expertise in SQL and NoSQL databases, query optimization, and data modeling for structured and unstructured data.
- Familiarity with big data technologies like Apache Spark, Hadoop, or similar frameworks.
- Experience implementing data validation, quality checks, and observability for robust data pipelines.
- Strong knowledge of Infrastructure as Code (IaC) using Terraform or AWS CDK for managing cloud-based data infrastructure.
- Ability to work with distributed systems, event-driven architectures (Kafka, Kinesis), and scalable data storage solutions.
- Experience with CI/CD for data workflows, including version control (Git), automated testing, and deployment pipelines.
- Knowledge of data security, encryption, and access control best practices in cloud environments.
- Strong problem-solving skills and ability to collaborate with cross-functional teams, including data scientists and software engineers.

Compensation will be commensurate with experience. If you are interested, please send your application to jobs@precognitas.com. For more information about our work, visit www.caliper.care
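The posting names Prefect, Apache Airflow, and AWS Step Functions as orchestration options. Below is a minimal Airflow 2.x DAG sketch of the ingest-then-validate pattern such pipelines follow; the DAG id, schedule, and task bodies are hypothetical placeholders.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest():
    """Pull raw files from storage and load a staging table (stub)."""
    ...


def validate():
    """Run row-count and schema checks on the staged data (stub)."""
    ...


with DAG(
    dag_id="healthcare_ingest",      # hypothetical
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # Airflow 2.4+ keyword
    catchup=False,
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    validate_task = PythonOperator(task_id="validate", python_callable=validate)

    # Validation only runs after ingestion succeeds.
    ingest_task >> validate_task
```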

Posted 1 month ago

Apply

10.0 - 15.0 years

40 - 50 Lacs

Hyderabad

Hybrid

Envoy Global is a proven innovator in the global immigration space. Our mission combines our industry-leading tech platform with holistic service to streamline, simplify, and expedite the immigration process for employers and individuals.

We are seeking a highly skilled Team Lead or Manager, Data Engineering within Envoy Global's tech team to join us on a full-time, permanent basis. This role is responsible for the end-to-end design, development, and documentation of data pipelines and ETL (Extract, Transform, Load) processes. It focuses on enabling data migration, integration, and warehousing, encompassing the creation of ETL jobs, reports, dashboards, and data pipelines.

As our Senior Data Engineering Lead or Manager, you will be required to:
- Lead and mentor a small team of data engineers, fostering a collaborative and innovative environment.
- Design, develop, and document robust data pipelines and ETL jobs.
- Engage in data modeling activities to ensure efficient and effective data structures.
- Ensure the seamless integration of data across various platforms and systems.
- Lead all aspects of the design, implementation, and maintenance of data engineering pipelines in our Azure environment, including integration with a variety of data sources.
- Collaborate with Data Analytics and DataOps teams, and other partners in Architecture, Engineering, and DevOps, to deliver high-quality data platforms that enable analytics solutions for the business.
- Ensure data engineering standards are in line with established principles of data governance, data quality, and data security.
- Monitor and optimize the performance of data pipelines, ensuring they meet SLAs for data availability and quality.
- Hire, manage, and mentor a team of Data Engineers and Data Quality Engineers.
- Communicate clearly and effectively with stakeholders.

To apply for this role, you should possess the following skills, experience, and qualifications:
- Proven experience in data engineering, with a strong background in designing and developing ETL processes.
- Excellent collaboration skills, with the ability to work effectively with cross-functional teams.
- Leadership experience, with a track record of managing and mentoring a team of data engineers.
- 8+ years of experience as a Data Engineer, with 3+ years in a managerial role.
- Technical experience in one or more cloud-based data warehouse/data lake platforms such as AWS, Snowflake, or Azure Synapse.
- ETL experience using SSIS, ADF, or an equivalent tool.
- Knowledge of data modeling and data warehouse concepts.
- Demonstrated ability to write SQL/T-SQL queries to retrieve and modify data.
- Know-how to troubleshoot potential issues, and experience with best practices around database operations.
- Ability to work in an Agile environment.

Should you have a deep passion for technology and a desire to thrive in a rapidly evolving and creative environment, we would be delighted to receive your application.

Posted 1 month ago

Apply

3.0 - 8.0 years

0 - 3 Lacs

Bengaluru

Remote

If you are passionate about Snowflake, data warehousing, and cloud-based analytics, we'd love to hear from you! Apply now to be a part of our growing team.

Perks and benefits: Interested candidates can apply directly and complete the first round of technical discussion via the link below:
https://app.hyrgpt.com/candidate-job-details?jobId=67ecc88dda1154001cc8b88f

Job Summary: We are looking for a skilled Snowflake Engineer with 3-10 years of experience in designing and implementing cloud-based data warehousing solutions. The ideal candidate will have hands-on expertise in Snowflake architecture, SQL, ETL pipeline development, and performance optimization. This role requires proficiency in handling structured and semi-structured data, data modeling, and query optimization to support business intelligence and analytics initiatives. The ideal candidate will work on a project for one of our key Big 4 consulting customers and will have immense learning opportunities.

Key Responsibilities:
- Design, develop, and manage high-performance data pipelines for ingestion, transformation, and storage in Snowflake.
- Optimize Snowflake workloads, ensuring efficient query execution and cost management.
- Develop and maintain ETL processes using SQL, Python, and orchestration tools.
- Implement data governance, security, and access control best practices within Snowflake.
- Work with structured and semi-structured data formats such as JSON, Parquet, Avro, and XML.
- Design and maintain fact and dimension tables, ensuring efficient data warehousing and reporting.
- Collaborate with data analysts and business teams to support reporting, analytics, and business intelligence needs.
- Troubleshoot and resolve data pipeline issues, ensuring high availability and reliability.
- Monitor and optimize Snowflake storage and compute usage to improve efficiency and performance.

Required Skills & Qualifications:
- 3-10 years of experience in Snowflake, SQL, and data engineering.
- Strong hands-on expertise in Snowflake development, including data sharing, cloning, and time travel.
- Proficiency in SQL scripting for query optimization and performance tuning.
- Experience with ETL tools and frameworks (e.g., DBT, Airflow, Matillion, Talend).
- Familiarity with cloud platforms (AWS, Azure, or GCP) and their integration with Snowflake.
- Strong understanding of data warehousing concepts, including fact and dimension modeling.
- Ability to work with semi-structured data formats like JSON, Avro, Parquet, and XML.
- Knowledge of data security, governance, and access control within Snowflake.
- Excellent problem-solving and troubleshooting skills.

Preferred Qualifications:
- Experience in Python for data engineering tasks.
- Familiarity with CI/CD pipelines for Snowflake development and deployment.
- Exposure to streaming data ingestion and real-time processing.
- Experience with BI tools such as Tableau, Looker, or Power BI.
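Two of the Snowflake features named above, time travel and zero-copy cloning, can be exercised from Python via the snowflake-connector-python package. This is a hedged sketch; the account, credentials, and table names are hypothetical.

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # hypothetical account identifier
    user="etl_user",
    password="...",
    warehouse="ANALYTICS_WH",
    database="SALES",
    schema="PUBLIC",
)
cur = conn.cursor()

# Time travel: query the table as it existed one hour ago.
cur.execute("SELECT COUNT(*) FROM orders AT(OFFSET => -60*60)")
print("rows one hour ago:", cur.fetchone()[0])

# Zero-copy clone: snapshot the table without duplicating storage.
cur.execute("CREATE TABLE orders_backup CLONE orders")

cur.close()
conn.close()
```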

Posted 1 month ago

Apply

6.0 - 10.0 years

10 - 20 Lacs

Chennai

Work from Office

Do you love leading data-driven transformations and mentoring teams in building scalable data platforms? We're looking for a Data Tech Lead to drive innovation, architecture, and execution across our data ecosystem.

Your Role:
- Lead the design and implementation of modern data architecture, ETL/ELT pipelines, and data lakes/warehouses.
- Set technical direction and mentor a team of talented data engineers.
- Collaborate with product, analytics, and engineering teams to translate business needs into data solutions.
- Define and enforce data modeling standards, governance, and naming conventions.
- Take ownership of the end-to-end data lifecycle: ingestion, transformation, storage, access, and monitoring.
- Evaluate and implement the right cloud/on-prem tools and frameworks.
- Troubleshoot and resolve complex data challenges while optimizing for performance and cost.
- Contribute to documentation, design blueprints, and knowledge sharing.

We're Looking for Someone With:
- Proven experience leading data engineering or data platform teams.
- Expertise in designing scalable data architectures and modern data stacks.
- Strong hands-on experience with cloud platforms (AWS/Azure/GCP) and big data tools.
- Proficiency in Python, SQL, Spark, Databricks, or similar tools.
- A passion for clean code, performance tuning, and high-impact delivery.
- Strong communication, collaboration, and leadership skills.

Posted 1 month ago

Apply

3.0 - 5.0 years

11 - 15 Lacs

Bengaluru

Work from Office

About Rippling
Rippling gives businesses one place to run HR, IT, and Finance. It brings together all of the workforce systems that are normally scattered across a company, like payroll, expenses, benefits, and computers. For the first time ever, you can manage and automate every part of the employee lifecycle in a single system. Take onboarding, for example: with Rippling, you can hire a new employee anywhere in the world and set up their payroll, corporate card, computer, benefits, and even third-party apps like Slack and Microsoft 365, all within 90 seconds. Based in San Francisco, CA, Rippling has raised $1.4B+ from the world's top investors, including Kleiner Perkins, Founders Fund, Sequoia, Greenoaks, and Bedrock, and was named one of America's best startup employers by Forbes. We prioritize candidate safety: please be aware that all official communication will only be sent from @Rippling.com addresses.

About The Role
We are seeking a passionate and highly experienced Staff Software Engineer to join our Employment Products team. As the most senior engineer and architect on the team, you will be responsible for designing, building, and scaling a first-of-its-kind Employments product. You will work on complex domains across 10+ countries, building a clean DSL for internal stakeholders, large-scale distributed systems, and cutting-edge performance analytics. Your work will have a direct impact on building a world-class payroll product that accelerates expansion to more countries in a 10x shorter time span.

Key Responsibilities:
- Architect, develop, and maintain large-scale, distributed systems and scalable services for the Rippling Unity Platform.
- Set the direction for engineering best practices and technology adoption.
- Engage in coding and code reviews using Python, Golang, and Java.
- Guide and support engineers, fostering a culture of learning and technical excellence.
- Partner with cross-functional teams to align on goals and ensure successful project outcomes.
- Design and implement clean, modular APIs, including Backend for Frontend (BFF) systems.
- Architect systems capable of supporting millions of users, ensuring performance, reliability, and scalability.
- Design analytical and transactional systems (e.g., Presto, S3, Snowflake, MySQL, Aurora, MongoDB) to handle petabyte-scale data.
- Implement streaming solutions (e.g., Spark Streaming, Apache Flink, Kafka Connect) for transactional and analytical workflows.
- Establish robust observability practices, including monitoring, logging, and tracing.
- Maintain standards and comprehensive documentation for system architecture and operations.

Qualifications:
- Experience: 9+ years of software engineering experience, with at least 3 years in a role leading architecture, designing consumer-facing products, and building systems.
- Technical Expertise: Strong proficiency in backend development, distributed systems, and large-scale data pipelines.
- Data Pipeline Experience: Hands-on experience with data processing frameworks.
- Scalability and Performance: Deep knowledge of building and scaling real-time, high-throughput systems.
- Consumer-Facing Product Development: Experience working on consumer-grade applications with a focus on intuitive user experiences.
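For the streaming responsibilities listed above (Spark Streaming, Flink, Kafka Connect), here is a minimal Spark Structured Streaming sketch that consumes a Kafka topic and lands parsed events in object storage. The broker, topic, schema, and paths are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("payroll-events-stream").getOrCreate()

# Expected shape of each Kafka message (hypothetical).
schema = StructType([
    StructField("employee_id", StringType()),
    StructField("country", StringType()),
    StructField("amount", DoubleType()),
])

# Parse the Kafka value payload from JSON into typed columns.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical
    .option("subscribe", "payroll-events")
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Append to storage with checkpointing so file output is exactly-once.
query = (
    events.writeStream.format("parquet")
    .option("path", "s3://bucket/payroll-events/")
    .option("checkpointLocation", "s3://bucket/checkpoints/payroll-events/")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```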

Posted 1 month ago

Apply

11.0 - 13.0 years

35 - 50 Lacs

Bengaluru

Work from Office

Principal AWS Data Engineer
Location: Bangalore
Experience: 9 - 12 years

Job Summary: In this key leadership role, you will lead the development of foundational components for a Lakehouse architecture on AWS and drive the migration of existing data processing workflows to the new Lakehouse solution. You will work across the Data Engineering organisation to design and implement scalable data infrastructure and processes using technologies such as Python, PySpark, EMR Serverless, Iceberg, Glue, and the Glue Data Catalog. The main goal of this position is to ensure successful migration and establish robust data quality governance across the new platform, enabling reliable and efficient data processing. Success in this role requires deep technical expertise, exceptional problem-solving skills, and the ability to lead and mentor within an agile team.

Must-Have Tech Skills:
- Prior Principal Engineer experience: leading team best practices in design, development, and implementation, mentoring team members, and fostering a culture of continuous learning and innovation.
- Extensive experience in software architecture and solution design, including microservices, distributed systems, and cloud-native architectures.
- Expert in Python and Spark, with a deep focus on ETL data processing and data engineering practices.
- Deep technical knowledge of AWS data services and engineering practices, with demonstrable experience implementing data pipelines using tools like EMR, AWS Glue, AWS Lambda, AWS Step Functions, API Gateway, and Athena.
- Experience delivering Lakehouse solutions/architectures.

Nice-to-Have Tech Skills:
- Knowledge of additional programming languages and development tools, providing flexibility and adaptability across varied data engineering projects.
- A master's degree or relevant certifications (e.g., AWS Certified Solutions Architect, Certified Data Analytics) is advantageous.

Key Accountabilities:
- Lead complex projects autonomously, fostering an inclusive and open culture within development teams.
- Mentor team members and lead technical discussions.
- Provide strategic guidance on best practices in design, development, and implementation.
- Lead the development of high-quality, efficient code, and develop the tools and applications needed to address complex business needs.
- Collaborate closely with architects, Product Owners, and dev team members to decompose solutions into Epics, leading the design and planning of these components.
- Drive the migration of existing data processing workflows to a Lakehouse architecture, leveraging Iceberg capabilities.
- Serve as an internal subject matter expert in software development, advising stakeholders on best practices in design, development, and implementation.

Key Skills:
- Deep technical knowledge of data engineering solutions and practices.
- Expert in AWS services and cloud solutions, particularly as they pertain to data engineering practices.
- Extensive experience in software architecture and solution design.
- Specialized expertise in Python and Spark.
- Ability to provide technical direction, set high standards for code quality, and optimize performance in data-intensive environments.
- Skilled in leveraging automation tools and Continuous Integration/Continuous Deployment (CI/CD) pipelines to streamline development, testing, and deployment.
- Exceptional communicator who can translate complex technical concepts for diverse stakeholders, including engineers, product managers, and senior executives.
- Provides thought leadership within the engineering team, setting high standards for quality, efficiency, and collaboration.
- Experienced in mentoring engineers, guiding them in advanced coding practices, architecture, and strategic problem-solving to enhance team capabilities.

Educational Background: Bachelor's degree in Computer Science, Software Engineering, or a related field is essential.

Bonus Skills: Financial Services expertise preferred, working with Equity and Fixed Income asset classes and a working knowledge of Indices.
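A hedged sketch of the core migration step this role leads: rewriting a legacy Parquet dataset into an Iceberg table from PySpark. It assumes the Iceberg Spark runtime package is on the cluster; the catalog, bucket, and table names are hypothetical.

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("lakehouse-migration")
    # Register a Glue-backed Iceberg catalog named "lake" (hypothetical).
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config(
        "spark.sql.catalog.lake.catalog-impl",
        "org.apache.iceberg.aws.glue.GlueCatalog",
    )
    .config("spark.sql.catalog.lake.warehouse", "s3://bucket/warehouse/")
    .getOrCreate()
)

# Read the legacy dataset (hypothetical path).
legacy = spark.read.parquet("s3://bucket/legacy/trades/")

# Iceberg brings ACID writes, schema evolution, and time travel to S3 data.
# createOrReplace() assumes the target namespace already exists.
legacy.writeTo("lake.marts.trades").createOrReplace()
```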

Posted 1 month ago

Apply

8.0 - 10.0 years

27 - 42 Lacs

Bengaluru

Work from Office

Job Summary:
Experience: 4 - 8 years
Location: Bangalore

The Data Engineer will contribute to building state-of-the-art data Lakehouse platforms in AWS, leveraging Python and Spark. You will be part of a dynamic team, building innovative and scalable data solutions in a supportive and hybrid work environment. You will design, implement, and optimize workflows using Python and Spark, contributing to our robust data Lakehouse architecture on AWS. Success in this role requires previous experience building data products using AWS services, familiarity with Python and Spark, problem-solving skills, and the ability to collaborate effectively within an agile team.

Must-Have Tech Skills:
- Demonstrable previous experience as a data engineer.
- Technical knowledge of data engineering solutions and practices.
- Implementation of data pipelines using tools like EMR, AWS Glue, AWS Lambda, AWS Step Functions, API Gateway, and Athena.
- Proficient in Python and Spark, with a focus on ETL data processing and data engineering practices.

Nice-to-Have Tech Skills:
- Familiarity with data services in a Lakehouse architecture.
- Familiarity with technical design practices, allowing for the creation of scalable, reliable data products that meet both technical and business requirements.
- A master's degree or relevant certifications (e.g., AWS Certified Solutions Architect, Certified Data Analytics) is advantageous.

Key Accountabilities:
- Writes high-quality code, ensuring solutions meet business requirements and technical standards.
- Works with architects, Product Owners, and development leads to decompose solutions into Epics, assisting in the design and planning of these components.
- Creates clear, comprehensive technical documentation that supports knowledge sharing and compliance.
- Experience decomposing solutions into components (Epics, stories) to streamline development.
- Actively contributes to technical discussions, supporting a culture of continuous learning and innovation.

Key Skills:
- Proficient in Python and familiar with a variety of development technologies.
- Previous experience implementing data pipelines, including the use of ETL tools to streamline data ingestion, transformation, and loading.
- Solid understanding of AWS services and cloud solutions, particularly as they pertain to data engineering practices.
- Familiar with AWS solutions including IAM, Step Functions, Glue, Lambda, RDS, SQS, API Gateway, and Athena.
- Proficient in quality assurance practices, including code reviews, automated testing, and best practices for data validation.
- Experienced in Agile development, including sprint planning, reviews, and retrospectives.

Educational Background: Bachelor's degree in Computer Science, Software Engineering, or a related field is essential.

Bonus Skills:
- Financial Services expertise preferred, working with Equity and Fixed Income asset classes and a working knowledge of Indices.
- Familiar with implementing and optimizing CI/CD pipelines; understands the processes that enable rapid, reliable releases, minimizing manual effort and supporting agile development cycles.

Posted 1 month ago

Apply

6.0 - 10.0 years

15 - 20 Lacs

Hyderabad

Work from Office

Develop, optimize, and maintain scalable data pipelines using Python and PySpark. Design and implement data processing workflows leveraging GCP services such as BigQuery, Dataflow, Cloud Functions, and Cloud Storage.
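An illustrative PySpark job for the workflow described above: read raw files from Cloud Storage, transform them, and load the result into BigQuery through the spark-bigquery connector (assumed to be available on the cluster). The bucket, dataset, and columns are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("gcs-to-bq").getOrCreate()

# Ingest raw JSON from Cloud Storage (hypothetical bucket).
raw = spark.read.json("gs://bucket/raw/visits/")

# Deduplicate on the business key and derive a date column.
clean = raw.dropDuplicates(["visit_id"]).withColumn(
    "visit_date", F.to_date("visit_ts")
)

# Load into BigQuery; the connector stages data via a temporary GCS bucket.
(
    clean.write.format("bigquery")
    .option("table", "analytics.visits")      # hypothetical dataset.table
    .option("temporaryGcsBucket", "bucket-tmp")
    .mode("append")
    .save()
)
```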

Posted 1 month ago

Apply

5.0 - 8.0 years

10 - 20 Lacs

Mumbai, Hyderabad, Bengaluru

Hybrid

Greetings from Teamware Solutions, a division of Quantum Leap Consulting Pvt. Ltd. We are hiring a Senior Data Engineer.

Work Mode: Hybrid
Location: Bengaluru, Hyderabad, Mumbai, Kolkata, Gurgaon, Noida
Experience: 5 - 8 Years
Notice Period: Immediate to 15 days

Job Summary: We are seeking a highly motivated and experienced Senior Data Engineer to join our team. This role requires a deep curiosity about our business and a passion for technology and innovation. You will be responsible for designing and developing robust, scalable data engineering solutions that drive our business intelligence and data-driven decision-making processes. If you thrive in a dynamic environment and have a strong desire to deliver top-notch data solutions, we want to hear from you.

Key Responsibilities:
- Collaborate with agile teams to design and develop cutting-edge data engineering solutions.
- Build and maintain distributed, low-latency, and reliable data pipelines, ensuring high availability and timely delivery of data.
- Design and implement optimized data engineering solutions for Big Data workloads to handle increasing data volumes and complexities.
- Develop high-performance, real-time data ingestion solutions for streaming workloads.
- Adhere to best practices and established design patterns across all data engineering initiatives.
- Ensure code quality through elegant design, efficient coding, and performance optimization.
- Focus on data quality and consistency by implementing monitoring processes and systems.
- Produce detailed design and test documentation, including Data Flow Diagrams, Technical Design Specs, and Source-to-Target Mapping documents.
- Perform data analysis to troubleshoot and resolve data-related issues.
- Automate data engineering pipelines and data validation processes to eliminate manual interventions.
- Implement data security and privacy measures, including access controls, key management, and encryption techniques.
- Stay updated on technology trends, experimenting with new tools and educating team members.
- Collaborate with analytics and business teams to improve data models and enhance data accessibility.
- Communicate effectively with both technical and non-technical stakeholders.

Qualifications:
- Education: Bachelor's degree in Computer Science, Computer Engineering, or a related field.
- Experience: Minimum of 5+ years in architecting, designing, and building data engineering solutions and data platforms.
- Proven experience building Lakehouses or Data Warehouses on platforms like Databricks or Snowflake.
- Expertise in designing and building highly optimized batch/streaming data pipelines using Databricks.
- Proficiency with data acquisition and transformation tools such as Fivetran and DBT.
- Strong experience building efficient data engineering pipelines using Python and PySpark.
- Experience with distributed data processing frameworks such as Apache Hadoop, Apache Spark, or Flink.
- Familiarity with real-time data stream processing using tools like Apache Kafka, Kinesis, or Spark Structured Streaming.
- Experience with various AWS services, including S3, EC2, EMR, Lambda, RDS, DynamoDB, Redshift, and Glue Catalog.
- Expertise in advanced SQL programming and performance tuning.

Key Skills:
- Strong problem-solving abilities and perseverance in the face of ambiguity.
- Excellent emotional intelligence and interpersonal skills.
- Ability to build and maintain productive relationships with internal and external stakeholders.
- A self-starter mentality with a focus on growth and quick learning.
- Passion for operational products and creating outstanding employee experiences.

If you are interested in this position, please send your resume to netra.s@twsol.com.

Posted 1 month ago

Apply

6.0 - 10.0 years

15 - 25 Lacs

Hyderabad

Hybrid

- 6-10 years of strong understanding of data pipeline/data warehouse management.
- SQL Server/SSIS package based.
- Microsoft ADF & Power BI based.
- Snowflake on AWS.

Required Candidate profile:
- Strong SQL knowledge.
- Good experience in ITIL processes (incident, change & problem management).

Posted 1 month ago

Apply

6.0 - 10.0 years

15 - 25 Lacs

Bengaluru

Work from Office

Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role
Are you ready to dive headfirst into the captivating world of data engineering at Kyndryl? As a Data Engineer, you'll be the visionary behind our data platforms, crafting them into powerful tools for decision-makers. Your role? Ensuring a treasure trove of pristine, harmonized data is at everyone's fingertips.

As an AWS Data Engineer at Kyndryl, you'll be at the forefront of the data revolution, crafting and shaping data platforms that power our organization's success. This role is not just about code and databases; it's about transforming raw data into actionable insights that drive strategic decisions and innovation.

In this role, you'll be engineering the backbone of our data infrastructure, ensuring the availability of pristine, refined data sets. With a well-defined methodology, critical thinking, and a rich blend of domain expertise, consulting finesse, and software engineering prowess, you'll be the mastermind of data transformation.

Your journey begins by understanding project objectives and requirements from a business perspective, converting this knowledge into a data puzzle. You'll be delving into the depths of information to uncover quality issues and initial insights, setting the stage for data excellence. But it doesn't stop there. You'll be the architect of data pipelines, using your expertise to cleanse, normalize, and transform raw data into the final dataset – a true data alchemist.

Armed with a keen eye for detail, you'll scrutinize data solutions, ensuring they align with business and technical requirements. Your work isn't just a means to an end; it's the foundation upon which data-driven decisions are made – and your lifecycle management expertise will ensure our data remains fresh and impactful. So, if you're a technical enthusiast with a passion for data, we invite you to join us in the exhilarating world of data engineering at Kyndryl. Let's transform data into a compelling story of innovation and growth.

Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.

Who You Are
You're good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you're open and borderless – naturally inclusive in how you work with others.

Required Skills and Experience
• 10+ years of experience in data engineering with a minimum of 6 years on AWS.
• Proficiency in AWS data services, including S3, Redshift, DynamoDB, Glue, Lambda, and EMR.
• Strong SQL skills and experience with NoSQL databases on AWS.
• Programming skills in Python, Java, or Scala for data processing and ETL tasks.
• Solid understanding of data warehousing concepts, data modeling, and ETL best practices.
• Experience with machine learning model deployment on AWS SageMaker.
• Familiarity with data orchestration tools, such as Apache Airflow, AWS Step Functions, or AWS Data Pipeline.
• Excellent problem-solving and analytical skills with attention to detail.
• Strong communication skills and ability to collaborate effectively with both technical and non-technical stakeholders.
• Experience with advanced AWS analytics services such as Athena, Kinesis, QuickSight, and Elasticsearch.
• Hands-on experience with Amazon Bedrock and generative AI tools for exploring and implementing AI-based solutions.
• AWS certifications, such as AWS Certified Big Data – Specialty, AWS Certified Machine Learning – Specialty, or AWS Certified Solutions Architect.
• Familiarity with CI/CD pipelines, containerization (Docker), and serverless computing concepts on AWS.

Preferred Skills and Experience
• Experience working as a Data Engineer and/or in cloud modernization.
• Experience in data modelling, creating conceptual models of how data is connected and how it will be used in business processes.
• Professional certification, e.g., Open Certified Technical Specialist with Data Engineering Specialization.
• Cloud platform certification, e.g., AWS Certified Data Analytics – Specialty, Elastic Certified Engineer, Google Cloud Professional Data Engineer, or Microsoft Certified: Azure Data Engineer Associate.
• Understanding of social coding and Integrated Development Environments, e.g., GitHub and Visual Studio.
• Degree in a scientific discipline, such as Computer Science, Software Engineering, or Information Technology.

Being You
Diversity is a whole lot more than what we look like or where we come from; it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.

What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees, and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred!
If you know someone who works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.

Posted 1 month ago

Apply

2.0 - 7.0 years

2 - 3 Lacs

Pune

Work from Office

Work with the founder and core team to build and optimize quant trading strategies. Use Python and ML to analyze market data. No prior finance experience needed—training in options and execution provided. Just bring curiosity and coding skills!

Posted 1 month ago

Apply

4.0 - 8.0 years

7 - 17 Lacs

Pune, Chennai, Bengaluru

Hybrid

Job Summary: We are seeking a skilled ETL Tester with strong expertise in Snowflake and data validation to join our data engineering team. The ideal candidate should have 5-8 years of experience in ETL testing, strong SQL skills, and proven experience working in cloud data platforms, especially Snowflake.

Key Responsibilities:
• Design, develop, and execute test cases for ETL workflows and data pipelines.
• Perform data validation and reconciliation between source systems and Snowflake.
• Verify data completeness, accuracy, transformation logic, and data quality rules.
• Collaborate with data engineers and business analysts to understand data flow and transformation logic.
• Develop SQL queries to validate large datasets and perform manual and automated testing.
• Test performance and scalability of Snowflake data models and transformations.
• Validate data ingestion from various sources into Snowflake (e.g., S3, Azure Blob, APIs).
• Create test plans, test cases, test scripts, and defect tracking documentation.
• Work in Agile/Scrum teams and participate in sprint planning and retrospectives.
• Work with CI/CD pipelines for test automation in data environments.

Required Skills:
• 5-8 years of ETL/Data Warehouse testing experience.
• Minimum 2 years of experience with Snowflake, understanding Snowflake architecture, SQL, and data warehousing concepts.
• Strong knowledge of SQL (complex joins, CTEs, window functions).
• Familiarity with ETL tools like Informatica, Talend, DataStage, or Azure Data Factory.
• Experience testing data pipelines across cloud platforms (AWS, Azure, or GCP).
• Experience in data quality testing, data profiling, and data lineage validation.
• Proficient in defect tracking tools like JIRA or Azure DevOps.
• Knowledge of automation frameworks for ETL testing is a plus (e.g., Python, Selenium, dbt tests).
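The SQL skills listed above (CTEs, window functions) are exactly what ETL reconciliation checks exercise. Below is a hedged example: a duplicate-key check against a Snowflake target, wrapped as a test helper. Connection setup is omitted, and the table and key names are hypothetical.

```python
# A CTE plus a ROW_NUMBER() window function flags duplicate business keys.
DUPLICATE_KEY_CHECK = """
WITH ranked AS (
    SELECT
        order_id,
        ROW_NUMBER() OVER (
            PARTITION BY order_id ORDER BY load_ts DESC
        ) AS rn
    FROM curated.orders
)
SELECT order_id
FROM ranked
WHERE rn > 1
"""


def assert_no_duplicate_keys(cursor):
    """Fail the test run if the target table carries duplicate order_ids."""
    cursor.execute(DUPLICATE_KEY_CHECK)
    dupes = cursor.fetchall()
    assert not dupes, f"{len(dupes)} duplicate order_id values found"
```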

Posted 1 month ago

Apply

9.0 - 14.0 years

8 - 13 Lacs

Bengaluru

Work from Office

Utilizes software engineering principles to deploy and maintain fully automated data transformation pipelines that combine a large variety of storage and computation technologies to handle a distribution of data types and volumes in support of data architecture design. A Senior Data Engineer designs and oversees the entire data infrastructure, data products, and data pipelines that are resilient to change, modular, flexible, scalable, reusable, and cost-effective.

Key Responsibilities:
- Oversee the entire data infrastructure to ensure scalability, operational efficiency, and resiliency.
- Mentor junior data engineers within the organization.
- Design, develop, and maintain data pipelines and ETL processes using Microsoft Azure services (e.g., Azure Data Factory, Azure Synapse, Azure Databricks, Azure Fabric).
- Utilize Azure data storage accounts for organizing and maintaining data pipeline outputs (e.g., Azure Data Lake Storage Gen 2 & Azure Blob Storage).
- Collaborate with data scientists, data analysts, data architects, and other stakeholders to understand data requirements and deliver high-quality data solutions.
- Optimize data pipelines in the Azure environment for performance, scalability, and reliability.
- Ensure data quality and integrity through data validation techniques and frameworks.
- Develop and maintain documentation for data processes, configurations, and best practices.
- Monitor and troubleshoot data pipeline issues to ensure timely resolution.
- Stay current with industry trends and emerging technologies to ensure our data solutions remain cutting-edge.
- Manage the CI/CD process for deploying and maintaining data solutions.

Posted 1 month ago

Apply

3.0 - 8.0 years

5 - 15 Lacs

Bengaluru

Work from Office

Role: Data Engineer
Location: Bangalore
Experience: 3+ Yrs
Employment Type: Full Time, Permanent
Working Mode: Regular

Job Description: Utilizes software engineering principles to deploy and maintain fully automated data transformation pipelines that combine a large variety of storage and computation technologies to handle a distribution of data types and volumes in support of data architecture design. A Data Engineer designs data products and data pipelines that are resilient to change, modular, flexible, scalable, reusable, and cost-effective.

Key Responsibilities:
- Design, develop, and maintain data pipelines and ETL processes using Microsoft Azure services (e.g., Azure Data Factory, Azure Synapse, Azure Databricks, Azure Fabric).
- Utilize Azure data storage accounts for organizing and maintaining data pipeline outputs (e.g., Azure Data Lake Storage Gen 2 & Azure Blob Storage).
- Collaborate with data scientists, data analysts, data architects, and other stakeholders to understand data requirements and deliver high-quality data solutions.
- Optimize data pipelines in the Azure environment for performance, scalability, and reliability.
- Ensure data quality and integrity through data validation techniques and frameworks.
- Develop and maintain documentation for data processes, configurations, and best practices.
- Monitor and troubleshoot data pipeline issues to ensure timely resolution.
- Stay current with industry trends and emerging technologies to ensure our data solutions remain cutting-edge.
- Manage the CI/CD process for deploying and maintaining data solutions.

Posted 1 month ago

Apply

8.0 - 12.0 years

12 - 22 Lacs

Hyderabad

Work from Office

We are seeking a highly experienced and self-driven Senior Data Engineer to design, build, and optimize modern data pipelines and infrastructure. This role requires deep expertise in Snowflake, DBT, Python, and cloud data ecosystems. You will play a critical role in enabling data-driven decision-making across the organization by ensuring the availability, quality, and integrity of data.

Key Responsibilities:
- Design and implement robust, scalable, and efficient data pipelines using ETL/ELT frameworks.
- Develop and manage data models and data warehouse architecture within Snowflake.
- Create and maintain DBT models for transformation, lineage tracking, and documentation.
- Write modular, reusable, and optimized Python scripts for data ingestion, transformation, and automation.
- Collaborate closely with data analysts, data scientists, and business teams to gather and fulfill data requirements.
- Ensure data integrity, consistency, and governance across all stages of the data lifecycle.
- Monitor pipeline performance and implement optimization strategies for queries and storage.
- Follow best practices for data engineering, including version control (Git), testing, and CI/CD integration.

Required Skills and Qualifications:
- 8+ years of experience in Data Engineering or related roles.
- Deep expertise in Snowflake: schema design, performance tuning, security, and access controls.
- Proficiency in Python, particularly for scripting, data transformation, and workflow automation.
- Strong understanding of data modeling techniques (e.g., star/snowflake schema, normalization).
- Proven experience with DBT for building modular, tested, and documented data pipelines.
- Familiarity with ETL/ELT tools and orchestration platforms like Apache Airflow or Prefect.
- Advanced SQL skills with experience handling large and complex data sets.
- Exposure to cloud platforms such as AWS, Azure, or GCP and their data services.

Preferred Qualifications:
- Experience implementing data quality checks and governance frameworks.
- Understanding of the modern data stack and CI/CD pipelines for data workflows.
- Contributions to data engineering best practices, open-source projects, or thought leadership.
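A hedged sketch of the Snowflake-plus-DBT workflow this posting combines: a quality gate, a bulk load with the connector's write_pandas helper, then a targeted DBT run for the downstream models. The staging table (assumed to already exist), DataFrame columns, and model selector are hypothetical.

```python
import subprocess

import pandas as pd
from snowflake.connector.pandas_tools import write_pandas


def load_and_transform(conn, df: pd.DataFrame) -> None:
    # Quality gate: reject rows missing the business key before loading.
    if df["order_id"].isna().any():
        raise ValueError("NULL order_id rows rejected before load")

    # Bulk-load the frame into an existing staging table.
    write_pandas(conn, df, table_name="STG_ORDERS")

    # Rebuild only the DBT models downstream of the staging model.
    subprocess.run(["dbt", "run", "--select", "stg_orders+"], check=True)
```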

Posted 1 month ago

Apply

3.0 - 6.0 years

5 - 8 Lacs

Kolkata

Work from Office

About the job: As a Mid Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges.

What You'll Do:
- Design and develop data processing pipelines and analytics solutions using Databricks.
- Architect scalable and efficient data models and storage solutions on the Databricks platform.
- Collaborate with architects and other teams to migrate the current solution to Databricks.
- Optimize the performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements.
- Apply best practices for data governance, security, and compliance on the Databricks platform.
- Mentor junior engineers and provide technical guidance.
- Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement.

You'll Be Expected To Have:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 3 to 6 years of overall experience and 2+ years of experience designing and implementing data solutions on the Databricks platform.
- Proficiency in programming languages such as Python, Scala, or SQL.
- Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP, and their associated data services.
- Proven track record of delivering scalable and reliable data solutions in a fast-paced environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills with the ability to work effectively in cross-functional teams.
- Good to have: experience with containerization technologies such as Docker and Kubernetes.
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.
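Two of the routine Databricks tuning steps implied by "optimize performance and reliability" can be issued as Delta Lake SQL from a notebook. A minimal sketch; the table and column names are hypothetical.

```python
from pyspark.sql import SparkSession

# On Databricks the session already exists; getOrCreate() reuses it.
spark = SparkSession.builder.getOrCreate()

# Compact small files and cluster data on a frequently filtered column.
spark.sql("OPTIMIZE sales.orders ZORDER BY (customer_id)")

# Drop data files no longer referenced by the table (default retention applies).
spark.sql("VACUUM sales.orders")
```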

Posted 1 month ago

Apply

6.0 - 9.0 years

8 - 11 Lacs

Chennai

Work from Office

About the job:
Role: Microsoft Fabric Data Engineer
Experience: 6+ years as an Azure Data Engineer, including at least one end-to-end implementation in Microsoft Fabric.

Responsibilities:
- Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses.
- Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions.
- Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment.
- Collaborate with stakeholders to translate business needs into actionable data solutions.
- Troubleshoot and optimize existing Fabric implementations for enhanced performance.

Skills:
- Solid foundational knowledge of data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized).
- Design and implement scalable and efficient data pipelines using Data Factory (Data Pipeline, Data Flow Gen 2, etc.) in Fabric, PySpark notebooks, Spark SQL, and Python. This includes data ingestion, data transformation, and data loading processes.
- Experience ingesting data from SAP systems like SAP ECC/S4HANA/SAP BW, etc., will be a plus.
- Nice to have: ability to develop dashboards or reports using tools like Power BI.

Coding Fluency:
- Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.

Posted 1 month ago

Apply

4.0 - 9.0 years

6 - 11 Lacs

Mumbai

Work from Office

Role: Senior Databricks Engineer

In this role, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges.

What you'll do:
- Design and develop data processing pipelines and analytics solutions using Databricks.
- Architect scalable and efficient data models and storage solutions on the Databricks platform.
- Collaborate with architects and other teams to migrate the current solution to Databricks.
- Optimize the performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements.
- Apply best practices for data governance, security, and compliance on the Databricks platform.
- Mentor junior engineers and provide technical guidance.
- Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement.

You'll be expected to have:
- Bachelor's or master's degree in Computer Science, Engineering, or a related field.
- 5 to 8 years of overall experience and 2+ years of experience designing and implementing data solutions on the Databricks platform.
- Proficiency in programming languages such as Python, Scala, or SQL.
- Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP, and their associated data services.
- Proven track record of delivering scalable and reliable data solutions in a fast-paced environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills with the ability to work effectively in cross-functional teams.
- Good to have: experience with containerization technologies such as Docker and Kubernetes.
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.

Posted 1 month ago

Apply

7.0 - 12.0 years

0 - 0 Lacs

Chennai

Work from Office

What you will do: ACV's Machine Learning (ML) team is looking to grow its MLOps team. Multiple ACV operations and product teams rely on the ML team's solutions. Current deployments drive opportunities in the marketplace, in operations, and in sales, to name a few. As ACV has experienced hypergrowth over the past few years, the volume, variety, and velocity of these deployments have grown considerably, and with them the training, deployment, and monitoring needs of the ML team. MLOps is a critical function that helps us continue to deliver value to our partners and our customers.

Successful candidates will demonstrate excellent skill and maturity, be self-motivated as well as team-oriented, and be able to support the development and implementation of end-to-end ML-enabled software solutions to meet the needs of their stakeholders. Those who excel in this role listen with an ear to the overarching goal, not just the immediate concern that started the query, and can show that their recommendations are grounded in an understanding of the practical problem, the data, and the theory, as well as what product and software solutions are feasible and desirable.

The core responsibilities of this role are:
- Working with fellow machine learning engineers to build, automate, deploy, and monitor ML applications.
- Developing data pipelines that feed ML models.
- Deploying new ML models into production.
- Building REST APIs to serve ML model predictions.
- Monitoring the performance of models in production.

Required Qualifications:
- Graduate education in a computationally intensive domain or equivalent work experience.
- 2-4 years of prior relevant work or lab experience in ML projects/research, and 1+ years as tech lead or major contributor on cross-team projects.
- Advanced proficiency with Python, SQL, etc.
- Experience with cloud services (AWS/GCP) and Kubernetes, Docker, CI/CD.

Preferred Qualifications:
- Experience with MLOps-specific tooling like Vertex AI, Ray, Feast, Kubeflow, or ClearML is a plus.
- Experience with EDA, including data pipeline building and data visualization.
- Experience building ML models.
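One responsibility above is building REST APIs that serve model predictions. A minimal FastAPI sketch of that pattern follows; the model file and feature names are hypothetical.

```python
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # hypothetical trained model


class Features(BaseModel):
    mileage: float
    age_years: float


@app.post("/predict")
def predict(features: Features) -> dict:
    # scikit-learn style predict over a single feature row.
    price = model.predict([[features.mileage, features.age_years]])[0]
    return {"predicted_price": float(price)}
```

Run locally with, e.g., `uvicorn main:app` and POST JSON to /predict.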

Posted 1 month ago

Apply

3.0 - 8.0 years

5 - 13 Lacs

Mumbai Suburban, Navi Mumbai, Mumbai (All Areas)

Work from Office

Design and implement Python AI/ML/Gen AI models, algorithms, and applications. Coordinate with data scientists to translate their ideas into working solutions. Apply ML techniques to explore and analyze data for patterns and insights. Support deployment of models to production environments.

Required Candidate profile: Strong proficiency in Python, including its libraries. Experience with machine learning algorithms and techniques. Understanding of AI concepts and neural networks. Experience with one of the cloud platforms and cloud-based machine learning.

Perks and benefits: 10% additional variable on top of fixed pay + mediclaim.

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
