0.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Ready to shape the future of work? At Genpact, we don't just adapt to change - we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. Our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today.

Inviting applications for the role of Principal Consultant - MLOps Engineer! In this role, you will lead the automation and orchestration of our machine learning infrastructure and CI/CD pipelines on public cloud (preferably AWS). This role is essential for enabling scalable, secure, and reproducible deployments of both classical AI/ML models and Generative AI solutions in production environments.

Responsibilities
- Develop and maintain CI/CD pipelines for AI/GenAI models on AWS using GitHub Actions and CodePipeline (not limited to these).
- Automate infrastructure provisioning using IaC (Terraform, Bicep, etc.) on any cloud platform - Azure or AWS.
- Package and deploy AI/GenAI models on AWS services (SageMaker, Lambda, API Gateway); see the deployment sketch after this posting.
- Write Python scripts for automation, deployment, and monitoring.
- Engage in the design, development, and maintenance of data pipelines for various AI use cases.
- Actively contribute to key deliverables as part of an agile development team.
- Set up model monitoring, logging, and alerting (e.g., drift, latency, failures).
- Ensure model governance, versioning, and traceability across environments.
- Collaborate with others to source, analyse, test, and deploy data processes.
- Experience in GenAI projects.

Qualifications we seek in you!
Minimum Qualifications
- Experience with MLOps practices.
- Degree/qualification in Computer Science or a related field, or equivalent work experience.
- Experience developing, testing, and deploying data pipelines.
- Strong Python programming skills.
- Hands-on experience deploying 2-3 AI/GenAI models in AWS.
- Familiarity with LLM APIs (e.g., OpenAI, Bedrock) and vector databases.
- Clear and effective communication skills to interact with team members, stakeholders, and end users.

Preferred Qualifications/Skills
- Experience with Docker-based deployments.
- Exposure to model monitoring tools (Evidently, CloudWatch).
- Familiarity with RAG stacks or fine-tuning LLMs.
- Understanding of GitOps practices.
- Knowledge of governance and compliance policies, standards, and procedures.

Why join Genpact
- Be a transformation leader - Work at the cutting edge of AI, automation, and digital innovation
- Make an impact - Drive change for global enterprises and solve business challenges that matter
- Accelerate your career - Get hands-on experience, mentorship, and continuous learning opportunities
- Work with the best - Join 140,000+ bold thinkers and problem-solvers who push boundaries every day
- Thrive in a values-driven culture - Our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit', paying to apply, or purchasing equipment or training.
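For illustration, a minimal sketch of the kind of model-deployment step this posting describes, using boto3 to stand up a SageMaker endpoint. All names here (model name, role ARN, ECR image, S3 artifact) are placeholders, not details from the posting:

```python
"""Minimal sketch: publishing a packaged model to a SageMaker endpoint with boto3.
In a CI/CD pipeline these calls would run after tests pass; all names are placeholders."""
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

MODEL_NAME = "churn-model-v1"  # hypothetical model name
ROLE_ARN = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder

# Register the model artifact and its serving container.
sm.create_model(
    ModelName=MODEL_NAME,
    ExecutionRoleArn=ROLE_ARN,
    PrimaryContainer={
        "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/churn:latest",  # placeholder image
        "ModelDataUrl": "s3://my-bucket/models/churn/model.tar.gz",  # placeholder artifact
    },
)

# Describe how the endpoint should be provisioned.
sm.create_endpoint_config(
    EndpointConfigName=f"{MODEL_NAME}-config",
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": MODEL_NAME,
        "InstanceType": "ml.m5.large",
        "InitialInstanceCount": 1,
    }],
)

# Create (or, on subsequent releases, update) the live endpoint.
sm.create_endpoint(EndpointName=MODEL_NAME, EndpointConfigName=f"{MODEL_NAME}-config")
```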
Posted 3 weeks ago
4.0 - 9.0 years
20 - 35 Lacs
Pune
Hybrid
Key Skills: Data Science, Machine Learning, NLP, Generative AI, LLM, Transformer Networks, GANs, VAEs, Prompt Engineering, Model Development, MarkLogic (preferred), Cloud-native models, Model Deployment, Data Pipelines

Roles & Responsibilities:
- Apply deep learning and generative modeling techniques to develop LLM solutions in AI, especially in the field of Natural Language Processing (NLP).
- Work with various LLM technologies and Transformer Encoder Networks to enhance NLP capabilities.
- Design and implement state-of-the-art generative models for tasks such as text generation, text completion, language translation, and document summarization (see the prompting sketch after this posting).
- Utilize expertise in machine learning, focusing on generative models like GANs, VAEs, and transformer-based architectures.
- Develop and optimize model development, model serving, and training/re-training techniques in data-sparse environments.
- Apply prompt engineering techniques for developing instruction-based LLMs.
- Collaborate with SAs and cross-functional teams to identify business requirements and deliver solutions that meet customer needs.
- Stay updated with the latest advancements in generative AI and LLM technologies.
- Evaluate and preprocess large-scale datasets, ensuring data quality and integrity, and develop data pipelines for model training and evaluation.
- Articulate model behavior analysis and hallucination effects to business stakeholders.
- Develop guardrails for LLMs, leveraging both open-source and cloud-native models.
- Collaborate with software engineers to deploy and optimize generative models in production environments.
- Mentor junior data scientists and contribute to the growth of the data science team.

Experience Requirement:
- 4-10 years of experience working in Data Science, Machine Learning, and especially NLP technologies.
- Experience with LLM technologies and a solid understanding of Transformer Encoder Networks.
- Proven experience with generative models, including GANs, VAEs, and transformer-based architectures.
- Familiarity with model development, training/re-training techniques, and deployment in data-sparse environments.
- Strong understanding of prompt engineering techniques for LLMs.
- Experience with data preprocessing and building data pipelines for generative models.
- Ability to explain model behaviors, hallucination effects, and behavioral analysis techniques to stakeholders.
- Experience with developing guardrails for LLMs using open-source or cloud-native models.
- Exposure to MarkLogic is a plus.

Education: Any Graduation.
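For illustration, a toy instruction-style prompting example using the Hugging Face transformers pipeline; the model (gpt2, chosen only because it is small) and the prompt are arbitrary stand-ins for the LLM work the role describes:

```python
"""Illustrative sketch of instruction-style prompting with a small open model.
The model choice and prompt are demo placeholders, not tied to this posting."""
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # tiny model, demo only

prompt = (
    "Summarize the following support ticket in one sentence:\n"
    "Customer reports login failures after the latest app update.\n"
    "Summary:"
)

# Deterministic decoding with a short continuation; real LLM work would tune these.
result = generator(prompt, max_new_tokens=40, do_sample=False)
print(result[0]["generated_text"])
```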
Posted 3 weeks ago
8.0 - 12.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Ford Motor Credit Company is on a mission to modernize its legacy platforms in order to transform the business and harness the power of data to enhance customer experiences, ensure regulatory compliance, and improve operational efficiencies. The company is rearchitecting core applications and platforms to transition to unified, modern, scalable, and secure solutions on Google Cloud. As a Solution Architect joining the FC India IT Architecture team, you will play a crucial role in defining the optimal platform and domain architecture. Your responsibilities will include creating solutions for cloud implementation across various engagements such as experimentation, proofs of concept, and production deployments. It is essential to ensure alignment with Ford standards and the global value delivery framework while promoting reuse. In this role, you will provide architecture guidance to engineering teams in India and the US. Moreover, you will drive business adoption of the new platform and facilitate the sunset of legacy platforms. Your expertise will be instrumental in providing leadership, analysis, and design tasks to support the development of technology solutions that meet business needs and align with architectural governance and standards.

Responsibilities:
- Stay abreast of emerging technology trends and disruptions to enable new business and operating models.
- Collaborate with global architecture teams to build cloud solutions.
- Develop future architecture utilizing a cloud-native digital stack and unified data platforms.
- Quickly grasp business processes and the current application landscape, and create solution roadmaps.
- Lead the creation and management of application integration models, GCP infrastructure provisioning, and DevOps pipelines.
- Participate in architecture assessments on technical solutions and provide recommendations aligned with business needs and architectural governance.
- Collaborate with teams on cloud-based design, development, and data mesh architecture.
- Deliver standard definitions, reference models, and architecture designs to support the architecture review board (ARB) in assessing technology investments.
- Provide advisory and technical consulting across workstreams to engineering, product, and testing teams.

Qualifications:
Required Skills and Selection Criteria:
- Minimum 8 years of relevant work experience in solution architecture with a strong understanding of cloud hosting concepts.
- Experience in migrating legacy applications to containerized microservice-based applications.
- Proficiency in domain-driven design and data mesh principles.
- Hands-on experience with Java, Angular, React, Postgres, GraphQL, REST APIs, CI/CD pipelines, and event-based architecture.
- Solid knowledge of front-end development, GraphQL, CRM, data warehousing, ETL, etc.
- Ability to define non-functional requirements (NFRs) for cloud-based solutions.
- Hands-on experience implementing APIs, microservices, and application security.
- Familiarity with enterprise frameworks, architecture design patterns, and secure interoperability standards.
- Understanding of traditional and cloud data warehouse environments and building data pipelines on the cloud.
- Practical experience with DevOps principles, continuous integration and deployment (CI/CD), automated testing, and deployment pipelines.
- Implementation of cloud security best practices and tools such as IAM, encryption, and network security.
- Proficiency in microservices architecture.
- Google Professional Cloud Architect certification is an advantage.
- Excellent communication, collaboration, problem-solving skills, and attention to detail.
- Exposure to Agile and SAFe development processes.

Nice to Have:
- Master's degree in Computer Science/Engineering, Data Science, or a related field.
- Strong presentation skills with the ability to communicate architectural proposals effectively.
- Knowledge or certification in TOGAF or an equivalent framework.
Posted 3 weeks ago
3.0 - 12.0 years
0 Lacs
Kolkata, West Bengal
On-site
As a Cloud DB Engineer, you will be responsible for designing and developing data pipelines to collect, transform, and store data from various sources in order to support analytics and business intelligence. Your role will involve integrating data from multiple sources, including databases, APIs, and third-party tools, to ensure consistency and accuracy across all data systems. You will also be tasked with designing, implementing, and optimizing both relational and non-relational databases to facilitate efficient storage, retrieval, and processing of data. Data modeling will be a key aspect of your responsibilities, where you will develop and maintain data models that represent data relationships and flows, ensuring structured and accessible data for analysis.

In addition, you will design and implement Extract, Transform, Load (ETL) processes to clean, enrich, and load data into data warehouses or lakes (a minimal ETL sketch follows this posting). Monitoring and optimizing performance of data systems, including database query performance, data pipeline efficiency, and storage utilization, will be crucial to your role. Collaboration is essential as you will work closely with data scientists, analysts, and other stakeholders to understand data needs and ensure that data infrastructure aligns with business objectives. Implementing data quality checks and governance processes to maintain data accuracy, completeness, and compliance with relevant regulations will also be part of your responsibilities. Furthermore, creating and maintaining comprehensive documentation for data pipelines, models, and systems will be necessary to ensure transparency and efficiency in data management processes.
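As referenced above, a rough sketch of the ETL pattern described: extract from an API, transform with pandas, load into PostgreSQL. The API URL, column names, and database DSN are all hypothetical:

```python
"""A minimal ETL sketch in Python: extract from a REST source, transform with
pandas, load to PostgreSQL. Every endpoint, column, and credential is a placeholder."""
import pandas as pd
import requests
from sqlalchemy import create_engine

# Extract: pull raw records from a (hypothetical) REST endpoint.
resp = requests.get("https://api.example.com/orders", timeout=30)
resp.raise_for_status()
df = pd.DataFrame(resp.json())

# Transform: basic cleaning and enrichment.
df = df.dropna(subset=["order_id"]).drop_duplicates("order_id")
df["order_date"] = pd.to_datetime(df["order_date"])
df["total"] = df["quantity"] * df["unit_price"]

# Load: append into a staging table in the warehouse.
engine = create_engine("postgresql+psycopg2://user:pass@host:5432/analytics")  # placeholder DSN
df.to_sql("orders_clean", engine, if_exists="append", index=False)
```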
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
Hyderabad, Telangana
On-site
As an Ingestion Engineer at Saxon Global, you will be responsible for designing, developing, and optimizing data ingestion pipelines to integrate multiple sources into Databricks. Your expertise in CI/CD and Kubernetes will be crucial in implementing and maintaining efficient data workflows. Collaboration with Data Engineers and stakeholders is essential to streamline data ingestion strategies and ensure data integrity, security, and compliance throughout the process.

Key Responsibilities:
- Design, develop, and optimize data ingestion pipelines for integrating multiple sources into Databricks (see the sketch after this posting).
- Implement and maintain CI/CD pipelines for data workflows.
- Deploy and manage containerized applications using Kubernetes.
- Collaborate with Data Engineers and stakeholders to streamline data ingestion strategies.
- Troubleshoot and optimize ingestion pipelines for performance and scalability.

Required Skills & Qualifications:
- Proven experience in data ingestion and pipeline development.
- Hands-on experience with CI/CD tools such as GitHub Actions, Jenkins, Azure DevOps, etc.
- Strong knowledge of Kubernetes and container orchestration.
- Experience with Databricks, Spark, and data lake architectures.
- Proficiency in Python, Scala, or SQL for data processing.
- Familiarity with cloud platforms like AWS, Azure, or GCP.
- Strong problem-solving and analytical skills.

Preferred Qualifications:
- Experience with Infrastructure as Code tools like Terraform, Helm, etc.
- Background in streaming data ingestion technologies such as Kafka, Kinesis, etc.
- Knowledge of data governance and security best practices.
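A minimal sketch of the kind of Databricks ingestion step this role covers: reading a raw source and landing it as a Delta table. The mount path, column names, and table name are placeholders:

```python
"""Sketch of a simple ingestion step on Databricks: read a raw CSV source and
land it in a bronze Delta table. Paths and names are illustrative placeholders."""
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

raw = (spark.read
       .option("header", True)
       .csv("/mnt/raw/events/"))            # placeholder mount point

# Light standardization before landing in the bronze layer.
bronze = (raw
          .withColumn("ingested_at", F.current_timestamp())
          .dropDuplicates(["event_id"]))     # placeholder key column

(bronze.write
 .format("delta")
 .mode("append")
 .saveAsTable("bronze.events"))              # placeholder catalog table
```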
Posted 3 weeks ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
You should have experience in understanding and translating data, analytic requirements, and functional needs into technical requirements while collaborating with global customers. Your responsibilities will include designing cloud-native data architectures to support scalable, real-time, and batch processing. You will be required to build and maintain data pipelines for large-scale data management in alignment with data strategy and processing standards. Additionally, you will define strategies for data modeling, data integration, and metadata management.

Your role will also require strong experience in database, data warehouse, and data lake design and architecture. You should be proficient in leveraging cloud platforms such as AWS, Azure, or GCP for data storage, compute, and analytics services. Experience in database programming using various SQL flavors is essential. Moreover, you will need to implement data governance frameworks encompassing data quality, lineage, and cataloging. Collaboration with cross-functional teams, including business analysts, data engineers, and DevOps teams, will be a key aspect of this role.

Familiarity with the Big Data ecosystem, whether on-premises (Hortonworks/MapR) or in the cloud, is required. You should be able to evaluate emerging cloud technologies and suggest enhancements to the data architecture. Proficiency in an orchestration tool like Airflow or Oozie for scheduling pipelines is preferred (a minimal DAG sketch follows this posting). Hands-on experience with tools such as Spark Streaming, Kafka, Databricks, and Snowflake is necessary. You should be adept at working in an Agile/Scrum development process and optimizing data systems for cost efficiency, performance, and scalability.
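As referenced above, a minimal Airflow DAG sketch (using the `schedule` argument available in Airflow 2.4+); the DAG name and task bodies are illustrative stubs:

```python
"""A minimal Airflow DAG sketch for scheduling a two-step pipeline.
DAG id and task bodies are hypothetical stubs, not a real pipeline."""
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull from source")   # stub for the real extract step

def transform():
    print("clean and model")    # stub for the real transform step

with DAG(
    dag_id="daily_sales_pipeline",       # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                   # Airflow 2.4+ keyword
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t1 >> t2                             # run transform after extract
```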
Posted 3 weeks ago
15.0 - 19.0 years
0 Lacs
Maharashtra
On-site
At PwC, we focus on leveraging advanced technologies and techniques in data and analytics engineering to design and develop robust data solutions for our clients. As a Director - Generative AI with over 15 years of experience, you will play a crucial role in transforming raw data into actionable insights, enabling informed decision-making, and driving business growth. Your main responsibilities will involve developing and implementing advanced AI and ML solutions to drive innovation and enhance business processes.

You should possess proficiency in Generative AI-based application development, with a strong focus on leveraging AI and ML technologies. Strong experience in Python programming and related frameworks such as Django and Flask is essential, along with extensive experience in building scalable and robust applications using Python. A solid understanding of data engineering principles and technologies, including ETL, data pipelines, and data warehousing, is required. Familiarity with AI and ML concepts, algorithms, and libraries such as TensorFlow and PyTorch is necessary, as is knowledge of cloud platforms like AWS, Azure, and GCP and their AI/ML services. Experience with database systems such as SQL and NoSQL and with data modeling is a plus.

Strong problem-solving and analytical skills are essential, with the ability to translate business requirements into technical solutions. You should have excellent leadership and team management skills, with the ability to motivate and develop a high-performing team, as well as strong communication and collaboration skills for working effectively in cross-functional teams. Being self-motivated and proactive, with a passion for learning and staying up to date with emerging technologies and industry trends, is key to success in this role. An educational background in BE / B.Tech / MCA / M.Sc / M.E / M.Tech / MBA would be preferred for this position.
Posted 3 weeks ago
4.0 - 8.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
The ideal candidate for the Azure Data Engineer position will have 4-6 years of experience in designing, implementing, and maintaining data solutions on Microsoft Azure. As an Azure Data Engineer at our organization, you will be responsible for designing efficient data architecture diagrams, developing and maintaining data models, and ensuring data integrity, quality, and security. You will also work on data processing, data integration, and building data pipelines to support various business needs.

Your role will involve collaborating with product and project leaders to translate data needs into actionable projects, providing technical expertise on data warehousing and data modeling, as well as mentoring other developers to ensure compliance with company policies and best practices. You will be expected to maintain documentation, contribute to the company's knowledge database, and actively participate in team collaboration and problem-solving activities.

We are looking for a candidate with a Bachelor's degree in Computer Science, Information Technology, or a related field, along with proven experience as a Data Engineer focusing on Microsoft Azure. Proficiency in SQL and experience with Azure data services such as Azure Data Factory, Azure SQL Database, Azure Databricks, and Azure Synapse Analytics is required. A strong understanding of data architecture, data modeling, data integration, ETL/ELT processes, and data security standards is essential. Excellent problem-solving, collaboration, and communication skills are also important for this role.

As part of our team, you will have the opportunity to work on exciting projects across various industries like high-tech, communication, media, healthcare, retail, and telecom. We offer a collaborative environment where you can expand your skills by working with a diverse team of talented individuals. GlobalLogic prioritizes work-life balance and provides professional development opportunities, excellent benefits, and fun perks for its employees.

Join us at GlobalLogic, a leader in digital engineering, where we help brands design and build innovative products, platforms, and digital experiences for the modern world. Headquartered in Silicon Valley, GlobalLogic operates design studios and engineering centers worldwide, serving customers in industries such as automotive, communications, financial services, healthcare, manufacturing, media and entertainment, semiconductor, and technology.
Posted 3 weeks ago
0.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Ready to shape the future of work? At Genpact, we don't just adapt to change - we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. Our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today.

Inviting applications for the role of Senior Principal Consultant - Data Engineer! In this role, we are looking for candidates who have relevant years of experience in designing and developing machine learning and deep learning systems, who have professional software development experience, and who are hands-on in running machine learning tests and experiments and implementing appropriate ML algorithms.

Responsibilities
- Drive the vision for a modern data and analytics platform to deliver well-architected and engineered data and analytics products leveraging a cloud tech stack and third-party products
- Close the gap between ML research and production to create ground-breaking new products and features and solve problems for our customers
- Design, develop, test, and deploy data pipelines, machine learning infrastructure, and client-facing products and services
- Build and implement machine learning models and prototype solutions for proof-of-concept
- Scale existing ML models into production on a variety of cloud platforms
- Analyze and resolve architectural problems, working closely with engineering, data science, and operations teams
- Design and develop data pipelines: create efficient pipelines to collect, process, and store large volumes of data from various sources
- Implement data solutions: develop and implement scalable data solutions using technologies like Hadoop, Spark, and SQL databases
- Ensure data quality: monitor and improve data quality by implementing validation processes and error handling
- Collaborate with teams: work closely with data scientists, analysts, and business stakeholders to understand data requirements and deliver solutions
- Optimize performance: continuously optimize data systems for performance, scalability, and cost-effectiveness
- Experience in GenAI projects

Qualifications we seek in you!
Minimum Qualifications / Skills
- Bachelor's degree in computer science engineering or information technology, or BSc in Computer Science, Mathematics, or a similar field; a Master's degree is a plus
- Integration - APIs, microservices, and ETL/ELT patterns
- DevOps (good to have) - Ansible, Jenkins, ELK
- Containerization - Docker, Kubernetes, etc.
- Orchestration - Airflow, Step Functions, Ctrl-M, etc.
- Languages and scripting - Python, Scala, Java, etc.
- Cloud services - AWS, GCP, Azure, and cloud-native
- Analytics and ML tooling - SageMaker, ML Studio
- Execution paradigm - low latency/streaming, batch (see the streaming sketch after this posting)

Preferred Qualifications/Skills
- Data platforms - Big Data (Hadoop, Spark, Hive, Kafka, etc.) and Data Warehouse (Teradata, Redshift, BigQuery, Snowflake, etc.)
- Visualization tools - Power BI, Tableau

Why join Genpact
- Be a transformation leader - Work at the cutting edge of AI, automation, and digital innovation
- Make an impact - Drive change for global enterprises and solve business challenges that matter
- Accelerate your career - Get hands-on experience, mentorship, and continuous learning opportunities
- Work with the best - Join 140,000+ bold thinkers and problem-solvers who push boundaries every day
- Thrive in a values-driven culture - Our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit', paying to apply, or purchasing equipment or training.
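As referenced in the execution-paradigm bullet above, a small kafka-python consumer sketch illustrating the streaming side; the broker, topic, and group names are placeholders:

```python
"""Sketch of a low-latency streaming consumer with kafka-python.
Broker address, topic, and consumer group are placeholders."""
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "clickstream",                                   # placeholder topic
    bootstrap_servers=["broker1:9092"],              # placeholder broker
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
    group_id="analytics-ingest",                     # placeholder group
)

for message in consumer:
    event = message.value
    # Downstream steps would validate, enrich, and write to the lake/warehouse.
    print(event.get("user_id"), event.get("action"))
```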
Posted 3 weeks ago
0.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Ready to shape the future of work? At Genpact, we don't just adapt to change - we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. Our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today.

Inviting applications for the role of Principal Consultant, AI Engineer! In this role, we are looking for candidates who have relevant years of experience in designing and developing machine learning and deep learning systems, who have professional software development experience, and who are hands-on in running machine learning tests and experiments and implementing appropriate AI algorithms with GenAI.

Responsibilities
- Drive the vision for a modern data and analytics platform to deliver well-architected and engineered data and analytics products leveraging a cloud tech stack and third-party products
- Close the gap between AI research and production to create ground-breaking new products and features and solve problems for our customers with GenAI
- Design, develop, test, and deploy data pipelines, machine learning infrastructure, and client-facing products and services
- Build and implement machine learning models and prototype solutions for proof-of-concept
- Scale existing AI models into production on a variety of cloud platforms with GenAI
- Analyze and resolve architectural problems, working closely with engineering, data science, and operations teams

Qualifications we seek in you!
Minimum Qualifications / Skills
- Bachelor's degree in computer science engineering or information technology, or BSc in Computer Science, Mathematics, or a similar field; a Master's degree is a plus
- Integration - APIs, micro-services, and ETL/ELT patterns
- DevOps (good to have) - Ansible, Jenkins, ELK
- Containerization - Docker, Kubernetes, etc.
- Orchestration - Airflow, Step Functions, Ctrl-M, etc.
- Languages and scripting - Python, Scala, Java, etc.
- Cloud services - AWS, GCP, Azure, and cloud-native
- Analytics and AI tooling - SageMaker, GenAI
- Execution paradigm - low latency/streaming, batch
- Ensure GenAI outputs are contextually relevant (a toy guardrail sketch follows this posting); familiarity with Generative AI technologies; design and implement GenAI solutions
- Collaborate with service line teams to design, implement, and manage GenAI solutions

Preferred Qualifications/Skills
- Data platforms - Big Data (Hadoop, Spark, Hive, Kafka, etc.) and Data Warehouse (Teradata, Redshift, BigQuery, Snowflake, etc.)
- AI and GenAI tools
- Certifications in AI/ML or GenAI
- Familiarity with generative models, prompt engineering, and fine-tuning techniques to develop innovative AI solutions
- Designing, developing, and implementing solutions tailored to meet client needs
- Understanding business requirements and translating them into technical solutions using GenAI

Why join Genpact
- Be a transformation leader - Work at the cutting edge of AI, automation, and digital innovation
- Make an impact - Drive change for global enterprises and solve business challenges that matter
- Accelerate your career - Get hands-on experience, mentorship, and continuous learning opportunities
- Work with the best - Join 140,000+ bold thinkers and problem-solvers who push boundaries every day
- Thrive in a values-driven culture - Our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit', paying to apply, or purchasing equipment or training.
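As referenced above, a toy output-guardrail sketch in plain Python: a post-processing check applied to an LLM response before it is returned. The banned patterns and the overlap heuristic are illustrative only, not a production policy:

```python
"""Toy guardrail sketch: screen an LLM response for obvious sensitive patterns
and crude groundedness before returning it. All rules here are illustrative."""
import re

BANNED_PATTERNS = [
    re.compile(r"\b\d{16}\b"),             # naive credit-card-like number
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # naive SSN-like pattern
]

def apply_guardrails(response: str, context: str) -> str:
    # Block responses containing obvious sensitive patterns.
    for pat in BANNED_PATTERNS:
        if pat.search(response):
            return "[Response withheld: sensitive content detected]"
    # Crude relevance check: require some lexical overlap with the context.
    overlap = set(response.lower().split()) & set(context.lower().split())
    if len(overlap) < 3:
        return "[Response withheld: answer not grounded in provided context]"
    return response

print(apply_guardrails("The invoice total is 42 USD", "invoice total amount USD"))
```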
Posted 3 weeks ago
7.0 - 12.0 years
10 - 15 Lacs
Bengaluru
Hybrid
Hiring an AWS Data Engineer for a 6-month hybrid contractual role based in Bellandur, Bengaluru. The ideal candidate will have 7+ years of experience in data engineering, with strong expertise in AWS services (S3, EC2, RDS, Lambda, EKS), PostgreSQL, Redis, Apache Iceberg, and Graph/Vector Databases. Proficiency in Python or Golang is essential. Responsibilities include designing and optimizing data pipelines on AWS, managing structured and in-memory data, implementing advanced analytics with vector/graph databases, and collaborating with cross-functional teams. Prior experience with CI/CD and containerization (Docker/Kubernetes) is a plus.
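For illustration, a small sketch of one pattern consistent with the stack named above: Redis as an in-memory cache in front of PostgreSQL. The connection details and the profile lookup are hypothetical:

```python
"""Sketch of Redis as a read-through cache for a slower store (e.g., PostgreSQL).
Connection details and the stand-in query are placeholders."""
import json

import redis

r = redis.Redis(host="localhost", port=6379, db=0)  # placeholder connection

def get_user_profile(user_id: str) -> dict:
    cached = r.get(f"user:{user_id}")
    if cached is not None:
        return json.loads(cached)                    # cache hit
    profile = {"id": user_id, "tier": "gold"}        # stand-in for a PostgreSQL query
    r.set(f"user:{user_id}", json.dumps(profile), ex=300)  # cache with 5-minute TTL
    return profile

print(get_user_profile("u-123"))
```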
Posted 3 weeks ago
5.0 - 10.0 years
3 - 5 Lacs
Bengaluru, Delhi / NCR, Mumbai (All Areas)
Work from Office
Azure Databricks Developer

Job Title: Azure Databricks Developer
Experience: 5+ Years
Location: PAN India (Remote/Hybrid as per project requirement)
Employment Type: Full-time

Job Summary: We are hiring an experienced Azure Databricks Developer to join our dynamic data engineering team. The ideal candidate will have strong expertise in building and optimizing big data solutions using Azure Databricks, Spark, and other Azure data services.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using Azure Databricks and Apache Spark.
- Integrate and manage large datasets using Azure Data Lake, Azure Data Factory, and other Azure services.
- Implement Delta Lake for efficient data versioning and performance optimization (see the time-travel sketch after this posting).
- Collaborate with cross-functional teams including data scientists and BI developers.
- Ensure best practices for data security, governance, and compliance.
- Monitor performance and troubleshoot Spark clusters and data pipelines.

Skills & Requirements:
- Minimum 5 years of experience in data engineering, with at least 2+ years in Azure Databricks.
- Proficiency in Apache Spark (PySpark/Scala).
- Strong hands-on experience with Azure services: ADF, ADLS, Synapse Analytics.
- Expertise in building and managing ETL/ELT pipelines.
- Strong SQL skills and experience with performance tuning.
- Experience with CI/CD pipelines and Azure DevOps is a plus.
- Good understanding of data modeling, partitioning, and data lake architecture.
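As referenced in the Delta Lake bullet above, a minimal sketch of Delta versioning ("time travel") in PySpark on Databricks; the table path is a placeholder:

```python
"""Sketch of Delta Lake time travel on Databricks: read the current table state
and an earlier version side by side. The table path is a placeholder."""
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Read the current state of a Delta table.
current = spark.read.format("delta").load("/mnt/delta/sales")  # placeholder path

# Read the table as it existed at version 0, e.g., for audits or debugging.
v0 = (spark.read.format("delta")
      .option("versionAsOf", 0)
      .load("/mnt/delta/sales"))

print(current.count(), v0.count())
```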
Posted 3 weeks ago
10.0 - 15.0 years
15 - 20 Lacs
Pune
Work from Office
Notice Period: Immediate

About the role: We are hiring a Senior Snowflake Data Engineer with 10+ years of experience in cloud data warehousing and deep expertise on the Snowflake platform. The ideal candidate should have strong skills in SQL, ETL/ELT, data modeling, and performance tuning, along with a solid understanding of Snowflake architecture, security, and cost optimization.

Roles & Responsibilities:
- Collaborate with data engineers, product owners, and QA teams to translate business needs into efficient Snowflake-based data models and pipelines.
- Design, build, and optimize data solutions leveraging Snowflake features such as virtual warehouses, data sharing, cloning, and time travel.
- Develop and maintain robust ETL/ELT pipelines using tools like Talend, Snowpipe, Streams, Tasks, and Python.
- Ensure optimal performance of SQL queries, warehouse sizing, and cost-efficient design strategies.
- Implement best practices for data quality, security, and governance, including RBAC, network policies, and masking.
- Contribute to code reviews and development standards to ensure high-quality deliverables.
- Support analytics and BI teams with data exploration and visualization using tools like Tableau or Power BI.
- Maintain version control using Git and follow Agile development practices.

Required Skills:
- Snowflake Expertise: Deep knowledge of Snowflake architecture and core features.
- SQL Development: Advanced proficiency in writing and optimizing complex SQL queries.
- ETL/ELT: Hands-on experience with ETL/ELT design using Snowflake tools and scripting (Python).
- Data Modeling: Proficient in dimensional modeling, data vault, and best practices within Snowflake.
- Automation & Scripting: Python or a similar scripting language for data workflows.
- Cloud Integration: Familiarity with Azure and its services integrated with Snowflake.
- BI & Visualization: Exposure to Tableau, Power BI, or other similar platforms.
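For illustration, a minimal sketch of querying Snowflake from Python with the official connector; the account, credentials, and table names are placeholders (in practice, pull credentials from a secrets manager rather than hard-coding them):

```python
"""Minimal sketch of querying Snowflake from Python.
Account, credentials, and object names are placeholders."""
import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount",   # placeholder account locator
    user="etl_user",             # placeholder user
    password="***",              # placeholder; use a secrets manager in practice
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    # Push the aggregation down to the warehouse; Snowflake does the heavy lifting.
    cur.execute("SELECT region, SUM(amount) FROM sales GROUP BY region")
    for region, total in cur.fetchall():
        print(region, total)
finally:
    conn.close()
```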
Posted 3 weeks ago
0.0 - 5.0 years
9 - 19 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
This role is open to junior and senior candidates across levels.

Role & responsibilities
- Assist in the design, development, and maintenance of ETL/ELT pipelines on AWS (Glue, Lambda, S3, Redshift) and Azure (Data Factory, Data Lake, Synapse); see the Lambda ingestion sketch after this posting
- Perform data ingestion, transformation, and validation using Python, SQL, and cloud-native tools
- Work with structured and unstructured data from diverse sources such as APIs, databases, and files
- Monitor and optimize performance of data pipelines to ensure data quality and timely availability
- Support data lake and warehouse implementations on cloud platforms
- Collaborate with data analysts, scientists, and business users to understand data requirements
- Maintain clear documentation of data workflows, processes, and cloud configurations
- Follow best practices in security, logging, and error handling within cloud-based data solutions

Preferred candidate profile: data engineering background on any cloud platform
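As referenced in the first responsibility, a sketch of an event-driven ingestion step: an AWS Lambda handler triggered by an S3 upload that validates records and writes them onward. The target bucket and the validation rule are hypothetical:

```python
"""Sketch of an S3-triggered AWS Lambda ingestion step: read the uploaded JSON
object, apply a trivial validation, and write the result to a curated bucket.
Bucket names and the validation rule are placeholders."""
import json

import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # S3 put events deliver one record per uploaded object.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        rows = json.loads(body)
        valid = [r for r in rows if r.get("id") is not None]  # trivial validation
        s3.put_object(
            Bucket="curated-zone-bucket",      # placeholder target bucket
            Key=key,
            Body=json.dumps(valid).encode("utf-8"),
        )
    return {"status": "ok"}
```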
Posted 4 weeks ago
5.0 - 7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Ready to shape the future of work? At Genpact, we don't just adapt to change - we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. Our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today.

Inviting applications for the role of Principal Consultant - Lead MLOps Engineer! In this role, you will define, implement, and oversee the MLOps strategy for scalable, compliant, and cost-efficient deployment of AI/GenAI models across the enterprise. This role combines deep DevOps knowledge, infrastructure architecture, and AI platform design to guide how teams build and ship ML models securely and reliably. You will establish governance, reuse, and automation frameworks for AI infrastructure, including Terraform-first cloud automation, multi-environment CI/CD, and observability pipelines.

Responsibilities
- Architect secure, reusable, modular IaC frameworks across clouds and regions for MLOps
- Lead the development of CI/CD pipelines and standardize deployment frameworks
- Design observability and monitoring systems for ML/GenAI workloads (see the drift-check sketch after this posting)
- Collaborate with platform, data science, compliance, and Enterprise Architecture teams to ensure scalable ML operations
- Define enterprise-wide MLOps architecture and standards (build → deploy → monitor)
- Lead the design of a GenAI/LLMOps platform (Bedrock/OpenAI/Hugging Face + RAG stack)
- Integrate governance controls (approvals, drift detection, rollback strategies)
- Define model metadata standards, monitoring SLAs, and re-training workflows
- Influence tooling, hiring, and roadmap decisions for AI/ML delivery
- Engage in the design, development, and maintenance of data pipelines for various AI use cases
- Actively contribute to key deliverables as part of an agile development team

Qualifications we seek in you!
Minimum Qualifications
- Relevant years of experience in DevOps or MLOps roles
- Degree/qualification in Computer Science or a related field, or equivalent work experience
- Strong Python programming skills
- Hands-on experience with containerized deployment
- Proficiency with AWS (SageMaker, Lambda, ECR), Terraform, and Python
- Demonstrated experience deploying multiple GenAI systems into production
- Hands-on experience deploying 3-4 ML/GenAI models in AWS
- Deep understanding of the ML model lifecycle: train → test → deploy → monitor → retrain
- Experience in developing, testing, and deploying data pipelines using the public cloud
- Clear and effective communication skills to interact with team members, stakeholders, and end users
- Knowledge of governance and compliance policies, standards, and procedures
- Exposure to RAG/LLM workloads and model deployment infrastructure

Preferred Qualifications/Skills
- Experience designing model governance frameworks and CI/CD pipelines
- Advanced understanding of platform security, cost optimization, and ML observability

Why join Genpact
- Be a transformation leader - Work at the cutting edge of AI, automation, and digital innovation
- Make an impact - Drive change for global enterprises and solve business challenges that matter
- Accelerate your career - Get hands-on experience, mentorship, and continuous learning opportunities
- Work with the best - Join 140,000+ bold thinkers and problem-solvers who push boundaries every day
- Thrive in a values-driven culture - Our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit', paying to apply, or purchasing equipment or training.
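As referenced in the observability bullet above, an illustrative drift check that compares a live feature distribution against its training baseline with a two-sample Kolmogorov-Smirnov test. This is one common, simple approach (not a tool mandated by the posting), and the data below is synthetic:

```python
"""Illustrative drift check: two-sample KS test between a training baseline and
a (synthetically shifted) production sample. Thresholds are example values."""
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)    # training-time feature sample
production = rng.normal(loc=0.3, scale=1.0, size=5_000)  # shifted live traffic

stat, p_value = ks_2samp(baseline, production)
if p_value < 0.01:
    print(f"Drift detected (KS={stat:.3f}, p={p_value:.2e}) - trigger retraining review")
else:
    print("No significant drift")
```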
Posted 1 month ago
6.0 - 10.0 years
12 - 20 Lacs
Pune, Delhi / NCR, Mumbai (All Areas)
Hybrid
Role & responsibilities (6+ years of experience required)

Job Description: Enterprise Business Technology is on a mission to support and create enterprise software for our organization. We're a highly collaborative team that interlocks with corporate functions such as Finance and Product teams to deliver value with innovative technology solutions. Each day, thousands of people rely on Enlyte's technology and services to help their customers during challenging life events. We're looking for a remote Senior Data Analytics Engineer for our Corporate Analytics team.

Opportunity: Technical lead for our corporate analytics practice using dbt, Dagster, Snowflake, Power BI, SQL, and Python.

Responsibilities
- Build our data pipelines for our data warehouse in Python, working with APIs to source data (see the Dagster sketch after this posting)
- Build Power BI reports and dashboards associated with this process
- Contribute to our strategy for new data pipelines and data engineering approaches
- Maintain a medallion-based architecture for data analysis with Kimball modeling
- Participate in daily scrum calls and follow an agile SDLC
- Create meaningful documentation of your work
- Follow organizational best practices for dbt and write maintainable code

Qualifications
- 5+ years of professional experience as a Data Engineer
- Strong dbt experience (3+ years) and knowledge of the modern data stack
- Strong experience with Snowflake (3+ years)
- Experience using Dagster and running complex pipelines (1+ year)
- Some Python experience; experience with Git and Azure DevOps
- Experience with data modeling in Kimball and medallion-based structures
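As referenced in the pipeline bullet above, a minimal Dagster sketch with two software-defined assets forming a tiny dependency chain; the asset names and data are illustrative:

```python
"""Minimal Dagster sketch: two software-defined assets, where the second depends
on the first via its parameter name. Asset names and data are illustrative."""
import pandas as pd
from dagster import Definitions, asset

@asset
def raw_orders() -> pd.DataFrame:
    # Stand-in for an API or warehouse extract.
    return pd.DataFrame({"order_id": [1, 2, 2], "amount": [10.0, 5.0, 5.0]})

@asset
def clean_orders(raw_orders: pd.DataFrame) -> pd.DataFrame:
    # Deduplicate before downstream dbt / Power BI consumption.
    return raw_orders.drop_duplicates("order_id")

defs = Definitions(assets=[raw_orders, clean_orders])
```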
Posted 1 month ago
4.0 - 6.0 years
6 - 8 Lacs
Bengaluru, Bellandur
Hybrid
Hiring an AWS Data Engineer for a 6-month hybrid contractual role based in Bellandur, Bengaluru. The ideal candidate will have 4-6 years of experience in data engineering, with strong expertise in AWS services (S3, EC2, RDS, Lambda, EKS), PostgreSQL, Redis, Apache Iceberg, and Graph/Vector Databases. Proficiency in Python or Golang is essential. Responsibilities include designing and optimizing data pipelines on AWS, managing structured and in-memory data, implementing advanced analytics with vector/graph databases, and collaborating with cross-functional teams. Prior experience with CI/CD and containerization (Docker/Kubernetes) is a plus.
Posted 1 month ago
3.0 - 7.0 years
3 - 7 Lacs
Gurgaon, Haryana, India
On-site
This position requires a proven track record of transforming processes and driving customer value and cost savings, with experience in running end-to-end analytics for large-scale organizations.

- Design, build, and maintain scalable data pipelines to support analytics, reporting, and advanced modeling needs.
- Collaborate with consultants, analysts, and clients to understand data requirements and translate them into effective data solutions.
- Ensure data accuracy, quality, and integrity through validation, cleansing, and transformation processes.
- Develop and optimize data models, ETL workflows, and database architectures across cloud and on-premises environments.
- Support data-driven decision-making by delivering reliable, well-structured datasets and enabling self-service analytics.
- Build solutions that integrate seamlessly with cloud platforms (Azure), making it easy to build and deploy end-to-end data pipelines in the cloud.
- Use scalable Databricks clusters for handling large datasets and complex computations, optimizing performance and cost management.

Must have:
- Client engagement experience and collaboration with cross-functional teams
- Data engineering background in Databricks
- Capable of working effectively as an individual contributor or in collaborative team environments
- Effective communication and thought leadership with a proven record

Candidate Profile:
- Bachelor's/Master's degree in economics, mathematics, computer science/engineering, operations research, or related analytics areas
- 3+ years of experience in data engineering
- Hands-on experience with SQL, Python, Databricks, and cloud platforms like Azure
- Prior experience managing and delivering end-to-end projects
- Outstanding written and verbal communication skills
- Able to work in a fast-paced, continuously evolving environment and ready to take up uphill challenges
- Able to understand cross-cultural differences and work with clients across the globe
Posted 1 month ago
5.0 - 10.0 years
12 - 20 Lacs
Noida, Gurugram, Delhi / NCR
Hybrid
Responsibilities:
- Build and manage data infrastructure on AWS, including S3, Glue, Lambda, OpenSearch, Athena, and CloudWatch, using IaC tools like Terraform.
- Design and implement scalable ETL pipelines with integrated validation and monitoring.
- Set up data quality frameworks using tools like Great Expectations, integrated with PostgreSQL or AWS Glue jobs.
- Implement automated validation checks at key points in the data flow: post-ingest, post-transform, and pre-load.
- Build centralized logging and alerting pipelines (e.g., using CloudWatch Logs, Fluent Bit, SNS, Filebeat, Logstash, or third-party tools); see the metric-alerting sketch after this posting.
- Define CI/CD processes for deploying and testing data pipelines (e.g., using Jenkins, GitHub Actions).
- Collaborate with developers and data engineers to enforce schema versioning, rollback strategies, and data contract enforcement.

Preferred candidate profile:
- 5+ years of experience in DataOps, DevOps, or data infrastructure roles.
- Proven experience with infrastructure-as-code (e.g., Terraform, CloudFormation).
- Proven experience with real-time data streaming platforms (e.g., Kinesis, Kafka).
- Proven experience building production-grade data pipelines and monitoring systems in AWS.
- Hands-on experience with tools like AWS Glue, S3, Lambda, Athena, and CloudWatch.
- Strong knowledge of Python and scripting for automation and orchestration.
- Familiarity with data validation frameworks such as Great Expectations, Deequ, or dbt tests.
- Experience with SQL-based data systems (e.g., PostgreSQL).
- Understanding of security, IAM, and compliance best practices in cloud data environments.
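As referenced in the alerting bullet above, a sketch of emitting a custom data-quality metric to CloudWatch with boto3, so that a CloudWatch alarm can notify an SNS topic when validations fail; the namespace, metric, and pipeline names are placeholders:

```python
"""Sketch of publishing a custom data-quality metric to CloudWatch.
Namespace, metric name, and pipeline name are placeholders."""
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

def report_validation(failed_rows: int, pipeline: str) -> None:
    cloudwatch.put_metric_data(
        Namespace="DataOps/Quality",            # placeholder namespace
        MetricData=[{
            "MetricName": "FailedRowCount",
            "Dimensions": [{"Name": "Pipeline", "Value": pipeline}],
            "Value": float(failed_rows),
            "Unit": "Count",
        }],
    )

# A CloudWatch alarm on FailedRowCount > 0 can then fan out to an SNS topic.
report_validation(failed_rows=12, pipeline="orders_ingest")
```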
Posted 1 month ago
5.0 - 10.0 years
10 - 15 Lacs
Pune
Work from Office
We are looking for a highly skilled and experienced Data Engineer with over 5 years of experience to join our growing data team. The ideal candidate will be proficient in Databricks, Python, PySpark, and Azure, and have hands-on experience with Delta Live Tables. In this role, you will be responsible for developing, maintaining, and optimizing data pipelines and architectures to support advanced analytics and business intelligence initiatives. You will collaborate with cross-functional teams to build robust data infrastructure and enable data-driven decision-making.

Key Responsibilities:
- Design, develop, and manage scalable and efficient data pipelines using PySpark and Databricks
- Build and optimize Spark jobs for processing large volumes of structured and unstructured data
- Integrate data from multiple sources into data lakes and data warehouses on Azure cloud
- Develop and manage Delta Live Tables for real-time and batch data processing (see the sketch after this posting)
- Collaborate with data scientists, analysts, and business teams to ensure data availability and quality
- Ensure adherence to best practices in data governance, security, and compliance
- Monitor, troubleshoot, and optimize data workflows and ETL processes
- Maintain up-to-date technical documentation for data pipelines and infrastructure components

Qualifications:
- 5+ years of hands-on experience in Databricks platform development
- Proven expertise in Delta Lake and Delta Live Tables
- Strong SQL and Python/Scala programming skills
- Experience with cloud platforms such as Azure, AWS, or GCP (preferably Azure)
- Familiarity with data modeling and data warehousing concepts
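As referenced in the Delta Live Tables bullet above, a minimal DLT sketch. Note that this code only runs inside a Databricks DLT pipeline, where the `dlt` module and the `spark` session are provided by the runtime; paths and table names are placeholders:

```python
# Sketch of a two-layer Delta Live Tables definition (runs only in a Databricks
# DLT pipeline; `dlt` and `spark` are supplied by that runtime).
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw events landed from cloud storage")
def events_bronze():
    # `spark` is injected by the DLT runtime; the path is a placeholder.
    return spark.read.format("json").load("/mnt/raw/events/")

@dlt.table(comment="Cleaned events for analytics")
def events_silver():
    # Read the upstream DLT table by name and apply light cleaning.
    return (dlt.read("events_bronze")
            .dropDuplicates(["event_id"])      # placeholder key column
            .withColumn("processed_at", F.current_timestamp()))
```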
Posted 1 month ago
3.0 - 8.0 years
3 - 8 Lacs
Mumbai, Maharashtra, India
On-site
Responsibilities:
- Design and architect enterprise-scale data platforms, integrating diverse data sources and tools.
- Develop real-time and batch data pipelines to support analytics and machine learning (see the streaming sketch after this posting).
- Define and enforce data governance strategies to ensure security, integrity, and compliance, and optimize data pipelines for high performance, scalability, and cost efficiency in cloud environments.
- Implement solutions for real-time streaming data (Kafka, AWS Kinesis, Apache Flink) and adopt DevOps/DataOps best practices.

Required Skills:
- Strong experience in designing scalable, distributed data systems and programming (Python, Scala, Java), with expertise in Apache Spark, Hadoop, Flink, Kafka, and cloud platforms (AWS, Azure, GCP).
- Proficiency in data modeling, governance, and warehousing (Snowflake, Redshift, BigQuery), and in security/compliance standards (GDPR, HIPAA).
- Hands-on experience with CI/CD (Terraform, CloudFormation, Airflow, Kubernetes) and data infrastructure monitoring (Prometheus, Grafana).

Nice to Have:
- Experience with graph databases, machine learning pipeline integration, real-time analytics, and IoT solutions.
- Contributions to open-source data engineering communities.
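As referenced in the real-time pipeline bullet above, a sketch of a streaming ingestion job using Spark Structured Streaming reading from Kafka (this requires the spark-sql-kafka connector package on the cluster); broker, topic, and sink paths are placeholders:

```python
"""Sketch of real-time ingestion with Spark Structured Streaming from Kafka.
Requires the spark-sql-kafka connector; broker, topic, and paths are placeholders."""
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("stream-ingest").getOrCreate()

stream = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")  # placeholder broker
          .option("subscribe", "events")                      # placeholder topic
          .load())

# Kafka delivers binary key/value; cast to strings for downstream parsing.
parsed = stream.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),
    "timestamp",
)

query = (parsed.writeStream
         .format("parquet")
         .option("path", "/data/lake/events/")             # placeholder sink
         .option("checkpointLocation", "/data/chk/events/")  # required for recovery
         .start())
query.awaitTermination()
```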
Posted 1 month ago
2.0 - 6.0 years
2 - 6 Lacs
Bengaluru, Karnataka, India
On-site
Key Deliverables: Lead end-to-end development of scalable ML models and data solutions Implement MLOps workflows for versioning, deployment, and monitoring Design and conduct A/B testing and statistical analysis for insight generation Optimize large-scale data pipelines and ensure model performance in production Role Responsibilities: Collaborate with cross-functional teams to deliver high-impact AI projects Apply deep learning, reinforcement learning, or ensemble methods as needed Utilize cloud platforms and container tools for scalable model deployment Translate business problems into data-driven solutions with measurable outcomes
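For illustration of the A/B testing deliverable above, a two-proportion z-test on conversion counts using statsmodels; the numbers are made up for the example:

```python
"""Illustrative A/B test evaluation: two-proportion z-test on conversion counts.
All counts here are synthetic example values."""
from statsmodels.stats.proportion import proportions_ztest

conversions = [420, 465]      # control, variant (synthetic)
visitors = [10_000, 10_000]   # sample sizes per arm (synthetic)

stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"z={stat:.2f}, p={p_value:.4f}")
if p_value < 0.05:
    print("Variant differs significantly from control")
else:
    print("No significant difference detected")
```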
Posted 1 month ago
6.0 - 8.0 years
6 - 8 Lacs
Gurgaon, Haryana, India
On-site
Design, develop, and maintain robust and scalable data pipelines using Python and SQL. Analyze and understand source systems and data flows to support accurate data ingestion. Ensure data quality, consistency, and governance across various systems and platforms. Optimize existing pipelines and queries for improved performance and scalability. Role Requirements and Qualifications: Strong proficiency in Python and SQL for data processing, scripting, and analytics. Proven experience in building and maintaining production-level data pipelines. Familiarity with Azure cloud services such as Azure Data Factory, Blob Storage, and SQL Database. Experience working with Databricks for big data processing and collaborative analytics. Exposure to or willingness to learn Exploratory Data Analysis (EDA) techniques.
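For illustration of the EDA exposure mentioned above, a tiny pandas profiling sketch; the file name and columns are hypothetical:

```python
"""Tiny EDA sketch with pandas; 'orders.csv' and its columns are hypothetical."""
import pandas as pd

df = pd.read_csv("orders.csv")   # placeholder input file

# Shape, dtypes, and summary statistics for numeric columns.
print(df.shape)
print(df.dtypes)
print(df.describe())

# Missing values and duplicates: the usual first data-quality questions.
print(df.isna().sum().sort_values(ascending=False).head(10))
print("duplicate rows:", df.duplicated().sum())

# Simple grouped view of a (hypothetical) metric.
print(df.groupby("region")["amount"].agg(["count", "mean", "sum"]))
```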
Posted 1 month ago
5.0 - 7.0 years
5 - 7 Lacs
Gurgaon, Haryana, India
On-site
Maintain, upgrade, and evolve data pipeline architectures to ensure optimal performance and scalability. Orchestrate the integration of new data sources into existing pipelines for further processing and analysis. Keep documentation up to date for pipelines and data feeds to facilitate smooth operations and collaboration within the team. Collaborate with cross-functional teams to understand data requirements and optimize pipeline performance accordingly. Troubleshoot and resolve any issues related to pipeline architecture and data processing. Role Requirements and Qualifications: Experience with cloud platforms for deployment and management of data pipelines. Familiarity with AWS / Azure for efficient data processing workflows. Experience with constructing FAIR data products is highly desirable. Basic understanding of computational clusters to optimize pipeline performance. Prior experience in data engineering or operations roles, preferably in a cloud-based environment. Proven track record of successfully maintaining and evolving data pipeline architectures. Strong problem-solving skills and ability to troubleshoot technical issues independently. Excellent communication skills to collaborate effectively with cross-functional teams.
Posted 1 month ago
4.0 - 9.0 years
14 - 22 Lacs
Pune
Work from Office
Responsibilities:
- Design, develop, test, and maintain scalable Python applications using Scrapy, Selenium, and Requests (see the spider sketch after this posting).
- Implement anti-bot systems and data pipeline solutions with Airflow and Kafka.

Share your CV at recruitment@fortitudecareer.com. Flexi working | Work from home.
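For illustration, a minimal Scrapy spider of the sort this role would maintain; the target site, CSS selectors, and settings are placeholders:

```python
"""Minimal Scrapy spider sketch with pagination; site and selectors are placeholders."""
import scrapy

class ProductSpider(scrapy.Spider):
    name = "products"
    start_urls = ["https://example.com/catalog"]   # placeholder URL

    # Be a polite scraper: throttle requests and respect robots.txt.
    custom_settings = {"DOWNLOAD_DELAY": 1.0, "ROBOTSTXT_OBEY": True}

    def parse(self, response):
        for item in response.css("div.product"):   # placeholder selector
            yield {
                "title": item.css("h2::text").get(),
                "price": item.css("span.price::text").get(),
            }
        # Follow the next page, if any, and parse it the same way.
        next_page = response.css("a.next::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)
```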
Posted 1 month ago