Jobs
Interviews

1629 Cloud Platforms Jobs - Page 50

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

0.0 - 5.0 years

0 - 5 Lacs

Hyderabad / Secunderabad, Telangana, India

On-site

We are seeking a skilled Machine Learning Engineer with experience in model development, versioning, and deployment, particularly in cloud environments. The candidate should have strong expertise in Natural Language Processing (NLP), text annotation, and information extraction.

Key Responsibilities:
- Develop, version, and deploy machine learning models efficiently in cloud environments
- Work extensively with NLP techniques including text annotation and information extraction
- Collaborate with data scientists, engineers, and stakeholders to integrate ML solutions into products and workflows
- Optimize models for performance, scalability, and accuracy
- Document models, deployment processes, and workflows

Required Skills:
- Proven experience with machine learning model lifecycle management including versioning and deployment
- Strong knowledge of cloud deployment platforms (AWS, Azure, GCP, etc.)
- Proficient in NLP techniques, text annotation, and information extraction
- Good problem-solving and communication skills
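For illustration, here is a minimal sketch of the train-version-deploy loop this role describes, assuming scikit-learn and joblib; the toy corpus, file name, and timestamp-based version scheme are hypothetical choices:

```python
# A minimal sketch of the train -> version -> deploy-prep loop, assuming
# scikit-learn and joblib; the toy corpus, file name, and timestamp-based
# version scheme are hypothetical.
from datetime import datetime, timezone
from pathlib import Path

import joblib
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Toy annotated corpus standing in for a real text-annotation dataset.
texts = ["invoice due next week", "meeting at noon", "pay the bill today", "lunch with the team"]
labels = ["finance", "calendar", "finance", "calendar"]

model = Pipeline([
    ("tfidf", TfidfVectorizer()),                # text -> sparse features
    ("clf", LogisticRegression(max_iter=1000)),  # simple linear classifier
])
model.fit(texts, labels)

# Tag the artifact with a UTC timestamp so each deployment is traceable.
version = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")
artifact = Path(f"intent_clf-{version}.joblib")
joblib.dump(model, artifact)
print(f"saved {artifact};", "sample prediction:", model.predict(["reminder: pay the invoice"])[0])
```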

Posted 1 month ago

Apply

4.0 - 8.0 years

3 - 15 Lacs

Delhi, India

Remote

As a Domain Consultant you will be the expert for our Cortex portfolio, a Next-Gen AI-powered security operations platform. You will play a key role in defining technical solutions that secure a customer's key business imperatives. You evangelize our industry-leading solutions in Security Intelligence and Automation, XDR, Attack Surface Management, SOAR, and Incident Response that establish Palo Alto Networks as a customer's cybersecurity partner of choice.

Your Impact:
- Collaborate with account teams to recommend and develop customer solutions within your assigned specialization area
- Present to customers as our expert at all levels in the customer hierarchy, from practitioner to senior leadership
- Lead and support customer demonstrations that showcase our unique value proposition
- Scope and lead Proof of Value (PoV) projects for prospective customers based on best practices to ensure the technical win in your assigned opportunities
- Drive high technical validation and PoV win rates within your assigned specialization area
- Architect solutions that will help our customers strengthen and simplify their security posture
- Accelerate technical validation of proposed solutions within your specialization
- Document High-Level Designs and key use cases to ensure proper implementation and value realization of Palo Alto Networks solutions
- Help our customers build and further develop their services around Cortex solutions
- Lead conversations about industry trends and emerging changes to the security landscape
- Discuss, with credibility, competitive offers in the marketplace and position ours as the best alternative
- Assist account solutions consultants to respond effectively to RFIs/RFPs while serving as the main technical point of contact for Cortex
- Position Palo Alto Networks or partner-delivered services as appropriate to ensure proper implementation and value realization of Palo Alto Networks solutions

Your Experience:
- Deep experience with security incident response, both IR tools and IR workflow processes, or SOC operational processes
- Strong hands-on technical experience (at least 5 years) with EDR/XDR; experience with SOAR and SIEM is an added advantage
- Strong practical experience with threat hunting, malware, and exploits, and the ability to demonstrate simulated cyber attacks
- Experience installing, configuring, and integrating a complex security environment
- Experience with security analytics or threat intel is a plus
- Deep understanding of Unix/Linux and Windows operating systems; scripting skills in Python/JavaScript/PowerShell are an advantage
- Strong problem-finding and problem-solving skills; ability to analyze complex multivariate problems and use a systematic approach to reach quick resolution
- 8+ years of experience in a customer-facing role
- Strong English language skills, both oral and written, with the ability to confidently present with impact to an audience in person and remotely
- A team player: able to share knowledge openly, interact with integrity, and embrace diversity
- A self-starter: self-motivated and a quick learner able to embrace change; the Cortex portfolio is always evolving, and as a technical specialist your expertise must be at the leading edge

Posted 1 month ago

Apply

8.0 - 13.0 years

3 - 15 Lacs

Bengaluru / Bangalore, Karnataka, India

Remote

As a Domain Consultant you will be the expert for our Cortex portfolio, a Next-Gen AI-powered security operations platform. You will play a key role in defining technical solutions that secure a customer's key business imperatives. You evangelize our industry-leading solutions in Security Intelligence and Automation, XDR, Attack Surface Management, SOAR, and Incident Response that establish Palo Alto Networks as a customer's cybersecurity partner of choice.

Your Impact:
- Collaborate with account teams to recommend and develop customer solutions within your assigned specialization area
- Present to customers as our expert at all levels in the customer hierarchy, from practitioner to senior leadership
- Lead and support customer demonstrations that showcase our unique value proposition
- Scope and lead Proof of Value (PoV) projects for prospective customers based on best practices to ensure the technical win in your assigned opportunities
- Drive high technical validation and PoV win rates within your assigned specialization area
- Architect solutions that will help our customers strengthen and simplify their security posture
- Accelerate technical validation of proposed solutions within your specialization
- Document High-Level Designs and key use cases to ensure proper implementation and value realization of Palo Alto Networks solutions
- Help our customers build and further develop their services around Cortex solutions
- Lead conversations about industry trends and emerging changes to the security landscape
- Discuss, with credibility, competitive offers in the marketplace and position ours as the best alternative
- Assist account solutions consultants to respond effectively to RFIs/RFPs while serving as the main technical point of contact for Cortex
- Position Palo Alto Networks or partner-delivered services as appropriate to ensure proper implementation and value realization of Palo Alto Networks solutions

Your Experience:
- Deep experience with security incident response, both IR tools and IR workflow processes, or SOC operational processes
- Strong hands-on technical experience (at least 5 years) with EDR/XDR; experience with SOAR and SIEM is an added advantage
- Strong practical experience with threat hunting, malware, and exploits, and the ability to demonstrate simulated cyber attacks
- Experience installing, configuring, and integrating a complex security environment
- Experience with security analytics or threat intel is a plus
- Deep understanding of Unix/Linux and Windows operating systems; scripting skills in Python/JavaScript/PowerShell are an advantage
- Strong problem-finding and problem-solving skills; ability to analyze complex multivariate problems and use a systematic approach to reach quick resolution
- 8+ years of experience in a customer-facing role
- Strong English language skills, both oral and written, with the ability to confidently present with impact to an audience in person and remotely
- A team player: able to share knowledge openly, interact with integrity, and embrace diversity
- A self-starter: self-motivated and a quick learner able to embrace change; the Cortex portfolio is always evolving, and as a technical specialist your expertise must be at the leading edge

Posted 1 month ago

Apply

3.0 - 8.0 years

2 - 8 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

How will you fulfill your potential?
- Work with a global team of highly motivated platform engineers and software developers building integrated architectures for secure, scalable infrastructure services serving a diverse set of use cases.
- Partner with colleagues from across technology and risk to ensure an outstanding platform is delivered.
- Help to provide frictionless integration with the firm's runtime, deployment, and SDLC technologies.
- Collaborate on feature design and problem solving.
- Help to ensure reliability; define, measure, and meet service level objectives.
- Deliver quality coding and integration, testing, release, and demise of software products supporting AWM functions.
- Engage in quality assurance and production troubleshooting.
- Help to communicate and promote best practices for software engineering across the Asset Management tech stack.

Basic Qualifications:
- A strong grounding in software engineering concepts and implementation of architecture design patterns.
- A good understanding of multiple aspects of software development in microservices architecture, full-stack development experience, identity/access management, and technology risk.
- Sound SDLC practices and tooling experience: version control, CI/CD, and configuration management tools.
- Ability to communicate technical concepts effectively, both written and orally, as well as the interpersonal skills required to collaborate effectively with colleagues across diverse technology teams.
- Experience meeting demands for high availability and scalable system requirements.
- Ability to reason about performance, security, and process interactions in complex distributed systems.
- Ability to understand and effectively debug both new and existing software.
- Experience with metrics and monitoring tooling, including the ability to use metrics to rationally derive system health and availability information.
- Experience in auditing and supporting software based on sound SRE principles.

Preferred Qualifications:
- 3+ years of experience using and/or supporting Java-based frameworks and SQL/NoSQL data stores.
- Experience deploying software to containerized environments: Kubernetes/Docker.
- Scripting skills using Python, Shell, or Bash.
- Experience with Terraform or similar infrastructure-as-code platforms.
- Experience building services using public cloud providers such as AWS, Azure, or GCP.

Posted 1 month ago

Apply

2.0 - 5.0 years

2 - 5 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

- Work with a global team of highly motivated platform engineers and software developers building integrated architectures for secure, scalable infrastructure services serving a diverse set of use cases.
- Partner with colleagues from across technology and risk to ensure an outstanding platform is delivered.
- Help to provide frictionless integration with the firm's runtime, deployment, and SDLC technologies.
- Collaborate on feature design and problem solving.
- Help to ensure reliability; define, measure, and meet service level objectives.
- Deliver quality coding and integration, testing, release, and demise of software products supporting AWM functions.
- Engage in quality assurance and production troubleshooting.
- Help to communicate and promote best practices for software engineering across the Asset Management tech stack.

Basic Qualifications:
- A strong grounding in software engineering concepts and implementation of architecture design patterns.
- A good understanding of multiple aspects of software development in microservices architecture, full-stack development experience, identity/access management, and technology risk.
- Sound SDLC practices and tooling experience: version control, CI/CD, and configuration management tools.
- Ability to communicate technical concepts effectively, both written and orally, as well as the interpersonal skills required to collaborate effectively with colleagues across diverse technology teams.
- Experience meeting demands for high availability and scalable system requirements.
- Ability to reason about performance, security, and process interactions in complex distributed systems.
- Ability to understand and effectively debug both new and existing software.
- Experience with metrics and monitoring tooling, including the ability to use metrics to rationally derive system health and availability information.
- Experience in auditing and supporting software based on sound SRE principles.

Preferred Qualifications:
- 3+ years of experience using and/or supporting Java-based frameworks and SQL/NoSQL data stores.
- Experience deploying software to containerized environments: Kubernetes/Docker.
- Scripting skills using Python, Shell, or Bash.
- Experience with Terraform or similar infrastructure-as-code platforms.
- Experience building services using public cloud providers such as AWS, Azure, or GCP.

Posted 1 month ago

Apply

8.0 - 13.0 years

0 - 1 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Job Title: Enterprise Architect, Architecture Office
Experience: 8-10 Years
Employment Type: Full-Time

Job Summary: We are looking for a seasoned Enterprise/Technical Architect to join the Architecture Office and lead end-to-end architecture design and governance for enterprise systems. This role requires a strong grasp of modern architectural frameworks, deep technical expertise, and the ability to align technology with strategic business objectives.

Key Responsibilities:
- Design, review, and guide the architecture of enterprise-grade applications and systems.
- Develop and maintain architectural standards, principles, and reference models.
- Collaborate with business, engineering, and infrastructure teams to ensure architectural alignment.
- Evaluate and recommend tools, technologies, and frameworks aligned with enterprise needs.
- Lead architectural governance reviews and ensure compliance with best practices.
- Create high-level solution designs, integration patterns, and data flow models.
- Mentor development teams on architecture principles and implementation.

Required Skills & Experience:
- 8-10 years of overall IT experience, with 3-5 years in architectural roles.
- Expertise in designing scalable, secure, and high-performance applications.
- Deep experience with microservices, cloud platforms (AWS/Azure/GCP), API management, and containerization (Docker/Kubernetes).
- Strong understanding of enterprise integration patterns, data architecture, and DevOps pipelines.
- Hands-on knowledge of Java/.NET, architectural modeling tools (e.g., ArchiMate, Sparx EA), and documentation frameworks.
- Experience with architectural frameworks like TOGAF, Zachman, or similar is preferred.

Posted 1 month ago

Apply

6.0 - 9.0 years

7 - 11 Lacs

Hyderabad

Work from Office

As a Senior DevOps Engineer, you will be responsible for enhancing and integrating DevOps practices into our development and operational processes. You will work collaboratively with software development, quality assurance, and IT operations teams to implement CI/CD pipelines, automate workflows, and improve deployment processes to ensure high-quality software delivery.

Key Responsibilities:
- Design and implement CI/CD pipelines to automate build, test, and deployment processes.
- Collaborate with development and operations teams to improve existing DevOps practices and workflows.
- Deploy and manage container orchestration platforms such as Kubernetes and Docker.
- Monitor system performance and troubleshoot issues to ensure high availability and reliability.
- Implement infrastructure as code (IaC) using tools like Terraform or CloudFormation.
- Participate in incident response and root cause analysis activities.
- Establish best practices for DevOps processes, security, and compliance.

Qualifications and Experience:
- Bachelor's degree with DevOps certification
- 7+ years of experience in a DevOps or related role
- Proficiency in cloud platforms such as AWS, Azure, or Google Cloud
- Experience with CI/CD tools such as Jenkins, GitLab, or CircleCI
- Development (Java, Python, etc.): Advanced
- Kubernetes usage and administration: Advanced
- AI: Intermediate
- CI/CD development: Advanced
- Strong collaboration and communication skills

Posted 1 month ago

Apply

9.0 - 14.0 years

10 - 15 Lacs

Hyderabad

Work from Office

The Product Owner III will be responsible for defining and prioritizing features and user stories, outlining acceptance criteria, and collaborating with cross-functional teams to ensure successful delivery of product increments. This role requires strong communication skills to effectively engage with stakeholders, gather requirements, and facilitate product demos. The ideal candidate should have a deep understanding of agile methodologies, experience in the insurance sector, and the ability to translate complex needs into actionable tasks for the development team.

Key Responsibilities:
- Define and communicate the vision, roadmap, and backlog for data products.
- Manage the team backlog and prioritize items based on business value.
- Partner with the business owner to understand needs, manage scope, and add/eliminate user stories while heavily influencing an effective strategy.
- Translate business requirements into scalable data product features.
- Collaborate with data engineers, analysts, and business stakeholders to prioritize and deliver impactful solutions.
- Champion data governance, privacy, and compliance best practices.
- Act as the voice of the customer to ensure usability and adoption of data products.
- Lead Agile ceremonies (e.g., backlog grooming, sprint planning, demos) and maintain a clear product backlog.
- Monitor data product performance and continuously identify areas for improvement.
- Support the integration of AI/ML solutions and advanced analytics into product offerings.

Required Skills & Experience:
- Proven experience as a Product Owner, ideally in data or analytics domains.
- Strong understanding of data engineering, data architecture, and cloud platforms (AWS, Azure, GCP).
- Familiarity with SQL, data modeling, and modern data stack tools (e.g., Snowflake, dbt, Airflow).
- Excellent stakeholder management and communication skills across technical and non-technical teams.
- Strong business acumen and the ability to align data products with strategic goals.
- Experience with Agile/Scrum methodologies and working in cross-functional teams.
- Ability to translate data insights into compelling stories and recommendations.

Posted 1 month ago

Apply

2.0 - 7.0 years

10 - 15 Lacs

Bengaluru

Work from Office

- Develop, test, and support future-ready data solutions for customers across industry verticals
- Develop, test, and support end-to-end batch and near-real-time data flows/pipelines
- Demonstrate understanding of data architectures, modern data platforms, big data, analytics, cloud platforms, data governance, information management, and associated technologies
- Communicate risks and ensure they are understood

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Minimum of 2 years of related experience
- Experience in modeling and business system design
- Good hands-on experience with DataStage and cloud-based ETL services
- Strong expertise in writing T-SQL code
- Well-versed in data warehouse schemas and OLAP techniques

Preferred technical and professional experience:
- Ability to manage and make decisions about competing priorities and resources, delegating where appropriate
- Must be a strong team player/leader
- Ability to lead data transformation projects with multiple junior data engineers
- Strong oral, written, and interpersonal skills for interacting with all levels of the organization
- Ability to clearly communicate complex business problems and technical solutions

Posted 1 month ago

Apply

2.0 - 5.0 years

14 - 17 Lacs

Mumbai

Work from Office

As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support and guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include:
- Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
- Strive for continuous improvement by testing the built solution and working under an agile framework.
- Discover and implement the latest technology trends to build creative solutions.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Apache Spark (PySpark): In-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing.
- Big Data Technologies: Familiarity with Hadoop, HDFS, Kafka, and other big data tools.
- Data Engineering Skills: Strong understanding of ETL pipelines, data modeling, and data warehousing concepts.
- Python: Strong proficiency in Python programming with a focus on data processing and manipulation.
- Data Processing Frameworks: Knowledge of data processing libraries such as Pandas and NumPy.
- SQL Proficiency: Experience writing optimized SQL queries for large-scale data analysis and transformation.
- Cloud Platforms: Experience working with cloud platforms like AWS, Azure, or GCP, including cloud storage systems.

Preferred technical and professional experience:
- Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
- Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering.
- Good to have: experience with detection and prevention tools for company products, platform, and customer-facing systems.
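As a rough illustration of the PySpark-based ETL work this role involves, here is a minimal batch pipeline sketch; the file paths and column names are hypothetical, and a local Spark installation is assumed:

```python
# A rough PySpark batch-ETL sketch; file paths and column names are
# hypothetical, and a local Spark installation is assumed.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: raw CSV drop from a hypothetical landing zone.
orders = spark.read.option("header", True).csv("/data/raw/orders.csv")

# Transform: cast, filter out bad rows, and aggregate revenue per day.
daily_revenue = (
    orders
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount").isNotNull())
    .groupBy("order_date")
    .agg(F.sum("amount").alias("revenue"))
)

# Load: columnar output for downstream warehouse/analytics consumption.
daily_revenue.write.mode("overwrite").parquet("/data/curated/daily_revenue")
spark.stop()
```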

Posted 1 month ago

Apply

2.0 - 5.0 years

14 - 17 Lacs

Pune

Work from Office

As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support and guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include:
- Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
- Strive for continuous improvement by testing the built solution and working under an agile framework.
- Discover and implement the latest technology trends to build creative solutions.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Apache Spark (PySpark): In-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing.
- Big Data Technologies: Familiarity with Hadoop, HDFS, Kafka, and other big data tools.
- Data Engineering Skills: Strong understanding of ETL pipelines, data modeling, and data warehousing concepts.
- Python: Strong proficiency in Python programming with a focus on data processing and manipulation.
- Data Processing Frameworks: Knowledge of data processing libraries such as Pandas and NumPy.
- SQL Proficiency: Experience writing optimized SQL queries for large-scale data analysis and transformation.
- Cloud Platforms: Experience working with cloud platforms like AWS, Azure, or GCP, including cloud storage systems.

Preferred technical and professional experience:
- Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
- Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering.
- Good to have: experience with detection and prevention tools for company products, platform, and customer-facing systems.

Posted 1 month ago

Apply

2.0 - 5.0 years

14 - 17 Lacs

Navi Mumbai

Work from Office

As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support and guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include:
- Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
- Strive for continuous improvement by testing the built solution and working under an agile framework.
- Discover and implement the latest technology trends to build creative solutions.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Apache Spark (PySpark): In-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing.
- Big Data Technologies: Familiarity with Hadoop, HDFS, Kafka, and other big data tools.
- Data Engineering Skills: Strong understanding of ETL pipelines, data modeling, and data warehousing concepts.
- Python: Strong proficiency in Python programming with a focus on data processing and manipulation.
- Data Processing Frameworks: Knowledge of data processing libraries such as Pandas and NumPy.
- SQL Proficiency: Experience writing optimized SQL queries for large-scale data analysis and transformation.
- Cloud Platforms: Experience working with cloud platforms like AWS, Azure, or GCP, including cloud storage systems.

Preferred technical and professional experience:
- Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
- Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering.
- Good to have: experience with detection and prevention tools for company products, platform, and customer-facing systems.

Posted 1 month ago

Apply

2.0 - 5.0 years

14 - 17 Lacs

Bengaluru

Work from Office

As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support and guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include:
- Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
- Strive for continuous improvement by testing the built solution and working under an agile framework.
- Discover and implement the latest technology trends to build creative solutions.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Apache Spark (PySpark): In-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing.
- Big Data Technologies: Familiarity with Hadoop, HDFS, Kafka, and other big data tools.
- Data Engineering Skills: Strong understanding of ETL pipelines, data modeling, and data warehousing concepts.
- Python: Strong proficiency in Python programming with a focus on data processing and manipulation.
- Data Processing Frameworks: Knowledge of data processing libraries such as Pandas and NumPy.
- SQL Proficiency: Experience writing optimized SQL queries for large-scale data analysis and transformation.
- Cloud Platforms: Experience working with cloud platforms like AWS, Azure, or GCP, including cloud storage systems.

Preferred technical and professional experience:
- Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
- Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering.
- Good to have: experience with detection and prevention tools for company products, platform, and customer-facing systems.

Posted 1 month ago

Apply

5.0 - 7.0 years

5 - 9 Lacs

Kochi

Work from Office

Job Title: Senior Data Engineer
Location: Kochi, Coimbatore, Trivandrum
Must-have skills: Databricks, including Spark-based ETL and Delta Lake
Good-to-have skills: PySpark

Job Summary: We are seeking a highly skilled and experienced Senior Data Engineer to join our growing Data and Analytics team. The ideal candidate will have deep expertise in Databricks and cloud data warehousing, with a proven track record of designing and building scalable data pipelines, optimizing data architectures, and enabling robust analytics capabilities. This role involves working collaboratively with cross-functional teams to ensure the organization leverages data as a strategic asset.

Roles and Responsibilities:
- Design, build, and maintain scalable data pipelines and ETL processes using Databricks and other modern tools.
- Architect, implement, and manage cloud-based data warehousing solutions on Databricks (Lakehouse architecture).
- Develop and maintain optimized data lake architectures to support advanced analytics and machine learning use cases.
- Collaborate with stakeholders to gather requirements, design solutions, and ensure high-quality data delivery.
- Optimize data pipelines for performance and cost efficiency.
- Implement and enforce best practices for data governance, access control, security, and compliance in the cloud.
- Monitor and troubleshoot data pipelines to ensure reliability and accuracy.
- Lead and mentor junior engineers, fostering a culture of continuous learning and innovation.
- Excellent communication skills; ability to work independently and with clients based in Western Europe.

Professional and Technical Skills:
- 3.5-5 years of experience in data engineering roles with a focus on cloud platforms.
- Proficiency in Databricks, including Spark-based ETL, Delta Lake, and SQL.
- Strong experience with one or more cloud platforms (AWS preferred).
- Hands-on experience with Delta Lake, Unity Catalog, and Lakehouse architecture concepts.
- Strong programming skills in Python and SQL; experience with PySpark is a plus.
- Solid understanding of data modeling concepts and practices (e.g., star schema, dimensional modeling).
- Knowledge of CI/CD practices and version control systems (e.g., Git).
- Familiarity with data governance and security practices, including GDPR and CCPA compliance.

Additional Information:
- Experience with Airflow or similar workflow orchestration tools.
- Exposure to machine learning workflows and MLOps.
- Certifications in Databricks or AWS.
- Familiarity with data visualization tools such as Power BI.

Qualification:
- Experience: 3.5-5 years of experience is required.
- Educational Qualification: Graduation.
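As a hedged illustration of the Delta Lake work mentioned above, here is a minimal upsert (merge) sketch; the table paths and key column are hypothetical, and a Spark session with delta-spark configured (as on Databricks) is assumed:

```python
# A hedged sketch of a Delta Lake upsert; the paths, key column, and staging
# source are hypothetical, and a Spark session with delta-spark configured
# (as on Databricks, where `spark` is predefined) is assumed.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Incremental batch of changed/new rows from a hypothetical staging area.
updates = spark.read.parquet("/mnt/staging/customers_updates")

target = DeltaTable.forPath(spark, "/mnt/lake/customers")

(
    target.alias("t")
    .merge(updates.alias("u"), "t.customer_id = u.customer_id")
    .whenMatchedUpdateAll()     # refresh rows that changed
    .whenNotMatchedInsertAll()  # insert rows seen for the first time
    .execute()
)
```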

Posted 1 month ago

Apply

15.0 - 20.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Data Architecture Principles
Good-to-have skills: NA
Minimum 12 years of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions for data generation, collection, and processing. You will create data pipelines, ensure data quality, and implement ETL processes to migrate and deploy data across systems. Your day will involve working on various data-related tasks and collaborating with teams to optimize data processes.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Expected to provide solutions to problems that apply across multiple teams.
- Develop innovative data solutions to meet business requirements.
- Optimize data pipelines for efficiency and scalability.
- Implement data governance policies to ensure data quality and security.

Professional & Technical Skills:
- Must-have skills: Proficiency in Data Architecture Principles.
- Strong understanding of data modeling and database design.
- Experience with ETL tools and processes.
- Knowledge of cloud platforms and big data technologies.
- Good-to-have skills: Data management and governance expertise.

Additional Information:
- The candidate should have a minimum of 12 years of experience in Data Architecture Principles.
- This position is based at our Bengaluru office.
- A 15-year full-time education is required.

Posted 1 month ago

Apply

3.0 - 7.0 years

17 - 20 Lacs

Bengaluru

Work from Office

Job Title: Industry & Function AI Data Engineer + S&C GN
Management Level: 09 - Consultant
Location: Primary - Bengaluru, Secondary - Gurugram
Must-Have Skills: Data engineering expertise; cloud platforms (AWS, Azure, GCP); proficiency in Python, SQL, PySpark, and ETL frameworks
Good-to-Have Skills: LLM architecture; containerization tools (Docker, Kubernetes); real-time data processing tools (Kafka, Flink); certifications such as AWS Certified Data Analytics - Specialty, Google Professional Data Engineer, Snowflake, dbt, etc.

Job Summary: As a Data Engineer, you will play a critical role in designing, implementing, and optimizing data infrastructure to power analytics, machine learning, and enterprise decision-making. Your work will ensure high-quality, reliable data is accessible for actionable insights. This involves leveraging technical expertise, collaborating with stakeholders, and staying updated with the latest tools and technologies to deliver scalable and efficient data solutions.

Roles & Responsibilities:
- Build and maintain data infrastructure: Design, implement, and optimize scalable data pipelines and systems for seamless ingestion, transformation, and storage of data.
- Collaborate with stakeholders: Work closely with business teams, data analysts, and data scientists to understand data requirements and deliver actionable solutions.
- Leverage tools and technologies: Utilize Python, SQL, PySpark, and ETL frameworks to manage large datasets efficiently.
- Cloud integration: Develop secure, scalable, and cost-efficient solutions using cloud platforms such as Azure, AWS, and GCP.
- Ensure data quality: Focus on data reliability, consistency, and quality using automation and monitoring techniques.
- Document and share best practices: Create detailed documentation, share best practices, and mentor team members to promote a strong data culture.
- Continuous learning: Stay updated with the latest tools and technologies in data engineering through professional development opportunities.

Professional & Technical Skills:
- Strong proficiency in programming languages such as Python, SQL, and PySpark
- Experience with cloud platforms (AWS, Azure, GCP) and their data services
- Familiarity with ETL frameworks and data pipeline design
- Strong knowledge of traditional statistical methods and basic machine learning techniques
- Knowledge of containerization tools (Docker, Kubernetes)
- Knowledge of LLM, RAG, and agentic AI architectures
- Certification in Data Science or related fields (e.g., AWS Certified Data Analytics - Specialty, Google Professional Data Engineer)

Additional Information: The ideal candidate has a robust educational background in data engineering or a related field and a proven track record of building scalable, high-quality data solutions in the Consumer Goods sector. This position offers opportunities to design and implement cutting-edge data systems that drive business transformation, collaborate with global teams to solve complex data challenges and deliver measurable business outcomes, and enhance your expertise by working on innovative projects utilizing the latest technologies in cloud, data engineering, and AI. About Our Company | Accenture

Qualification:
- Experience: Minimum 3-7 years in data engineering or related fields, with a focus on the Consumer Goods industry
- Educational Qualification: Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or a related field

Posted 1 month ago

Apply

5.0 - 10.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Project Role: Software Development Engineer
Project Role Description: Analyze, design, code, and test multiple components of application code across one or more clients. Perform maintenance, enhancements, and/or development work.
Must-have skills: PySpark
Good-to-have skills: No industry specialization
Minimum 5 years of experience is required.
Educational Qualification: BE

Key Responsibilities:
- Overall 8 years of experience working in Data Analytics projects
- Work on client projects to deliver AWS, PySpark, and Databricks based data engineering and analytics solutions
- Build and operate very large data warehouses or data lakes
- ETL optimization, designing, coding, and tuning big data processes using Apache Spark
- Build data pipelines and applications to stream and process datasets at low latencies
- Show efficiency in handling data: tracking data lineage, ensuring data quality, and improving discoverability of data

Technical Experience:
- Minimum of 2 years of experience in Databricks engineering solutions on any of the cloud platforms using PySpark
- Minimum of 5 years of experience in ETL, Big Data/Hadoop, and data warehouse architecture and delivery
- Minimum of 3 years of experience in one or more programming languages: Python, Java, Scala
- Experience using Airflow for data pipelines in at least one project
- 2 years of experience developing CI/CD pipelines using Git, Jenkins, Docker, Kubernetes, shell scripting, and Terraform
- Must be able to understand ETL technologies and translate them into cloud-native (AWS, Azure, Google Cloud) tools or PySpark

Professional Attributes:
1. Should have been involved in data engineering projects from the requirements phase through delivery
2. Good communication skills to interact with clients and understand requirements
3. Capable of working independently and guiding the team

Qualification: BE

Posted 1 month ago

Apply

7.0 - 12.0 years

9 - 13 Lacs

Hyderabad

Work from Office

Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 7.5 years of experience is required.
Educational Qualification: Engineering graduate, preferably in Computer Science, with 15 years of full-time education

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models. You will play a crucial role in shaping the data platform components.

Roles & Responsibilities:
- Expected to be an SME
- Collaborate with and manage the team to perform
- Responsible for team decisions
- Engage with multiple teams and contribute to key decisions
- Provide solutions to problems for the immediate team and across multiple teams
- Lead data platform blueprint and design
- Implement data platform components effectively
- Ensure seamless integration between systems and data models

Professional & Technical Skills:
- Must-have skills: Proficiency in Databricks Unified Data Analytics Platform
- Strong understanding of data platform architecture
- Experience in data integration and data modeling
- Knowledge of cloud platforms like AWS or Azure
- Hands-on experience with SQL and NoSQL databases

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Databricks Unified Data Analytics Platform
- This position is based at our Hyderabad office
- An Engineering graduate, preferably in Computer Science, with 15 years of full-time education is required

Posted 1 month ago

Apply

7.0 - 10.0 years

11 - 16 Lacs

Mumbai, Hyderabad, Pune

Work from Office

Key Responsibilities:
- Design, build, and maintain CI/CD pipelines for ML model training, validation, and deployment
- Automate and optimize ML workflows, including data ingestion, feature engineering, model training, and monitoring
- Deploy, monitor, and manage LLMs and other ML models in production (on-premises and/or cloud)
- Implement model versioning, reproducibility, and governance best practices
- Collaborate with data scientists, ML engineers, and software engineers to streamline the end-to-end ML lifecycle
- Ensure security, compliance, and scalability of ML/LLM infrastructure
- Troubleshoot and resolve issues related to ML model deployment and serving
- Evaluate and integrate new MLOps/LLMOps tools and technologies
- Mentor junior engineers and contribute to best-practices documentation

Required Skills & Qualifications:
- 8+ years of experience in DevOps, with at least 3 years in MLOps/LLMOps
- Strong experience with cloud platforms (AWS, Azure, GCP) and container orchestration (Kubernetes, Docker)
- Proficiency with CI/CD tools (Jenkins, GitHub Actions, GitLab CI, etc.)
- Hands-on experience deploying and managing different types of AI models (e.g., OpenAI, Hugging Face, custom models) used for developing solutions
- Experience with model serving tools such as TGI, vLLM, BentoML, etc.
- Solid scripting and programming skills (Python, Bash, etc.)
- Familiarity with monitoring/logging tools (Prometheus, Grafana, ELK stack)
- Strong understanding of security and compliance in ML environments

Preferred Skills:
- Knowledge of model explainability, drift detection, and model monitoring
- Familiarity with data engineering tools (Spark, Kafka, etc.)
- Knowledge of data privacy, security, and compliance in AI systems
- Strong communication skills to collaborate effectively with various stakeholders
- Critical thinking and problem-solving skills are essential
- Proven ability to lead and manage projects with cross-functional teams
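For illustration of the model-serving side of this role, a minimal FastAPI sketch follows; the endpoint shape, version string, and placeholder scoring logic are hypothetical stand-ins for a real serving stack such as TGI, vLLM, or BentoML:

```python
# A minimal model-serving sketch using FastAPI; the endpoint shape, version
# string, and placeholder scoring logic are hypothetical stand-ins for a real
# serving stack (TGI, vLLM, BentoML, ...).
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="model-server")

MODEL_VERSION = "v1-hypothetical"  # pinned version for reproducibility/governance

class PredictRequest(BaseModel):
    text: str

@app.post("/predict")
def predict(req: PredictRequest) -> dict:
    # Placeholder inference: a real service would call the loaded model here.
    score = (len(req.text) % 10) / 10.0
    return {"model_version": MODEL_VERSION, "score": score}

# Run with: uvicorn server:app --host 0.0.0.0 --port 8000
```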

Posted 1 month ago

Apply

7.0 - 10.0 years

8 - 13 Lacs

Mumbai, Hyderabad, Pune

Work from Office

Key Responsibilities:
- Design, build, and maintain CI/CD pipelines for ML model training, validation, and deployment
- Automate and optimize ML workflows, including data ingestion, feature engineering, model training, and monitoring
- Deploy, monitor, and manage LLMs and other ML models in production (on-premises and/or cloud)
- Implement model versioning, reproducibility, and governance best practices
- Collaborate with data scientists, ML engineers, and software engineers to streamline the end-to-end ML lifecycle
- Ensure security, compliance, and scalability of ML/LLM infrastructure
- Troubleshoot and resolve issues related to ML model deployment and serving
- Evaluate and integrate new MLOps/LLMOps tools and technologies
- Mentor junior engineers and contribute to best-practices documentation

Required Skills & Qualifications:
- 8+ years of experience in DevOps, with at least 3 years in MLOps/LLMOps
- Strong experience with cloud platforms (AWS, Azure, GCP) and container orchestration (Kubernetes, Docker)
- Proficiency with CI/CD tools (Jenkins, GitHub Actions, GitLab CI, etc.)
- Hands-on experience deploying and managing different types of AI models (e.g., OpenAI, Hugging Face, custom models) used for developing solutions
- Experience with model serving tools such as TGI, vLLM, BentoML, etc.
- Solid scripting and programming skills (Python, Bash, etc.)
- Familiarity with monitoring/logging tools (Prometheus, Grafana, ELK stack)
- Strong understanding of security and compliance in ML environments

Preferred Skills:
- Knowledge of model explainability, drift detection, and model monitoring
- Familiarity with data engineering tools (Spark, Kafka, etc.)
- Knowledge of data privacy, security, and compliance in AI systems
- Strong communication skills to collaborate effectively with various stakeholders
- Critical thinking and problem-solving skills are essential
- Proven ability to lead and manage projects with cross-functional teams

Posted 1 month ago

Apply

4.0 - 9.0 years

10 - 15 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

About Us: KPI Partners is a leading provider of data analytics and performance management solutions, dedicated to helping organizations harness the power of their data to drive business success. Our team of experts is at the forefront of the data revolution, delivering innovative solutions to our clients. We are currently seeking a talented and experienced Senior Developer / Lead Data Engineer with expertise in Incorta to join our dynamic team.

Job Description: As a Senior Developer / Lead Data Engineer at KPI Partners, you will play a critical role in designing, developing, and implementing data solutions using Incorta. You will work closely with cross-functional teams to understand data requirements, build and optimize data pipelines, and ensure that our data integration processes are efficient and effective. This position requires strong analytical skills, proficiency in Incorta, and a passion for leveraging data to drive business insights.

Key Responsibilities:
- Design and develop scalable data integration solutions using Incorta.
- Collaborate with business stakeholders to gather data requirements and translate them into technical specifications.
- Create and optimize data pipelines to ensure high data quality and availability.
- Perform data modeling, ETL processes, and data engineering activities to support analytics initiatives.
- Troubleshoot and resolve data-related issues across various systems and environments.
- Mentor and guide junior developers and data engineers, fostering a culture of learning and collaboration.
- Stay updated on industry trends, best practices, and emerging technologies related to data engineering and analytics.
- Work with the implementation team to ensure smooth deployment of solutions and provide ongoing support.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, Information Systems, or a related field.
- 5+ years of experience in data engineering or related roles with a strong focus on Incorta.
- Expertise in Incorta and its features, along with experience in data modeling and ETL processes.
- Proficiency in SQL and experience with relational databases (e.g., MySQL, Oracle, SQL Server).
- Strong analytical and problem-solving skills, with the ability to work with complex data sets.
- Excellent communication and collaboration skills to work effectively in a team-oriented environment.
- Familiarity with cloud platforms (e.g., AWS, Azure) and data visualization tools is a plus.
- Experience with programming languages such as Python, Java, or Scala is advantageous.

Why Join KPI Partners?
- Opportunity to work with a talented and passionate team in a fast-paced environment.
- Competitive salary and benefits package.
- Continuous learning and professional development opportunities.
- A collaborative and inclusive workplace culture that values diversity and innovation.

KPI Partners is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. Join us at KPI Partners and help us unlock the power of data for our clients!

Posted 1 month ago

Apply

8.0 - 12.0 years

14 - 18 Lacs

Hyderabad

Work from Office

Job Summary: We are seeking a highly experienced and motivated Lead Software Engineer with strong expertise in .NET and C# to join our engineering team. The ideal candidate will have a deep understanding of software architecture, design patterns, and modern development practices, along with a proven track record of delivering scalable, high-quality software solutions. As a technical leader, you will guide and mentor a team of engineers, drive best practices, and play a key role in system design, development, and deployment of enterprise-grade applications.

Key Responsibilities:
- Lead the design, development, and implementation of complex software systems using .NET and C#
- Collaborate with cross-functional teams including Product Management, QA, DevOps, and UI/UX
- Provide architectural guidance and technical leadership throughout the development lifecycle
- Review code, mentor junior developers, and ensure adherence to coding standards and best practices
- Participate in sprint planning, technical design reviews, and retrospective meetings
- Drive continuous improvement in engineering practices, tools, and processes
- Ensure the performance, quality, and responsiveness of applications
- Troubleshoot, debug, and optimize existing systems

Required Qualifications:
- 8-10+ years of experience in software development with a focus on the Microsoft technology stack
- Experience in leading a team of 5 or more software engineers
- Strong proficiency in C#, .NET Core/.NET 6+, ASP.NET MVC, and Web API
- Solid understanding of object-oriented programming, design patterns, and software architecture principles
- Experience with RESTful services, microservices, and distributed systems
- Proficiency with Entity Framework, LINQ, and SQL Server or other relational databases
- Hands-on experience with unit testing, CI/CD pipelines, and version control systems (e.g., Git)
- Familiarity with cloud platforms (Azure, AWS, or GCP)
- Strong leadership, communication, and interpersonal skills

Preferred Qualifications:
- Experience with front-end technologies such as JavaScript, TypeScript, Angular, or React
- Exposure to containerization and orchestration tools (Docker, Kubernetes)
- Familiarity with Agile/Scrum methodologies
- Microsoft certifications are a plus

Posted 1 month ago

Apply

6.0 - 9.0 years

8 - 11 Lacs

Pune

Work from Office

We are seeking an experienced Gen AI and LLM Developer to design, develop, and deploy generative AI models and large language models (LLMs) for advanced natural language processing (NLP) tasks. The ideal candidate will have 6-9 years of experience with LLM architectures (e.g., GPT, BERT), deep learning frameworks (PyTorch, TensorFlow), and cloud platforms (AWS, GCP, Azure). Responsibilities include training and fine-tuning models, optimizing for performance, collaborating with cross-functional teams, and staying updated on the latest AI trends. Strong technical skills in Python, NLP techniques, model optimization, and distributed training are essential. Experience designing scalable systems with Generative AI and exposure to full-stack development on .NET or Java-based platforms are desirable.

Posted 1 month ago

Apply

4.0 - 9.0 years

4 - 9 Lacs

Gurgaon / Gurugram, Haryana, India

On-site

Responsibilities: Data Pipeline Engineers are expected to be involved from the inception of projects: they understand requirements, then architect, develop, deploy, and maintain data pipelines (ETL/ELT). Typically, they work in a multi-disciplinary squad (we follow Agile!), which involves partnering with program and product managers to expand the product offering based on business demands. Design is an iterative process, whether for UX, services, or infrastructure. Our goal is to drive up user engagement and adoption of the platform while constantly working towards modernizing and improving platform performance and scalability. Deployment and maintenance require close interaction with various teams. This requires maintaining a positive and collaborative working relationship with teams within DOE as well as with the wider Aladdin developer community. Production support for applications is usually required for issues that cannot be resolved by the operations team. Creative and inventive problem-solving skills for reduced turnaround times are highly valued. Preparing user documentation to maintain both development and operations continuity is integral to the role.

An ideal candidate would have:
- At least 4 years of experience as a data engineer
- Experience in SQL, Sybase, and Linux is a must
- Experience coding in at least two of these languages for server-side/data processing is required: Java, Python, C++
- 2+ years of experience using the modern data stack (Spark, Snowflake, BigQuery, etc.) on cloud platforms (Azure, GCP, AWS)
- Experience building ETL/ELT pipelines for complex data engineering projects (using Airflow, dbt, or Great Expectations would be a plus)
- Experience with database modeling and normalization techniques
- Experience with object-oriented design patterns
- Experience with DevOps tools like Git, Maven, Jenkins, GitLab CI, and Azure DevOps
- Experience with Agile development concepts and related tools
- Ability to troubleshoot and fix performance issues across the codebase and database queries
- Excellent written and verbal communication skills
- Ability to operate in a fast-paced environment
- Strong interpersonal skills with a can-do attitude under challenging circumstances
- BA/BS or equivalent practical experience

Skills that would be a plus:
- Perl, ETL tools (Informatica, Talend, dbt, etc.)
- Experience with Snowflake or other cloud data warehousing products
- Exposure to workflow management tools such as Airflow
- Exposure to messaging platforms such as Kafka
- Exposure to NoSQL platforms such as Cassandra and MongoDB
- Building and delivering REST APIs
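As an illustrative sketch of the ETL/ELT orchestration this role describes, here is a minimal Apache Airflow 2.x DAG; the DAG id, task callables, and schedule are hypothetical:

```python
# An illustrative Apache Airflow 2.x DAG for a daily ETL run; the DAG id,
# callables, and schedule are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull rows from the source system")

def load():
    print("write transformed rows to the warehouse")

with DAG(
    dag_id="nightly_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_load  # load runs only after extract succeeds
```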

Posted 1 month ago

Apply

4.0 - 9.0 years

4 - 9 Lacs

Gurgaon / Gurugram, Haryana, India

On-site

You can help conduct research to build quantitative financial models and portfolio analytics that help manage most of the money of the world's largest asset manager. You can bring your whole self to the job: from the top of the firm down, we embrace the values, identities, and ideas brought by our employees. We are looking for curious people with a strong background in data science, quantitative research, and machine learning, who have awesome problem-solving skills and an insatiable appetite for learning and innovating, adding to BlackRock's vibrant research culture. If any of this excites you, we are looking to expand our team. We currently have a Data Scientist role with the AFE Investment AI (IAI) Team, India (Mumbai or Gurugram location).

The securities market is undergoing a massive transformation as the industry embraces machine learning and, more broadly, AI, to help evolve the investment process. Pioneering this journey at BlackRock, the team delivers applied AI investment analytics to help both BlackRock and Aladdin clients achieve scale through automation while safeguarding alpha generation. The IAI team combines AI/ML methodology and technology skills with deep subject-matter expertise in fixed income, equity, and multi-asset markets, and the buy-side investment process. We are building next-generation liquidity, security-similarity, and pricing models, leveraging our expertise in quantitative research, data science, and machine learning. The models we build use innovative machine learning approaches and cutting-edge econometric/statistical methods and tools; they have real practical value and are used by traders, portfolio managers, and risk managers representing different investment styles (fundamental vs. quantitative) and across different investment horizons. Research is conducted predominantly in Python and Scala, and implemented into production by a separate, dedicated team of developers. These models have a huge footprint of usage across the entire Aladdin client base, so we place special emphasis on scalability and on ensuring adherence to BlackRock's rigorous standards of model governance and control.

Background and Responsibilities: We are looking to hire a Data Scientist with 4+ years of experience to join the AFE Investment AI India team, focusing on trading and liquidity and working closely with other data scientists/researchers to support risk managers, portfolio managers, and traders. We build cutting-edge liquidity analytics using a wide range of ML algorithms and a broad array of technologies (Python, Scala, Spark/Hadoop, GCP, Azure). This role is a great opportunity to work closely with the portfolio management, risk management, and trading teams, spanning areas such as:
- Design, develop, and maintain data pipelines to extract, transform, and load data from various sources into our data warehouse/lake.
- Work with data scientists and analysts to understand data needs and design appropriate data models.
- Implement data quality checks and ensure the accuracy and consistency of data throughout the processing pipeline.
- Perform analysis of large data sets comprising market data, trading data, and derived analytics.
- Design and develop a model surveillance framework.
- Automate data processing tasks using scripting languages (e.g., Python, Scala) and orchestration tools (e.g., Airflow, Luigi).
- Utilize cloud-based data platforms (e.g., GCP, Azure) to manage and process large datasets efficiently.
- Implement ML models/analytics for trading/liquidity and integrate them into the Aladdin analytical system in accordance with BlackRock's model governance policy.

Qualifications:
- B.Tech / B.E. / M.Sc. degree in a quantitative discipline (Mathematics, Physics, Computer Science, Finance, or a similar area). M.S. / M.Tech. / PhD is a plus.
- Strong background in mathematics, statistics, probability, and linear algebra
- Knowledgeable about data mining, data analytics, and data modeling
- Experience with data engineering tools and technologies (e.g., Apache Spark, Hadoop)
- Strong understanding of relational and non-relational databases (e.g., SQL, NoSQL)
- Proficiency in scripting languages for data manipulation and automation (e.g., Python, Scala)
- Experience working with cloud platforms for data storage and processing (e.g., Azure, GCP)
- Ability to work independently and efficiently in a fast-paced, team-oriented environment
- Previous experience or knowledge in fixed income markets and market liquidity is not required but is a big plus
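To illustrate the data-quality checks mentioned in the responsibilities above, a small pandas sketch follows; the columns, the 99% completeness threshold, and the fail-fast behavior are hypothetical:

```python
# A small illustrative data-quality gate with pandas; the columns, the 99%
# completeness threshold, and the fail-fast behavior are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "trade_id": [1, 2, 3, 3],           # toy data with a deliberate duplicate
    "notional": [1e6, 5e5, None, 2e6],  # and a missing value
})

checks = {
    "no_duplicate_ids": df["trade_id"].is_unique,
    "notional_complete": df["notional"].notna().mean() >= 0.99,
}

failed = [name for name, ok in checks.items() if not ok]
if failed:
    # In a pipeline this would block promotion of the batch downstream.
    raise ValueError(f"data-quality checks failed: {failed}")
```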

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
