
215 GCP Cloud Jobs - Page 6

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

12.0 - 18.0 years

50 - 55 Lacs

Hyderabad

Work from Office


Extensive, hands-on knowledge of data modelling, data architecture, and data lineage. Broad knowledge of banking and financial products (i.e. international trade, credit). Physical data modelling experience, preferably with GCP cloud.

Posted 3 weeks ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Bengaluru

Work from Office


We are looking for a Senior Site Reliability Engineer to join Okta's Workflows SRE team, part of our Emerging Products Group (EPG). Okta Workflows is the foundation for secure integration between cloud services. By harnessing the power of the cloud, Okta allows people to quickly integrate different services while still enforcing strong security policies. With Okta Workflows, organizations can implement no-code or low-code workflows quickly, easily, at large scale, and at low total cost. Thousands of customers trust Okta Workflows to help their organizations work faster, boost revenue, and stay secure. If you like to be challenged and have a passion for solving large-scale automation, testing, and tuning problems, we would love to hear from you. The ideal candidate exemplifies the ethos of "if you have to do something more than once, automate it" (see the sketch after this listing) and can rapidly self-educate on new concepts and tools.

What you'll be doing:
- Designing, building, running, and monitoring the global production infrastructure for Okta Workflows and other EPG products.
- Leading and implementing secure, scalable Kubernetes clusters across multiple environments.
- Evangelizing security best practices and leading initiatives/projects to strengthen our security posture for critical infrastructure.
- Responding to production incidents and determining how we can prevent them in the future.
- Triaging and troubleshooting complex production issues to ensure reliability and performance.
- Enhancing automation workflows for patching, vulnerability assessments, and incident response.
- Continuously evolving our monitoring tools and platform.
- Promoting and applying best practices for building scalable and reliable services across engineering.
- Developing and maintaining technical documentation, runbooks, and procedures.
- Supporting a highly available, large-scale Kubernetes and AWS environment as part of an on-call rotation.
- Being a technical SME for a team that designs and builds Okta's production infrastructure, focusing on security at scale in the cloud.

What you'll bring to the role:
- You are always willing to go the extra mile: see a problem, fix the problem.
- You are passionate about encouraging the development of engineering peers and leading by example.
- Experience with Kubernetes deployments in AWS and/or GCP cloud environments.
- Understanding of and familiarity with configuration management tools like Chef, Terraform, or Ansible.
- Expert-level ability in operational tooling languages such as Go and shell, and in the use of source control.
- Knowledge of various types of data stores, particularly PostgreSQL, Redis, and OpenSearch.
- Experience with industry-standard security tools like Nessus and osquery.
- Knowledge of CI/CD principles, Linux fundamentals, OS hardening, networking concepts, and IP protocols.
- Skill in using Datadog for real-time monitoring and proactive incident detection.
- Strong ability to collaborate with cross-functional teams and promote a security-first culture.

Experience required:
- 5+ years running and managing complex AWS or other cloud networking infrastructure resources, including architecture, security, and scalability.
- 5+ years with Ansible, Chef, and/or Terraform.
- 3+ years in cloud security, including IAM (Identity and Access Management) and/or secure identity management for cloud platforms and Kubernetes.
- 3+ years automating CI/CD pipelines using tools such as Spinnaker or ArgoCD, with an emphasis on integrating security throughout the process.
- Proven experience implementing monitoring and observability solutions such as Datadog or Splunk to enhance security and detect incidents in real time.
- Strong leadership and collaboration skills, with experience working cross-functionally with security engineers and developers to enforce security best practices and policies.
- Strong Linux understanding and experience.
- Strong security background and knowledge.
- BS in Computer Science (or equivalent experience).
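For illustration only, a minimal Python sketch of the "automate repeat work" ethos above: report pods that are not healthy across a cluster. It assumes the open-source kubernetes Python client and a kubeconfig with read access; this is not Okta's actual tooling.

```python
# Hypothetical sketch: list pods that are not Running/Succeeded, the kind of
# repeatable check an SRE would automate rather than run by hand.
from kubernetes import client, config

def unhealthy_pods():
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    v1 = client.CoreV1Api()
    bad = []
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        if pod.status.phase not in ("Running", "Succeeded"):
            bad.append((pod.metadata.namespace, pod.metadata.name, pod.status.phase))
    return bad

if __name__ == "__main__":
    for ns, name, phase in unhealthy_pods():
        print(f"{ns}/{name}: {phase}")
```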

Posted 3 weeks ago

Apply

8.0 - 13.0 years

30 - 45 Lacs

Hyderabad

Hybrid


Position: Cloud Solutions Architect
Experience: 10-15 years
Shift timings: 3:30 PM IST to 12:30 AM IST

Skills:
- Expertise in AWS, Azure, and Google Cloud.
- Infrastructure as Code (IaC) using tools like Terraform or CloudFormation.
- Experience in cloud migration and optimization.
- Knowledge of cloud security, governance, and cost management.
- Understanding of multi-cloud and hybrid-cloud architectures.

Short JD: As a Cloud Solutions Architect, you will design and implement scalable, secure, and cost-effective cloud solutions for retail IT services. You will lead cloud migration strategies, optimize cloud infrastructure, and work with teams to ensure robust security and governance across cloud environments.

Interested candidates can send a CV to bhavanit@techprojects.com or call 7386945761.

Posted 3 weeks ago

Apply

4.0 - 8.0 years

20 - 27 Lacs

Hyderabad

Work from Office


Job Title: AI Engineer (AI-Powered Agents, Knowledge Graphs & MLOps)
Location: Hyderabad
Job Type: Full-time
Hands-on Gen AI development on the GCP and Azure stacks.

Job Summary: We seek an AI Engineer with deep expertise in building AI-powered agents, designing and implementing knowledge graphs, and optimizing business processes through AI-driven solutions. The role also requires hands-on experience in AI Operations (AI Ops), including continuous integration/deployment (CI/CD), model monitoring, and retraining. The ideal candidate will have experience working with open-source or commercial large language models (LLMs) and be proficient in using platforms like Azure Machine Learning Studio or Google Vertex AI to scale AI solutions effectively.

Key Responsibilities:
- AI agent development: Design, build, and deploy AI-powered agents for applications such as virtual assistants, customer service bots, and task automation systems using LLMs and other AI models.
- Knowledge graph implementation: Develop and implement knowledge graphs for enterprise data integration, enhancing the retrieval, structuring, and management of large datasets to support decision-making.
- AI-driven process optimization: Collaborate with business units to optimize workflows using AI-driven solutions, automating decision-making processes and improving operational efficiency.
- AI Ops (MLOps): Implement robust AI/ML pipelines that follow CI/CD best practices to ensure continuous integration and deployment of AI models across different environments.
- Model monitoring and maintenance: Establish processes for real-time model monitoring, including tracking performance, drift detection, and model accuracy in production (see the sketch after this listing).
- Model retraining and optimization: Develop automated or semi-automated pipelines for model retraining based on changes in data patterns or model performance, ensuring continuous improvement and accuracy of AI solutions.
- Cloud and ML platforms: Utilize platforms such as Azure Machine Learning Studio, Google Vertex AI, and open-source frameworks for end-to-end model development, deployment, and monitoring.
- Collaboration: Work closely with data scientists, software engineers, and business stakeholders to deploy scalable AI solutions that deliver business impact.
- MLOps tools: Leverage MLOps tools for version control, model deployment, monitoring, and automated retraining to ensure the operational stability and scalability of AI systems.
- Performance optimization: Continuously optimize models for scalability and performance, identifying bottlenecks and improving efficiency.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
- 3+ years of experience as an AI Engineer, focusing on AI-powered agent development, knowledge graphs, AI-driven process optimization, and MLOps practices.
- Proficiency with large language models (LLMs) such as GPT-3/4, GPT-J, BLOOM, or similar, including both open-source and commercial variants.
- Experience with knowledge graph technologies, including ontology design and graph databases (e.g., Neo4j, AWS Neptune).
- AI Ops/MLOps expertise: Hands-on experience with AI/ML CI/CD pipelines, automated model deployment, and continuous model monitoring in production; familiarity with model lifecycle tools such as MLflow or Kubeflow.
- Strong skills in Python, Java, or similar languages, and proficiency in building, deploying, and monitoring AI models.
- Solid experience with natural language processing (NLP) techniques, including building conversational AI, entity recognition, and text generation models.
- Model monitoring and retraining: Expertise in setting up automated retraining pipelines, monitoring for drift, and ensuring the continuous performance of deployed models.
- Experience with cloud platforms like Azure Machine Learning Studio, Google Vertex AI, or similar cloud-based AI/ML tools.

Preferred Skills:
- Experience building or integrating conversational AI agents using platforms like Microsoft Bot Framework, Rasa, or Dialogflow.
- Familiarity with AI-driven business process automation and RPA integration using AI/ML models.
- Knowledge of advanced AI-driven process optimization tools and techniques, including AI orchestration for enterprise workflows.
- Experience with containerization technologies (e.g., Docker, Kubernetes) for scalable AI/ML model deployment.
- Certification as an Azure AI Engineer Associate or Google Professional Machine Learning Engineer, or a relevant MLOps certification, is a plus.
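By way of illustration, a minimal sketch of the drift monitoring this role describes: compare a feature's live distribution against its training baseline with a two-sample Kolmogorov-Smirnov test. SciPy/NumPy and the 0.05 threshold are assumptions, not part of the listing.

```python
# Hypothetical drift check: flag a feature when its live distribution diverges
# from the training baseline. The alpha threshold is illustrative.
import numpy as np
from scipy.stats import ks_2samp

def feature_drifted(train_values, live_values, alpha=0.05):
    stat, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha, stat

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    baseline = rng.normal(0.0, 1.0, 5_000)    # training-time feature values
    production = rng.normal(0.3, 1.0, 5_000)  # shifted live values
    drifted, stat = feature_drifted(baseline, production)
    print(f"drift={drifted}, KS statistic={stat:.3f}")
```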

Posted 3 weeks ago

Apply

3.0 - 5.0 years

10 - 14 Lacs

Pune

Work from Office


Job Title: GCP Data Engineer, AS
Location: Pune, India
Corporate Title: Associate

Role Description: An Engineer is responsible for designing and developing entire engineering solutions to accomplish business goals. Key responsibilities include ensuring that solutions are well architected, with maintainability and ease of testing built in from the outset, and that they can be integrated successfully into the end-to-end business process flow. They will have gained significant experience through multiple implementations and have begun to develop both depth and breadth in several engineering competencies. They have extensive knowledge of design and architectural patterns, provide engineering thought leadership within their teams, and play a role in mentoring and coaching less experienced engineers.

What we'll offer you: As part of our flexible scheme, here are just some of the benefits you'll enjoy:
- Best-in-class leave policy
- Gender-neutral parental leave
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complimentary health screening for ages 35 and above

Your key responsibilities:
- Design, develop, and maintain data pipelines using Python and SQL on GCP (see the sketch after this listing).
- Apply Agile methodologies and ETL, ELT, data movement, and data processing skills.
- Work with Cloud Composer to manage and process batch data jobs efficiently.
- Develop and optimize complex SQL queries for data analysis, extraction, and transformation.
- Develop and deploy Google Cloud services using Terraform.
- Implement CI/CD pipelines using GitHub Actions.
- Consume and host REST APIs using Python.
- Monitor and troubleshoot data pipelines, resolving any issues in a timely manner.
- Ensure team collaboration using Jira, Confluence, and other tools.
- Quickly learn new and existing technologies; strong problem-solving skills.
- Write advanced SQL and Python scripts.
- Certification as a Google Cloud Professional Data Engineer is an added advantage.

Your skills and experience:
- 6+ years of IT experience as a hands-on technologist.
- Proficient in Python for data engineering and in SQL.
- Hands-on experience with GCP Cloud Composer, Dataflow, BigQuery, Cloud Functions, and Cloud Run; GKE is good to have.
- Hands-on experience hosting and consuming REST APIs.
- Proficient in Terraform (HashiCorp).
- Experienced with GitHub, GitHub Actions, and CI/CD.
- Experience automating ETL testing using Python and SQL.
- Good to have: API knowledge, Bitbucket.

How we'll support you: Training and development to help you excel in your career, coaching and support from experts in your team, a culture of continuous learning to aid progression, and a range of flexible benefits that you can tailor to suit your needs.
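As a hedged illustration of the Cloud Composer work above, a minimal Airflow 2.x-style DAG that runs a scheduled BigQuery job. The project, dataset, SQL, and schedule are hypothetical, not this employer's pipeline.

```python
# Illustrative sketch: a Cloud Composer (Airflow) DAG running a nightly
# BigQuery aggregation. All names below are made up for the example.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="daily_orders_aggregate",   # hypothetical pipeline name
    schedule_interval="0 2 * * *",     # 02:00 daily batch window
    start_date=datetime(2024, 1, 1),
    catchup=False,
) as dag:
    aggregate = BigQueryInsertJobOperator(
        task_id="aggregate_orders",
        configuration={
            "query": {
                "query": "SELECT order_date, SUM(amount) AS total "
                         "FROM `my-project.sales.orders` GROUP BY order_date",
                "destinationTable": {
                    "projectId": "my-project",
                    "datasetId": "sales",
                    "tableId": "orders_daily",
                },
                "writeDisposition": "WRITE_TRUNCATE",
                "useLegacySql": False,
            }
        },
    )
```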

Posted 3 weeks ago

Apply

6.0 - 11.0 years

5 - 15 Lacs

Bengaluru

Work from Office


The Senior Database Administrator (DBA) will be responsible for the management, maintenance, and migration of secure databases on cloud and on-prem platforms. The primary goal is to provide an always-available, secure database tier with high performance, for both backend data and frontend application accessibility for end users. The database administrator should communicate and troubleshoot problems effectively, and have the sound technical skills and administrative aptitude to design, build, operate, secure, manage, and maintain our company databases. William O'Neil India is part of the O'Neil companies, a family of businesses dedicated to providing industry-leading financial services and information. Our professional teams are passionate about stock market research and the development of services that support all O'Neil brands.

Responsibilities
• Design, develop, install, tune, deploy, secure, migrate, and upgrade DBMS installations
• Monitor database performance; identify and resolve database issues (see the sketch after this listing)
• Migrate databases across cloud and on-prem platforms with minimal or no downtime
• Provide guidance and support to developers on design, code review, and SQL query performance tuning
• HA/DR setup, replication, database encryption, and index and filegroup management
• Monitor database systems for performance and capacity constraints
• Regularly liaise with onshore and offshore managers, developers, operations, and system and database administrators
• Suggest changes and improvements for managing, maintaining, and securing databases
• Explore new database tools and stay apprised of emerging technologies and trends
• Be available for on-call and weekend support as needed

Database Administrator Skills and Qualifications
• Working knowledge of database programming languages (MSSQL/PostgreSQL/MySQL)
• Knowledge of database backup and recovery, security, and performance monitoring standards
• Understanding of database concepts: ACID properties, normal forms, DDL/DML, transaction logging, log shipping, mirroring, high availability, etc.
• Understanding of relational and dimensional data modelling
• Experience developing migration plans to move on-premises databases to the cloud (such as AWS/Azure)
• Excellent communication skills with attention to detail and a problem-solving attitude
• Ability to adapt to new process or technology changes within the organization

Educational and Experience Requirements
• Bachelor's degree in computer science or a related information technology field, with 6 to 8 years of experience in database administration or development
• Proficient knowledge of the workings and maintenance of database technologies (MSSQL, PostgreSQL, MySQL, etc.)
• Working experience with database cloud services (AWS, Azure, GCP), including migration
• Relevant database certifications on cloud technologies such as AWS and Azure
• Knowledge of Windows Server, Linux systems, and cloud environments
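A small illustrative sketch in the spirit of the monitoring duties above: sample connection states and long-running queries from PostgreSQL. psycopg2, the PG_DSN environment variable, and the 5-minute threshold are assumptions for the example.

```python
# Hypothetical health check: summarize connection states and surface
# long-running active queries from pg_stat_activity.
import os
import psycopg2

def check_activity(dsn):
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute("SELECT state, count(*) FROM pg_stat_activity GROUP BY state;")
        for state, n in cur.fetchall():
            print(f"connections {state or 'unknown'}: {n}")
        cur.execute("""
            SELECT pid, now() - query_start AS runtime, left(query, 60)
            FROM pg_stat_activity
            WHERE state = 'active' AND now() - query_start > interval '5 minutes';
        """)
        for pid, runtime, query in cur.fetchall():
            print(f"slow query pid={pid} runtime={runtime}: {query}")

if __name__ == "__main__":
    check_activity(os.environ["PG_DSN"])  # e.g. "host=... dbname=... user=..."
```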

Posted 3 weeks ago

Apply

2.0 years

27 - 42 Lacs

Pune

Work from Office


The Role: We're hiring a Software Engineer to join a team delivering groundbreaking software solutions. In this role, you will collaborate closely with cross-functional teams, including data scientists and product managers, to build intuitive solutions that transform how clients experience AI and ML in the application and elevate their interaction with financial data. Come join us!

What You'll Do
- Design and deliver secure, event-driven AI applications that provide responsive, impactful chat experiences powered by LLMs.
- Implement and maintain engineering solutions by writing well-designed, testable code.
- Build scalable and resilient systems with a focus on safety, privacy, and real-time performance.
- Document software functionality, system design, and project plans, including clean, readable code with comments.
- Collaborate across Addepar with product teams and other stakeholders to deliver seamless, AI-powered client experiences.

Who You Are
- Proficient with Python, Java, or similar.
- Experienced with AWS, Azure, or GCP cloud deployment.
- Experienced with streaming data platforms and event-driven architecture.
- Able to write software that processes, aggregates, and computes on top of large amounts of data efficiently.
- Able to engage with collaborators at all levels on a technical level.
- A strong ownership mentality, striving to take on the most important problems.
- Knowledge of front-end development is a plus.

Posted 3 weeks ago

Apply

5.0 - 8.0 years

10 - 17 Lacs

Pune, Chennai, Mumbai (All Areas)

Work from Office


Role & responsibilities: ETL testing; automation in any of Java, Python, or Selenium; cloud (AWS/Azure/GCP); SQL.

Posted 3 weeks ago

Apply

6.0 - 25.0 years

15 - 70 Lacs

Navi Mumbai, Pune, Bengaluru

Work from Office


Navi Mumbai, Pune, Bengaluru | 6 - 25 years | GCP Cloud, Java, Spring Boot, Microservices, Java Programming, GKE Cluster, J2EE, JEE, SQL, Google Cloud Services, Google Cloud Platform

Posted 3 weeks ago

Apply

12.0 - 17.0 years

13 - 18 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office


Designation: Technical Architect
- 12+ years of experience working in Java and relevant technologies.
- Guide customers in designing and creating new architectures.
- Significant software development experience, with expertise in Java and knowledge of the latest Java 9 features.
- Strong knowledge of microservices design patterns and architecture.
- Must have experience with GCP cloud.
- Excellent knowledge of Spring and Spring Boot, with a proven track record of using Spring Boot to build cloud-native microservices.
- Knowledge of synchronous and event-driven integration patterns between services.
- Experience with multi-threading, collections, etc.
- Thorough experience writing high-quality code with fully automated unit test coverage (JUnit, Mockito, etc.).
- Extensive experience defining and applying design standards appropriate to the solution.
- Working experience with various CI/CD tools.
- Designing data models for different types of database solutions: Oracle and MongoDB.
- Working experience with web services (REST, SOAP) and/or microservices.
- Experience with Kafka and XML.
- Deep knowledge of OOP, data structures, and algorithms.
- Working knowledge of other DevOps tools, container technologies (Docker, Kubernetes, etc.), and cloud.
- Good knowledge of build tools (like Maven), automated testing (like Cucumber), and building apps that meet all NFRs.
- Understanding of and experience with building GCP cloud-native applications.
- Working experience creating high-performing applications, including profiling and tuning to boost performance.
- Experience in unit testing, TDD/BDD, and Scrum/Agile.
- Understanding of cloud infrastructures and operating procedures.

Posted 3 weeks ago

Apply

5.0 - 10.0 years

17 - 30 Lacs

Noida

Remote


JD - Required skills:
- 5+ years of industry experience in data engineering support and enhancement.
- Proficient in Google Cloud Platform (GCP) services such as Dataflow, BigQuery, Cloud Storage, and Pub/Sub.
- Strong understanding of data pipeline architectures and ETL processes.
- Experience with Python for data processing.
- Knowledge of SQL and experience with relational databases.
- Familiarity with version control systems like Git.
- Ability to analyze and troubleshoot complex data pipeline issues.
- Software engineering experience in optimizing data pipelines to improve performance and reliability.
- Continuously optimize data pipeline efficiency, reduce operational costs, and reduce the number of issues/failures.
- Automate repetitive tasks in data processing and management.
- Experience in monitoring and alerting for data pipelines (see the sketch after this listing).
- Continuously improve data pipeline reliability through analysis and testing.
- Perform SLA-oriented monitoring for critical pipelines, provide suggestions, and implement them for SLA adherence after business approval, if needed.
- Monitor the performance and reliability of GCP data pipelines, Informatica ETL workflows, MDM, and Control-M jobs, and maintain their infrastructure reliability.
- Conduct post-incident reviews and implement improvements for data pipelines.
- Develop and maintain documentation for data pipeline systems and processes.
- Excellent communication and documentation skills; strong problem-solving and analytical skills.
- Open to working in a 24x7 shift.
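A hedged sketch of the pipeline monitoring described above: scan recent BigQuery jobs and report failures, the kind of signal an SLA alert would consume. The google-cloud-bigquery library, application-default credentials, and the project id are assumptions.

```python
# Hypothetical monitoring sketch: report failed BigQuery jobs in a lookback window.
from datetime import datetime, timedelta, timezone

from google.cloud import bigquery

def failed_jobs(project_id, lookback_hours=24):
    client = bigquery.Client(project=project_id)
    since = datetime.now(timezone.utc) - timedelta(hours=lookback_hours)
    failures = []
    for job in client.list_jobs(min_creation_time=since, all_users=True):
        if job.state == "DONE" and job.error_result:
            failures.append((job.job_id, job.error_result.get("message", "")))
    return failures

if __name__ == "__main__":
    for job_id, message in failed_jobs("my-project"):  # hypothetical project id
        print(f"FAILED {job_id}: {message}")
```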

Posted 3 weeks ago

Apply

6.0 - 11.0 years

8 - 13 Lacs

Hyderabad

Work from Office


GCP Data Engineer: BigQuery, SQL, Python, Talend ETL programming; GCP or any cloud technology.

Job Description: Experienced GCP data engineer with BigQuery, SQL, Python, and Talend ETL programming on GCP or any cloud technology. Good experience building pipelines of GCP components to load data into BigQuery and Cloud Storage buckets (see the sketch after this listing). Excellent data analysis skills. Good written and oral communication skills. Self-motivated and able to work independently.
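For illustration, a minimal sketch of the load pattern this listing describes: CSV files in a Cloud Storage bucket loaded into a BigQuery table. Bucket, dataset, and table names are hypothetical; google-cloud-bigquery and default credentials are assumed.

```python
# Hypothetical sketch: batch-load CSVs from GCS into BigQuery.
from google.cloud import bigquery

def load_csv_to_bq(uri, table_id):
    client = bigquery.Client()
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,   # header row
        autodetect=True,       # infer schema for the sketch; pin it in production
        write_disposition="WRITE_APPEND",
    )
    load_job = client.load_table_from_uri(uri, table_id, job_config=job_config)
    load_job.result()          # wait for completion, raising on failure
    print(f"loaded {client.get_table(table_id).num_rows} rows into {table_id}")

if __name__ == "__main__":
    load_csv_to_bq("gs://my-bucket/exports/*.csv", "my-project.analytics.events")
```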

Posted 3 weeks ago

Apply

3.0 - 6.0 years

5 - 10 Lacs

Mumbai

Work from Office


Role & responsibilities: L2 SRE / Site Reliability Engineer with GCP cloud experience.
Experience: 3 to 5 years
Location: Mumbai (work from office)
Notice period: Immediate to 15 days
- Proficiency in GCP and monitoring.
- Strong knowledge of Linux and PostgreSQL.
- Experience in database management, troubleshooting, RCA, and application deployment using a cloud platform.
- Ability to create SLA reports and provide on-call support to clients.
Qualification: B.E. / B.Tech / MCA / BCA / B.Sc. IT
Interested candidates, please share your resume with ruvina.m@futurzhr.com along with: total experience, current CTC, expected CTC, current location, and readiness to relocate to Mumbai.
Note: Only female candidates can apply.
Thanks, Futurz Staffing Solutions Pvt. Ltd.

Posted 3 weeks ago

Apply

6.0 - 10.0 years

1 - 1 Lacs

Chennai

Work from Office


Location: Chennai / DLF IT Park. Notice period: Immediate to 45 days. We are looking for a Senior Java Engineer with Cloud/GCP experience and a strong understanding of APIs, data structures, and optimization. Candidates should have a very good grasp of core Java fundamentals and be good at Kubernetes and orchestration.

Posted 3 weeks ago

Apply

7.0 - 12.0 years

10 - 14 Lacs

Bengaluru

Work from Office


Project Role: Cloud Platform Engineer
Project Role Description: Designs, builds, tests, and deploys cloud application solutions that integrate cloud and non-cloud infrastructure. Can deploy infrastructure and platform environments, and creates a proof of architecture to test architecture viability, security, and performance.
Must-have skills: Infrastructure as Code (IaC)
Good-to-have skills: Google Cloud Platform architecture
Minimum 7.5 years of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a GCP Cloud Infrastructure Engineer, you will build and maintain scalable, reliable, and secure cloud infrastructure on Google Cloud Platform, and implement best practices for resource provisioning, monitoring, and cost management.

Roles & Responsibilities:
- Build and maintain scalable, reliable, and secure cloud infrastructure on Google Cloud Platform; implement best practices for resource provisioning, monitoring, and cost management.
- Expected to be an SME; collaborate with and manage the team to perform; responsible for team decisions.
- Engage with multiple teams and contribute to key decisions; provide solutions to problems for the immediate team and across multiple teams.
- Lead the implementation of new cloud technologies.
- Develop and maintain cloud infrastructure-as-code templates.
- Ensure compliance with security and governance policies.

Professional & Technical Skills:
- Must have: proficiency in Infrastructure as Code (IaC), cloud services, and GCP.
- Good to have: experience with Google Cloud Platform architecture.
- Strong understanding of cloud architecture principles.
- Experience deploying and managing cloud services.
- Knowledge of automation tools like Terraform or Ansible.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Infrastructure as Code (IaC).
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.

Posted 3 weeks ago

Apply

4.0 - 8.0 years

6 - 10 Lacs

Mumbai, Bengaluru, Delhi / NCR

Work from Office


Role Overview: We are looking for a Cloud Engineer who can work across the entire web development stack to build robust, scalable, and user-centric applications for our client. You will play a critical role in designing and delivering systems end to end, from sleek, responsive UIs to resilient backend services and APIs. Whether you're just starting your career or bringing seasoned expertise, we're looking for hands-on problem solvers with a passion for clean code and great product experiences.

Responsibilities:
- Design and implement secure, scalable, and cost-optimized cloud infrastructure using AWS/GCP/Azure services.
- Automate infrastructure provisioning and management using Infrastructure as Code (IaC) tools like Terraform or CloudFormation.
- Set up and maintain CI/CD pipelines for smooth and reliable software delivery.
- Monitor system performance, availability, and incident response using modern observability tools (e.g., CloudWatch, Datadog, ELK, Prometheus).
- Ensure robust cloud security by managing IAM policies, encryption, and secrets.
- Collaborate closely with backend engineers, data teams, and DevOps to support deployment and system stability.
- Optimize cloud costs and usage through rightsizing, autoscaling, and resource cleanups (see the sketch after this listing).

Required Skills:
- Hands-on experience with cloud platforms: AWS, Azure, or GCP (preferably AWS).
- Proficiency in IaC tools: Terraform, CloudFormation, or Pulumi.
- Experience with containerization and orchestration: Docker and Kubernetes.
- Strong scripting skills in Bash, Python, or similar.
- Deep understanding of networking, firewalls, load balancing, and VPC setups.
- Experience with CI/CD tools (GitHub Actions, Jenkins, GitLab CI) and Git workflows.
- Familiarity with monitoring and logging stacks (Prometheus, Grafana, ELK, etc.).
- Sound knowledge of cloud security, IAM, and access-control best practices.

Nice to Have:
- Exposure to serverless architecture (AWS Lambda, GCP Cloud Functions).
- Experience in multi-cloud or hybrid-cloud environments.
- Familiarity with cloud-native database services (e.g., RDS, DynamoDB, Firestore).
- Awareness of compliance frameworks (SOC 2, GDPR, HIPAA) and cloud governance practices.

Educational Qualifications: Bachelor's or Master's degree in Computer Science, Information Systems, or a related technical field.

Locations: Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
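A hedged sketch of the resource-cleanup duty above: report unattached EBS volumes, a common source of silent spend. boto3 and the region are assumptions; the script only reports, and deletion is deliberately left to a reviewed step.

```python
# Hypothetical cost-cleanup sketch: find EBS volumes not attached to any instance.
import boto3

def unattached_volumes(region="ap-south-1"):
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_volumes")
    orphans = []
    for page in paginator.paginate(
        Filters=[{"Name": "status", "Values": ["available"]}]
    ):
        for vol in page["Volumes"]:
            orphans.append((vol["VolumeId"], vol["Size"]))
    return orphans

if __name__ == "__main__":
    for volume_id, size_gib in unattached_volumes():
        print(f"{volume_id}: {size_gib} GiB unattached")
```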

Posted 3 weeks ago

Apply

10.0 - 14.0 years

35 - 50 Lacs

Hyderabad

Work from Office


Skills: Azure DevOps, AWS EKS, Terraform, Python, Kubernetes, SRE

Job Summary: We are seeking an experienced Infra Ops Specialist with 10 to 14 years of experience to join our team. The ideal candidate will have expertise in Kubernetes, Azure DevOps, AWS EKS, Elastic Beanstalk, automation, Python, AWS, GCP, SRE, Ansible, and Terraform. This role requires a strong background in Consumer Lending. The work model is hybrid, the shift is day, and no travel is required.

Responsibilities:
- Lead the design and implementation of infrastructure solutions using Kubernetes, AWS EKS, and Elastic Beanstalk.
- Oversee the deployment and management of applications using Azure DevOps and Terraform.
- Provide automation solutions using Python and Ansible to streamline operations.
- Ensure the reliability and availability of infrastructure through SRE practices.
- Collaborate with cross-functional teams to support Consumer Lending applications.
- Monitor and optimize cloud infrastructure on AWS and GCP.
- Develop and maintain CI/CD pipelines for efficient software delivery.
- Implement security best practices and compliance standards in cloud environments.
- Troubleshoot and resolve infrastructure issues in a timely manner.
- Document infrastructure configurations and operational procedures.
- Mentor junior team members and provide technical guidance.
- Stay updated on the latest industry trends and technologies, and contribute to the continuous improvement of infrastructure processes.

Qualifications:
- Extensive experience with Kubernetes, AWS EKS, and Elastic Beanstalk.
- Strong expertise in Azure DevOps and Terraform.
- Proficiency in automation using Python and Ansible.
- A solid understanding of SRE practices.
- Experience with the AWS and GCP cloud platforms.
- A background in the Consumer Lending domain.

Posted 3 weeks ago

Apply

3.0 - 8.0 years

20 - 25 Lacs

Bengaluru

Hybrid


Senior Software Engineer
HUB 2 Building of SEZ Towers, Karle Town Center, Nagavara, Bengaluru, Karnataka, India, 560045
Hybrid - Full-time

Company Description: When you are one of us, you get to run with the best. For decades, we've been helping marketers from the world's top brands personalize experiences for millions of people with our cutting-edge technology, solutions, and services. Epsilon's best-in-class identity gives brands a clear, privacy-safe view of their customers, which they can use across our suite of digital media, messaging, and loyalty solutions. We process 400+ billion consumer actions each day and hold many patents of proprietary technology, including real-time modeling languages and consumer privacy advancements. Thanks to the work of every employee, Epsilon India is now Great Place to Work-Certified™. Epsilon has also been consistently recognized as industry-leading by Forrester, Adweek, and the MRC. Positioned at the core of Publicis Groupe, Epsilon is a global company with more than 8,000 employees around the world. For more information, visit epsilon.com/apac or our LinkedIn page, or see how Epsilon transforms marketing with 1 View, 1 Vision and 1 Voice at https://www.epsilon.com/apac/youniverse.

Job Description

About BU: The Product team forms the crux of our powerful platforms and connects millions of customers to the product magic. This team of innovative thinkers develops and builds products that help Epsilon be a market differentiator. They map the future and set new standards for our products, empowered with industry best practices and ML and AI capabilities. The team passionately delivers intelligent end-to-end solutions and plays a key role in Epsilon's success story.

Why we are looking for you: At Epsilon, we run on our people's ideas. It's how we solve problems and exceed expectations. Our team is now growing, and we are on the lookout for talented individuals who always raise the bar by constantly challenging themselves and are experts in building customized solutions in the digital marketing space.

What you will enjoy in this role: Are you someone who wants to work with cutting-edge technology and enable marketers to create data-driven, omnichannel consumer experiences through data platforms? Then you could be exactly who we are looking for. Apply today and be part of a creative, innovative, and talented team that's not afraid to push boundaries or take risks.

What will you do? We seek Software Engineers with experience building and scaling services in on-premises and cloud environments. As a Senior or Lead Software Engineer on the Epsilon Attribution/Forecasting Product Development team, you will design, implement, and optimize data processing solutions using Scala, Spark, and Hadoop. You will collaborate with cross-functional teams to deploy big data solutions on our on-premises and cloud infrastructure, and build, schedule, and maintain workflows. You will perform data integration and transformation, troubleshoot issues, document processes, communicate technical concepts clearly, and continuously enhance our attribution and forecasting engines. Strong written and verbal communication skills (in English) are required to facilitate work across multiple countries and time zones, along with a good understanding of Agile methodologies (Scrum).

Qualifications
- Strong experience (3-8 years) in Python or Scala and extensive experience with Apache Spark for big data processing, designing, developing, and maintaining scalable on-prem and cloud environments, especially on AWS and, as needed, GCP.
- Proficiency in performance tuning of Spark jobs: optimizing resource usage, shuffling, partitioning, and caching for maximum efficiency in big data environments (see the sketch after this listing).
- In-depth understanding of the Hadoop ecosystem, including HDFS, YARN, and MapReduce.
- Expertise in designing and implementing scalable, fault-tolerant data pipelines with end-to-end monitoring and alerting.
- Hands-on experience with Python, including using Python to develop infrastructure modules.
- A solid grasp of database systems and the ability to write efficient SQL (RDBMS/warehouse) to handle terabytes of data.
- Familiarity with design patterns and best practices for efficient data modelling, partitioning strategies, and sharding for distributed systems, plus experience building, scheduling, and maintaining DAG workflows.
- End-to-end ownership of the definition, development, and documentation of software objectives, business requirements, deliverables, and specifications in collaboration with stakeholders.
- Experience with Git (or equivalent source control) and a solid understanding of unit and integration test frameworks.
- Ability to collaborate with stakeholders and teams to understand requirements and develop working solutions, to work within tight deadlines, and to effectively prioritize and execute tasks in a high-pressure environment.
- Ability to mentor junior staff.

Advantageous to have experience with:
- Databricks for unified data analytics, including Databricks Notebooks, Delta Lake, and catalogues.
- The ELK (Elasticsearch, Logstash, Kibana) stack for real-time search, log analysis, and visualization.
- A strong analytics background, including deriving actionable insights from large datasets and supporting data-driven decision-making.
- Data visualization tools like Tableau, Power BI, or Grafana.
- Docker for containerization and Kubernetes for orchestration.
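As an illustration of the Spark tuning themes above (shuffle partitions, repartitioning, caching), a minimal PySpark sketch. Paths, column names, and the partition count are hypothetical, not Epsilon's pipeline.

```python
# Illustrative PySpark sketch: explicit partitioning and caching around
# an expensive, reused dataset.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder.appName("attribution-sketch")
    .config("spark.sql.shuffle.partitions", "200")  # size to the cluster, not the default
    .getOrCreate()
)

events = spark.read.parquet("s3://my-bucket/events/")  # hypothetical input
events = events.repartition(200, "customer_id")        # co-locate keys before grouping
events.cache()                                         # reused twice below, so cache once

daily = events.groupBy("customer_id", "event_date").agg(F.count("*").alias("touches"))
totals = events.groupBy("customer_id").agg(F.sum("amount").alias("spend"))

daily.write.mode("overwrite").parquet("s3://my-bucket/out/daily/")
totals.write.mode("overwrite").parquet("s3://my-bucket/out/totals/")
```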

Posted 3 weeks ago

Apply

4.0 - 6.0 years

0 - 1 Lacs

Hyderabad

Work from Office


Roles and Responsibilities
- Design, develop, and maintain scalable and efficient cloud infrastructure on Google Cloud Platform (GCP) using Kubernetes Engine, Cloud Run, and other services.
- Collaborate with cross-functional teams to identify business requirements and design solutions that meet those needs.
- Develop automation scripts using Ansible or Terraform to deploy applications on GCP.
- Troubleshoot issues related to application deployment, networking, storage, and compute resources.
- Ensure compliance with security best practices and company policies.

Posted 3 weeks ago

Apply

1.0 - 3.0 years

2 - 3 Lacs

Thanjavur

Remote


Stack required: React, Node.js, MongoDB, and any cloud. The candidate should be proficient in React, Node.js, MongoDB, and cloud. As a Full Stack Developer, you will be responsible for developing and maintaining web applications across the entire stack.
Role: Software Development - Other. Employment type: Full time, permanent. Role category: Software Development. Tamil Nadu candidates most preferred.

Posted 4 weeks ago

Apply

5.0 - 10.0 years

14 - 18 Lacs

Hyderabad, Chennai

Work from Office


Job Role: Python Developer with GCP
Location: Hyderabad, Bangalore
Experience: 5+ years

Posted 4 weeks ago

Apply

8.0 - 12.0 years

18 - 25 Lacs

Chennai

Remote


Role & responsibilities
- Senior App Developer with a minimum of 5+ years of experience developing applications.
- Experience creating REST services (APIs) with microservice design patterns in mind.
- Familiarity with Spanner / SQL Server.
- Experience creating integration services to consume and process data from other systems.
- Familiarity with GCP Pub/Sub / AMQP is needed (see the sketch after this listing).
- Able to create CI/CD pipelines for the above services (Jenkins / Terraform).
- Able to create relevant documentation for each of the services.
- Perform design reviews and code reviews.
- Experience providing real-time knowledge transfer to the UPS team.
- Establish UPS best practices in design, coding, and testing.
- Provide best practices for performance tuning.
- Familiarity with testing, automation, and BDD testing frameworks is desired.
- Provide best practices for distributed logging and for aggregating logs so that all services have appropriate instrumentation.

Work mode: Remote
Work time: 6 PM - 3 AM IST (for the EST time zone)
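A hedged sketch of the GCP Pub/Sub integration mentioned above: publish a JSON event to a topic for downstream services. The project, topic, and attribute names are hypothetical; google-cloud-pubsub and default credentials are assumed.

```python
# Hypothetical integration sketch: publish a JSON event to a Pub/Sub topic.
import json

from google.cloud import pubsub_v1

def publish_event(project_id, topic_id, payload):
    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path(project_id, topic_id)
    data = json.dumps(payload).encode("utf-8")
    future = publisher.publish(topic_path, data, origin="tracking-service")
    return future.result()  # blocks until the server acks, returns the message id

if __name__ == "__main__":
    msg_id = publish_event("my-project", "package-events",
                           {"tracking_id": "1Z999", "status": "IN_TRANSIT"})
    print(f"published message {msg_id}")
```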

Posted 4 weeks ago

Apply

3.0 - 5.0 years

15 - 18 Lacs

Mumbai, Pune

Work from Office


Job Description
Designation: AI Engineer
Experience range: 2-5 years of relevant experience
Location: Pune

Key Responsibilities:
- Develop and deploy scalable AI-powered applications using the Python stack.
- Leverage cutting-edge AI/ML technologies such as LangChain, LangGraph, AutoGen, Phidata, CrewAI, Hugging Face, OpenAI APIs, PyTorch, TensorFlow, and other advanced frameworks to build innovative AI applications.
- Write clean, efficient, and well-documented code adhering to best practices.
- Build and manage robust APIs for integrating AI solutions into applications (see the sketch after this listing).
- Research and experiment with emerging technologies to discover new AI-driven use cases.
- Deploy and manage AI solutions in cloud environments (AWS, Azure, GCP), ensuring security, scalability, and performance.
- Collaborate with product managers, engineers, and UX/UI designers to define AI application requirements and align them with business objectives.
- Apply MLOps principles to streamline AI model deployment, monitoring, and optimization.
- Solve complex problems using foundational knowledge of generative AI, machine learning, and data processing techniques.
- Contribute to the continuous improvement of development processes and practices.
- Resolve production issues by conducting effective troubleshooting and root cause analysis (RCA) within SLA.
- Work with operations teams to support product deployment and issue resolution.

Requirements
Educational background: Bachelor's or Master's degree in Computer Science or related fields with a strong academic track record. Must have graduated from NIT, IIIT, IIT, or BITS Pilani colleges only.

Experience and technical skills:
- 3+ years of hands-on experience building and deploying AI applications on the Python stack.
- Strong knowledge of Python and related frameworks.
- Good knowledge of several AI/ML and agentic frameworks and platforms, such as LangChain, LangGraph, AutoGen, Hugging Face, CrewAI, OpenAI APIs, PyTorch, and TensorFlow.
- Experience with AI/ML workflows, including data preparation, model deployment, and optimization.
- Proficiency in building and consuming RESTful APIs for connecting AI models with web applications.
- Knowledge of MLOps tools and practices, including model lifecycle management, CI/CD for AI, and model monitoring.
- Familiarity with cloud platforms like AWS, Azure, or Google Cloud, including containerization (Docker) and orchestration (Kubernetes).
- Experience with CI/CD pipelines, version control (Git), and automation frameworks.
- Strong understanding of algorithms, AI/ML fundamentals, and data preprocessing techniques.

Soft skills:
- Passion for exploring, experimenting with, and implementing emerging AI technologies.
- A self-starter who can work independently and collaboratively in a fast-paced environment.
- Excellent problem-solving and analytical abilities to tackle complex challenges.
- Effective communication skills to explain AI concepts and strategies to stakeholders.

Why join us? Be part of a forward-thinking company revolutionizing air and port cargo logistics with AI. Collaborate with a team passionate about innovation and excellence. Gain exposure to cutting-edge AI technologies and frameworks. Enjoy opportunities for professional growth and upskilling in the latest tech stack. Receive a competitive compensation and benefits package.
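For illustration, a minimal sketch of "building RESTful APIs that connect AI models with web applications": a FastAPI endpoint wrapping a chat-completion call. The route, model name, and openai>=1.0 SDK usage are assumptions, not this employer's stack.

```python
# Hypothetical sketch: a REST endpoint that forwards a question to an LLM.
# Assumes OPENAI_API_KEY is set in the environment.
from fastapi import FastAPI
from openai import OpenAI
from pydantic import BaseModel

app = FastAPI()
client = OpenAI()  # reads OPENAI_API_KEY from the environment

class ChatRequest(BaseModel):
    question: str

@app.post("/chat")
def chat(req: ChatRequest):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": req.question}],
    )
    return {"answer": resp.choices[0].message.content}

# Run locally with: uvicorn app:app --reload
```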

Posted 4 weeks ago

Apply

5.0 - 10.0 years

12 - 22 Lacs

Hyderabad, Pune, Bengaluru

Work from Office


Open roles: GCP Looker Developer; GCP Business Data Analyst; Data Engineer (GCP, BigQuery); GCP Application Developer.
Required candidate profile: good knowledge of the corresponding GCP area for each role.

Posted 4 weeks ago

Apply

4.0 - 7.0 years

12 - 17 Lacs

Bengaluru

Work from Office


Reporting to: Senior Manager
Reporting location: Bangalore
Working location: Bangalore

Summary of role and objectives: As a Systems Engineer, you will be responsible for managing and maintaining our infrastructure, with a focus on automation and permanent fixes for issues. You must be able to work in and adapt to a fluid, fast-paced environment, and you must have strong communication, collaboration, and technical skills.

Role & objectives (specific assignments):
- Set up and maintain cloud and on-prem infrastructure; maintain existing systems in the cloud.
- Monitor the health of all cloud and on-prem systems and check that all systems are compliant with company standards.
- Administer cloud permissions and support segregated sites/zones.
- Work in a team of engineers distributed over multiple locations.
- Apply good knowledge of Windows/Linux troubleshooting.
- Monitor platforms from an availability, security, and performance perspective.
- Share your knowledge and expertise within the company.

Required proficiency:
- Windows Server, Linux, and patching.
- Understanding of TCP/IP, DNS, and DHCP.
- Knowledge of scripting languages like PowerShell, Bash, or Python to automate tasks (see the sketch after this listing).
- Basic knowledge of database systems like SQL Server, MySQL, or Oracle.
- Experience with monitoring tools like Site24x7 or similar.
- Hands-on experience with Azure cloud; GCP is good to have.
- Good knowledge of VMware vSphere (ESXi and vCenter).
- Experience with an incident/request management tool like ServiceNow.
- Understanding of applications and their dependencies in terms of services and infra resources.
- Ability to bring in new technologies, methods, and ideas, and to guide the team in implementing them.

Experience required: 7+ years of infrastructure management, with a minimum of 2 years in Azure/GCP cloud.
Primary skills: Azure (mandatory, with Windows and Linux OS skills); scripting/automation; VMware (good to have; waived if the candidate is very strong in Azure).
Secondary skills: Windows/Linux patching; Active Directory knowledge.

Soft skills:
- Problem-solving: ability to diagnose and resolve server-related issues efficiently.
- Communication: clear communication skills to interact with team members and users.
- Attention to detail: a keen eye to ensure system configurations are correct.
- Time management: ability to prioritize tasks and manage time effectively.
- Teamwork: a collaborative mindset to work with other IT professionals and departments.

Certifications: Microsoft Certified: Windows Server (various levels); VMware Certified Professional (VCP); cloud certifications in Azure/GCP public cloud (good to have); ITIL Foundation V3 highly appreciated, but not mandatory.
Languages: Fluent English.
Qualifications: Bachelor's degree or comparable education.
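A small standard-library sketch of the task automation this role calls for: warn when any mount crosses a disk-usage threshold. The mount list and the 85% threshold are illustrative; a real version would feed an alerting tool instead of printing.

```python
# Hypothetical automation sketch: flag mounts above a usage threshold.
import shutil

MOUNTS = ["/", "/var", "/home"]  # hypothetical mount points to watch
THRESHOLD = 0.85

for mount in MOUNTS:
    usage = shutil.disk_usage(mount)
    used_fraction = usage.used / usage.total
    status = "WARN" if used_fraction > THRESHOLD else "ok"
    print(f"{status} {mount}: {used_fraction:.0%} used")
```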

Posted 4 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
