
6805 Ansible Jobs - Page 43

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Job Description
We are seeking a highly skilled WebLogic Administrator with deep expertise in WLST scripting, DevOps practices, and containerization technologies. The ideal candidate will be responsible for administering and modernizing WebLogic environments across on-premises and Oracle Cloud Infrastructure (OCI) setups. This includes scripting with WLST, containerizing applications with Docker, managing deployments via Kubernetes, and integrating with CI/CD pipelines.

Key Responsibilities
- Administer and optimize Oracle WebLogic Server environments in both on-prem and cloud (OCI) contexts.
- Perform WebLogic upgrades to the latest supported versions (e.g., 14.x).
- Automate WebLogic domain creation, configuration, and deployments using WLST (WebLogic Scripting Tool).
- Containerize WebLogic applications using Docker and orchestrate them via Kubernetes.
- Manage WebLogic domains using the WebLogic Kubernetes Operator, including domain resource configuration and lifecycle events.
- Design and implement secure, scalable Docker networking for clustered WebLogic environments.
- Deploy and manage infrastructure on OCI and/or on-prem, including use of Kubernetes (OKE preferred).
- Build and maintain CI/CD pipelines using Jenkins, GitLab CI, GitHub Actions, or OCI DevOps for seamless deployment and updates.
- Implement monitoring, logging, and alerting solutions to support operational excellence.
- Maintain documentation and provide knowledge transfer to teams as needed.

Required Skills
- Mandatory: 7+ years of hands-on experience with Oracle WebLogic Server administration.
- Mandatory: Proven expertise in WLST scripting for automating WebLogic tasks (domain creation, deployments, configurations).
- Mandatory: Experience with WebLogic version upgrades (e.g., 11g/12c to 14.x).
- Proficiency with Docker, container networking, and Kubernetes orchestration.
- Hands-on experience managing WebLogic domains via the WebLogic Kubernetes Operator.
- Strong knowledge of DevOps tools and practices, including CI/CD, automation, and configuration management.
- Scripting skills (WLST, Shell, Python) and experience with Infrastructure-as-Code tools (Terraform, Ansible).
- Familiarity with both on-prem infrastructure and OCI platforms.

Preferred Qualifications
- Experience deploying and managing workloads on Oracle Cloud Infrastructure (OCI), especially using OKE.
- OCI certifications (e.g., Architect Associate, DevOps Professional) and a WebLogic Administration certification.
- Experience with Helm, ingress controllers, and Kubernetes networking.
- Familiarity with observability tools like Prometheus, Grafana, OCI Monitoring, or ELK.
- Understanding of security practices for WebLogic, containers, and hybrid environments.
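For illustration of the scripted-deployment work this posting describes, here is a minimal WLST-style sketch. It is not taken from the employer's tooling: the admin URL, credentials, application name, and cluster target are placeholders, and connect()/deploy()/disconnect() are WLST built-ins available only when the script is run through wlst.sh.

```python
# Run with: $ORACLE_HOME/oracle_common/common/bin/wlst.sh deploy_app.py
# connect(), deploy(), disconnect() are WLST shell built-ins (Jython), not importable
# in plain Python. All names below are illustrative placeholders.

ADMIN_URL = 't3://adminhost.example.com:7001'   # hypothetical Admin Server endpoint
APP_NAME = 'orders-app'                         # hypothetical application name
APP_PATH = '/u01/apps/orders.war'               # hypothetical archive location

connect('weblogic', 'welcome1', ADMIN_URL)      # authenticate against the Admin Server
deploy(APP_NAME, APP_PATH, targets='cluster1')  # deploy the archive to a managed cluster
print('deployment of %s requested' % APP_NAME)
disconnect()
```

In practice the same pattern is extended with edit()/startEdit()/save()/activate() sessions for configuration changes, which is the kind of automation the role's WLST requirement points at.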

Posted 1 week ago

Apply

3.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Source: LinkedIn

Job Description: DevOps Engineer (Onsite – Mumbai)
Location: Onsite – Mumbai, India
Experience: 3+ years

About the Role:
We are looking for a skilled and proactive DevOps Engineer with 3+ years of hands-on experience to join our engineering team onsite in Mumbai. The ideal candidate will have a strong background in CI/CD pipelines, cloud platforms (AWS, Azure, or GCP), infrastructure as code, and containerization technologies like Docker and Kubernetes. This role involves working closely with development, QA, and operations teams to automate, optimize, and scale our infrastructure.

Key Responsibilities:
- Design, implement, and maintain CI/CD pipelines for efficient and reliable deployment processes
- Manage and monitor cloud infrastructure (preferably AWS, Azure, or GCP)
- Build and manage Docker containers, and orchestrate with Kubernetes or similar tools
- Implement and manage Infrastructure as Code using tools like Terraform, CloudFormation, or Ansible
- Automate configuration management and system provisioning tasks
- Monitor system health and performance using tools like Prometheus, Grafana, ELK, etc.
- Ensure system security through best practices and proactive monitoring
- Collaborate with developers to ensure smooth integration and deployment

Must-Have Skills:
- 3+ years of DevOps or SRE experience in a production environment
- Experience with cloud services (AWS, GCP, Azure)
- Strong knowledge of CI/CD tools such as Jenkins, GitLab CI, CircleCI, or similar
- Proficiency with Docker and container orchestration (Kubernetes preferred)
- Hands-on with Terraform, Ansible, or other infrastructure-as-code tools
- Good understanding of Linux/Unix system administration
- Familiarity with version control systems (Git) and branching strategies
- Knowledge of scripting languages (Bash, Python, or Go)

Good-to-Have (Optional):
- Exposure to monitoring/logging stacks: ELK, Prometheus, Grafana
- Experience in securing cloud environments
- Knowledge of Agile and DevOps culture
- Understanding of microservices and service mesh tools (Istio, Linkerd)
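As a small illustration of the build-and-deploy pipeline step this role revolves around, a Python helper that shells out to the Docker and kubectl CLIs. The image name, registry, and deployment name are invented placeholders; a real pipeline would inject them from CI variables.

```python
import subprocess

IMAGE = "registry.example.com/team/webapp:1.0.0"  # hypothetical image tag

def run(cmd):
    """Run one pipeline step, echoing the command and failing fast on errors."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)  # raises CalledProcessError so the pipeline stops

run(["docker", "build", "-t", IMAGE, "."])    # build the application image
run(["docker", "push", IMAGE])                # publish it for the deploy stage
# Roll the new tag out to a (hypothetical) Kubernetes deployment named 'webapp'.
run(["kubectl", "set", "image", "deployment/webapp", f"webapp={IMAGE}"])
```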

Posted 1 week ago

Apply

4.0 years

0 Lacs

India

On-site

Source: LinkedIn

- 4+ years of experience as a Network Engineer
- Hands-on technical experience: building labs, networking skills, understanding of security, and some automation
- Experience creating test scripts (any language is acceptable)
- Experience with load balancing (preferably with F5)
- Knowledge of VMware
- Exposure to NGINX
- Knowledge of Terraform and/or Ansible
- The role is roughly 50% hands-on lab-building work
- Excellent communication skills, both oral and written, with the ability to learn to write webinars, blogs, etc.
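Since the listing asks for test scripting against load-balanced lab setups, here is a hedged sketch of a smoke test in Python. The VIP URL is a placeholder and the "X-Backend" response header is hypothetical; it assumes the pool members are configured to report which node served each request.

```python
import collections
import requests

VIP_URL = "https://app.lab.example.com/health"  # hypothetical load-balanced endpoint
hits = collections.Counter()

for _ in range(20):
    # verify=False only because lab certificates are often self-signed
    resp = requests.get(VIP_URL, timeout=5, verify=False)
    hits[resp.headers.get("X-Backend", "unknown")] += 1

# A healthy pool should show traffic spread across members rather than one node.
print(dict(hits))
```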

Posted 1 week ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Source: Indeed

Job Information
Date Opened: 06/11/2025
Job Type: Full time
Industry: IT Services
City: Bangalore, Hyderabad
State/Province: Karnataka
Country: India
Zip/Postal Code: 560001

Job Description
The team is looking for a Senior DevOps Engineer with deep expertise in building and managing Infrastructure as Code (IaC) on AWS using Terraform and Terragrunt. You will play a key role in architecting scalable, secure, and highly available cloud infrastructure to support our engineering teams and production environments. This is a hands-on role that involves collaborating with developers, architects, and operations teams to automate infrastructure provisioning, optimize cloud resources, and enforce DevOps best practices.

Responsibilities
- Design, develop, and manage scalable cloud infrastructure on AWS using Terraform and Terragrunt.
- Create and maintain reusable, modular, and version-controlled IaC modules.
- Implement and enforce infrastructure standards, security best practices, and compliance policies.
- Build and manage CI/CD pipelines to automate infrastructure provisioning and deployment.
- Collaborate with engineering teams to ensure seamless integration between infrastructure and applications.
- Monitor, troubleshoot, and optimize cloud environments for cost, performance, and reliability.
- Provide guidance on DevOps best practices and mentor junior team members.
- Stay current with AWS service updates and evolving DevOps tooling.

Requirements
- 6+ years of experience in DevOps, Cloud Engineering, or Infrastructure Engineering.
- Proven experience developing infrastructure using Terraform and Terragrunt in production environments.
- Strong expertise in AWS, including services like EC2, VPC, S3, RDS, IAM, ECS, Lambda, CloudWatch, etc.
- Experience with multi-account AWS setups and account governance (e.g., AWS Organizations, Control Tower).
- Solid understanding of infrastructure design patterns, networking, and cloud security.
- Knowledge of infrastructure testing frameworks (e.g., Terratest, Checkov, or InSpec).
- Exposure to containerization and orchestration (Docker, ECS, EKS, Kubernetes).
- Familiarity with configuration management tools (Ansible, Chef, Puppet).
- Experience with CI/CD tools such as GitHub Actions, GitLab CI, CircleCI, or Jenkins.
- Understanding of cost optimization and cloud cost analysis tools.
- Proficiency in scripting languages like Bash, Python, or Go for automation tasks.
- Familiarity with version control (Git), monitoring, and logging tools.
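To illustrate the Terragrunt-driven workflow the posting emphasizes, a hedged Python wrapper that plans every module in a repo. The live/<account>/<env> layout and the use of the --terragrunt-non-interactive flag are assumptions about a typical Terragrunt setup, not details from this employer.

```python
import subprocess
from pathlib import Path

# Hypothetical repo layout: one Terragrunt module per directory under live/<account>/<env>/.
LIVE_DIR = Path("live")

for hcl in sorted(LIVE_DIR.glob("*/*/terragrunt.hcl")):
    workdir = hcl.parent
    print(f"==> planning {workdir}")
    # 'terragrunt plan' wraps 'terraform plan' for that module; check=True stops on failure.
    subprocess.run(
        ["terragrunt", "plan", "--terragrunt-non-interactive"],
        cwd=workdir,
        check=True,
    )
```

In a CI pipeline the same loop is typically replaced by `terragrunt run-all plan`, but an explicit loop like this makes per-module failures easier to report.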

Posted 1 week ago

Apply

0.0 - 10.0 years

0 Lacs

Pune, Maharashtra

On-site

Source: Indeed

Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Pune, Maharashtra, India; Mumbai, Maharashtra, India; Hyderabad, Telangana, India; Bengaluru, Karnataka, India.

Minimum qualifications:
- Bachelor's degree in Computer Science, Engineering, Mathematics, a related field, or equivalent practical experience.
- 10 years of experience in Big Data, Data Warehouse, Data Modelling, Data Mining and Hadoop.
- Experience in building multi-tier, high availability applications with modern technologies such as NoSQL, MongoDB, SparkML, and TensorFlow.
- Experience in GCP.

Preferred qualifications:
- Experience in Big Data, information retrieval, data mining, or Machine Learning.
- Experience with IaC and CI/CD tools like Terraform, Ansible, Jenkins, etc.
- Experience architecting, developing software, or Big Data solutions in virtualized environments.
- Experience with encryption techniques like symmetric, asymmetric, HSMs, and envelope encryption.
- Ability to implement secure key storage using a Key Management System.

About the job
The Google Cloud Platform team helps customers transform and build what's next for their business — all with technology built in the cloud. Our products are developed for security, reliability and scalability, running the full stack from infrastructure to applications to devices and hardware. Our teams are dedicated to helping our customers — developers, small and large businesses, educational institutions and government agencies — see the benefits of our technology come to life. As part of an entrepreneurial team in this rapidly growing business, you will play a key role in understanding the needs of our customers and help shape the future of how businesses of all sizes use technology to connect with customers, employees and partners.

As a Data and Analytics Consultant, you will guide customers on how to ingest, store, process, analyze, and explore/visualize data on the Google Cloud Platform. You will work on data migrations and modernization projects, and with customers to design data processing systems, develop data pipelines optimized for scaling, and troubleshoot potential platform/product challenges. You will have an understanding of data governance and security controls, and will travel to customer sites to deploy solutions and deliver workshops to educate and empower customers. You will work with Product Management and Product Engineering teams to build and drive excellence in products.

Google Cloud accelerates every organization's ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google's cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.

Responsibilities
- Interact with stakeholders to translate customer requirements into recommendations for appropriate solution architectures and advisory services.
- Engage with technical leads and partners to lead high velocity migration and modernization to Google Cloud Platform (GCP).
- Help Google Cloud customers with current infrastructure assessment, design and architect goal infrastructure, develop a migration plan, and deliver technical workshops to educate them on GCP.
- Participate in technical and design discussions with technical teams to speed up the adoption process and ensure best practices during implementation.
- Develop and implement data quality and governance procedures to ensure the accuracy and reliability of data.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
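As an illustration of the GCP data-pipeline work this consultant role describes, a minimal BigQuery query in Python. The project, dataset, and table names are placeholders, and the client assumes application-default credentials are already configured.

```python
from google.cloud import bigquery

# Hypothetical project/dataset/table; auth comes from GOOGLE_APPLICATION_CREDENTIALS.
client = bigquery.Client(project="my-gcp-project")

query = """
    SELECT country, COUNT(*) AS orders
    FROM `my-gcp-project.sales.orders`
    WHERE order_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
    GROUP BY country
    ORDER BY orders DESC
"""

# query() submits the job; result() waits for it and yields rows.
for row in client.query(query).result():
    print(row.country, row.orders)
```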

Posted 1 week ago

Apply

0.0 - 10.0 years

0 Lacs

Mumbai, Maharashtra

On-site

Source: Indeed

Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Pune, Maharashtra, India; Mumbai, Maharashtra, India; Hyderabad, Telangana, India; Bengaluru, Karnataka, India.

Minimum qualifications:
- Bachelor's degree in Computer Science, Engineering, Mathematics, a related field, or equivalent practical experience.
- 10 years of experience in Big Data, Data Warehouse, Data Modelling, Data Mining and Hadoop.
- Experience in building multi-tier, high availability applications with modern technologies such as NoSQL, MongoDB, SparkML, and TensorFlow.
- Experience in GCP.

Preferred qualifications:
- Experience in Big Data, information retrieval, data mining, or Machine Learning.
- Experience with IaC and CI/CD tools like Terraform, Ansible, Jenkins, etc.
- Experience architecting, developing software, or Big Data solutions in virtualized environments.
- Experience with encryption techniques like symmetric, asymmetric, HSMs, and envelope encryption.
- Ability to implement secure key storage using a Key Management System.

About the job
The Google Cloud Platform team helps customers transform and build what's next for their business — all with technology built in the cloud. Our products are developed for security, reliability and scalability, running the full stack from infrastructure to applications to devices and hardware. Our teams are dedicated to helping our customers — developers, small and large businesses, educational institutions and government agencies — see the benefits of our technology come to life. As part of an entrepreneurial team in this rapidly growing business, you will play a key role in understanding the needs of our customers and help shape the future of how businesses of all sizes use technology to connect with customers, employees and partners.

As a Data and Analytics Consultant, you will guide customers on how to ingest, store, process, analyze, and explore/visualize data on the Google Cloud Platform. You will work on data migrations and modernization projects, and with customers to design data processing systems, develop data pipelines optimized for scaling, and troubleshoot potential platform/product challenges. You will have an understanding of data governance and security controls, and will travel to customer sites to deploy solutions and deliver workshops to educate and empower customers. You will work with Product Management and Product Engineering teams to build and drive excellence in products.

Google Cloud accelerates every organization's ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google's cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.

Responsibilities
- Interact with stakeholders to translate customer requirements into recommendations for appropriate solution architectures and advisory services.
- Engage with technical leads and partners to lead high velocity migration and modernization to Google Cloud Platform (GCP).
- Help Google Cloud customers with current infrastructure assessment, design and architect goal infrastructure, develop a migration plan, and deliver technical workshops to educate them on GCP.
- Participate in technical and design discussions with technical teams to speed up the adoption process and ensure best practices during implementation.
- Develop and implement data quality and governance procedures to ensure the accuracy and reliability of data.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.

Posted 1 week ago

Apply

0.0 - 4.0 years

0 Lacs

Gurugram, Haryana

On-site

Source: Indeed

Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Hyderabad, Telangana, India; Bengaluru, Karnataka, India; Gurgaon, Haryana, India; Pune, Maharashtra, India.

Minimum qualifications:
- Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field, or equivalent practical experience.
- 4 years of experience in developing and troubleshooting data processing algorithms.
- Experience coding with one or more programming languages (e.g., Java, Python) and Big Data technologies such as Scala, Spark, and Hadoop frameworks.
- Experience with one public cloud provider, such as GCP.

Preferred qualifications:
- Experience architecting, developing software, or internet-scale production-grade Big Data solutions in virtualized environments.
- Experience in Big Data, information retrieval, data mining, or Machine Learning.
- Experience with data warehouses, technical architectures, infrastructure components, Extract, Transform and Load (ETL) / Extract, Load and Transform (ELT) and reporting/analytic tools, environments, and data structures.
- Experience in building multi-tier applications with modern technologies such as NoSQL, MongoDB, SparkML, and TensorFlow.
- Experience with Infrastructure as Code and Continuous Integration/Continuous Deployment tools like Terraform, Ansible, Jenkins.
- Understanding of one database type, with the ability to write complex SQL queries.

About the job
The Google Cloud Platform team helps customers transform and build what's next for their business — all with technology built in the cloud. Our products are developed for security, reliability and scalability, running the full stack from infrastructure to applications to devices and hardware. Our teams are dedicated to helping our customers — developers, small and large businesses, educational institutions and government agencies — see the benefits of our technology come to life. As part of an entrepreneurial team in this rapidly growing business, you will play a key role in understanding the needs of our customers and help shape the future of how businesses of all sizes use technology to connect with customers, employees and partners.

As a Strategic Cloud Data Engineer, you will guide customers on how to ingest, store, process, analyze, and explore/visualize data on the Google Cloud Platform. You will work on data migrations and modernization projects, and with customers to design data processing systems, develop data pipelines optimized for scaling, and troubleshoot potential platform/product challenges. You will have an understanding of data governance and security controls. You will travel to customer sites to deploy solutions and deliver workshops to educate and empower customers. Additionally, you will work with Product Management and Product Engineering teams to build and constantly drive excellence in our products.

Google Cloud accelerates every organization's ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google's cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.

Responsibilities
- Interact with stakeholders to translate complex customer requirements into recommendations for appropriate solution architectures and advisory services.
- Engage with technical leads and partners to lead high velocity migration and modernization to Google Cloud Platform (GCP).
- Design, migrate/build, and operationalize data storage and processing infrastructure using Cloud native products.
- Develop and implement data quality and governance procedures to ensure the accuracy and reliability of data.
- Take various project requirements and organize them into clear goals and objectives, and create a work breakdown structure to manage internal and external stakeholders.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
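For the Spark/Hadoop side of this data engineering role, a small PySpark batch-aggregation sketch of the kind of pipeline described. The GCS bucket paths and column names are invented for illustration; on Dataproc the gs:// connector is available out of the box.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-pipeline").getOrCreate()

# Ingest raw JSON events from a (hypothetical) landing bucket.
orders = spark.read.json("gs://my-raw-bucket/orders/*.json")

# Aggregate to one row per day and country.
daily = (
    orders
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date", "country")
    .agg(F.count("*").alias("orders"), F.sum("amount").alias("revenue"))
)

# Write a partitioned, columnar output for downstream analytics.
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "gs://my-curated-bucket/daily_orders/"
)
spark.stop()
```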

Posted 1 week ago

Apply

0.0 - 10.0 years

0 Lacs

Hyderabad, Telangana

On-site

Source: Indeed

Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Pune, Maharashtra, India; Mumbai, Maharashtra, India; Hyderabad, Telangana, India; Bengaluru, Karnataka, India.

Minimum qualifications:
- Bachelor's degree in Computer Science, Engineering, Mathematics, a related field, or equivalent practical experience.
- 10 years of experience in Big Data, Data Warehouse, Data Modelling, Data Mining and Hadoop.
- Experience in building multi-tier, high availability applications with modern technologies such as NoSQL, MongoDB, SparkML, and TensorFlow.
- Experience in GCP.

Preferred qualifications:
- Experience in Big Data, information retrieval, data mining, or Machine Learning.
- Experience with IaC and CI/CD tools like Terraform, Ansible, Jenkins, etc.
- Experience architecting, developing software, or Big Data solutions in virtualized environments.
- Experience with encryption techniques like symmetric, asymmetric, HSMs, and envelope encryption.
- Ability to implement secure key storage using a Key Management System.

About the job
The Google Cloud Platform team helps customers transform and build what's next for their business — all with technology built in the cloud. Our products are developed for security, reliability and scalability, running the full stack from infrastructure to applications to devices and hardware. Our teams are dedicated to helping our customers — developers, small and large businesses, educational institutions and government agencies — see the benefits of our technology come to life. As part of an entrepreneurial team in this rapidly growing business, you will play a key role in understanding the needs of our customers and help shape the future of how businesses of all sizes use technology to connect with customers, employees and partners.

As a Data and Analytics Consultant, you will guide customers on how to ingest, store, process, analyze, and explore/visualize data on the Google Cloud Platform. You will work on data migrations and modernization projects, and with customers to design data processing systems, develop data pipelines optimized for scaling, and troubleshoot potential platform/product challenges. You will have an understanding of data governance and security controls, and will travel to customer sites to deploy solutions and deliver workshops to educate and empower customers. You will work with Product Management and Product Engineering teams to build and drive excellence in products.

Google Cloud accelerates every organization's ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google's cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.

Responsibilities
- Interact with stakeholders to translate customer requirements into recommendations for appropriate solution architectures and advisory services.
- Engage with technical leads and partners to lead high velocity migration and modernization to Google Cloud Platform (GCP).
- Help Google Cloud customers with current infrastructure assessment, design and architect goal infrastructure, develop a migration plan, and deliver technical workshops to educate them on GCP.
- Participate in technical and design discussions with technical teams to speed up the adoption process and ensure best practices during implementation.
- Develop and implement data quality and governance procedures to ensure the accuracy and reliability of data.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.

Posted 1 week ago

Apply

0.0 years

0 Lacs

Hyderabad, Telangana

On-site

Source: Indeed

Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Gurugram, Haryana, India; Bengaluru, Karnataka, India; Hyderabad, Telangana, India.

Minimum qualifications:
- Bachelor's degree in Engineering, Computer Science, a related field, or equivalent practical experience.
- Experience coding with one or more programming languages (e.g., Java, C/C++, Python).
- Experience troubleshooting technical issues for internal/external partners or customers.

Preferred qualifications:
- Experience in distributed data processing frameworks and modern-age investigative and transactional data stores.
- Experience working with/on data warehouses, including data warehouse technical architectures, infrastructure components, ETL/ELT and reporting/analytic tools, environments, and data structures.
- Experience in big data, information retrieval, and data mining.
- Experience in building multi-tier, high availability applications with modern technologies such as NoSQL and MongoDB.
- Experience with Infrastructure as Code (IaC) and Continuous Integration/Continuous Delivery (CI/CD) tools like Terraform, Ansible, Jenkins, etc.
- Understanding of at least one database type, with the ability to write complex SQL queries.

About the job
The Google Cloud Platform team helps customers transform and build what's next for their business — all with technology built in the cloud. Our products are developed for security, reliability and scalability, running the full stack from infrastructure to applications to devices and hardware. Our teams are dedicated to helping our customers — developers, small and large businesses, educational institutions and government agencies — see the benefits of our technology come to life. As part of an entrepreneurial team in this rapidly growing business, you will play a key role in understanding the needs of our customers and help shape the future of how businesses of all sizes use technology to connect with customers, employees and partners.

As a Strategic Cloud Data Engineer, you will guide customers on how to ingest, store, process, analyze, and explore/visualize data on the Google Cloud Platform. You will work on data migrations and modernization projects, and with customers to design large-scale data processing systems, develop data pipelines optimized for scaling, and troubleshoot potential platform/product challenges. You will have an in-depth understanding of data governance and security controls. You will travel to customer sites to deploy solutions and deliver workshops to educate and empower customers. Additionally, you will work closely with Product Management and Product Engineering teams to build and constantly drive excellence in our products.

Google Cloud accelerates every organization's ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google's cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.

Responsibilities
- Interact with stakeholders to translate complex customer requirements into recommendations for appropriate solution architectures and advisory services.
- Engage with technical leads and partners to lead high velocity migration and modernization to Google Cloud Platform (GCP).
- Design, migrate/build, and operationalize data storage and processing infrastructure using Cloud native products.
- Develop and implement data quality and governance procedures to ensure the accuracy and reliability of data.
- Take various project requirements and organize them into clear goals and objectives, and create a work breakdown structure to manage internal and external stakeholders.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.

Posted 1 week ago

Apply

0.0 - 10.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Source: Indeed

Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Pune, Maharashtra, India; Mumbai, Maharashtra, India; Hyderabad, Telangana, India; Bengaluru, Karnataka, India.

Minimum qualifications:
- Bachelor's degree in Computer Science, Engineering, Mathematics, a related field, or equivalent practical experience.
- 10 years of experience in Big Data, Data Warehouse, Data Modelling, Data Mining and Hadoop.
- Experience in building multi-tier, high availability applications with modern technologies such as NoSQL, MongoDB, SparkML, and TensorFlow.
- Experience in GCP.

Preferred qualifications:
- Experience in Big Data, information retrieval, data mining, or Machine Learning.
- Experience with IaC and CI/CD tools like Terraform, Ansible, Jenkins, etc.
- Experience architecting, developing software, or Big Data solutions in virtualized environments.
- Experience with encryption techniques like symmetric, asymmetric, HSMs, and envelope encryption.
- Ability to implement secure key storage using a Key Management System.

About the job
The Google Cloud Platform team helps customers transform and build what's next for their business — all with technology built in the cloud. Our products are developed for security, reliability and scalability, running the full stack from infrastructure to applications to devices and hardware. Our teams are dedicated to helping our customers — developers, small and large businesses, educational institutions and government agencies — see the benefits of our technology come to life. As part of an entrepreneurial team in this rapidly growing business, you will play a key role in understanding the needs of our customers and help shape the future of how businesses of all sizes use technology to connect with customers, employees and partners.

As a Data and Analytics Consultant, you will guide customers on how to ingest, store, process, analyze, and explore/visualize data on the Google Cloud Platform. You will work on data migrations and modernization projects, and with customers to design data processing systems, develop data pipelines optimized for scaling, and troubleshoot potential platform/product challenges. You will have an understanding of data governance and security controls, and will travel to customer sites to deploy solutions and deliver workshops to educate and empower customers. You will work with Product Management and Product Engineering teams to build and drive excellence in products.

Google Cloud accelerates every organization's ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google's cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.

Responsibilities
- Interact with stakeholders to translate customer requirements into recommendations for appropriate solution architectures and advisory services.
- Engage with technical leads and partners to lead high velocity migration and modernization to Google Cloud Platform (GCP).
- Help Google Cloud customers with current infrastructure assessment, design and architect goal infrastructure, develop a migration plan, and deliver technical workshops to educate them on GCP.
- Participate in technical and design discussions with technical teams to speed up the adoption process and ensure best practices during implementation.
- Develop and implement data quality and governance procedures to ensure the accuracy and reliability of data.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.

Posted 1 week ago

Apply

0.0 - 4.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Source: Indeed

Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Hyderabad, Telangana, India; Bengaluru, Karnataka, India; Gurgaon, Haryana, India; Pune, Maharashtra, India.

Minimum qualifications:
- Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field, or equivalent practical experience.
- 4 years of experience in developing and troubleshooting data processing algorithms.
- Experience coding with one or more programming languages (e.g., Java, Python) and Big Data technologies such as Scala, Spark, and Hadoop frameworks.
- Experience with one public cloud provider, such as GCP.

Preferred qualifications:
- Experience architecting, developing software, or internet-scale production-grade Big Data solutions in virtualized environments.
- Experience in Big Data, information retrieval, data mining, or Machine Learning.
- Experience with data warehouses, technical architectures, infrastructure components, Extract, Transform and Load (ETL) / Extract, Load and Transform (ELT) and reporting/analytic tools, environments, and data structures.
- Experience in building multi-tier applications with modern technologies such as NoSQL, MongoDB, SparkML, and TensorFlow.
- Experience with Infrastructure as Code and Continuous Integration/Continuous Deployment tools like Terraform, Ansible, Jenkins.
- Understanding of one database type, with the ability to write complex SQL queries.

About the job
The Google Cloud Platform team helps customers transform and build what's next for their business — all with technology built in the cloud. Our products are developed for security, reliability and scalability, running the full stack from infrastructure to applications to devices and hardware. Our teams are dedicated to helping our customers — developers, small and large businesses, educational institutions and government agencies — see the benefits of our technology come to life. As part of an entrepreneurial team in this rapidly growing business, you will play a key role in understanding the needs of our customers and help shape the future of how businesses of all sizes use technology to connect with customers, employees and partners.

As a Strategic Cloud Data Engineer, you will guide customers on how to ingest, store, process, analyze, and explore/visualize data on the Google Cloud Platform. You will work on data migrations and modernization projects, and with customers to design data processing systems, develop data pipelines optimized for scaling, and troubleshoot potential platform/product challenges. You will have an understanding of data governance and security controls. You will travel to customer sites to deploy solutions and deliver workshops to educate and empower customers. Additionally, you will work with Product Management and Product Engineering teams to build and constantly drive excellence in our products.

Google Cloud accelerates every organization's ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google's cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.

Responsibilities
- Interact with stakeholders to translate complex customer requirements into recommendations for appropriate solution architectures and advisory services.
- Engage with technical leads and partners to lead high velocity migration and modernization to Google Cloud Platform (GCP).
- Design, migrate/build, and operationalize data storage and processing infrastructure using Cloud native products.
- Develop and implement data quality and governance procedures to ensure the accuracy and reliability of data.
- Take various project requirements and organize them into clear goals and objectives, and create a work breakdown structure to manage internal and external stakeholders.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.

Posted 1 week ago

Apply

2.0 - 4.0 years

0 Lacs

Kochi, Kerala, India

On-site

Source: LinkedIn

Number of Open Positions: 1
Experience: 2 to 4 years
Industry: IT Product & Services and IT Consulting
Employment Type: Full-time
Work Location: Smart City, Kochi, Kerala
Shift timing: based on projects, typically day/evening shift

Technical Competencies
- Experience in managing and automating large-scale VMware, OpenStack, and/or Kubernetes environments.
- Mandatory expertise in VMware products (ESXi and vCenter), along with strong Kubernetes (K8s) knowledge.
- Proficient in scripting (Python/shell).
- Good understanding of DevOps principles and practices, including experience with automation tools such as Ansible and Terraform.
- Proven work experience as an OpenStack/VMware/Kubernetes Engineer.
- Ability to apply security patches and perform VM hardening.

Soft Skill Competencies (expectation based on role)
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Certifications: VMware Certified Professional (VCP) - Data Center Virtualization and Nutanix Certified Professional (NCP) (preferred).
- Experience: 3-5 years of experience in VMware and/or Nutanix administration.

Role Description
- Design, implement, and maintain VMware vSphere environments for clients.
- Provide expertise on NSX networking solutions to ensure secure and scalable infrastructure design.
- Design, deploy, and maintain scalable and reliable VMware, cloud (AWS, GCP, Azure, and IBM), and Kubernetes environments.
- Design, develop, and implement on-prem OpenStack/VMware/Kubernetes solutions tailored to the specific needs of our organization.
- Collaborate with business unit owners to gather requirements and translate them into scalable OpenStack and VMware (Kubernetes architecture standard) designs.
- Implement automation tools and frameworks (CI/CD pipelines) to streamline infrastructure operations and enable self-service workload (VMs/Pods) lifecycle management for developers.
- Design and implement observability best practices.
- Design and implement VMware and Nutanix solutions to meet business requirements.

(ref:hirist.tech)
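To illustrate the kind of Kubernetes automation this role calls for, a hedged sketch using the official Python client to flag pods that are not ready. It assumes a kubeconfig on disk; nothing here is specific to this employer's clusters.

```python
from kubernetes import client, config

# Assumes ~/.kube/config; inside a cluster you would call config.load_incluster_config().
config.load_kube_config()
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    statuses = pod.status.container_statuses or []          # may be None while scheduling
    if not all(cs.ready for cs in statuses):
        print(f"NOT READY: {pod.metadata.namespace}/{pod.metadata.name} "
              f"phase={pod.status.phase}")
```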

Posted 1 week ago

Apply

5.0 - 8.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site

Source: LinkedIn

Key Responsibilities
- Lead the deployment, configuration, and ongoing administration of Hortonworks, Cloudera, and Apache Hadoop/Spark ecosystems.
- Maintain and monitor core components of the Hadoop ecosystem, including Zookeeper, Kafka, NiFi, HDFS, YARN, Redis, Spark, and HBase.
- Take charge of the day-to-day running of Hadoop clusters using tools like Ambari, Cloudera Manager, or other monitoring tools, ensuring continuous availability and optimal performance.
- Manage and provide expertise in HBase clusters and Solr clusters, including capacity planning and performance tuning.
- Perform installation, configuration, and troubleshooting of Linux operating systems and network components relevant to big data environments.
- Develop and implement automation scripts using Unix shell/Ansible scripting to streamline operational tasks and improve efficiency.
- Manage and maintain KVM virtualization environments.
- Oversee clusters, storage solutions, backup strategies, and disaster recovery plans for big data infrastructure.
- Implement and manage comprehensive monitoring tools to proactively identify and address system anomalies and performance bottlenecks.
- Work closely with database teams, network teams, and application teams to ensure high availability and expected performance of all big data applications.
- Interact directly with customers at their premises to provide technical support and resolve issues related to system and Hadoop administration.
- Coordinate closely with internal QA and Engineering teams to facilitate issue resolution within promised timelines.

Skills & Qualifications
- Experience: 5-8 years of strong individual contributor experience as a DevOps, System, and/or Hadoop administrator.
- Domain expertise: proficient in Linux administration; extensive experience with Hadoop infrastructure and administration; strong knowledge of and experience with Solr; proficiency in configuration management tools, particularly Ansible.
- Data ecosystem components: hands-on experience and strong knowledge of managing and maintaining Hortonworks, Cloudera, and Apache Hadoop/Spark ecosystem deployments; core components such as Zookeeper, Kafka, NiFi, HDFS, YARN, Redis, Spark, and HBase; and cluster management tools such as Ambari and Cloudera Manager.
- Scripting: strong scripting skills in one or more of Perl, Python, or shell.
- Infrastructure management: strong experience working with clusters, storage solutions, backup strategies, database management systems, monitoring tools, and disaster recovery.
- Virtualization: experience managing KVM virtualization environments.
- Problem solving: excellent analytical and problem-solving skills, with a methodical approach to debugging complex issues.
- Communication: strong communication skills (verbal and written) with the ability to interact effectively with technical teams and customers.
- Education: Bachelor's or Master's degree in Computer Science, Computer Engineering, or a related field, or equivalent relevant work experience.

(ref:hirist.tech)
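As an illustration of the routine cluster health checks this admin role describes, a small Python wrapper around the standard Hadoop CLI. It assumes the `hdfs` command is on PATH; the exact wording of the report lines varies by Hadoop version, so the markers scanned for here are assumptions.

```python
import subprocess

# 'hdfs dfsadmin -report' is part of the stock Hadoop CLI; we only scan its text output.
report = subprocess.run(
    ["hdfs", "dfsadmin", "-report"],
    capture_output=True, text=True, check=True,
).stdout

# Flag lines that typically indicate trouble (dead DataNodes, replication gaps).
for line in report.splitlines():
    if line.strip().startswith(("Dead datanodes", "Missing blocks", "Under replicated blocks")):
        print("CHECK:", line.strip())
```

A production version would feed these lines into whatever alerting channel the monitoring stack (Ambari, Cloudera Manager, etc.) already uses rather than printing them.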

Posted 1 week ago

Apply

3.0 years

0 Lacs

Dehradun, Uttarakhand, India

On-site

Source: LinkedIn

Job Description
- Install, configure, and maintain Linux servers and workstations.
- Implement and manage system security measures, including user access control, patching, and hardening.
- Monitor system performance and resource utilization, identifying and resolving bottlenecks.
- Manage user accounts, permissions, and access rights.
- Perform regular system backups and implement disaster recovery procedures.
- Troubleshoot hardware and software issues related to Linux systems.
- Automate routine tasks using scripting languages (e.g., Bash, Python).
- Manage and maintain network services such as DNS, DHCP, and NFS.
- Collaborate with development and operations teams to support application deployments and infrastructure needs.
- Document system configurations, procedures, and troubleshooting steps.
- Stay up to date with the latest Linux distributions, security updates, and best practices.

What You'll Bring
- Proven experience as a Linux System Administrator (ideally 3+ years).
- Strong understanding of Linux operating systems (e.g., CentOS, Ubuntu, Red Hat).
- Experience with system installation, configuration, and maintenance.
- Solid knowledge of security principles and best practices for Linux environments.
- Proficiency in scripting languages such as Bash and/or Python for automation.
- Experience with monitoring tools (e.g., Nagios, Zabbix, Prometheus).
- Familiarity with network services (DNS, DHCP, NFS).
- Excellent troubleshooting and problem-solving skills.
- Strong communication and collaboration skills.

Preferred Skills
- Experience with virtualization technologies (e.g., VMware, VirtualBox, KVM).
- Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and their Linux services.
- Experience with configuration management tools (e.g., Ansible, Chef, Puppet).
- Knowledge of containerization technologies (e.g., Docker).
- Experience with log management and analysis tools (e.g., ELK stack).

About Us
We're an international team that specializes in building technology products and then helping brands grow with multi-channel demand generation marketing. We have in-house experience working for Fortune companies, e-commerce brands, technology SaaS companies, and VC-funded startups. We have assisted over a dozen billion-dollar companies with consulting, technology, operations, and digital agency capabilities in managing their unique brand online. We have a fun and friendly work culture that also supports employees personally and professionally. EbizON has many values that are important to our success as a company: integrity, creativity, innovation, mindfulness and teamwork. We thrive on the idea of making life better for people by providing them with peace of mind. The people here love what they do because everyone, from management on down, understands how much it means to live up to one's ideals, and every day feels less stressful knowing each person has somebody cheering them on.

Equal Opportunity Employer
EbizON is committed to providing equal opportunity for all employees, and we will consider any qualified applicant without regard to race or other prohibited characteristics.

Flexible Timings
Flexible working hours are the new normal. We at EbizON believe in giving employees the freedom to choose when and how to work. It helps them thrive and also balance their life better.

Global Clients Exposure
Our goal is to provide excellent customer service, and we want our employees to work closely with clients from around the world. That's why you'll find us collaborating with clients worldwide through Microsoft Teams, Zoom, and other video conferencing tools.

Retreats & Celebrations
With annual retreats, quarterly town halls and festive celebrations, we have a lot of opportunities to get together.

(ref:hirist.tech)
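For the "automate routine tasks" part of this sysadmin role, a minimal Python sketch of a disk-usage check using only the standard library. The mount points and threshold are illustrative; a real check would read them from configuration and feed a monitoring tool such as Nagios or Zabbix.

```python
#!/usr/bin/env python3
import shutil

MOUNTS = ["/", "/var", "/home"]  # hypothetical mount points to watch
THRESHOLD = 85                   # percent used that triggers a warning

for mount in MOUNTS:
    usage = shutil.disk_usage(mount)          # returns total/used/free bytes
    pct = usage.used / usage.total * 100
    status = "WARN" if pct >= THRESHOLD else "ok"
    print(f"{status:4} {mount:6} {pct:5.1f}% used")
```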

Posted 1 week ago

Apply

4.0 years

0 Lacs

Greater Chennai Area

On-site

Source: LinkedIn

Animaker's Growth
Animaker's growth is skyrocketing. We plan to make Animaker the world's go-to place for animation & video. We look for someone who is excited to make an impact, with a constant everyday effort to make a difference in the project, team, company & the industry as a whole. We're out to change the world, one video at a time.

Responsibilities
- Building scalable web applications
- Integrating UI elements with server-side logic
- Handling security and data protection
- Integrating the front end with SQL and NoSQL databases and ensuring its consistency
- Building reusable code and libraries
- Optimizing the code to improve its quality and efficiency

Skill Requirements
- Minimum 4 years of backend development experience in Python
- Excellent programming skills in Python, ORM libraries, and frameworks like Django and Flask
- Experience with tools like Git, Jenkins, Ansible, and Rundeck
- Experience in cross-browser/cross-platform front-end development with HTML/CSS/JavaScript in Angular or similar frameworks
- Proficient in SQL and working with relational databases (e.g., PostgreSQL, MySQL)
- Working knowledge of Elasticsearch for full-text search and indexing
- Experience with containerization using Docker
- Solid understanding of CI/CD pipelines using tools like Jenkins and Docker
- Basic proficiency in JavaScript for integrating backend services with frontend components
- Familiarity with version control systems (Git) and collaborative workflows

(ref:hirist.tech)
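Since the role centres on Python backend APIs with frameworks like Flask, here is a minimal, self-contained Flask sketch of the sort of endpoint work described. The resource name and fields are invented, and the in-memory dict stands in for the relational database a real service would use.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory store standing in for PostgreSQL/MySQL; 'projects' is a hypothetical resource.
PROJECTS = {1: {"id": 1, "name": "Demo video"}}

@app.route("/api/projects", methods=["GET"])
def list_projects():
    return jsonify(list(PROJECTS.values()))

@app.route("/api/projects", methods=["POST"])
def create_project():
    data = request.get_json(force=True)
    new_id = max(PROJECTS) + 1
    PROJECTS[new_id] = {"id": new_id, "name": data.get("name", "Untitled")}
    return jsonify(PROJECTS[new_id]), 201

if __name__ == "__main__":
    app.run(debug=True)  # development server only
```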

Posted 1 week ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Description and Requirements
Hybrid

"At BMC trust is not just a word - it's a way of life!"

We are an award-winning, equal opportunity, culturally diverse, fun place to be. Giving back to the community drives us to be better every single day. Our work environment allows you to balance your priorities, because we know you will bring your best every day. We will champion your wins and shout them from the rooftops. Your peers will inspire, drive, support you, and make you laugh out loud! We help our customers free up time and space to become an Autonomous Digital Enterprise that conquers the opportunities ahead - and are relentless in the pursuit of innovation!

The IZOT product line includes BMC's Intelligent Z Optimization & Transformation products, which help the world's largest companies to monitor and manage their mainframe systems. The modernization of the mainframe is the beating heart of our product line, and we achieve this goal by developing products that improve the developer experience, the mainframe integration, the speed of application development, the quality of the code and the applications' security, while reducing operational costs and risks. We acquired several companies along the way, and we continue to grow, innovate, and perfect our solutions on an ongoing basis.

BMC is looking for a Senior Java Product Developer to join our AMI Cloud family working on complex and distributed software, developing and debugging software products, implementing features, and assisting the firm in assuring product quality. At AMI Cloud, we develop high-scale and performant applications running on both the z/OS mainframe and cloud environments. We care deeply about technology, performance, readable and clean code, and developer productivity.

Here is how, through this exciting role, YOU will contribute to BMC's and your own success:
- Design and develop platform solutions based on Java/J2EE best practices and web standards.
- Participate in all aspects of product development, from requirements analysis to product release.
- Lead features and participate in architecture and design reviews.
- Design enterprise platforms using agile methodology, including detailed design with UML, process flows, sequence diagrams, and pseudo-code level details ensuring solution alignment.
- Apply strong diagnostics, debugging, and troubleshooting skills.
- Work flexible hours, stay up to date with competing technologies, and be passionate about adapting technology to provide business-benefiting solutions while balancing platform limitations.
- Provide complete documentation in the form of commented code, problem status information, and design documents.
- Work on complex problems where analysis of situations or data requires an in-depth evaluation of several factors.
- Be a self-learner, flexible, and able to work in a multi-tasked and dynamic environment.
- Communicate clearly, with a demonstrated ability to explain complex technical concepts to both technical and non-technical audiences.

To ensure you're set up for success, you will bring the following skillset & experience:
- 10+ years of experience with application development using Java, RESTful services, high performance, and multi-threading.
- Familiarity with DevOps tools and concepts such as Infrastructure as Code, Jenkins, Ansible, and Terraform.
- Experience in a web-based environment utilizing React, Angular, server-side rendering, HTML, CSS, JavaScript and TypeScript.
- Knowledge and experience with build tools such as Gradle and Maven.
- Experience working with cloud technologies such as AWS, Azure or GCP.
- Familiarity with modern version control systems such as Git.
- Knowledge of design patterns, object-oriented software development, high-performance code characteristics, SOLID principles of development, testing automation and performance at scale.
- Familiarity with modern Java-based frameworks such as Spring Boot, Quarkus, or Micronaut.

Whilst these are nice to have, our team can help you develop in the following skills:
- CI/CD (Jenkins) environment with popular DevOps tools
- Experience with Agile methodology and use of Atlassian products (Jira, Confluence)
- Familiarity with advanced IDEs such as IntelliJ, Eclipse or VSCode

BMC Software maintains a strict policy of not requesting any form of payment in exchange for employment opportunities, upholding a fair and ethical hiring process. At BMC we believe in pay transparency and have set the midpoint of the salary band for this role at 3,315,400 INR. Actual salaries depend on a wide range of factors that are considered in making compensation decisions, including but not limited to skill sets; experience and training, licensure, and certifications; and other business and organizational needs. The salary listed is just one component of BMC's employee compensation package. Other rewards may include a variable plan and country specific benefits. We are committed to ensuring that our employees are paid fairly and equitably, and that we are transparent about our compensation practices.

(Returnship@BMC) Had a break in your career? No worries. This role is eligible for candidates who have taken a break in their career and want to re-enter the workforce. If your expertise matches the above job, visit https://bmcrecruit.avature.net/returnship to know more and to apply.

Posted 1 week ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Job Description

Key Responsibilities
- Architect and design OpenStack-based private cloud solutions tailored to customer needs.
- Drive infrastructure design across compute, storage, and network components.
- Work with cross-functional teams to integrate OpenStack with existing or new platforms.
- Define and implement best practices for cloud infrastructure, including HA, scalability, and performance tuning.
- Collaborate with DevOps teams to align infrastructure automation with deployment pipelines.
- Lead customer engagements and translate business requirements into technical architectures.
- Ensure compliance, security, and resiliency standards are embedded in all solutions.

Key Skills & Experience
- Minimum 8 years of experience in cloud and infrastructure architecture.
- Strong experience with OpenStack components: Nova, Neutron, Cinder, Glance, Swift, etc.
- Deep domain expertise in at least one of the following:
  - Cloud infrastructure: compute orchestration, virtualization, HA design.
  - Storage: block/object storage solutions (Ceph, Swift, etc.).
  - Networking: SDN, virtual networks, overlay networks, Neutron plugins.
- Proficient in container technologies and orchestration (Kubernetes is a plus).
- Experience with tools like Ansible, Terraform, or other automation frameworks.
- Familiarity with Linux internals, scripting, and system performance monitoring.
- Strong problem-solving, documentation, and customer-facing skills.

(ref:hirist.tech)
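To make the OpenStack side concrete, a hedged inventory sketch using the openstacksdk Python client. The cloud name "private-cloud" is an assumed entry in a local clouds.yaml (the openstacksdk convention); nothing here reflects the customer environments mentioned in the posting.

```python
import openstack

# Assumes a clouds.yaml profile named 'private-cloud' holding auth URL, project and credentials.
conn = openstack.connect(cloud="private-cloud")

print("Servers (Nova):")
for server in conn.compute.servers():
    print(f"  {server.name:30} {server.status}")

print("Networks (Neutron):")
for net in conn.network.networks():
    print(f"  {net.name}")
```

The same connection object exposes block storage (Cinder) and image (Glance) services, which is typically where capacity and HA reviews like those described above start.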

Posted 1 week ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Role Overview

We are looking for experienced DevOps Engineers (4+ years) with a strong background in cloud infrastructure, automation, and CI/CD processes. The ideal candidate will have hands-on experience in building, deploying, and maintaining cloud solutions using Infrastructure-as-Code (IaC) best practices. The role requires expertise in containerization, cloud security, networking, and monitoring tools to optimize and scale enterprise-level applications.

Key Responsibilities: Design, implement, and manage cloud infrastructure solutions on AWS, Azure, or GCP. Develop and maintain Infrastructure-as-Code (IaC) using Terraform, CloudFormation, or similar tools. Implement and manage CI/CD pipelines using tools like GitHub Actions, Jenkins, GitLab CI/CD, BitBucket Pipelines, or AWS CodePipeline. Manage and orchestrate containers using Kubernetes, OpenShift, AWS EKS, AWS ECS, and Docker. Work on cloud migrations, helping organizations transition from on-premises data centers to cloud-based infrastructure. Ensure system security and compliance with industry standards such as SOC 2, PCI, HIPAA, GDPR, and HITRUST. Set up and optimize monitoring, logging, and alerting using tools like Datadog, Dynatrace, AWS CloudWatch, Prometheus, ELK, or Splunk. Automate deployment, configuration, and management of cloud-native applications using Ansible, Chef, Puppet, or similar configuration management tools. Troubleshoot complex networking, Linux/Windows server issues, and cloud-related performance bottlenecks. Collaborate with development, security, and operations teams to streamline the DevSecOps process.

Must-Have Skills: 3+ years of experience in DevOps, cloud infrastructure, or platform engineering. Expertise in at least one major cloud provider: AWS, Azure, or GCP. Strong experience with Kubernetes, ECS, OpenShift, and container orchestration technologies. Hands-on experience in Infrastructure-as-Code (IaC) using Terraform, AWS CloudFormation, or similar tools. Proficiency in scripting/programming languages like Python, Bash, or PowerShell for automation. Strong knowledge of CI/CD tools such as Jenkins, GitHub Actions, GitLab CI/CD, or BitBucket Pipelines. Experience with Linux operating systems (RHEL, SUSE, Ubuntu, Amazon Linux) and Windows Server administration. Expertise in networking (VPCs, Subnets, Load Balancing, Security Groups, Firewalls). Experience in log management and monitoring tools like Datadog, CloudWatch, Prometheus, ELK, Dynatrace. Strong communication skills to work with cross-functional teams and external customers. Knowledge of Cloud Security best practices, including IAM, WAF, GuardDuty, CVE scanning, vulnerability management.

Good-to-Have Skills: Knowledge of cloud-native security solutions (AWS Security Hub, Azure Security Center, Google Security Command Center). Experience in compliance frameworks (SOC 2, PCI, HIPAA, GDPR, HITRUST). Exposure to Windows Server administration alongside Linux environments. Familiarity with centralized logging solutions (Splunk, Fluentd, AWS OpenSearch). GitOps experience with tools like ArgoCD or Flux. Background in penetration testing, intrusion detection, and vulnerability scanning. Experience in cost optimization strategies for cloud infrastructure. Passion for mentoring teams and sharing DevOps best practices. (ref:hirist.tech)
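As one way of picturing the configuration-management side of this role, the sketch below templates an application config file and restarts the service only when the rendered file actually changes. The host group, service name, and paths are assumptions made for illustration, not details taken from the listing.

```yaml
# app-config.yml -- illustrative configuration-management pattern; names and paths are assumed
- name: Manage application configuration
  hosts: app_servers
  become: true
  tasks:
    - name: Render application config from a Jinja2 template
      ansible.builtin.template:
        src: templates/app.conf.j2
        dest: /etc/myapp/app.conf
        mode: "0644"
      notify: Restart myapp          # handler fires only if the file changed

  handlers:
    - name: Restart myapp
      ansible.builtin.service:
        name: myapp
        state: restarted
```

The notify/handler pairing is what keeps repeated runs cheap: unchanged templates produce no restarts.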

Posted 1 week ago

Apply

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Role: SAP Basis Consultant

Key Responsibilities: Manage and maintain SAP landscapes in BorgWarner, handling all levels of projects varying from updates to migration. Responsible for integrating third-party applications with SAP systems (e.g., PLM, MES, CAD, EDI). Responsible for cyber resiliency, performance tuning and automation in SAP administrative tasks of SAP eco systems. Handle SAP Basis related tickets and resolve them as per the SLA.

Preferred Candidate Profile: Administration, installation & performance tuning of SAP (ABAP & JAVA) application & DB servers (HANA, Sybase & MaxDB) in ERP, GTS, BW, SRM, PPM, SolMan, GRC, HR & Portal products running on Windows and Linux. Expertise in SAP eco systems running in cloud. Knowledge in S4HANA conversions and methodologies will be an added advantage. Expertise in automation of Basis processes and tasks through tools like Ansible & scripting. Very good knowledge in implementing industry best practices to improve cyber resiliency for SAP landscapes. Controlling and monitoring of SAP landscape including subsystems like archiving and printing landscape and middleware applications. Monitoring backup & execution of restore/recovery tasks. Assures achieving the service levels for the operation of the business application platforms. Should possess hands-on experience in handling Level 3 SAP Basis tickets with minimal support. Perform after-hours maintenance and emergency on-call work as needed. 10+ years of work experience in SAP Basis. Deep knowledge and experience in SAP core components, preferably certified. Systematic and structured work behaviour, self-motivated and driven. Strong English communication skills and good problem-solving skills. Should possess strong ITIL framework knowledge, preferably certified. Bachelor's Degree in Computer Science or any relevant degree. (ref:hirist.tech)
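Basis automation with Ansible, as mentioned in this profile, often starts by wrapping routine OS-level checks so they can be run consistently across a landscape. The sketch below is a generic, hypothetical example; the host group, filesystem path, and the check itself are assumptions rather than anything specified by the listing.

```yaml
# basis-checks.yml -- hypothetical routine check; host group and path are assumptions
- name: Routine OS-level checks on SAP hosts
  hosts: sap_hosts
  become: true
  tasks:
    - name: Collect filesystem usage for a data volume
      ansible.builtin.command: df -h /hana/data   # placeholder mount point
      register: fs_usage
      changed_when: false          # read-only check, never reports "changed"

    - name: Report usage
      ansible.builtin.debug:
        var: fs_usage.stdout_lines
```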

Posted 1 week ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

Remote

Linkedin logo

About Markovate

At Markovate, we don't just follow trends, we drive them. We transform businesses through innovative AI and digital solutions that turn vision into reality. Our team harnesses breakthrough technologies to craft bespoke strategies that align seamlessly with our clients' ambitions. From AI consulting and Gen AI development to pioneering AI agents and agentic AI, we empower our partners to lead their industries with forward-thinking precision and unmatched expertise. This is a great opportunity to collaborate with top AI engineers in a fast-paced environment and gain hands-on experience across multiple AI/ML projects with real-world impact.

Overview

We are seeking a DevOps Engineer with experience in deployment and cloud infrastructure who can take ownership of the entire pipeline process. This role involves managing CI/CD, cloud infrastructure, and automation.

Responsibilities: End-to-end pipeline management for DevOps. Automate CI/CD workflows (Jenkins, GitLab CI/CD, GitHub Actions). Manage cloud infrastructure (AWS, Azure, GCP) using Terraform, Ansible, and CloudFormation. Deploy and monitor Docker & Kubernetes environments. Set up monitoring & logging (Prometheus, Grafana, ELK, Datadog). Troubleshoot VMs with excellent Linux/Ubuntu expertise. Implement security best practices and ensure system reliability.

Requirements: Minimum 3 years in DevOps and Cloud. Strong knowledge of Linux and Ubuntu. Knowledge of Python for automation & scripting. Hands-on experience with AWS, Azure, or GCP. Expertise in IaC (Terraform, Ansible), CI/CD, Docker, and Kubernetes. Experience in monitoring, logging, and security.

Please note: This is a remote job. However, selected candidates will be expected to work from our Gurgaon office a few times a month, if requested. (ref:hirist.tech)
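For the "manage cloud infrastructure with Ansible" part of this role, a hedged sketch of provisioning a single AWS instance through the amazon.aws collection is shown below. The AMI ID, region, and key pair are placeholders assumed for illustration, the collection and boto3 credentials must be set up separately, and this is only one of several tools (Terraform, CloudFormation) the listing mentions.

```yaml
# provision-ec2.yml -- illustrative only; AMI ID, region, and key pair are placeholders
# Requires: ansible-galaxy collection install amazon.aws (plus boto3/botocore and AWS credentials)
- name: Provision a single EC2 instance
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Launch a small demo instance
      amazon.aws.ec2_instance:
        name: demo-instance
        instance_type: t3.micro
        image_id: ami-0123456789abcdef0   # placeholder AMI ID
        region: ap-south-1
        key_name: my-keypair              # placeholder key pair name
        state: running
      register: ec2

    - name: Show the module result
      ansible.builtin.debug:
        var: ec2
```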

Posted 1 week ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Description and Requirements

"At BMC trust is not just a word - it's a way of life!"

Hybrid

We are an award-winning, equal opportunity, culturally diverse, fun place to be. Giving back to the community drives us to be better every single day. Our work environment allows you to balance your priorities, because we know you will bring your best every day. We will champion your wins and shout them from the rooftops. Your peers will inspire, drive, support you, and make you laugh out loud! We help our customers free up time and space to become an Autonomous Digital Enterprise that conquers the opportunities ahead - and are relentless in the pursuit of innovation! We are building a new SaaS offering that combines ease of use consumer level user interaction with strength of enterprise IT solutions powered by AI/ML. We are looking for SaaS quality driven software engineers who can learn and adopt cutting edge technologies and tools to build best of class SaaS solutions.

Primary Roles And Responsibilities: Participate in all aspects of SaaS product development, from requirements analysis to product release and sustaining. Work 'in the trenches' in a team to implement large features and partner with Product Managers, UX experts, Architects and QA to develop implementation plans with a focus on innovation, quality, sustainability, and delivering value to our clients. Learn and adopt cutting edge technologies and tools to build best of class enterprise SaaS solutions. Responsible for delivery of high-quality enterprise SaaS offerings to aggressive schedules. A team member who is passionate about quality and demonstrates creativity and innovation in enhancing the product. The candidate should have excellent problem solving, debugging, and analytical skills. The candidate is expected to have excellent communication skills. The candidate is expected to lead and mentor other team members.

Requirements

To ensure you’re set up for success, you will bring the following skillset & experience: Proven track record of technical leadership in leading a team to deliver on time and on quality. Overall 8+ years of enterprise software product development experience. 7+ years of Java development experience. 5+ years of SaaS engineering experience. Core & Advanced Java (Threading, Design Patterns, Data Structures). J2EE, REST web services, Spring framework. Understanding of data structures, data modeling and software architecture. Experience with GIT repository and JIRA tools. Experience with test driven software development.

Whilst these are nice to have, our team can help you develop in the following skills: Experience with Python. Experience with Cassandra, Kafka, Elasticsearch. Experience with AI/ML. Experience with Kubernetes, Docker, Ansible, Jenkins. Experience with Postgres/Oracle performance and scalability. Experience in using public clouds. Experience in test automation.

CA-DNP

BMC Software maintains a strict policy of not requesting any form of payment in exchange for employment opportunities, upholding a fair and ethical hiring process. At BMC we believe in pay transparency and have set the midpoint of the salary band for this role at 2,628,600 INR. Actual salaries depend on a wide range of factors that are considered in making compensation decisions, including but not limited to skill sets; experience and training, licensure, and certifications; and other business and organizational needs. The salary listed is just one component of BMC's employee compensation package. Other rewards may include a variable plan and country specific benefits. We are committed to ensuring that our employees are paid fairly and equitably, and that we are transparent about our compensation practices.

(Returnship@BMC) Had a break in your career? No worries. This role is eligible for candidates who have taken a break in their career and want to re-enter the workforce. If your expertise matches the above job, visit https://bmcrecruit.avature.net/returnship to know more and how to apply.

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Roles And Responsibilities: Build, test, and administer highly available container application platform clusters (e.g., RedHat OpenShift, Kubernetes, Docker Datacenter, etc.). Champion security by injecting it into the existing development workflow and every stage of software development, ensuring the entire infrastructure is secure. Identify normal routines and repeatable tasks that are candidates for automation, and then create and support the deployment of automation using Ansible. Work within complex software systems to isolate defects, reproduce defects, assess risk, and understand varied customer deployments. Assist application teams with onboarding to container application platforms in areas such as resource requirements, capacity analysis, and troubleshooting support. Azure provisioning, configuration management, storage management, network management, and virtualization. Create the Continuous Integration (CI) and Continuous Deployment (CD) automation infrastructure to support the project engineering team. Developing and improving standards for security (via security as code) across a continuous delivery environment and cloud-based production deployments.

Qualifications: Your Skills & Experience
Must have: Hands-on experience with Terraform. Ability to write reusable Terraform modules.
Must have: Hands-on Python and Unix shell scripting is required.
Must have: Strong understanding of CI/CD pipelines in a globally distributed environment using Git, Artifactory, Jenkins, Docker registry.
Must have: Experience with GCP services and writing cloud functions.
Must have: Experience with GCP IAM.
Must have: Knowledge of common GCP services, Logging, Log Sinks, Pub/Sub, Docker, GCS, etc.
Nice to have: GCP Associate or Professional certification.
Nice to have: Hands-on experience with OPA Policy.
Must have: Hands-on knowledge of Helm charts.
Must have: Hands-on experience deploying and managing Kubernetes infrastructure with Terraform Enterprise. Ability to write reusable Terraform modules.
Must have: Certified Kubernetes Administrator (CKA) and/or Certified Kubernetes Application Developer (CKAD) is a plus.
Must have: Experience using Docker within container orchestration platforms such as GKE.
Must have: Knowledge of setting up Splunk.
Must have: Knowledge of Spark in GKE.
Must have: Establish connectivity and scale elastically.
Must have: Knowledge of common GCP services.
Nice to have: Certification in Kubernetes. (ref:hirist.tech)
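For the "repeatable tasks automated with Ansible" responsibility above, a common first step on Kubernetes/GKE platforms is managing cluster objects through the kubernetes.core collection. The sketch below only creates a namespace and is purely illustrative; the namespace name is assumed, and a valid kubeconfig plus the collection and the Python kubernetes client are prerequisites.

```yaml
# k8s-namespace.yml -- illustrative; requires the kubernetes.core collection, the kubernetes
# Python client, and a kubeconfig pointing at the target cluster
- name: Ensure a namespace exists
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Create the demo namespace if it is missing
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: v1
          kind: Namespace
          metadata:
            name: demo-apps        # assumed name for illustration
```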

Posted 1 week ago

Apply

8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

Experience: 8-10 years
Job Title: DevOps Engineer
Location: Gurugram

Job Summary

We are seeking a highly skilled and experienced Lead DevOps Engineer to drive the design, automation, and maintenance of secure and scalable cloud infrastructure. The ideal candidate will have deep technical expertise in cloud platforms (AWS/GCP), container orchestration, CI/CD pipelines, and DevSecOps practices. You will be responsible for leading infrastructure initiatives, mentoring team members, and collaborating closely with software and QA teams to enable high-quality, rapid software delivery.

Key Responsibilities

Cloud Infrastructure & Automation: Design, deploy, and manage secure, scalable cloud environments using AWS, GCP, or similar platforms. Develop Infrastructure-as-Code (IaC) using Terraform for consistent resource provisioning. Implement and manage CI/CD pipelines using tools like Jenkins, GitLab CI/CD, GitHub Actions, Bitbucket Pipelines, AWS CodePipeline, or Azure DevOps.

Containerization & Orchestration: Containerize applications using Docker for seamless development and deployment. Manage and scale Kubernetes clusters (on-premise or cloud-managed like AWS EKS). Monitor and optimize container environments for performance, scalability, and cost-efficiency.

Security & Compliance: Enforce cloud security best practices including IAM policies, VPC design, and secure secrets management (e.g., AWS Secrets Manager). Conduct regular vulnerability assessments, security scans, and implement remediation plans. Ensure infrastructure compliance with industry standards and manage incident response protocols.

Monitoring & Optimization: Set up and maintain monitoring/observability systems (e.g., Grafana, Prometheus, AWS CloudWatch, Datadog, New Relic). Analyze logs and metrics to troubleshoot issues and improve system performance. Optimize resource utilization and cloud spend through continuous review of infrastructure configurations.

Scripting & Tooling: Develop automation scripts (Shell/Python) for environment provisioning, deployments, backups, and log management. Maintain and enhance CI/CD workflows to ensure efficient and stable deployments.

Collaboration & Leadership: Collaborate with engineering and QA teams to ensure infrastructure aligns with development needs. Mentor junior DevOps engineers, fostering a culture of continuous learning and improvement. Communicate technical concepts effectively to both technical and non-technical stakeholders.

Education: Bachelor's degree in Computer Science, Engineering, or a related technical field, or equivalent hands-on experience.

Certifications: AWS Certified DevOps Engineer Professional (preferred) or other relevant cloud certifications.

Experience: 8+ years of experience in DevOps or Cloud Infrastructure roles, including at least 3 years in a leadership capacity. Strong hands-on expertise in AWS (ECS, EKS, RDS, S3, Lambda, CodePipeline) or GCP equivalents. Proven experience with CI/CD tools: Jenkins, GitLab CI/CD, GitHub Actions, Bitbucket Pipelines, Azure DevOps. Advanced knowledge of the Docker and Kubernetes ecosystem. Skilled in Infrastructure-as-Code (Terraform) and configuration management tools like Ansible. Proficient in scripting (Shell, Python) for automation and tooling. Experience implementing DevSecOps practices and advanced security configurations. Exposure to data tools (e.g., Apache Superset, AWS Athena, Redshift) is a plus.

Soft Skills: Strong problem-solving abilities and capacity to work under pressure. Excellent communication and team collaboration. Organized, with attention to detail and a commitment to quality.

Preferred Skills: Experience with alternative cloud platforms (e.g., Oracle Cloud, DigitalOcean). Familiarity with advanced observability stacks (Grafana, Prometheus, Loki, Datadog). (ref:hirist.tech)
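Where Ansible is part of the toolchain, the secure secrets management this role calls for is often handled with Ansible Vault. The sketch below shows the general shape: a vaulted variables file consumed by a play. The variable name, paths, and host group are assumptions for illustration, not details from the listing.

```yaml
# Encrypt the vars file once from the shell:
#   ansible-vault encrypt group_vars/prod/vault.yml
#
# deploy-with-secrets.yml -- illustrative; variable, file, and group names are assumptions
- name: Use a vaulted secret
  hosts: prod
  become: true
  vars_files:
    - group_vars/prod/vault.yml    # contains, e.g., vault_db_password
  tasks:
    - name: Write an application secret file
      ansible.builtin.copy:
        dest: /etc/myapp/db.conf
        content: "password={{ vault_db_password }}\n"
        mode: "0600"
      no_log: true                 # keep the secret out of task output and logs
```

Run with `ansible-playbook deploy-with-secrets.yml --ask-vault-pass` (or point at a vault password file).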

Posted 1 week ago

Apply

5.0 years

0 Lacs

Mumbai Metropolitan Region

Remote

Linkedin logo

Key Responsibilities

Network Design and Architecture: Architect and implement advanced Layer 3 to Layer 7 networking solutions for enterprise environments. Design and configure routing and switching protocols (e.g., OSPF, BGP, EIGRP, MPLS) to meet customer requirements. Develop scalable and redundant architectures to ensure business continuity and high availability.

Implementation and Deployment: Lead the deployment and configuration of Cisco, Palo Alto, Fortinet, and Juniper networking and security devices. Configure and troubleshoot advanced firewall policies, VPNs, and secure remote access solutions. Implement Quality of Service (QoS) and traffic shaping to optimize network performance.

Security and Compliance: Design and enforce network security policies using Palo Alto, Fortinet, and Cisco ASA platforms. Monitor and mitigate threats with security tools, such as Intrusion Detection/Prevention Systems (IDS/IPS) and Zero Trust Network Access (ZTNA). Ensure compliance with industry standards (e.g., ISO 27001, PCI-DSS, HIPAA) and corporate security policies.

Troubleshooting and Optimization: Diagnose and resolve complex network issues across all layers, using tools like Wireshark, SolarWinds, and Splunk. Perform regular performance monitoring and optimization of routing, switching, and application delivery. Collaborate with cross-functional teams to support hybrid cloud networking and SD-WAN solutions.

Collaboration and Leadership: Provide mentorship and technical guidance to junior engineers and IT teams. Serve as a technical escalation point for high-priority incidents and advanced troubleshooting. Document and present network designs, operational procedures, and incident reports to stakeholders.

Continuous Improvement and Innovation: Stay current with emerging networking technologies and trends, recommending innovations to improve operational efficiency. Conduct proof-of-concept initiatives for new hardware and software. Standardize best practices for network deployment, security, and troubleshooting.

Required Experience: 5+ years of hands-on experience in designing, deploying, and managing enterprise networks. Advanced expertise in routing and switching, including protocols such as BGP, OSPF, EIGRP, and MPLS. Proven experience with Cisco Catalyst, Nexus, and ASA/Firepower devices. Extensive experience with Palo Alto firewalls, Fortinet appliances, and Juniper SRX and EX series. Proficiency in Layer 7 technologies such as load balancers, application gateways, and content filtering solutions.

Skills & Technical Expertise: Knowledge of SD-WAN technologies and hybrid cloud integrations (e.g., Azure, AWS, Google Cloud). Experience in automation using Python, Ansible, or Terraform. Familiarity with network monitoring and logging tools like SolarWinds, Splunk, or PRTG. Advanced understanding of QoS, traffic engineering, and WAN optimization. Strong analytical and problem-solving skills to diagnose complex network issues. Excellent verbal and written communication skills to interface with technical and non-technical audiences. Ability to work in a dynamic and fast-paced environment, managing multiple priorities effectively. Strong team player with a proactive approach to knowledge sharing and collaboration.

Certification: CCNA (Cisco Certified Network Associate) - Required. CCNP (Cisco Certified Network Professional) - Required. [Other relevant certifications such as Palo Alto PCNSA/PCNSE, Fortinet NSE, or Juniper JNCIA/JNCIP are a plus.] (ref:hirist.tech)
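The "automation using Python, Ansible, or Terraform" item above usually means driving network gear through vendor collections. The hedged sketch below just gathers facts from Cisco IOS devices; the inventory group name and connection variables are assumptions, and the cisco.ios collection must be installed.

```yaml
# ios-facts.yml -- illustrative; assumes an inventory group "cisco_switches" whose hosts set
# ansible_connection=ansible.netcommon.network_cli and ansible_network_os=cisco.ios.ios
- name: Collect facts from Cisco IOS devices
  hosts: cisco_switches
  gather_facts: false
  tasks:
    - name: Gather a minimal fact subset
      cisco.ios.ios_facts:
        gather_subset: min

    - name: Show the running software version
      ansible.builtin.debug:
        msg: "{{ ansible_net_version }}"
```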

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Job Description

Must-Have Skills: Professional experience working with public cloud platforms (AWS). Expertise in Infrastructure as Code (IaC) tools such as Terraform. Hands-on experience with CI/CD tools like GitLab CI/CD, GitHub Actions, or Jenkins. Strong coding and scripting skills (PowerShell, Bash, Python, or equivalent). Proficiency in Configuration Management tools like Ansible, Puppet, or Chef. Experience managing and troubleshooting Linux servers. Strong analytical and troubleshooting skills. Exposure to security best practices and remediation. Familiarity with security-related tools such as Wiz and Qualys. Hands-on experience in Static/Dynamic Security Testing and Penetration Testing using tools like SonarQube, CheckMarx, AppScan, BurpSuite, OWASP ZAP Proxy, WebInspect, Fortify, Veracode, Nessus, etc.

Good-to-Have Skills: Knowledge of System and Application Monitoring tools (Prometheus, Grafana, CloudWatch). Experience with Log Management tools (Elastic Stack, Graylog, Splunk). Working experience with relational databases (MySQL, MS SQL Server, or similar). Use of Secret Management services like HashiCorp Vault. Understanding of Change Control procedures.

Main Responsibilities: Deliver resilient application stacks via Infrastructure as Code and DevOps practices. Monitor and support critical, high-revenue business applications. Diagnose and resolve complex system and application issues. Implement and maintain security best practices and remediation strategies. Work with cross-functional teams including Development, QA, IT Operations, and Project Management. Write and maintain technical and non-technical documentation. (ref:hirist.tech)
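As a small illustration of where the configuration-management and security-remediation parts of this role overlap, the sketch below enforces one common SSH hardening setting and restarts sshd only when the file changes. The host group and the specific setting are assumptions chosen for the example, not requirements from the listing, and a real hardening baseline would be much broader.

```yaml
# ssh-hardening.yml -- illustrative single-setting example; host group is assumed
- name: Enforce a basic sshd setting
  hosts: linux_servers
  become: true
  tasks:
    - name: Disable password authentication over SSH
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?PasswordAuthentication'
        line: 'PasswordAuthentication no'
      notify: Restart sshd

  handlers:
    - name: Restart sshd
      ansible.builtin.service:
        name: sshd
        state: restarted
```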

Posted 1 week ago

Apply

Exploring Ansible Jobs in India

Ansible is a popular automation tool widely used in the IT industry, and the demand for Ansible professionals is on the rise in India. Job seekers with expertise in Ansible can explore various opportunities in different sectors like IT services, product companies, and consulting firms.
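For readers new to the tool, the playbook is Ansible's basic unit of automation: a YAML description of the desired state of a group of hosts, applied over SSH without agents. The minimal sketch below installs and starts nginx on an assumed "webservers" inventory group; the group name and package are illustrative, not tied to any listing on this page.

```yaml
# site.yml -- a minimal illustrative playbook; inventory group and package are assumptions
- name: Ensure nginx is installed and running
  hosts: webservers            # assumes an inventory group named "webservers"
  become: true                 # escalate privileges for package/service management
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Ensure nginx is enabled and started
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Running `ansible-playbook -i inventory site.yml` applies the same desired state repeatedly without side effects, which is the property most of the roles on this page are hiring for.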

Top Hiring Locations in India

Here are 5 major cities actively hiring for Ansible roles in India:
- Bangalore
- Pune
- Hyderabad
- Chennai
- Mumbai

Average Salary Range

The estimated salary range for Ansible professionals in India varies based on experience:
- Entry-level: ₹4-6 lakhs per annum
- Mid-level: ₹8-12 lakhs per annum
- Experienced: ₹15-20 lakhs per annum

Career Path

In the Ansible domain, a typical career progression may look like:
- Junior Ansible Engineer
- Ansible Developer
- Senior Ansible Engineer
- Ansible Architect
- Ansible Consultant

Related Skills

Apart from Ansible, professionals in this field are often expected to have or develop skills like:
- Linux administration
- Scripting languages (Python, Bash)
- Configuration management tools (Puppet, Chef)
- Cloud platforms (AWS, Azure)
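These adjacent skills usually appear together in day-to-day work: a typical playbook mixes Linux administration modules with a little scripting. The sketch below creates a service account and runs a hypothetical health-check script; the user name, group, and script path are assumptions for illustration.

```yaml
# adjacent-skills.yml -- illustrative; user, group, and script path are assumptions
- name: Routine Linux administration via Ansible
  hosts: all
  become: true
  tasks:
    - name: Ensure a deployment user exists
      ansible.builtin.user:
        name: deploy
        groups: wheel
        append: true
        state: present

    - name: Run a custom health-check script from the control node
      ansible.builtin.script: files/healthcheck.sh   # hypothetical script path
      register: health

    - name: Show the script output
      ansible.builtin.debug:
        var: health.stdout_lines
```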

Interview Questions

Here are 25 interview questions for Ansible roles:
- What is Ansible and how does it work? (basic)
- Explain the difference between Ansible and Puppet. (basic)
- How do you define playbooks in Ansible? (basic)
- What is an Ansible role? (basic)
- How do you handle errors in Ansible playbooks? (medium)
- Explain the concept of Ansible Tower. (medium)
- How do you secure sensitive data in Ansible playbooks? (medium)
- What are Ansible facts? (basic)
- Explain the difference between Ansible ad-hoc command and playbook. (basic)
- How do you create custom modules in Ansible? (advanced)
- How do you integrate Ansible with version control systems like Git? (medium)
- What is dynamic inventory in Ansible? (medium)
- How do you handle dependencies between tasks in Ansible playbooks? (medium)
- Explain the use of Ansible Vault. (medium)
- How do you troubleshoot issues in Ansible automation? (medium)
- What are some best practices for writing Ansible playbooks? (medium)
- How do you scale Ansible for large infrastructure? (advanced)
- Explain the concept of idempotency in Ansible. (basic)
- How do you handle network devices with Ansible? (advanced)
- What is the purpose of Ansible Galaxy? (basic)
- How do you automate infrastructure provisioning using Ansible? (advanced)
- Explain how Ansible communicates with remote servers. (basic)
- How do you test Ansible playbooks? (medium)
- What is Ansible Container and how is it used? (advanced)
- How do you monitor Ansible tasks and jobs? (medium)
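Several of these questions (idempotency, ad-hoc commands vs playbooks, Vault) are easiest to answer with a concrete snippet in hand. The sketch below contrasts an ad-hoc command with the equivalent playbook task; the file path and content are assumptions for illustration. Because the task declares desired state rather than an action, a second run reports "ok" instead of "changed", which is the usual working definition of idempotency.

```yaml
# Ad-hoc equivalent, run straight from the shell:
#   ansible all -b -m ansible.builtin.copy -a "dest=/etc/motd content='Managed by Ansible'"
#
# idempotency-demo.yml -- the same desired state expressed as a playbook task
- name: Demonstrate an idempotent task
  hosts: all
  become: true
  tasks:
    - name: Ensure /etc/motd has the expected content
      ansible.builtin.copy:
        dest: /etc/motd
        content: "Managed by Ansible\n"
      # First run: "changed"; later runs: "ok" -- the module acts only when the
      # file on the host differs from the declared content.
```

Sensitive values used in tasks like this would normally live in an encrypted vars file created with `ansible-vault`, which covers the Vault questions above.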

Conclusion

As the demand for Ansible professionals continues to grow in India, job seekers should focus on enhancing their skills and preparing for interviews confidently. By mastering Ansible and related technologies, you can open up exciting career opportunities in the IT industry. Good luck with your job search!
